{"id": "38cccc443c5d-0", "text": "API Reference\u00b6\nlangchain.agents: Agents\u00b6\nInterface for agents.\nClasses\u00b6\nagents.agent.Agent\nClass responsible for calling the language model and deciding the action.\nagents.agent.AgentExecutor\nConsists of an agent using tools.\nagents.agent.AgentOutputParser\nCreate a new model by parsing and validating input data from keyword arguments.\nagents.agent.BaseMultiActionAgent\nBase Agent class.\nagents.agent.BaseSingleActionAgent\nBase Agent class.\nagents.agent.ExceptionTool\nCreate a new model by parsing and validating input data from keyword arguments.\nagents.agent.LLMSingleActionAgent\nCreate a new model by parsing and validating input data from keyword arguments.\nagents.agent_toolkits.azure_cognitive_services.toolkit.AzureCognitiveServicesToolkit\nToolkit for Azure Cognitive Services.\nagents.agent_toolkits.base.BaseToolkit\nClass representing a collection of related tools.\nagents.agent_toolkits.file_management.toolkit.FileManagementToolkit\nToolkit for interacting with a Local Files.\nagents.agent_toolkits.gmail.toolkit.GmailToolkit\nToolkit for interacting with Gmail.\nagents.agent_toolkits.jira.toolkit.JiraToolkit\nJira Toolkit.\nagents.agent_toolkits.json.toolkit.JsonToolkit\nToolkit for interacting with a JSON spec.\nagents.agent_toolkits.nla.tool.NLATool\nNatural Language API Tool.\nagents.agent_toolkits.nla.toolkit.NLAToolkit\nNatural Language API Toolkit Definition.\nagents.agent_toolkits.office365.toolkit.O365Toolkit\nToolkit for interacting with Office365.\nagents.agent_toolkits.openapi.planner.RequestsDeleteToolWithParsing\nCreate a new model by parsing and validating input data from keyword arguments.\nagents.agent_toolkits.openapi.planner.RequestsGetToolWithParsing\nCreate a new model by parsing and validating input data from keyword arguments.\nagents.agent_toolkits.openapi.planner.RequestsPatchToolWithParsing\nCreate a new model by parsing and validating input data from keyword arguments.", "source": "https://api.python.langchain.com/en/latest/api_reference.html"} {"id": "38cccc443c5d-1", "text": "Create a new model by parsing and validating input data from keyword arguments.\nagents.agent_toolkits.openapi.planner.RequestsPostToolWithParsing\nCreate a new model by parsing and validating input data from keyword arguments.\nagents.agent_toolkits.openapi.toolkit.OpenAPIToolkit\nToolkit for interacting with an OpenAPI API.\nagents.agent_toolkits.openapi.toolkit.RequestsToolkit\nToolkit for making requests.\nagents.agent_toolkits.playwright.toolkit.PlayWrightBrowserToolkit\nToolkit for web browser tools.\nagents.agent_toolkits.powerbi.toolkit.PowerBIToolkit\nToolkit for interacting with PowerBI dataset.\nagents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit\nToolkit for interacting with Spark SQL.\nagents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit\nToolkit for interacting with SQL databases.\nagents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo\nInformation about a vectorstore.\nagents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit\nToolkit for routing between vector stores.\nagents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit\nToolkit for interacting with a vector store.\nagents.agent_toolkits.zapier.toolkit.ZapierToolkit\nZapier Toolkit.\nagents.agent_types.AgentType(value[,\u00a0names,\u00a0...])\nEnumerator with the Agent types.\nagents.chat.base.ChatAgent\nCreate a new model by parsing and validating input data from keyword arguments.\nagents.chat.output_parser.ChatOutputParser\nCreate 
agents.conversational.base.ConversationalAgent
An agent designed to hold a conversation in addition to using tools.
agents.conversational.output_parser.ConvoOutputParser
Create a new model by parsing and validating input data from keyword arguments.
agents.conversational_chat.base.ConversationalChatAgent
An agent designed to hold a conversation in addition to using tools.
agents.conversational_chat.output_parser.ConvoOutputParser
Create a new model by parsing and validating input data from keyword arguments.
agents.mrkl.base.ChainConfig(action_name, ...)
Configuration for a chain to use in the MRKL system.
agents.mrkl.base.MRKLChain
Chain that implements the MRKL system.
agents.mrkl.base.ZeroShotAgent
Agent for the MRKL chain.
agents.mrkl.output_parser.MRKLOutputParser
Create a new model by parsing and validating input data from keyword arguments.
agents.openai_functions_agent.base.OpenAIFunctionsAgent
An agent driven by OpenAI's function-calling API.
agents.openai_functions_multi_agent.base.OpenAIMultiFunctionsAgent
An agent driven by OpenAI's function-calling API.
agents.react.base.ReActChain
Chain that implements the ReAct paper.
agents.react.base.ReActDocstoreAgent
Agent for the ReAct chain.
agents.react.base.ReActTextWorldAgent
Agent for the ReAct TextWorld chain.
agents.react.output_parser.ReActOutputParser
Create a new model by parsing and validating input data from keyword arguments.
agents.schema.AgentScratchPadChatPromptTemplate
Create a new model by parsing and validating input data from keyword arguments.
agents.self_ask_with_search.base.SelfAskWithSearchAgent
Agent for the self-ask-with-search paper.
agents.self_ask_with_search.base.SelfAskWithSearchChain
Chain that does self-ask with search.
agents.self_ask_with_search.output_parser.SelfAskOutputParser
Create a new model by parsing and validating input data from keyword arguments.
agents.structured_chat.base.StructuredChatAgent
Create a new model by parsing and validating input data from keyword arguments.
agents.structured_chat.output_parser.StructuredChatOutputParser
Create a new model by parsing and validating input data from keyword arguments.
agents.structured_chat.output_parser.StructuredChatOutputParserWithRetries
Create a new model by parsing and validating input data from keyword arguments.
agents.tools.InvalidTool
Tool that is run when an invalid tool name is encountered by the agent.
Functions
agents.agent_toolkits.csv.base.create_csv_agent(...)
Create a CSV agent by loading the data into a dataframe and using the pandas agent.
agents.agent_toolkits.json.base.create_json_agent(...)
Construct a JSON agent from an LLM and tools.
agents.agent_toolkits.openapi.base.create_openapi_agent(...)
Construct an OpenAPI agent from an LLM and tools.
agents.agent_toolkits.openapi.planner.create_openapi_agent(...)
Instantiate an API planner and controller for a given spec.
agents.agent_toolkits.openapi.spec.dereference_refs(...)
Try to substitute $refs.
agents.agent_toolkits.openapi.spec.reduce_openapi_spec(spec)
Simplify/distill/minify an OpenAPI spec.
agents.agent_toolkits.pandas.base.create_pandas_dataframe_agent(llm, df)
Construct a pandas agent from an LLM and a dataframe.
agents.agent_toolkits.powerbi.base.create_pbi_agent(llm)
Construct a Power BI agent from an LLM and tools.
agents.agent_toolkits.powerbi.chat_base.create_pbi_chat_agent(llm)
Construct a Power BI agent from a chat LLM and tools.
agents.agent_toolkits.python.base.create_python_agent(...)
Construct a Python agent from an LLM and a tool.
agents.agent_toolkits.spark.base.create_spark_dataframe_agent(llm, df)
Construct a Spark agent from an LLM and a dataframe.
agents.agent_toolkits.spark_sql.base.create_spark_sql_agent(...)
Construct a Spark SQL agent from an LLM and tools.
agents.agent_toolkits.sql.base.create_sql_agent(...)
Construct a SQL agent from an LLM and tools.
agents.agent_toolkits.vectorstore.base.create_vectorstore_agent(...)
Construct a vectorstore agent from an LLM and tools.
agents.agent_toolkits.vectorstore.base.create_vectorstore_router_agent(...)
Construct a vectorstore router agent from an LLM and tools.
agents.initialize.initialize_agent(tools, llm)
Load an agent executor given tools and an LLM.
agents.load_tools.get_all_tool_names()
Get a list of all possible tool names.
agents.load_tools.load_huggingface_tool(...)
Loads a tool from the HuggingFace Hub.
agents.load_tools.load_tools(tool_names[, ...])
Load tools based on their name.
agents.loading.load_agent(path, **kwargs)
Unified method for loading an agent from LangChainHub or the local filesystem.
agents.loading.load_agent_from_config(config)
Load an agent from a config dict.
agents.utils.validate_tools_single_input(...)
Validate tools for single input.
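As a quick orientation, most of these pieces come together through initialize_agent. A minimal sketch, assuming the openai package is installed and OPENAI_API_KEY is set:

# Sketch: build an agent executor from tools and an LLM.
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)  # llm-math itself needs an LLM
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
agent.run("What is 7 raised to the 0.5 power?")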

langchain.cache: Cache
Beta Feature: base interface for cache.
Classes
cache.BaseCache()
Base interface for cache.
cache.FullLLMCache(**kwargs)
SQLite table for the full LLM cache (all generations).
cache.GPTCache([init_func])
Cache that uses GPTCache as a backend.
cache.InMemoryCache()
Cache that stores things in memory.
cache.MomentoCache(cache_client, cache_name, *)
Cache that uses Momento as a backend.
cache.RedisCache(redis_)
Cache that uses Redis as a backend.
cache.RedisSemanticCache(redis_url, embedding)
Cache that uses Redis as a vector-store backend.
cache.SQLAlchemyCache(engine, cache_schema)
Cache that uses SQLAlchemy as a backend.
cache.SQLiteCache([database_path])
Cache that uses SQLite as a backend.
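A minimal sketch of enabling a cache globally; the module-level llm_cache attribute is the hook, and the .langchain.db path is illustrative:

# Sketch: route repeated identical LLM calls through a SQLite-backed cache.
import langchain
from langchain.cache import SQLiteCache

langchain.llm_cache = SQLiteCache(database_path=".langchain.db")
# Subsequent identical LLM calls are served from the cache instead of the API.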

langchain.callbacks: Callbacks
Callback handlers that allow listening to events in LangChain.
Classes
callbacks.aim_callback.AimCallbackHandler([...])
Callback Handler that logs to Aim.
callbacks.argilla_callback.ArgillaCallbackHandler(...)
Callback Handler that logs into Argilla.
callbacks.arize_callback.ArizeCallbackHandler([...])
Callback Handler that logs to Arize.
callbacks.arthur_callback.ArthurCallbackHandler(...)
Callback Handler that logs to the Arthur platform.
callbacks.base.AsyncCallbackHandler()
Async callback handler that can be used to handle callbacks from LangChain.
callbacks.base.BaseCallbackHandler()
Base callback handler that can be used to handle callbacks from LangChain.
callbacks.base.BaseCallbackManager(handlers)
Base callback manager that can be used to handle callbacks from LangChain.
callbacks.clearml_callback.ClearMLCallbackHandler([...])
Callback Handler that logs to ClearML.
callbacks.comet_ml_callback.CometCallbackHandler([...])
Callback Handler that logs to Comet.
callbacks.context_callback.ContextCallbackHandler([...])
Callback Handler that records transcripts to Context (https://getcontext.ai).
callbacks.file.FileCallbackHandler(filename)
Callback Handler that writes to a file.
callbacks.flyte_callback.FlyteCallbackHandler()
Callback handler designed for use within a Flyte task.
callbacks.human.HumanApprovalCallbackHandler(...)
Callback for manually validating values.
callbacks.human.HumanRejectedException
Exception raised when a person manually reviews and rejects a value.
callbacks.infino_callback.InfinoCallbackHandler([...])
Callback Handler that logs to Infino.
callbacks.manager.AsyncCallbackManager(handlers)
Async callback manager that can be used to handle callbacks from LangChain.
callbacks.manager.AsyncCallbackManagerForChainRun(*, ...)
Async callback manager for chain run.
callbacks.manager.AsyncCallbackManagerForLLMRun(*, ...)
Async callback manager for LLM run.
callbacks.manager.AsyncCallbackManagerForRetrieverRun(*, ...)
Async callback manager for retriever run.
callbacks.manager.AsyncCallbackManagerForToolRun(*, ...)
Async callback manager for tool run.
callbacks.manager.AsyncParentRunManager(*, ...)
Async parent run manager.
callbacks.manager.AsyncRunManager(*, run_id, ...)
Async run manager.
callbacks.manager.BaseRunManager(*, run_id, ...)
Base class for run managers (bound callback managers).
callbacks.manager.CallbackManager(handlers)
Callback manager that can be used to handle callbacks from LangChain.
callbacks.manager.CallbackManagerForChainRun(*, ...)
Callback manager for chain run.
callbacks.manager.CallbackManagerForLLMRun(*, ...)
Callback manager for LLM run.
callbacks.manager.CallbackManagerForRetrieverRun(*, ...)
Callback manager for retriever run.
callbacks.manager.CallbackManagerForToolRun(*, ...)
Callback manager for tool run.
callbacks.manager.ParentRunManager(*, ...[, ...])
Sync parent run manager.
callbacks.manager.RunManager(*, run_id, ...)
Sync run manager.
callbacks.mlflow_callback.MlflowCallbackHandler([...])
Callback Handler that logs metrics and artifacts to an MLflow server.
callbacks.openai_info.OpenAICallbackHandler()
Callback Handler that tracks OpenAI info.
callbacks.promptlayer_callback.PromptLayerCallbackHandler([...])
Callback handler for PromptLayer.
callbacks.stdout.StdOutCallbackHandler([color])
Callback Handler that prints to stdout.
callbacks.streaming_aiter.AsyncIteratorCallbackHandler()
Callback handler that returns an async iterator.
callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler(*)
Callback handler that returns an async iterator.
callbacks.streaming_stdout.StreamingStdOutCallbackHandler()
Callback handler for streaming.
callbacks.streaming_stdout_final_only.FinalStreamingStdOutCallbackHandler(*)
Callback handler for streaming in agents.
callbacks.streamlit.mutable_expander.ChildRecord(...)
The child record as a NamedTuple.
callbacks.streamlit.mutable_expander.ChildType(value)
The enumerator of the child type.
callbacks.streamlit.streamlit_callback_handler.LLMThoughtState(value)
Enumerator of the LLMThought state.
callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler(...)
A callback handler that writes to a Streamlit app.
callbacks.streamlit.streamlit_callback_handler.ToolRecord(...)
The tool record as a NamedTuple.
callbacks.tracers.base.BaseTracer(**kwargs)
Base interface for tracers.
callbacks.tracers.base.TracerException
Base class for exceptions in the tracers module.
callbacks.tracers.evaluation.EvaluatorCallbackHandler(...)
A tracer that runs a run evaluator whenever a run is persisted.
callbacks.tracers.langchain.LangChainTracer([...])
An implementation of the SharedTracer that POSTs to the LangChain endpoint.
callbacks.tracers.langchain_v1.LangChainTracerV1(...)
An implementation of the SharedTracer that POSTs to the LangChain endpoint.
callbacks.tracers.run_collector.RunCollectorCallbackHandler([...])
A tracer that collects all nested runs in a list.
callbacks.tracers.schemas.BaseRun
Base class for Run.
callbacks.tracers.schemas.ChainRun
Class for ChainRun.
callbacks.tracers.schemas.LLMRun
Class for LLMRun.
callbacks.tracers.schemas.Run
Run schema for the V2 API in the Tracer.
callbacks.tracers.schemas.ToolRun
Class for ToolRun.
callbacks.tracers.schemas.TracerSession
TracerSessionV1 schema for the V2 API.
callbacks.tracers.schemas.TracerSessionBase
A creation class for TracerSession.
callbacks.tracers.schemas.TracerSessionV1
TracerSessionV1 schema.
callbacks.tracers.schemas.TracerSessionV1Base
Base class for TracerSessionV1.
callbacks.tracers.schemas.TracerSessionV1Create
Create class for TracerSessionV1.
callbacks.tracers.stdout.ConsoleCallbackHandler(...)
Tracer that prints to the console.
callbacks.tracers.wandb.WandbRunArgs
Arguments for the WandbTracer.
callbacks.tracers.wandb.WandbTracer([run_args])
Callback Handler that logs to Weights and Biases.
callbacks.wandb_callback.WandbCallbackHandler([...])
Callback Handler that logs to Weights and Biases.
callbacks.whylabs_callback.WhyLabsCallbackHandler(logger)
Callback Handler for logging to WhyLabs.
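Custom handlers subclass BaseCallbackHandler and override only the hooks they need. A minimal sketch (TokenPrinter is a hypothetical name, not part of the library):

from typing import Any, Dict, List
from langchain.callbacks.base import BaseCallbackHandler

class TokenPrinter(BaseCallbackHandler):
    # Print each token as a streaming LLM produces it.
    def on_llm_start(self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) -> None:
        print(f"LLM starting with {len(prompts)} prompt(s)")

    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        print(token, end="", flush=True)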
Functions
callbacks.aim_callback.import_aim()
Import the aim python package and raise an error if it is not installed.
callbacks.clearml_callback.import_clearml()
Import the clearml python package and raise an error if it is not installed.
callbacks.comet_ml_callback.import_comet_ml()
Import comet_ml and raise an error if it is not installed.
callbacks.context_callback.import_context()
Import the getcontext python package and raise an error if it is not installed.
callbacks.flyte_callback.analyze_text(text)
Analyze text using textstat and spacy.
callbacks.flyte_callback.import_flytekit()
Import flytekit and flytekitplugins-deck-standard.
callbacks.infino_callback.import_infino()
Import the infino client.
callbacks.manager.env_var_is_set(env_var)
Check if an environment variable is set.
callbacks.manager.get_openai_callback()
Get the OpenAI callback handler in a context manager.
callbacks.manager.trace_as_chain_group(...)
Get a callback manager for a chain group in a context manager.
callbacks.manager.tracing_enabled([session_name])
Get the deprecated LangChainTracer in a context manager.
callbacks.manager.tracing_v2_enabled([...])
Instruct LangChain to log all runs in context to LangSmith.
callbacks.manager.wandb_tracing_enabled([...])
Get the WandbTracer in a context manager.
callbacks.mlflow_callback.analyze_text(text)
Analyze text using textstat and spacy.
callbacks.mlflow_callback.construct_html_from_prompt_and_generation(...)
Construct an HTML element from a prompt and a generation.
callbacks.mlflow_callback.import_mlflow()
Import the mlflow python package and raise an error if it is not installed.
callbacks.openai_info.get_openai_token_cost_for_model(...)
Get the cost in USD for a given model and number of tokens.
callbacks.openai_info.standardize_model_name(...)
Standardize the model name to a format that can be used in the OpenAI API; takes the model name to standardize and an is_completion flag indicating whether the model is used for completion (defaults to False).
callbacks.streamlit.__init__.StreamlitCallbackHandler(...)
Construct a new StreamlitCallbackHandler.
callbacks.tracers.langchain.log_error_once(...)
Log an error once.
callbacks.tracers.langchain.wait_for_all_tracers()
Wait for all tracers to finish.
callbacks.tracers.langchain_v1.get_headers()
Get the headers for the LangChain API.
callbacks.tracers.stdout.elapsed(run)
Get the elapsed time of a run.
callbacks.tracers.stdout.try_json_stringify(...)
Try to stringify an object to JSON.
callbacks.utils.flatten_dict(nested_dict[, ...])
Flattens a nested dictionary into a flat dictionary.
callbacks.utils.hash_string(s)
Hash a string using SHA-1.
callbacks.utils.import_pandas()
Import the pandas python package and raise an error if it is not installed.
callbacks.utils.import_spacy()
Import the spacy python package and raise an error if it is not installed.
callbacks.utils.import_textstat()
Import the textstat python package and raise an error if it is not installed.
callbacks.utils.load_json(json_path)
Load a JSON file into a string.
callbacks.wandb_callback.analyze_text(text)
Analyze text using textstat and spacy.
callbacks.wandb_callback.construct_html_from_prompt_and_generation(...)
Construct an HTML element from a prompt and a generation.
callbacks.wandb_callback.import_wandb()
Import the wandb python package and raise an error if it is not installed.
callbacks.wandb_callback.load_json_to_dict(...)
Load a JSON file into a dictionary.
callbacks.whylabs_callback.import_langkit([...])
Import the langkit python package and raise an error if it is not installed.
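The context-manager helpers above scope callbacks to a block of work. For example, token accounting with get_openai_callback; a minimal sketch, assuming OpenAI access:

from langchain.callbacks import get_openai_callback
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
with get_openai_callback() as cb:
    llm("Tell me a joke")
    print(cb.total_tokens, cb.total_cost)  # usage recorded inside the block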
"source": "https://api.python.langchain.com/en/latest/api_reference.html"} {"id": "38cccc443c5d-11", "text": "chains.api.openapi.requests_chain.APIRequesterOutputParser\nParse the request and error tags.\nchains.api.openapi.response_chain.APIResponderChain\nGet the response parser.\nchains.api.openapi.response_chain.APIResponderOutputParser\nParse the response and error tags.\nchains.base.Chain\nAbstract base class for creating structured sequences of calls to components.\nchains.combine_documents.base.AnalyzeDocumentChain\nChain that splits documents, then analyzes it in pieces.\nchains.combine_documents.base.BaseCombineDocumentsChain\nBase interface for chains combining documents.\nchains.combine_documents.map_reduce.MapReduceDocumentsChain\nCombining documents by mapping a chain over them, then combining results.\nchains.combine_documents.map_rerank.MapRerankDocumentsChain\nCombining documents by mapping a chain over them, then reranking results.\nchains.combine_documents.reduce.AsyncCombineDocsProtocol(...)\nInterface for the combine_docs method.\nchains.combine_documents.reduce.CombineDocsProtocol(...)\nInterface for the combine_docs method.\nchains.combine_documents.reduce.ReduceDocumentsChain\nCombining documents by recursively reducing them.\nchains.combine_documents.refine.RefineDocumentsChain\nCombine documents by doing a first pass and then refining on more documents.\nchains.combine_documents.stuff.StuffDocumentsChain\nChain that combines documents by stuffing into context.\nchains.constitutional_ai.base.ConstitutionalChain\nChain for applying constitutional principles.\nchains.constitutional_ai.models.ConstitutionalPrinciple\nClass for a constitutional principle.\nchains.conversation.base.ConversationChain\nChain to have a conversation and load context from memory.\nchains.conversational_retrieval.base.BaseConversationalRetrievalChain\nChain for chatting with an index.\nchains.conversational_retrieval.base.ChatVectorDBChain\nChain for chatting with a vector database.\nchains.conversational_retrieval.base.ConversationalRetrievalChain\nChain for having a conversation based on retrieved documents.\nchains.flare.base.FlareChain", "source": "https://api.python.langchain.com/en/latest/api_reference.html"} {"id": "38cccc443c5d-12", "text": "Chain for having a conversation based on retrieved documents.\nchains.flare.base.FlareChain\nCreate a new model by parsing and validating input data from keyword arguments.\nchains.flare.base.QuestionGeneratorChain\nCreate a new model by parsing and validating input data from keyword arguments.\nchains.flare.prompts.FinishedOutputParser\nCreate a new model by parsing and validating input data from keyword arguments.\nchains.graph_qa.base.GraphQAChain\nChain for question-answering against a graph.\nchains.graph_qa.cypher.GraphCypherQAChain\nChain for question-answering against a graph by generating Cypher statements.\nchains.graph_qa.hugegraph.HugeGraphQAChain\nChain for question-answering against a graph by generating gremlin statements.\nchains.graph_qa.kuzu.KuzuQAChain\nChain for question-answering against a graph by generating Cypher statements for K\u00f9zu.\nchains.graph_qa.nebulagraph.NebulaGraphQAChain\nChain for question-answering against a graph by generating nGQL statements.\nchains.graph_qa.sparql.GraphSparqlQAChain\nChain for question-answering against an RDF or OWL graph by generating SPARQL statements.\nchains.hyde.base.HypotheticalDocumentEmbedder\nGenerate hypothetical document for query, and then embed 
chains.llm.LLMChain
Chain to run queries against LLMs.
chains.llm_bash.base.LLMBashChain
Chain that interprets a prompt and executes bash code to perform bash operations.
chains.llm_bash.prompt.BashOutputParser
Parser for bash output.
chains.llm_checker.base.LLMCheckerChain
Chain for question-answering with self-verification.
chains.llm_math.base.LLMMathChain
Chain that interprets a prompt and executes Python code to do math.
chains.llm_requests.LLMRequestsChain
Chain that hits a URL and then uses an LLM to parse results.
chains.llm_summarization_checker.base.LLMSummarizationCheckerChain
Chain for question-answering with self-verification.
chains.mapreduce.MapReduceChain
Map-reduce chain.
chains.moderation.OpenAIModerationChain
Pass input through a moderation endpoint.
chains.natbot.base.NatBotChain
Implement an LLM-driven browser.
chains.natbot.crawler.ElementInViewPort
A typed dictionary containing information about elements in the viewport.
chains.openai_functions.citation_fuzzy_match.FactWithEvidence
Class representing a single statement.
chains.openai_functions.citation_fuzzy_match.QuestionAnswer
A question and its answer as a list of facts, each of which should have a source.
chains.openai_functions.openapi.SimpleRequestChain
Create a new model by parsing and validating input data from keyword arguments.
chains.openai_functions.qa_with_structure.AnswerWithSources
An answer to the question being asked, with sources.
chains.pal.base.PALChain
Implements Program-Aided Language Models.
chains.prompt_selector.BasePromptSelector
Create a new model by parsing and validating input data from keyword arguments.
chains.prompt_selector.ConditionalPromptSelector
Prompt collection that goes through conditionals.
chains.qa_generation.base.QAGenerationChain
Create a new model by parsing and validating input data from keyword arguments.
chains.qa_with_sources.base.BaseQAWithSourcesChain
Question answering with sources over documents.
chains.qa_with_sources.base.QAWithSourcesChain
Question answering with sources over documents.
chains.qa_with_sources.loading.LoadingCallable(...)
Interface for loading the combine documents chain.
chains.qa_with_sources.retrieval.RetrievalQAWithSourcesChain
Question-answering with sources over an index.
chains.qa_with_sources.vector_db.VectorDBQAWithSourcesChain
Question-answering with sources over a vector database.
chains.query_constructor.base.StructuredQueryOutputParser
Create a new model by parsing and validating input data from keyword arguments.
chains.query_constructor.ir.Comparator(value)
Enumerator of the comparison operators.
chains.query_constructor.ir.Comparison
A comparison to a value.
chains.query_constructor.ir.Expr
Create a new model by parsing and validating input data from keyword arguments.
chains.query_constructor.ir.FilterDirective
A filtering expression.
chains.query_constructor.ir.Operation
A logical operation over other directives.
chains.query_constructor.ir.Operator(value)
Enumerator of the operations.
chains.query_constructor.ir.StructuredQuery
Create a new model by parsing and validating input data from keyword arguments.
chains.query_constructor.ir.Visitor()
Defines the interface for IR translation using the visitor pattern.
chains.query_constructor.parser.QueryTransformer
chains.query_constructor.schema.AttributeInfo
Information about a data source attribute.
chains.question_answering.__init__.LoadingCallable(...)
Interface for loading the combine documents chain.
chains.retrieval_qa.base.BaseRetrievalQA
Create a new model by parsing and validating input data from keyword arguments.
chains.retrieval_qa.base.RetrievalQA
Chain for question-answering against an index.
chains.retrieval_qa.base.VectorDBQA
Chain for question-answering against a vector database.
chains.router.base.MultiRouteChain
Use a single chain to route an input to one of multiple candidate chains.
chains.router.base.Route(destination, ...)
Create new instance of Route(destination, next_inputs).
chains.router.base.RouterChain
Chain that outputs the name of a destination chain and the inputs to it.
chains.router.embedding_router.EmbeddingRouterChain
Class that uses embeddings to route between options.
chains.router.llm_router.LLMRouterChain
A router chain that uses an LLM chain to perform routing.
chains.router.llm_router.RouterOutputParser
Parser for the output of the router chain in the multi-prompt chain.
chains.router.multi_prompt.MultiPromptChain
A multi-route chain that uses an LLM router chain to choose amongst prompts.
chains.router.multi_retrieval_qa.MultiRetrievalQAChain
A multi-route chain that uses an LLM router chain to choose amongst retrieval QA chains.
chains.sequential.SequentialChain
Chain where the outputs of one chain feed directly into the next.
chains.sequential.SimpleSequentialChain
Simple chain where the outputs of one step feed directly into the next.
chains.sql_database.base.SQLDatabaseChain
Chain for interacting with a SQL database.
chains.sql_database.base.SQLDatabaseSequentialChain
Sequential chain for querying a SQL database.
chains.summarize.__init__.LoadingCallable(...)
Interface for loading the combine documents chain.
chains.transform.TransformChain
Chain that transforms the chain output.
Functions
chains.graph_qa.cypher.extract_cypher(text)
Extract Cypher code from text.
chains.loading.load_chain(path, **kwargs)
Unified method for loading a chain from LangChainHub or the local filesystem.
chains.loading.load_chain_from_config(...)
Load a chain from a config dict.
chains.openai_functions.base.convert_python_function_to_openai_function(...)
Convert a Python function to an OpenAI function-calling API compatible dict.
chains.openai_functions.base.convert_to_openai_function(...)
Convert a raw function/class to an OpenAI function.
chains.openai_functions.base.create_openai_fn_chain(...)
Create an LLM chain that uses OpenAI functions.
chains.openai_functions.base.create_structured_output_chain(...)
Create an LLMChain that uses an OpenAI function to get a structured output.
chains.openai_functions.citation_fuzzy_match.create_citation_fuzzy_match_chain(llm)
Create a citation fuzzy match chain.
chains.openai_functions.extraction.create_extraction_chain(...)
Creates a chain that extracts information from a passage.
chains.openai_functions.extraction.create_extraction_chain_pydantic(...)
Creates a chain that extracts information from a passage using a pydantic schema.
chains.openai_functions.openapi.get_openapi_chain(spec)
Create a chain for querying an API from an OpenAPI spec.
chains.openai_functions.openapi.openapi_spec_to_openai_fn(spec)
Convert a valid OpenAPI spec to the JSON Schema format expected by OpenAI functions.
chains.openai_functions.qa_with_structure.create_qa_with_sources_chain(...)
Create a question answering chain that returns an answer with sources.
chains.openai_functions.qa_with_structure.create_qa_with_structure_chain(...)
Create a question answering chain that returns an answer with sources.
chains.openai_functions.tagging.create_tagging_chain(...)
Creates a chain that extracts information from a passage.
chains.openai_functions.tagging.create_tagging_chain_pydantic(...)
Creates a chain that extracts information from a passage.
chains.openai_functions.utils.get_llm_kwargs(...)
Returns the kwargs for the LLMChain constructor.
chains.prompt_selector.is_chat_model(llm)
Check if the language model is a chat model.
chains.prompt_selector.is_llm(llm)
Check if the language model is an LLM.
chains.qa_with_sources.loading.load_qa_with_sources_chain(llm)
Load a question answering with sources chain.
chains.query_constructor.base.load_query_constructor_chain(...)
Load a query constructor chain.
chains.query_constructor.parser.get_parser([...])
Returns a parser for the query language.
chains.question_answering.__init__.load_qa_chain(llm)
Load a question answering chain.
chains.summarize.__init__.load_summarize_chain(llm)
Load a summarizing chain.
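Most chains follow the LLMChain pattern of a prompt template bound to a model. A minimal sketch, assuming OpenAI access:

from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=OpenAI(temperature=0.9), prompt=prompt)
print(chain.run("colorful socks"))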

langchain.chat_models: Chat Models
Classes
chat_models.anthropic.ChatAnthropic
Wrapper around Anthropic's large language model.
chat_models.azure_openai.AzureChatOpenAI
Wrapper around the Azure OpenAI Chat Completion API.
chat_models.base.BaseChatModel
Create a new model by parsing and validating input data from keyword arguments.
chat_models.base.SimpleChatModel
Create a new model by parsing and validating input data from keyword arguments.
chat_models.fake.FakeListChatModel
Fake ChatModel for testing purposes.
chat_models.google_palm.ChatGooglePalm
Wrapper around Google's PaLM Chat API.
chat_models.google_palm.ChatGooglePalmError
Error raised when there is an issue with the Google PaLM API.
chat_models.human.HumanInputChatModel
ChatModel wrapper which returns user input as the response.
chat_models.jinachat.JinaChat
Wrapper for Jina AI's LLM service, providing cost-effective image chat capabilities in comparison to other LLM APIs.
chat_models.openai.ChatOpenAI
Wrapper around OpenAI Chat large language models.
chat_models.promptlayer_openai.PromptLayerChatOpenAI
Wrapper around OpenAI Chat large language models and PromptLayer.
chat_models.vertexai.ChatVertexAI
Wrapper around Vertex AI large language models.
Functions
chat_models.google_palm.chat_with_retry(llm, ...)
Use tenacity to retry the completion call.
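Chat models take a list of messages rather than a plain string. A minimal sketch with ChatOpenAI, assuming OpenAI access:

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI(temperature=0)
messages = [
    SystemMessage(content="You translate English to French."),
    HumanMessage(content="I love programming."),
]
print(chat(messages).content)  # returns an AIMessage; .content is the reply text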

langchain.client: Client
LangChain + Client.
Classes
client.runner_utils.InputFormatError
Raised when the input format is invalid.
Functions
client.runner_utils.run_llm(llm, inputs, ...)
Run the language model on the example.
client.runner_utils.run_llm_or_chain(...[, ...])
Run the Chain or language model synchronously.
client.runner_utils.run_on_dataset(...[, ...])
Run the Chain or language model on a dataset and store traces to the specified project name.
client.runner_utils.run_on_examples(...[, ...])
Run the Chain or language model on examples and store traces to the specified project name.

langchain.docstore: Docstore
Wrappers on top of docstores.
Classes
docstore.arbitrary_fn.DocstoreFn(lookup_fn)
LangChain docstore backed by an arbitrary lookup function.
docstore.base.AddableMixin()
Mixin class that supports adding texts.
docstore.base.Docstore()
Interface for accessing a place that stores documents.
docstore.in_memory.InMemoryDocstore([_dict])
Simple in-memory docstore in the form of a dict.
docstore.wikipedia.Wikipedia()
Wrapper around the Wikipedia API.
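A minimal sketch of the Docstore interface (store and look up Documents by ID), using the in-memory implementation:

from langchain.docstore.document import Document
from langchain.docstore.in_memory import InMemoryDocstore

store = InMemoryDocstore({"1": Document(page_content="hello world")})
print(store.search("1"))  # the Document, or an error string if the ID is missing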

langchain.document_loaders: Document Loaders
All different types of document loaders.
Classes
document_loaders.acreom.AcreomLoader(path[, ...])
Loader that loads an acreom vault from a directory.
document_loaders.airbyte_json.AirbyteJSONLoader(...)
Loader that loads local Airbyte JSON files.
document_loaders.airtable.AirtableLoader(...)
Loader for Airtable tables.
document_loaders.apify_dataset.ApifyDatasetLoader
Loading Documents from Apify datasets.
document_loaders.arxiv.ArxivLoader(query[, ...])
Loads a query result from arxiv.org into a list of Documents.
document_loaders.azlyrics.AZLyricsLoader(...)
Loader that loads AZLyrics webpages.
document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader(...)
Loading Documents from Azure Blob Storage.
document_loaders.azure_blob_storage_file.AzureBlobStorageFileLoader(...)
Loading Documents from Azure Blob Storage.
document_loaders.base.BaseBlobParser()
Abstract interface for blob parsers.
document_loaders.base.BaseLoader()
Interface for loading Documents.
document_loaders.bibtex.BibtexLoader(...[, ...])
Loads a BibTeX file into a list of Documents.
document_loaders.bigquery.BigQueryLoader(query)
Loads a query result from BigQuery into a list of documents.
document_loaders.bilibili.BiliBiliLoader(...)
Loader that loads bilibili transcripts.
document_loaders.blackboard.BlackboardLoader(...)
Loads all documents from a Blackboard course.
document_loaders.blob_loaders.file_system.FileSystemBlobLoader(path, *)
Blob loader for the local file system.
document_loaders.blob_loaders.schema.Blob
A blob is used to represent raw data by either reference or value.
document_loaders.blob_loaders.schema.BlobLoader()
Abstract interface for blob loader implementations.
document_loaders.blob_loaders.youtube_audio.YoutubeAudioLoader(...)
Load YouTube URLs as audio file(s).
document_loaders.blockchain.BlockchainDocumentLoader(...)
Loads elements from a blockchain smart contract into LangChain documents.
document_loaders.blockchain.BlockchainType(value)
Enumerator of the supported blockchains.
document_loaders.brave_search.BraveSearchLoader(...)
Loads a query result from the Brave Search engine into a list of Documents.
document_loaders.chatgpt.ChatGPTLoader(log_file)
Load conversations from exported ChatGPT data.
document_loaders.college_confidential.CollegeConfidentialLoader(...)
Loader that loads College Confidential webpages.
document_loaders.confluence.ConfluenceLoader(url)
Load Confluence pages.
document_loaders.confluence.ContentFormat(value)
Enumerator of the content formats of a Confluence page.
document_loaders.conllu.CoNLLULoader(file_path)
Load CoNLL-U files.
document_loaders.csv_loader.CSVLoader(file_path)
Loads a CSV file into a list of documents.
document_loaders.csv_loader.UnstructuredCSVLoader(...)
Loader that uses unstructured to load CSV files.
document_loaders.cube_semantic.CubeSemanticLoader(...)
Load Cube semantic layer metadata.
document_loaders.dataframe.DataFrameLoader(...)
Load a Pandas DataFrame.
document_loaders.diffbot.DiffbotLoader(...)
Loads Diffbot JSON files.
document_loaders.directory.DirectoryLoader(...)
Load documents from a directory.
document_loaders.discord.DiscordChatLoader(...)
Load Discord chat logs.
document_loaders.docugami.DocugamiLoader
Loads processed docs from Docugami.
document_loaders.duckdb_loader.DuckDBLoader(query)
Loads a query result from DuckDB into a list of documents.
document_loaders.email.OutlookMessageLoader(...)
Loads Outlook Message files using extract_msg.
document_loaders.email.UnstructuredEmailLoader(...)
Loader that uses unstructured to load email files.
document_loaders.embaas.BaseEmbaasLoader
Base class for loaders using the Embaas document extraction API.
document_loaders.embaas.EmbaasBlobLoader
Embaas's document byte loader.
document_loaders.embaas.EmbaasDocumentExtractionParameters
Parameters for the Embaas document extraction API.
document_loaders.embaas.EmbaasDocumentExtractionPayload
Payload for the Embaas document extraction API.
document_loaders.embaas.EmbaasLoader
Embaas's document loader.
document_loaders.epub.UnstructuredEPubLoader(...)
Loader that uses unstructured to load EPub files.
document_loaders.evernote.EverNoteLoader(...)
EverNote Loader.
document_loaders.excel.UnstructuredExcelLoader(...)
Loader that uses unstructured to load Microsoft Excel files.
document_loaders.facebook_chat.FacebookChatLoader(path)
Loads a Facebook Messages JSON directory dump.
document_loaders.fauna.FaunaLoader(query, ...)
FaunaDB Loader.
document_loaders.figma.FigmaFileLoader(...)
Loads Figma file JSON.
document_loaders.gcs_directory.GCSDirectoryLoader(...)
Loads Documents from GCS.
document_loaders.gcs_file.GCSFileLoader(...)
Load Documents from a GCS file.
document_loaders.generic.GenericLoader(...)
A generic document loader.
document_loaders.git.GitLoader(repo_path[, ...])
Loads files from a Git repository into a list of documents.
document_loaders.gitbook.GitbookLoader(web_page)
Load GitBook data.
document_loaders.github.BaseGitHubLoader
Load issues of a GitHub repository.
document_loaders.github.GitHubIssuesLoader
Load issues of a GitHub repository.
document_loaders.googledrive.GoogleDriveLoader
Loads Google Docs from Google Drive.
document_loaders.gutenberg.GutenbergLoader(...)
Loader that uses urllib to load .txt web files.
document_loaders.helpers.FileEncoding(...)
A file encoding as the NamedTuple.
document_loaders.hn.HNLoader(web_path[, ...])
Load Hacker News data from either main page results or the comments page.
document_loaders.html.UnstructuredHTMLLoader(...)
Loader that uses unstructured to load HTML files.
document_loaders.html_bs.BSHTMLLoader(file_path)
Loader that uses Beautiful Soup to parse HTML files.
document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader(path)
Load Documents from the Hugging Face Hub.
document_loaders.ifixit.IFixitLoader(web_path)
Load iFixit repair guides, device wikis and answers.
document_loaders.image.UnstructuredImageLoader(...)
Loader that uses unstructured to load image files, such as PNGs and JPGs.
document_loaders.image_captions.ImageCaptionLoader(...)
Loads the captions of an image.
document_loaders.imsdb.IMSDbLoader(web_path)
Loads IMSDb webpages.
document_loaders.iugu.IuguLoader(resource[, ...])
Loader that fetches data from IUGU.
document_loaders.joplin.JoplinLoader([...])
Loader that fetches notes from Joplin.
document_loaders.json_loader.JSONLoader(...)
Loads a JSON file using a jq schema.
document_loaders.larksuite.LarkSuiteDocLoader(...)
Loads LarkSuite (FeiShu) documents.
document_loaders.markdown.UnstructuredMarkdownLoader(...)
Loader that uses unstructured to load Markdown files.
document_loaders.mastodon.MastodonTootsLoader(...)
Mastodon toots loader.
document_loaders.max_compute.MaxComputeLoader(...)
Loads a query result from an Alibaba Cloud MaxCompute table into documents.
document_loaders.mediawikidump.MWDumpLoader(...)
Load a MediaWiki dump from an XML file.
document_loaders.merge.MergedDataLoader(loaders)
Merge documents from a list of loaders.
document_loaders.mhtml.MHTMLLoader(file_path)
Loader that uses Beautiful Soup to parse MHTML files.
document_loaders.modern_treasury.ModernTreasuryLoader(...)
Loader that fetches data from Modern Treasury.
document_loaders.notebook.NotebookLoader(path)
Loader that loads .ipynb notebook files.
document_loaders.notion.NotionDirectoryLoader(path)
Loader that loads a Notion directory dump.
document_loaders.notiondb.NotionDBLoader(...)
Notion DB Loader.
document_loaders.obsidian.ObsidianLoader(path)
Loader that loads Obsidian files from disk.
document_loaders.odt.UnstructuredODTLoader(...)
Loader that uses unstructured to load OpenOffice ODT files.
document_loaders.onedrive.OneDriveLoader
Create a new model by parsing and validating input data from keyword arguments.
document_loaders.onedrive_file.OneDriveFileLoader
Create a new model by parsing and validating input data from keyword arguments.
document_loaders.open_city_data.OpenCityDataLoader(...)
Loader that loads open city data.
document_loaders.org_mode.UnstructuredOrgModeLoader(...)
Loader that uses unstructured to load Org-Mode files.
document_loaders.parsers.audio.OpenAIWhisperParser([...])
Transcribe and parse audio files.
document_loaders.parsers.generic.MimeTypeBasedParser(...)
A parser that uses mime-types to determine how to parse a blob.
document_loaders.parsers.grobid.GrobidParser(...)
Loader that uses Grobid to load article PDF files.
document_loaders.parsers.grobid.ServerUnavailableException
Raised when the Grobid server is unavailable.
document_loaders.parsers.html.bs4.BS4HTMLParser(*)
Parser that uses Beautiful Soup to parse HTML files.
document_loaders.parsers.language.code_segmenter.CodeSegmenter(code)
The abstract class for the code segmenter.
document_loaders.parsers.language.javascript.JavaScriptSegmenter(code)
The code segmenter for JavaScript.
document_loaders.parsers.language.language_parser.LanguageParser([...])
Language parser that splits code using the respective language syntax.
document_loaders.parsers.language.python.PythonSegmenter(code)
The code segmenter for Python.
document_loaders.parsers.pdf.PDFMinerParser()
Parse PDFs with PDFMiner.
document_loaders.parsers.pdf.PDFPlumberParser([...])
Parse PDFs with PDFPlumber.
document_loaders.parsers.pdf.PyMuPDFParser([...])
Parse PDFs with PyMuPDF.
document_loaders.parsers.pdf.PyPDFParser([...])
Loads a PDF with pypdf and chunks at character level.
document_loaders.parsers.pdf.PyPDFium2Parser()
Parse PDFs with PyPDFium2.
document_loaders.parsers.txt.TextParser()
Parser for text blobs.
document_loaders.pdf.BasePDFLoader(file_path)
Base loader class for PDF files.
document_loaders.pdf.MathpixPDFLoader(file_path)
Loader that processes PDF files with the Mathpix service.
document_loaders.pdf.OnlinePDFLoader(file_path)
Loader that loads online PDFs.
document_loaders.pdf.PDFMinerLoader(file_path)
Loader that uses PDFMiner to load PDF files.
document_loaders.pdf.PDFMinerPDFasHTMLLoader(...)
Loader that uses PDFMiner to load PDF files as HTML content.
document_loaders.pdf.PDFPlumberLoader(file_path)
Loader that uses pdfplumber to load PDF files.
document_loaders.pdf.PyMuPDFLoader(file_path)
Loader that uses PyMuPDF to load PDF files.
document_loaders.pdf.PyPDFDirectoryLoader(path)
Loads a directory with PDF files with pypdf and chunks at character level.
document_loaders.pdf.PyPDFLoader(file_path)
Loads a PDF with pypdf and chunks at character level.
document_loaders.pdf.PyPDFium2Loader(file_path)
Loads a PDF with pypdfium2 and chunks at character level.
document_loaders.pdf.UnstructuredPDFLoader(...)
Loader that uses unstructured to load PDF files.
document_loaders.powerpoint.UnstructuredPowerPointLoader(...)
Loader that uses unstructured to load PowerPoint files.
document_loaders.psychic.PsychicLoader(...)
Loader that loads documents from Psychic.dev.
document_loaders.pyspark_dataframe.PySparkDataFrameLoader([...])
Load PySpark DataFrames.
document_loaders.python.PythonLoader(file_path)
Load Python files, respecting any non-default encoding if specified.
document_loaders.readthedocs.ReadTheDocsLoader(path)
Loader that loads a ReadTheDocs documentation directory dump.
document_loaders.recursive_url_loader.RecursiveUrlLoader(url)
Loader that loads all child links from a given URL.
document_loaders.reddit.RedditPostsLoader(...)
Reddit posts loader.
document_loaders.roam.RoamLoader(path)
Loader that loads Roam files from disk.
document_loaders.rst.UnstructuredRSTLoader(...)
Loader that uses unstructured to load RST files.
document_loaders.rtf.UnstructuredRTFLoader(...)
Loader that uses unstructured to load RTF files.
document_loaders.s3_directory.S3DirectoryLoader(bucket)
Loading logic for loading documents from S3.
document_loaders.s3_file.S3FileLoader(...)
Loading logic for loading documents from S3.
document_loaders.sitemap.SitemapLoader(web_path)
Loader that fetches a sitemap and loads those URLs.
document_loaders.slack_directory.SlackDirectoryLoader(...)
Loader for loading documents from a Slack directory dump.
document_loaders.snowflake_loader.SnowflakeLoader(...)
Loads a query result from Snowflake into a list of documents.
document_loaders.spreedly.SpreedlyLoader(...)
Loader that fetches data from the Spreedly API.
document_loaders.srt.SRTLoader(file_path)
Loader for .srt (subtitle) files.
document_loaders.stripe.StripeLoader(resource)
Loader that fetches data from Stripe.
document_loaders.telegram.TelegramChatApiLoader([...])
Loader that loads a Telegram chat JSON directory dump.
document_loaders.telegram.TelegramChatFileLoader(path)
Loader that loads a Telegram chat JSON directory dump.
document_loaders.tencent_cos_directory.TencentCOSDirectoryLoader(...)
Loading logic for loading documents from Tencent Cloud COS.
document_loaders.tencent_cos_file.TencentCOSFileLoader(...)
Loading logic for loading documents from Tencent Cloud COS.
document_loaders.text.TextLoader(file_path)
Load text files.
document_loaders.tomarkdown.ToMarkdownLoader(...)
Loader that loads HTML to Markdown using 2markdown.
document_loaders.toml.TomlLoader(source)
A TOML document loader that inherits from the BaseLoader class.
document_loaders.trello.TrelloLoader(client, ...)
Trello loader.
document_loaders.twitter.TwitterTweetLoader(...)
Twitter tweets loader.
document_loaders.unstructured.UnstructuredAPIFileIOLoader(file)
UnstructuredAPIFileIOLoader uses the Unstructured API to load files.
document_loaders.unstructured.UnstructuredAPIFileLoader([...])
UnstructuredAPIFileLoader uses the Unstructured API to load files.
document_loaders.unstructured.UnstructuredBaseLoader([mode])
Loader that uses unstructured to load files.
document_loaders.unstructured.UnstructuredFileIOLoader(file)
UnstructuredFileIOLoader uses unstructured to load files.
document_loaders.unstructured.UnstructuredFileLoader(...)
UnstructuredFileLoader uses unstructured to load files.
document_loaders.url.UnstructuredURLLoader(urls)
Loader that uses unstructured to load HTML files.
document_loaders.url_playwright.PlaywrightURLLoader(urls)
Loader that uses Playwright to load a page and unstructured to load the HTML.
document_loaders.url_selenium.SeleniumURLLoader(urls)
Loader that uses Selenium to load a page and unstructured to load the HTML.
document_loaders.weather.WeatherDataLoader(...)
Weather Reader.
document_loaders.web_base.WebBaseLoader(web_path)
Loader that uses urllib and Beautiful Soup to load webpages.
document_loaders.whatsapp_chat.WhatsAppChatLoader(path)
Loader that loads a WhatsApp messages text file.
document_loaders.wikipedia.WikipediaLoader(query)
Loads a query result from www.wikipedia.org into a list of Documents.
document_loaders.word_document.Docx2txtLoader(...)
Loads a DOCX with docx2txt and chunks at character level.
document_loaders.word_document.UnstructuredWordDocumentLoader(...)
Loader that uses unstructured to load Word documents.
document_loaders.xml.UnstructuredXMLLoader(...)
Loader that uses unstructured to load XML files.
document_loaders.youtube.GoogleApiYoutubeLoader(...)
Loader that loads all videos from a channel.
document_loaders.youtube.YoutubeLoader(video_id)
Loader that loads YouTube transcripts.
Functions
document_loaders.chatgpt.concatenate_rows(...)
Combine message information in a readable format ready to be used.
document_loaders.facebook_chat.concatenate_rows(row)
Combine message information in a readable format ready to be used.
document_loaders.helpers.detect_file_encodings(...)
Try to detect the file encoding.
document_loaders.notebook.concatenate_cells(...)
Combine cell information in a readable format ready to be used.
document_loaders.notebook.remove_newlines(x)
Recursively remove newlines, no matter the data structure they are stored in.
document_loaders.parsers.registry.get_parser(...)
Get a parser by parser name.
document_loaders.telegram.concatenate_rows(row)
Combine message information in a readable format ready to be used.
document_loaders.telegram.text_to_docs(text)
Converts a string or list of strings to a list of Documents with metadata.
document_loaders.unstructured.get_elements_from_api([...])
Retrieves a list of elements from the Unstructured API.
document_loaders.unstructured.satisfies_min_unstructured_version(...)
Checks to see if the installed unstructured version exceeds the minimum version for the feature in question.
document_loaders.unstructured.validate_unstructured_version(...)
Raises an error if the unstructured version does not exceed the specified minimum.
document_loaders.whatsapp_chat.concatenate_rows(...)
Combine message information in a readable format ready to be used.
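All of these share the BaseLoader interface: load() returns a list of Documents carrying page_content and metadata. A minimal sketch (the file path is illustrative):

from langchain.document_loaders import TextLoader

loader = TextLoader("state_of_the_union.txt")
docs = loader.load()
print(docs[0].metadata["source"], len(docs[0].page_content))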

langchain.document_transformers: Document Transformers
Transform documents.
Classes
document_transformers.EmbeddingsClusteringFilter
Perform K-means clustering on document vectors.
document_transformers.EmbeddingsRedundantFilter
Filter that drops redundant documents by comparing their embeddings.
Functions
document_transformers.get_stateful_documents(...)
Convert a list of documents to a list of documents with state.

langchain.embeddings: Embeddings
Wrappers around embedding modules.
Classes
embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding
Wrapper for Aleph Alpha's asymmetric embeddings; Aleph Alpha provides an endpoint to embed a document and a query.
embeddings.aleph_alpha.AlephAlphaSymmetricSemanticEmbedding
The symmetric version of Aleph Alpha's semantic embeddings.
embeddings.base.Embeddings()
Interface for embedding models.
embeddings.bedrock.BedrockEmbeddings
Embeddings provider to invoke Bedrock embedding models.
embeddings.clarifai.ClarifaiEmbeddings
Wrapper around Clarifai embedding models.
embeddings.cohere.CohereEmbeddings
Wrapper around Cohere embedding models.
embeddings.dashscope.DashScopeEmbeddings
Wrapper around DashScope embedding models.
embeddings.deepinfra.DeepInfraEmbeddings
Wrapper around Deep Infra's embedding inference service.
embeddings.elasticsearch.ElasticsearchEmbeddings(...)
Wrapper around Elasticsearch embedding models.
embeddings.embaas.EmbaasEmbeddings
Wrapper around Embaas's embedding service.
embeddings.embaas.EmbaasEmbeddingsPayload
Payload for the Embaas embeddings API.
embeddings.fake.FakeEmbeddings
Create a new model by parsing and validating input data from keyword arguments.
embeddings.google_palm.GooglePalmEmbeddings
Create a new model by parsing and validating input data from keyword arguments.
embeddings.huggingface.HuggingFaceEmbeddings
Wrapper around sentence_transformers embedding models.
embeddings.huggingface.HuggingFaceInstructEmbeddings
Wrapper around sentence_transformers embedding models.
embeddings.huggingface_hub.HuggingFaceHubEmbeddings
Wrapper around HuggingFaceHub embedding models.
embeddings.jina.JinaEmbeddings
Create a new model by parsing and validating input data from keyword arguments.
embeddings.llamacpp.LlamaCppEmbeddings
Wrapper around llama.cpp embedding models.
embeddings.minimax.MiniMaxEmbeddings
Wrapper around MiniMax's embedding inference service.
embeddings.modelscope_hub.ModelScopeEmbeddings
Wrapper around modelscope_hub embedding models.
embeddings.mosaicml.MosaicMLInstructorEmbeddings
Wrapper around MosaicML's embedding inference service.
embeddings.octoai_embeddings.OctoAIEmbeddings
Wrapper around OctoAI Compute Service embedding models.
embeddings.openai.OpenAIEmbeddings
Wrapper around OpenAI embedding models.
embeddings.sagemaker_endpoint.EmbeddingsContentHandler()
Content handler for embedding models.
embeddings.sagemaker_endpoint.SagemakerEndpointEmbeddings
Wrapper around custom SageMaker inference endpoints.
embeddings.self_hosted.SelfHostedEmbeddings
Runs custom embedding models on self-hosted remote hardware.
embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings
Runs sentence_transformers embedding models on self-hosted remote hardware.
embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings
Runs InstructorEmbedding embedding models on self-hosted remote hardware.
embeddings.spacy_embeddings.SpacyEmbeddings
Class for generating embeddings using the spaCy library.
embeddings.tensorflow_hub.TensorflowHubEmbeddings
Wrapper around tensorflow_hub embedding models.
embeddings.vertexai.VertexAIEmbeddings
Create a new model by parsing and validating input data from keyword arguments.
Functions
embeddings.dashscope.embed_with_retry(...)
Use tenacity to retry the embedding call.
embeddings.google_palm.embed_with_retry(...)
Use tenacity to retry the completion call.
embeddings.minimax.embed_with_retry(...)
Use tenacity to retry the completion call.
embeddings.openai.embed_with_retry(...)
Use tenacity to retry the embedding call.
embeddings.self_hosted_hugging_face.load_embedding_model(...)
Load the embedding model.
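Every wrapper implements the same two methods from embeddings.base.Embeddings. A minimal sketch with OpenAIEmbeddings, assuming OPENAI_API_KEY is set:

from langchain.embeddings import OpenAIEmbeddings

emb = OpenAIEmbeddings()
vectors = emb.embed_documents(["hello", "world"])  # one vector per text
query_vector = emb.embed_query("greeting")         # single query vector
print(len(vectors), len(query_vector))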
langchain.env: Env¶
Functions¶
env.get_runtime_environment()
Get information about the environment.
langchain.evaluation: Evaluation¶
Evaluation chains for grading LLM and Chain outputs.
This module contains off-the-shelf evaluation chains for grading the output of LangChain primitives such as language models and chains.
Loading an evaluator
To load an evaluator, you can use the load_evaluators or load_evaluator functions with the names of the evaluators to load.
from langchain.evaluation import load_evaluator
evaluator = load_evaluator("qa")
evaluator.evaluate_strings(
    prediction="We sold more than 40,000 units last week",
    input="How many units did we sell last week?",
    reference="We sold 32,378 units",
)
The evaluator must be one of EvaluatorType.
Datasets
To load one of the LangChain HuggingFace datasets, you can use the load_dataset function with the name of the dataset to load.
from langchain.evaluation import load_dataset
ds = load_dataset("llm-math")
Some common use cases for evaluation include:
Grading the accuracy of a response against ground truth answers: QAEvalChain
Comparing the output of two models: PairwiseStringEvalChain
Judging the efficacy of an agent's tool usage: TrajectoryEvalChain
Checking whether an output complies with a set of criteria: CriteriaEvalChain
Computing semantic difference between a prediction and reference: EmbeddingDistanceEvalChain, or between two predictions: PairwiseEmbeddingDistanceEvalChain
Measuring the string distance between a prediction and reference: StringDistanceEvalChain, or between two predictions: PairwiseStringDistanceEvalChain
Low-level API
These evaluators implement one of the following interfaces:
StringEvaluator: Evaluate a prediction string against a reference label and/or input context.
PairwiseStringEvaluator: Evaluate two prediction strings against each other. Useful for scoring preferences, measuring similarity between two chain or LLM agents, or comparing outputs on similar inputs.
AgentTrajectoryEvaluator: Evaluate the full sequence of actions taken by an agent.
These interfaces enable easier composability and usage within a higher-level evaluation framework.
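For instance, the string-distance evaluators implement the StringEvaluator interface and need no LLM at all; a minimal sketch (assumes the rapidfuzz package, which backs the string-distance chains, is installed):
from langchain.evaluation import load_evaluator

string_evaluator = load_evaluator("string_distance")
result = string_evaluator.evaluate_strings(prediction="LangChain", reference="langchain")
# result is a dict with a "score" key; for a distance metric, lower means closer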
"38cccc443c5d-33", "text": "evaluation.qa.eval_chain.CotQAEvalChain\nLLM Chain specifically for evaluating QA using chain of thought reasoning.\nevaluation.qa.eval_chain.QAEvalChain\nLLM Chain specifically for evaluating question answering.\nevaluation.qa.generate_chain.QAGenerateChain\nLLM Chain specifically for generating examples for question answering.\nevaluation.run_evaluators.base.RunEvaluatorChain\nEvaluate Run and optional examples.\nevaluation.run_evaluators.base.RunEvaluatorOutputParser\nParse the output of a run.\nevaluation.run_evaluators.implementations.ChoicesOutputParser\nParse a feedback run with optional choices.\nevaluation.run_evaluators.implementations.CriteriaOutputParser\nParse a criteria results into an evaluation result.\nevaluation.run_evaluators.implementations.StringRunEvaluatorInputMapper\nMaps the Run and Optional[Example] to a dictionary.\nevaluation.run_evaluators.implementations.TrajectoryInputMapper\nMaps the Run and Optional[Example] to a dictionary.\nevaluation.run_evaluators.implementations.TrajectoryRunEvalOutputParser\nCreate a new model by parsing and validating input data from keyword arguments.\nevaluation.run_evaluators.string_run_evaluator.ChainStringRunMapper\nExtract items to evaluate from the run object from a chain.\nevaluation.run_evaluators.string_run_evaluator.LLMStringRunMapper\nExtract items to evaluate from the run object.\nevaluation.run_evaluators.string_run_evaluator.StringExampleMapper\nMap an example, or row in the dataset, to the inputs of an evaluation.\nevaluation.run_evaluators.string_run_evaluator.StringRunEvaluatorChain\nEvaluate Run and optional examples.\nevaluation.run_evaluators.string_run_evaluator.StringRunMapper\nExtract items to evaluate from the run object.\nevaluation.run_evaluators.string_run_evaluator.ToolStringRunMapper\nMap an input to the tool.\nevaluation.schema.AgentTrajectoryEvaluator()\nInterface for evaluating agent trajectories.", "source": "https://api.python.langchain.com/en/latest/api_reference.html"} {"id": "38cccc443c5d-34", "text": "evaluation.schema.AgentTrajectoryEvaluator()\nInterface for evaluating agent trajectories.\nevaluation.schema.EvaluatorType(value[,\u00a0...])\nThe types of the evaluators.\nevaluation.schema.LLMEvalChain\nA base class for evaluators that use an LLM.\nevaluation.schema.PairwiseStringEvaluator()\nCompare the output of two models (or two outputs of the same model).\nevaluation.schema.StringEvaluator()\nGrade, tag, or otherwise evaluate predictions relative to their inputs and/or reference labels.\nevaluation.string_distance.base.PairwiseStringDistanceEvalChain\nCompute string edit distances between two predictions.\nevaluation.string_distance.base.StringDistance(value)\nDistance metric to use.\nevaluation.string_distance.base.StringDistanceEvalChain\nCompute string distances between the prediction and the reference.\nFunctions\u00b6\nevaluation.loading.load_dataset(uri)\nLoad a dataset from the LangChainDatasets HuggingFace org.\nevaluation.loading.load_evaluator(evaluator,\u00a0*)\nLoad the requested evaluation chain specified by a string.\nevaluation.loading.load_evaluators(evaluators,\u00a0*)\nLoad evaluators specified by a list of evaluator types.\nevaluation.run_evaluators.implementations.get_criteria_evaluator(...)\nGet an eval chain for grading a model's response against a map of criteria.\nevaluation.run_evaluators.implementations.get_qa_evaluator(llm,\u00a0*)\nGet an eval chain that compares response against ground 
langchain.example_generator: Example Generator¶
Utility functions for working with prompts.
Functions¶
example_generator.generate_example(examples, ...)
Return another example given a list of examples for a prompt.
langchain.experimental: Experimental¶
Classes¶
experimental.autonomous_agents.autogpt.memory.AutoGPTMemory
Create a new model by parsing and validating input data from keyword arguments.
experimental.autonomous_agents.autogpt.output_parser.AutoGPTAction(...)
Create a new instance of AutoGPTAction(name, args).
experimental.autonomous_agents.autogpt.output_parser.AutoGPTOutputParser
Create a new model by parsing and validating input data from keyword arguments.
experimental.autonomous_agents.autogpt.output_parser.BaseAutoGPTOutputParser
Create a new model by parsing and validating input data from keyword arguments.
experimental.autonomous_agents.autogpt.prompt.AutoGPTPrompt
Create a new model by parsing and validating input data from keyword arguments.
experimental.autonomous_agents.baby_agi.baby_agi.BabyAGI
Controller model for the BabyAGI agent.
experimental.autonomous_agents.baby_agi.task_creation.TaskCreationChain
Chain to generate tasks.
experimental.autonomous_agents.baby_agi.task_execution.TaskExecutionChain
Chain to execute tasks.
experimental.autonomous_agents.baby_agi.task_prioritization.TaskPrioritizationChain
Chain to prioritize tasks.
experimental.generative_agents.generative_agent.GenerativeAgent
A character with memory and innate characteristics.
experimental.generative_agents.memory.GenerativeAgentMemory
Create a new model by parsing and validating input data from keyword arguments.
experimental.llms.jsonformer_decoder.JsonFormer
Create a new model by parsing and validating input data from keyword arguments.
experimental.llms.rellm_decoder.RELLM
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.agent_executor.PlanAndExecute
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.executors.base.BaseExecutor
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.executors.base.ChainExecutor
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.planners.base.BasePlanner
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.planners.base.LLMPlanner
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.planners.chat_planner.PlanningOutputParser
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.schema.BaseStepContainer
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.schema.ListStepContainer
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.schema.Plan
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.schema.PlanOutputParser
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.schema.Step
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.schema.StepResponse
Create a new model by parsing and validating input data from keyword arguments.
Functions¶
experimental.autonomous_agents.autogpt.output_parser.preprocess_json_input(...)
Preprocess a string to be parsed as JSON.
experimental.autonomous_agents.autogpt.prompt_generator.get_prompt(tools)
Generate a prompt string.
experimental.llms.jsonformer_decoder.import_jsonformer()
Lazily import jsonformer.
experimental.llms.rellm_decoder.import_rellm()
Lazily import rellm.
experimental.plan_and_execute.executors.agent_executor.load_agent_executor(...)
Load an agent executor.
experimental.plan_and_execute.planners.chat_planner.load_chat_planner(llm)
Load a chat planner.
langchain.formatting: Formatting¶
Utilities for formatting strings.
Classes¶
formatting.StrictFormatter()
A subclass of formatter that checks for extra keys.
langchain.graphs: Graphs¶
Graph implementations.
Classes¶
graphs.networkx_graph.KnowledgeTriple(...)
A triple in the graph.
Functions¶
graphs.networkx_graph.get_entities(entity_str)
Extract entities from the entity string.
graphs.networkx_graph.parse_triples(...)
Parse knowledge triples from the knowledge string.
langchain.indexes: Indexes¶
All index utils.
Classes¶
indexes.graph.GraphIndexCreator
Functionality to create a graph index.
indexes.vectorstore.VectorStoreIndexWrapper
Wrapper around a vectorstore for easy access.
indexes.vectorstore.VectorstoreIndexCreator
Logic for creating indexes.
langchain.input: Input¶
Handle chained inputs.
Functions¶
input.get_bolded_text(text)
Get bolded text.
input.get_color_mapping(items[, excluded_colors])
Get a mapping from items to a supported color.
input.get_colored_text(text, color)
Get colored text.
input.print_text(text[, color, end, file])
Print text with highlighting and no end characters.
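A small sketch of these helpers (the color name "blue" is assumed to be in the supported palette):
from langchain.input import get_colored_text, print_text

banner = get_colored_text("Entering new chain...", "blue")  # wraps text in ANSI codes
print_text("Hello", color="blue", end="\n")  # prints the highlighted text to stdout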
langchain.llms: LLMs¶
Wrappers on top of large language model APIs.
Classes¶
llms.ai21.AI21
Wrapper around AI21 large language models.
llms.ai21.AI21PenaltyData
Parameters for AI21 penalty data.
llms.aleph_alpha.AlephAlpha
Wrapper around Aleph Alpha large language models.
llms.amazon_api_gateway.AmazonAPIGateway
Wrapper around a custom Amazon API Gateway.
llms.anthropic.Anthropic
Wrapper around Anthropic's large language models.
llms.anyscale.Anyscale
Wrapper around Anyscale Services.
llms.aviary.Aviary
Allows you to use an Aviary.
llms.azureml_endpoint.AzureMLEndpointClient(...)
Wrapper around the AzureML Managed Online Endpoint client.
llms.azureml_endpoint.AzureMLOnlineEndpoint
Wrapper around Azure ML hosted models using Managed Online Endpoints.
llms.azureml_endpoint.DollyContentFormatter()
Content handler for the Dolly-v2-12b model.
llms.azureml_endpoint.HFContentFormatter()
Content handler for LLMs from the HuggingFace catalog.
llms.azureml_endpoint.OSSContentFormatter()
Content handler for LLMs from the OSS catalog.
llms.bananadev.Banana
Wrapper around Banana large language models.
llms.base.BaseLLM
LLM wrapper that should take in a prompt and return a string.
llms.base.LLM
LLM class that expects subclasses to implement a simpler call method.
llms.baseten.Baseten
Use your Baseten models in LangChain.
llms.beam.Beam
Wrapper around the Beam API for the gpt2 large language model.
llms.bedrock.Bedrock
LLM provider to invoke Bedrock models.
llms.cerebriumai.CerebriumAI
Wrapper around CerebriumAI large language models.
llms.clarifai.Clarifai
Wrapper around Clarifai's large language models.
llms.cohere.Cohere
Wrapper around Cohere large language models.
llms.ctransformers.CTransformers
Wrapper around the C Transformers LLM interface.
llms.databricks.Databricks
LLM wrapper around a Databricks serving endpoint or a cluster driver proxy app.
llms.deepinfra.DeepInfra
Wrapper around DeepInfra deployed models.
llms.fake.FakeListLLM
Fake LLM wrapper for testing purposes.
llms.forefrontai.ForefrontAI
Wrapper around ForefrontAI large language models.
llms.google_palm.GooglePalm
Create a new model by parsing and validating input data from keyword arguments.
llms.gooseai.GooseAI
Wrapper around GooseAI large language models.
llms.gpt4all.GPT4All
Wrapper around GPT4All language models.
llms.huggingface_endpoint.HuggingFaceEndpoint
Wrapper around HuggingFaceHub Inference Endpoints.
llms.huggingface_hub.HuggingFaceHub
Wrapper around HuggingFaceHub models.
llms.huggingface_pipeline.HuggingFacePipeline
Wrapper around the HuggingFace Pipeline API.
llms.huggingface_text_gen_inference.HuggingFaceTextGenInference
HuggingFace text generation inference API.
llms.human.HumanInputLLM
An LLM wrapper which returns user input as the response.
llms.llamacpp.LlamaCpp
Wrapper around the llama.cpp model.
llms.manifest.ManifestWrapper
Wrapper around HazyResearch's Manifest library.
llms.modal.Modal
Wrapper around Modal large language models.
llms.mosaicml.MosaicML
Wrapper around MosaicML's LLM inference service.
llms.nlpcloud.NLPCloud
Wrapper around NLPCloud large language models.
llms.octoai_endpoint.OctoAIEndpoint
Wrapper around OctoAI Inference Endpoints.
llms.openai.AzureOpenAI
Wrapper around Azure-specific OpenAI large language models.
llms.openai.BaseOpenAI
Wrapper around OpenAI large language models.
llms.openai.OpenAI
Wrapper around OpenAI large language models.
llms.openai.OpenAIChat
Wrapper around OpenAI Chat large language models.
llms.openllm.IdentifyingParams
Parameters for identifying a model as a typed dict.
llms.openllm.OpenLLM
Wrapper for accessing OpenLLM, supporting both in-process model instances and remote OpenLLM servers.
llms.openlm.OpenLM
Create a new model by parsing and validating input data from keyword arguments.
llms.petals.Petals
Wrapper around Petals Bloom models.
llms.pipelineai.PipelineAI
Wrapper around PipelineAI large language models.
llms.predictionguard.PredictionGuard
Wrapper around Prediction Guard large language models.
llms.promptlayer_openai.PromptLayerOpenAI
Wrapper around OpenAI large language models.
llms.promptlayer_openai.PromptLayerOpenAIChat
Wrapper around OpenAI large language models.
llms.replicate.Replicate
Wrapper around Replicate models.
llms.rwkv.RWKV
Wrapper around RWKV language models.
llms.sagemaker_endpoint.ContentHandlerBase()
A handler class to transform input from the LLM to a format that the SageMaker endpoint expects.
llms.sagemaker_endpoint.LLMContentHandler()
Content handler for the LLM class.
llms.sagemaker_endpoint.SagemakerEndpoint
Wrapper around custom Sagemaker Inference Endpoints.
llms.self_hosted.SelfHostedPipeline
Run model inference on self-hosted remote hardware.
llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM
Wrapper around the HuggingFace Pipeline API to run on self-hosted remote hardware.
llms.stochasticai.StochasticAI
Wrapper around StochasticAI large language models.
llms.textgen.TextGen
Wrapper around the text-generation-webui model.
llms.vertexai.VertexAI
Wrapper around Google Vertex AI large language models.
llms.writer.Writer
Wrapper around Writer large language models.
Functions¶
llms.aviary.get_completions(model, prompt[, ...])
Get completions from Aviary models.
llms.aviary.get_models()
List available models.
llms.base.create_base_retry_decorator(...[, ...])
Create a retry decorator for a given LLM and provided list of error types.
llms.base.get_prompts(params, prompts)
Get prompts that are already cached.
llms.base.update_cache(existing_prompts, ...)
Update the cache and get the LLM output.
llms.cohere.completion_with_retry(llm, **kwargs)
Use tenacity to retry the completion call.
llms.databricks.get_default_api_token()
Get the default Databricks personal access token.
llms.databricks.get_default_host()
Get the default Databricks workspace hostname.
llms.databricks.get_repl_context()
Get the notebook REPL context if running inside a Databricks notebook.
llms.google_palm.generate_with_retry(llm, ...)
Use tenacity to retry the completion call.
llms.loading.load_llm(file)
Load an LLM from a file.
llms.loading.load_llm_from_config(config)
Load an LLM from a config dict.
llms.openai.completion_with_retry(llm, **kwargs)
Use tenacity to retry the completion call.
llms.openai.update_token_usage(keys, ...)
Update token usage.
llms.utils.enforce_stop_tokens(text, stop)
Cut off the text as soon as any stop words occur.
llms.vertexai.completion_with_retry(llm, ...)
Use tenacity to retry the completion call.
llms.vertexai.is_codey_model(model_name)
Return True if the model name is a Codey model.
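Every wrapper above shares the base LLM interface: call the instance with a prompt string and get a completion string back. A key-free sketch with FakeListLLM, together with the enforce_stop_tokens helper listed above:
from langchain.llms import FakeListLLM
from langchain.llms.utils import enforce_stop_tokens

llm = FakeListLLM(responses=["Paris.\nObservation: done"])
text = llm("What is the capital of France?")  # replays the next canned response
enforce_stop_tokens(text, ["Observation:"])   # -> 'Paris.\n'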
langchain.load: Load¶
Classes¶
load.serializable.BaseSerialized
Base class for serialized objects.
load.serializable.Serializable
Serializable base class.
load.serializable.SerializedConstructor
Serialized constructor.
load.serializable.SerializedNotImplemented
Serialized not implemented.
load.serializable.SerializedSecret
Serialized secret.
Functions¶
load.dump.default(obj)
Return a default value for a Serializable object or a SerializedNotImplemented object.
load.dump.dumpd(obj)
Return a JSON dict representation of an object.
load.dump.dumps(obj, *[, pretty])
Return a JSON string representation of an object.
load.load.loads(text, *[, secrets_map])
Load a JSON object from a string.
load.serializable.to_json_not_implemented(obj)
Serialize a "not implemented" object.
langchain.math_utils: Math Utils¶
Math utils.
Functions¶
math_utils.cosine_similarity(X, Y)
Row-wise cosine similarity between two equal-width matrices.
math_utils.cosine_similarity_top_k(X, Y[, ...])
Row-wise cosine similarity with optional top-k and score threshold filtering.
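A worked example of the row-wise semantics: every row of X is compared against every row of Y, so two 2-vectors against two 2-vectors yield a 2x2 matrix:
from langchain.math_utils import cosine_similarity

X = [[1.0, 0.0], [0.0, 1.0]]
Y = [[1.0, 0.0], [1.0, 1.0]]
sims = cosine_similarity(X, Y)
# sims[0] is approximately [1.0, 0.7071]; sims[1] is approximately [0.0, 0.7071]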
"https://api.python.langchain.com/en/latest/api_reference.html"} {"id": "38cccc443c5d-44", "text": "Basic in-memory entity store.\nmemory.entity.RedisEntityStore\nRedis-backed Entity store.\nmemory.entity.SQLiteEntityStore\nSQLite-backed Entity store\nmemory.kg.ConversationKGMemory\nKnowledge graph memory for storing conversation memory.\nmemory.motorhead_memory.MotorheadMemory\nCreate a new model by parsing and validating input data from keyword arguments.\nmemory.readonly.ReadOnlySharedMemory\nA memory wrapper that is read-only and cannot be changed.\nmemory.simple.SimpleMemory\nSimple memory for storing context or other bits of information that shouldn't ever change between prompts.\nmemory.summary.ConversationSummaryMemory\nConversation summarizer to memory.\nmemory.summary.SummarizerMixin\nCreate a new model by parsing and validating input data from keyword arguments.\nmemory.summary_buffer.ConversationSummaryBufferMemory\nBuffer with summarizer for storing conversation memory.\nmemory.token_buffer.ConversationTokenBufferMemory\nBuffer for storing conversation memory.\nmemory.vectorstore.VectorStoreRetrieverMemory\nClass for a VectorStore-backed memory object.\nFunctions\u00b6\nmemory.chat_message_histories.sql.create_message_model(...)\nCreate a message model for a given table name.\nmemory.utils.get_prompt_input_key(inputs,\u00a0...)\nGet the prompt input key.\nlangchain.output_parsers: Output Parsers\u00b6\nClasses\u00b6\noutput_parsers.boolean.BooleanOutputParser\nCreate a new model by parsing and validating input data from keyword arguments.\noutput_parsers.combining.CombiningOutputParser\nClass to combine multiple output parsers into one.\noutput_parsers.datetime.DatetimeOutputParser\nCreate a new model by parsing and validating input data from keyword arguments.\noutput_parsers.enum.EnumOutputParser\nCreate a new model by parsing and validating input data from keyword arguments.\noutput_parsers.fix.OutputFixingParser\nWraps a parser and tries to fix parsing errors.\noutput_parsers.list.CommaSeparatedListOutputParser\nParse out comma separated lists.", "source": "https://api.python.langchain.com/en/latest/api_reference.html"} {"id": "38cccc443c5d-45", "text": "output_parsers.list.CommaSeparatedListOutputParser\nParse out comma separated lists.\noutput_parsers.list.ListOutputParser\nClass to parse the output of an LLM call to a list.\noutput_parsers.openai_functions.JsonKeyOutputFunctionsParser\nCreate a new model by parsing and validating input data from keyword arguments.\noutput_parsers.openai_functions.JsonOutputFunctionsParser\nCreate a new model by parsing and validating input data from keyword arguments.\noutput_parsers.openai_functions.OutputFunctionsParser\nCreate a new model by parsing and validating input data from keyword arguments.\noutput_parsers.openai_functions.PydanticAttrOutputFunctionsParser\nCreate a new model by parsing and validating input data from keyword arguments.\noutput_parsers.openai_functions.PydanticOutputFunctionsParser\nCreate a new model by parsing and validating input data from keyword arguments.\noutput_parsers.pydantic.PydanticOutputParser\nCreate a new model by parsing and validating input data from keyword arguments.\noutput_parsers.rail_parser.GuardrailsOutputParser\nCreate a new model by parsing and validating input data from keyword arguments.\noutput_parsers.regex.RegexParser\nClass to parse the output into a dictionary.\noutput_parsers.regex_dict.RegexDictParser\nClass to parse the output into a 
langchain.output_parsers: Output Parsers¶
Classes¶
output_parsers.boolean.BooleanOutputParser
Create a new model by parsing and validating input data from keyword arguments.
output_parsers.combining.CombiningOutputParser
Class to combine multiple output parsers into one.
output_parsers.datetime.DatetimeOutputParser
Create a new model by parsing and validating input data from keyword arguments.
output_parsers.enum.EnumOutputParser
Create a new model by parsing and validating input data from keyword arguments.
output_parsers.fix.OutputFixingParser
Wraps a parser and tries to fix parsing errors.
output_parsers.list.CommaSeparatedListOutputParser
Parse out comma-separated lists.
output_parsers.list.ListOutputParser
Class to parse the output of an LLM call to a list.
output_parsers.openai_functions.JsonKeyOutputFunctionsParser
Create a new model by parsing and validating input data from keyword arguments.
output_parsers.openai_functions.JsonOutputFunctionsParser
Create a new model by parsing and validating input data from keyword arguments.
output_parsers.openai_functions.OutputFunctionsParser
Create a new model by parsing and validating input data from keyword arguments.
output_parsers.openai_functions.PydanticAttrOutputFunctionsParser
Create a new model by parsing and validating input data from keyword arguments.
output_parsers.openai_functions.PydanticOutputFunctionsParser
Create a new model by parsing and validating input data from keyword arguments.
output_parsers.pydantic.PydanticOutputParser
Create a new model by parsing and validating input data from keyword arguments.
output_parsers.rail_parser.GuardrailsOutputParser
Create a new model by parsing and validating input data from keyword arguments.
output_parsers.regex.RegexParser
Class to parse the output into a dictionary.
output_parsers.regex_dict.RegexDictParser
Class to parse the output into a dictionary.
output_parsers.retry.RetryOutputParser
Wraps a parser and tries to fix parsing errors.
output_parsers.retry.RetryWithErrorOutputParser
Wraps a parser and tries to fix parsing errors.
output_parsers.structured.ResponseSchema
Create a new model by parsing and validating input data from keyword arguments.
output_parsers.structured.StructuredOutputParser
Create a new model by parsing and validating input data from keyword arguments.
Functions¶
output_parsers.json.parse_and_check_json_markdown(...)
Parse a JSON string from a Markdown string and check that it contains the expected keys.
output_parsers.json.parse_json_markdown(...)
Parse a JSON string from a Markdown string.
output_parsers.loading.load_output_parser(config)
Load an output parser.
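The common pattern across these parsers: get_format_instructions tells the model how to answer, and parse turns the raw completion into a Python value. A minimal sketch:
from langchain.output_parsers import CommaSeparatedListOutputParser

parser = CommaSeparatedListOutputParser()
instructions = parser.get_format_instructions()  # text to append to the prompt
parser.parse("red, green, blue")  # -> ['red', 'green', 'blue']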
langchain.prompts: Prompts¶
Prompt template classes.
Classes¶
prompts.base.StringPromptTemplate
String prompt that exposes the format method, returning a prompt.
prompts.base.StringPromptValue
Create a new model by parsing and validating input data from keyword arguments.
prompts.chat.AIMessagePromptTemplate
Create a new model by parsing and validating input data from keyword arguments.
prompts.chat.BaseChatPromptTemplate
Create a new model by parsing and validating input data from keyword arguments.
prompts.chat.BaseMessagePromptTemplate
Create a new model by parsing and validating input data from keyword arguments.
prompts.chat.BaseStringMessagePromptTemplate
Create a new model by parsing and validating input data from keyword arguments.
prompts.chat.ChatMessagePromptTemplate
Create a new model by parsing and validating input data from keyword arguments.
prompts.chat.ChatPromptTemplate
Create a new model by parsing and validating input data from keyword arguments.
prompts.chat.ChatPromptValue
Create a new model by parsing and validating input data from keyword arguments.
prompts.chat.HumanMessagePromptTemplate
Create a new model by parsing and validating input data from keyword arguments.
prompts.chat.MessagesPlaceholder
Prompt template that assumes the variable is already a list of messages.
prompts.chat.SystemMessagePromptTemplate
Create a new model by parsing and validating input data from keyword arguments.
prompts.example_selector.base.BaseExampleSelector()
Interface for selecting examples to include in prompts.
prompts.example_selector.length_based.LengthBasedExampleSelector
Select examples based on length.
prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector
Select and order examples based on ngram overlap score (sentence_bleu score).
prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector
ExampleSelector that selects examples based on Max Marginal Relevance.
prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector
Example selector that selects examples based on semantic similarity.
prompts.few_shot.FewShotPromptTemplate
Prompt template that contains few-shot examples.
prompts.few_shot_with_templates.FewShotPromptWithTemplates
Prompt template that contains few-shot examples.
prompts.pipeline.PipelinePromptTemplate
A prompt template for composing multiple prompts together.
prompts.prompt.PromptTemplate
Schema to represent a prompt for an LLM.
Functions¶
prompts.base.check_valid_template(template, ...)
Check that the template string is valid.
prompts.base.jinja2_formatter(template, **kwargs)
Format a template using jinja2.
prompts.base.validate_jinja2(template, ...)
Validate that the input variables are valid for the template.
prompts.example_selector.ngram_overlap.ngram_overlap_score(...)
Compute the ngram overlap score of source and example as a sentence_bleu score.
prompts.example_selector.semantic_similarity.sorted_values(values)
Return a list of values in a dict, sorted by key.
prompts.loading.load_prompt(path)
Unified method for loading a prompt from LangChainHub or the local filesystem.
prompts.loading.load_prompt_from_config(config)
Load a prompt from a config dict.
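A minimal sketch of the core PromptTemplate class; from_template infers the input variables from the braces:
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Tell me a {adjective} joke about {topic}.")
prompt.input_variables  # -> ['adjective', 'topic']
prompt.format(adjective="dry", topic="compilers")
# -> 'Tell me a dry joke about compilers.'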
langchain.requests: Requests¶
Lightweight wrapper around the requests library, with async support.
Classes¶
requests.Requests
Wrapper around requests to handle auth and async.
requests.TextRequestsWrapper
Lightweight wrapper around the requests library.
langchain.retrievers: Retrievers¶
Classes¶
retrievers.arxiv.ArxivRetriever
Effectively a wrapper for ArxivAPIWrapper.
retrievers.azure_cognitive_search.AzureCognitiveSearchRetriever
Wrapper around Azure Cognitive Search.
retrievers.chaindesk.ChaindeskRetriever
Retriever that uses the Chaindesk API.
retrievers.chatgpt_plugin_retriever.ChatGPTPluginRetriever
Create a new model by parsing and validating input data from keyword arguments.
retrievers.contextual_compression.ContextualCompressionRetriever
Retriever that wraps a base retriever and compresses the results.
retrievers.databerry.DataberryRetriever
Retriever that uses the Databerry API.
retrievers.docarray.DocArrayRetriever
Retriever class for DocArray Document Indices.
retrievers.docarray.SearchType(value[, ...])
Enumerator of the types of search to perform.
retrievers.document_compressors.base.BaseDocumentCompressor
Base abstraction interface for document compression.
retrievers.document_compressors.base.DocumentCompressorPipeline
Document compressor that uses a pipeline of transformers.
retrievers.document_compressors.chain_extract.LLMChainExtractor
Create a new model by parsing and validating input data from keyword arguments.
retrievers.document_compressors.chain_extract.NoOutputParser
Parse outputs that could return a null string of some sort.
retrievers.document_compressors.chain_filter.LLMChainFilter
Filter that drops documents that aren't relevant to the query.
retrievers.document_compressors.cohere_rerank.CohereRerank
Create a new model by parsing and validating input data from keyword arguments.
retrievers.document_compressors.embeddings_filter.EmbeddingsFilter
Create a new model by parsing and validating input data from keyword arguments.
retrievers.elastic_search_bm25.ElasticSearchBM25Retriever
Wrapper around Elasticsearch using BM25 as a retrieval method.
retrievers.kendra.AdditionalResultAttribute
Create a new model by parsing and validating input data from keyword arguments.
retrievers.kendra.AdditionalResultAttributeValue
Create a new model by parsing and validating input data from keyword arguments.
retrievers.kendra.AmazonKendraRetriever
Retriever class to query documents from an Amazon Kendra index.
retrievers.kendra.DocumentAttribute
Create a new model by parsing and validating input data from keyword arguments.
retrievers.kendra.DocumentAttributeValue
Create a new model by parsing and validating input data from keyword arguments.
retrievers.kendra.Highlight
Create a new model by parsing and validating input data from keyword arguments.
retrievers.kendra.QueryResult
Create a new model by parsing and validating input data from keyword arguments.
retrievers.kendra.QueryResultItem
Create a new model by parsing and validating input data from keyword arguments.
retrievers.kendra.RetrieveResult
Create a new model by parsing and validating input data from keyword arguments.
retrievers.kendra.RetrieveResultItem
Create a new model by parsing and validating input data from keyword arguments.
retrievers.kendra.TextWithHighLights
Create a new model by parsing and validating input data from keyword arguments.
retrievers.knn.KNNRetriever
KNN retriever.
retrievers.llama_index.LlamaIndexGraphRetriever
Question-answering with sources over a LlamaIndex graph data structure.
retrievers.llama_index.LlamaIndexRetriever
Question-answering with sources over a LlamaIndex data structure.
retrievers.merger_retriever.MergerRetriever
This class merges the results of multiple retrievers.
retrievers.metal.MetalRetriever
Retriever that uses the Metal API.
retrievers.milvus.MilvusRetriever
Retriever that uses the Milvus API.
retrievers.multi_query.LineList
Create a new model by parsing and validating input data from keyword arguments.
retrievers.multi_query.LineListOutputParser
Create a new model by parsing and validating input data from keyword arguments.
retrievers.multi_query.MultiQueryRetriever
Given a user query, use an LLM to write a set of queries.
retrievers.pinecone_hybrid_search.PineconeHybridSearchRetriever
Create a new model by parsing and validating input data from keyword arguments.
retrievers.pubmed.PubMedRetriever
Effectively a wrapper for PubMedAPIWrapper.
retrievers.remote_retriever.RemoteLangChainRetriever
Create a new model by parsing and validating input data from keyword arguments.
retrievers.self_query.base.SelfQueryRetriever
Retriever that wraps around a vector store and uses an LLM to generate the vector store queries.
retrievers.self_query.chroma.ChromaTranslator()
Logic for converting internal query language elements to valid filters.
retrievers.self_query.myscale.MyScaleTranslator([...])
Logic for converting internal query language elements to valid filters.
retrievers.self_query.pinecone.PineconeTranslator()
Logic for converting internal query language elements to valid filters.
retrievers.self_query.qdrant.QdrantTranslator(...)
Logic for converting internal query language elements to valid filters.
retrievers.self_query.weaviate.WeaviateTranslator()
Logic for converting internal query language elements to valid filters.
retrievers.svm.SVMRetriever
SVM retriever.
retrievers.tfidf.TFIDFRetriever
Create a new model by parsing and validating input data from keyword arguments.
retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever
Retriever combining embedding similarity with recency.
retrievers.vespa_retriever.VespaRetriever
Retriever that uses Vespa.
retrievers.weaviate_hybrid_search.WeaviateHybridSearchRetriever
Retriever that uses Weaviate's hybrid search to retrieve documents.
retrievers.wikipedia.WikipediaRetriever
Effectively a wrapper for WikipediaAPIWrapper.
retrievers.zep.ZepRetriever
A Retriever implementation for the Zep long-term memory store.
retrievers.zilliz.ZillizRetriever
Retriever that uses the Zilliz API.
Functions¶
retrievers.document_compressors.chain_extract.default_get_input(...)
Return the compression chain input.
retrievers.document_compressors.chain_filter.default_get_input(...)
Return the compression chain input.
retrievers.kendra.clean_excerpt(excerpt)
Clean an excerpt from Kendra.
retrievers.kendra.combined_text(title, excerpt)
Combine a title and an excerpt into a single string.
retrievers.knn.create_index(contexts, embeddings)
Create an index of embeddings for a list of contexts.
retrievers.milvus.MilvusRetreiver(*args, ...)
Deprecated alias for MilvusRetriever.
retrievers.pinecone_hybrid_search.create_index(...)
Create a Pinecone index from a list of contexts.
retrievers.pinecone_hybrid_search.hash_text(text)
Hash a text using SHA256.
retrievers.self_query.myscale.DEFAULT_COMPOSER(op_name)
Default composer for logical operators.
retrievers.self_query.myscale.FUNCTION_COMPOSER(op_name)
Composer for functions.
retrievers.svm.create_index(contexts, embeddings)
Create an index of embeddings for a list of contexts.
retrievers.zilliz.ZillizRetreiver(*args, ...)
Deprecated alias for ZillizRetriever.
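Retrievers share a single entry point, get_relevant_documents(query), which returns a list of Document objects. A local, key-free sketch with TFIDFRetriever (requires scikit-learn):
from langchain.retrievers import TFIDFRetriever

retriever = TFIDFRetriever.from_texts([
    "LangChain wraps LLM APIs",
    "Vespa is a search engine",
    "Paris is in France",
])
docs = retriever.get_relevant_documents("search engine")
# docs[0].page_content should be 'Vespa is a search engine'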
langchain.schema: Schema¶
Classes¶
schema.agent.AgentFinish(return_values, log)
The final return value of an ActionAgent.
schema.document.BaseDocumentTransformer()
Abstract base class for document transformation systems.
schema.document.Document
Class for storing a piece of text and associated metadata.
schema.language_model.BaseLanguageModel
Abstract base class for interfacing with language models.
schema.memory.BaseChatMessageHistory()
Abstract base class for storing chat message history.
schema.memory.BaseMemory
Base abstract class for memory in Chains.
schema.messages.AIMessage
A Message from an AI.
schema.messages.BaseMessage
The base abstract Message class.
schema.messages.ChatMessage
A Message that can be assigned an arbitrary speaker.
schema.messages.FunctionMessage
A Message for passing the result of executing a function back to a model.
schema.messages.HumanMessage
A Message from a human.
schema.messages.SystemMessage
A Message for priming AI behavior, usually passed in as the first of a sequence of input messages.
schema.output.ChatGeneration
A single chat generation output.
schema.output.ChatResult
Class that contains all results for a single chat model call.
schema.output.Generation
A single text generation output.
schema.output.LLMResult
Class that contains all results for a batched LLM call.
schema.output.RunInfo
Class that contains metadata for a single execution of a Chain or model.
schema.output_parser.BaseLLMOutputParser
Abstract base class for parsing the outputs of a model.
schema.output_parser.BaseOutputParser
Class to parse the output of an LLM call.
schema.output_parser.NoOpOutputParser
'No operation' OutputParser that returns the text as is.
schema.output_parser.OutputParserException(error)
Exception that output parsers should raise to signify a parsing error.
schema.prompt.PromptValue
Base abstract class for inputs to any language model.
schema.prompt_template.BasePromptTemplate
Base class for all prompt templates, returning a prompt.
schema.retriever.BaseRetriever
Abstract base class for a Document retrieval system.
Functions¶
schema.messages.get_buffer_string(messages)
Convert a sequence of Messages to strings and concatenate them into one string.
schema.messages.messages_from_dict(messages)
Convert a sequence of messages from dicts to Message objects.
schema.messages.messages_to_dict(messages)
Convert a sequence of Messages to a list of dictionaries.
schema.prompt_template.format_document(doc, ...)
Format a document into a string based on a prompt template.
langchain.server: Server¶
Script to run langchain-server locally using docker-compose.
Functions¶
server.main()
Run the langchain server locally.
langchain.sql_database: Sql Database¶
SQLAlchemy wrapper around a database.
Functions¶
sql_database.truncate_word(content, *, length)
Truncate a string to a certain number of words, based on the max string length.
langchain.text_splitter: Text Splitter¶
Functionality for splitting text.
Classes¶
text_splitter.CharacterTextSplitter([separator])
Implementation of splitting text that looks at characters.
text_splitter.HeaderType
Header type as typed dict.
text_splitter.Language(value[, names, ...])
Enum of the programming languages.
text_splitter.LatexTextSplitter(**kwargs)
Attempts to split the text along LaTeX-formatted layout elements.
text_splitter.LineType
Line type as typed dict.
text_splitter.MarkdownTextSplitter(**kwargs)
Attempts to split the text along Markdown-formatted headings.
text_splitter.NLTKTextSplitter([separator])
Implementation of splitting text that looks at sentences using NLTK.
text_splitter.PythonCodeTextSplitter(**kwargs)
Attempts to split the text along Python syntax.
text_splitter.RecursiveCharacterTextSplitter([...])
Implementation of splitting text that looks at characters.
text_splitter.SentenceTransformersTokenTextSplitter([...])
Implementation of splitting text that looks at tokens.
text_splitter.SpacyTextSplitter([separator, ...])
Implementation of splitting text that looks at sentences using Spacy.
text_splitter.TextSplitter(chunk_size, ...)
Interface for splitting text into chunks.
text_splitter.TokenTextSplitter([...])
Implementation of splitting text that looks at tokens.
Functions¶
text_splitter.split_text_on_tokens(*, text, ...)
Split incoming text and return chunks.
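A minimal sketch of the splitter interface: chunks are capped at chunk_size characters, with chunk_overlap characters shared between neighbors:
from langchain.text_splitter import RecursiveCharacterTextSplitter

long_text = "LangChain splits long documents into overlapping chunks. " * 10
splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=20)
chunks = splitter.split_text(long_text)        # list of strings
docs = splitter.create_documents([long_text])  # the same chunks as Document objects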
langchain.tools: Tools¶
Core toolkit implementations.
Classes¶
tools.arxiv.tool.ArxivQueryRun
Tool that adds the capability to search using the Arxiv API.
tools.azure_cognitive_services.form_recognizer.AzureCogsFormRecognizerTool
Tool that queries the Azure Cognitive Services Form Recognizer API.
tools.azure_cognitive_services.image_analysis.AzureCogsImageAnalysisTool
Tool that queries the Azure Cognitive Services Image Analysis API.
tools.azure_cognitive_services.speech2text.AzureCogsSpeech2TextTool
Tool that queries the Azure Cognitive Services Speech2Text API.
tools.azure_cognitive_services.text2speech.AzureCogsText2SpeechTool
Tool that queries the Azure Cognitive Services Text2Speech API.
tools.base.BaseTool
Interface LangChain tools must implement.
tools.base.SchemaAnnotationError
Raised when 'args_schema' is missing or has an incorrect type annotation.
tools.base.StructuredTool
Tool that can operate on any number of inputs.
tools.base.Tool
Tool that takes in a function or coroutine directly.
tools.base.ToolException
An optional exception that a tool throws when an execution error occurs.
tools.base.ToolMetaclass(name, bases, dct)
Metaclass for BaseTool to ensure the provided args_schema is not silently ignored.
tools.bing_search.tool.BingSearchResults
Tool that has the capability to query the Bing Search API and get back JSON.
tools.bing_search.tool.BingSearchRun
Tool that adds the capability to query the Bing search API.
tools.brave_search.tool.BraveSearch
Create a new model by parsing and validating input data from keyword arguments.
tools.convert_to_openai.FunctionDescription
Representation of a callable function to the OpenAI API.
tools.dataforseo_api_search.tool.DataForSeoAPISearchResults
Tool that has the capability to query the DataForSeo Google Search API and get back JSON.
tools.dataforseo_api_search.tool.DataForSeoAPISearchRun
Tool that adds the capability to query the DataForSeo Google search API.
tools.ddg_search.tool.DuckDuckGoSearchResults
Tool that queries the DuckDuckGo Search API and gets back JSON.
tools.ddg_search.tool.DuckDuckGoSearchRun
Tool that adds the capability to query the DuckDuckGo search API.
tools.file_management.copy.CopyFileTool
Create a new model by parsing and validating input data from keyword arguments.
tools.file_management.copy.FileCopyInput
Input for CopyFileTool.
tools.file_management.delete.DeleteFileTool
Create a new model by parsing and validating input data from keyword arguments.
tools.file_management.delete.FileDeleteInput
Input for DeleteFileTool.
tools.file_management.file_search.FileSearchInput
Input for FileSearchTool.
tools.file_management.file_search.FileSearchTool
Create a new model by parsing and validating input data from keyword arguments.
tools.file_management.list_dir.DirectoryListingInput
Input for ListDirectoryTool.
tools.file_management.list_dir.ListDirectoryTool
Create a new model by parsing and validating input data from keyword arguments.
tools.file_management.move.FileMoveInput
Input for MoveFileTool.
tools.file_management.move.MoveFileTool
Create a new model by parsing and validating input data from keyword arguments.
tools.file_management.read.ReadFileInput
Input for ReadFileTool.
tools.file_management.read.ReadFileTool
Create a new model by parsing and validating input data from keyword arguments.
tools.file_management.utils.BaseFileToolMixin
Mixin for file system tools.
tools.file_management.utils.FileValidationError
Error for paths outside the root directory.
tools.file_management.write.WriteFileInput
Input for WriteFileTool.
tools.file_management.write.WriteFileTool
Create a new model by parsing and validating input data from keyword arguments.
tools.gmail.base.GmailBaseTool
Create a new model by parsing and validating input data from keyword arguments.
tools.gmail.create_draft.CreateDraftSchema
Create a new model by parsing and validating input data from keyword arguments.
tools.gmail.create_draft.GmailCreateDraft
Create a new model by parsing and validating input data from keyword arguments.
tools.gmail.get_message.GmailGetMessage
Create a new model by parsing and validating input data from keyword arguments.
tools.gmail.get_message.SearchArgsSchema
Create a new model by parsing and validating input data from keyword arguments.
tools.gmail.get_thread.GetThreadSchema
Create a new model by parsing and validating input data from keyword arguments.
tools.gmail.get_thread.GmailGetThread
Create a new model by parsing and validating input data from keyword arguments.
tools.gmail.search.GmailSearch
Create a new model by parsing and validating input data from keyword arguments.
tools.gmail.search.Resource(value[, names, ...])
Enumerator of Resources to search.
tools.gmail.search.SearchArgsSchema
Create a new model by parsing and validating input data from keyword arguments.
tools.gmail.send_message.GmailSendMessage
Create a new model by parsing and validating input data from keyword arguments.
tools.gmail.send_message.SendMessageSchema
Create a new model by parsing and validating input data from keyword arguments.
tools.google_places.tool.GooglePlacesSchema
Create a new model by parsing and validating input data from keyword arguments.
tools.google_places.tool.GooglePlacesTool
Tool that adds the capability to query the Google Places API.
tools.google_search.tool.GoogleSearchResults
Tool that has the capability to query the Google Search API and get back JSON.
tools.google_search.tool.GoogleSearchRun
Tool that adds the capability to query the Google search API.
tools.google_serper.tool.GoogleSerperResults
Tool that has the capability to query the Serper.dev Google Search API and get back JSON.
tools.google_serper.tool.GoogleSerperRun
Tool that adds the capability to query the Serper.dev Google search API.
tools.graphql.tool.BaseGraphQLTool
Base tool for querying a GraphQL API.
tools.human.tool.HumanInputRun
Tool that adds the capability to ask the user for input.
tools.ifttt.IFTTTWebhook
IFTTT Webhook.
tools.jira.tool.JiraAction
Create a new model by parsing and validating input data from keyword arguments.
tools.json.tool.JsonGetValueTool
Tool for getting a value in a JSON spec.
tools.json.tool.JsonListKeysTool
Tool for listing keys in a JSON spec.
tools.json.tool.JsonSpec
Base class for JSON spec.
tools.metaphor_search.tool.MetaphorSearchResults
Tool that has the capability to query the Metaphor Search API and get back JSON.
tools.office365.base.O365BaseTool
Create a new model by parsing and validating input data from keyword arguments.
tools.office365.create_draft_message.CreateDraftMessageSchema
Create a new model by parsing and validating input data from keyword arguments.
tools.office365.create_draft_message.O365CreateDraftMessage
Create a new model by parsing and validating input data from keyword arguments.
tools.office365.events_search.O365SearchEvents
Class for searching calendar events in Office 365.
tools.office365.events_search.SearchEventsInput
Input for the SearchEvents tool.
tools.office365.messages_search.O365SearchEmails
Class for searching email messages in Office 365.
tools.office365.messages_search.SearchEmailsInput
Input for the SearchEmails tool.
tools.office365.send_event.O365SendEvent
Create a new model by parsing and validating input data from keyword arguments.
tools.office365.send_event.SendEventSchema
Input for the SendEvent tool.
tools.office365.send_message.O365SendMessage
Create a new model by parsing and validating input data from keyword arguments.
tools.office365.send_message.SendMessageSchema
Create a new model by parsing and validating input data from keyword arguments.
tools.openapi.utils.api_models.APIOperation
A model for a single API operation.
tools.openapi.utils.api_models.APIProperty
A model for a property in the query, path, header, or cookie params.
tools.openapi.utils.api_models.APIPropertyBase
Base model for an API property.
tools.openapi.utils.api_models.APIPropertyLocation(value)
The location of the property.
tools.openapi.utils.api_models.APIRequestBody
A model for a request body.
tools.openapi.utils.api_models.APIRequestBodyProperty
A model for a request body property.
tools.openweathermap.tool.OpenWeatherMapQueryRun
Tool that adds the capability to query using the OpenWeatherMap API.
tools.playwright.base.BaseBrowserTool
Base class for browser tools.
tools.playwright.click.ClickTool
Create a new model by parsing and validating input data from keyword arguments.
tools.playwright.click.ClickToolInput
Input for ClickTool.
tools.playwright.current_page.CurrentWebPageTool
Create a new model by parsing and validating input data from keyword arguments.
tools.playwright.extract_hyperlinks.ExtractHyperlinksTool
Extract all hyperlinks on the page.
tools.playwright.extract_hyperlinks.ExtractHyperlinksToolInput
Input for ExtractHyperlinksTool.
tools.playwright.extract_text.ExtractTextTool
Create a new model by parsing and validating input data from keyword arguments.
tools.playwright.get_elements.GetElementsTool
Create a new model by parsing and validating input data from keyword arguments.
tools.playwright.get_elements.GetElementsToolInput
Input for GetElementsTool.
tools.playwright.navigate.NavigateTool
Create a new model by parsing and validating input data from keyword arguments.
tools.playwright.navigate.NavigateToolInput
Input for NavigateTool.
tools.playwright.navigate_back.NavigateBackTool
Navigate back to the previous page in the browser history.
tools.plugin.AIPlugin
AI Plugin Definition.
tools.plugin.AIPluginTool
Create a new model by parsing and validating input data from keyword arguments.
tools.plugin.AIPluginToolSchema
Schema for AIPluginTool.
tools.plugin.ApiConfig
Create a new model by parsing and validating input data from keyword arguments.
tools.powerbi.tool.InfoPowerBITool
Tool for getting metadata about a PowerBI dataset.
tools.powerbi.tool.ListPowerBITool
Tool for getting table names.
tools.powerbi.tool.QueryPowerBITool
Tool for querying a Power BI dataset.
tools.pubmed.tool.PubmedQueryRun
Tool that adds the capability to search using the PubMed API.
tools.python.tool.PythonAstREPLTool
A tool for running Python code in a REPL.
tools.python.tool.PythonREPLTool
A tool for running Python code in a REPL.
tools.requests.tool.BaseRequestsTool
Base class for requests tools.
tools.requests.tool.RequestsDeleteTool
Tool for making a DELETE request to an API endpoint.
tools.requests.tool.RequestsGetTool
Tool for making a GET request to an API endpoint.
tools.requests.tool.RequestsPatchTool
Tool for making a PATCH request to an API endpoint.
tools.requests.tool.RequestsPostTool
Tool for making a POST request to an API endpoint.
tools.requests.tool.RequestsPutTool
Tool for making a PUT request to an API endpoint.
tools.scenexplain.tool.SceneXplainInput
Input for SceneXplain.
tools.scenexplain.tool.SceneXplainTool
Tool that adds the capability to explain images.
tools.searx_search.tool.SearxSearchResults
Tool that has the capability to query a Searx instance and get back JSON.
tools.searx_search.tool.SearxSearchRun
Tool that adds the capability to query a Searx instance.
tools.shell.tool.ShellInput
Commands for the Bash Shell tool.
tools.shell.tool.ShellTool
Tool to run shell commands.
tools.sleep.tool.SleepInput
Input for SleepTool.
tools.sleep.tool.SleepTool
Tool that adds the capability to sleep.
tools.spark_sql.tool.BaseSparkSQLTool
Base tool for interacting with Spark SQL.
tools.spark_sql.tool.InfoSparkSQLTool
Tool for getting metadata about Spark SQL tables.
tools.spark_sql.tool.ListSparkSQLTool
Tool for getting table names.
tools.spark_sql.tool.QueryCheckerTool
Use an LLM to check if a query is correct.
tools.spark_sql.tool.QuerySparkSQLTool
Tool for querying with Spark SQL.
tools.sql_database.tool.BaseSQLDatabaseTool
Base tool for interacting with a SQL database.
tools.sql_database.tool.InfoSQLDatabaseTool
Tool for getting metadata about a SQL database.
tools.sql_database.tool.ListSQLDatabaseTool
Tool for getting table names.
tools.sql_database.tool.QuerySQLCheckerTool
Use an LLM to check if a query is correct.
tools.sql_database.tool.QuerySQLDataBaseTool
Tool for querying a SQL database.
tools.steamship_image_generation.tool.ModelName(value)
Supported image models for generation.
tools.steamship_image_generation.tool.SteamshipImageGenerationTool
Tool used to generate images from a text prompt.
tools.vectorstore.tool.BaseVectorStoreTool
Base class for tools that use a VectorStore.
tools.vectorstore.tool.VectorStoreQATool
Tool for the VectorDBQA chain.
tools.vectorstore.tool.VectorStoreQAWithSourcesTool
Tool for the VectorDBQAWithSources chain.
tools.wolfram_alpha.tool.WolframAlphaQueryRun\nTool that adds the capability to query using the Wolfram Alpha SDK.\ntools.youtube.search.YouTubeSearchTool\nCreate a new model by parsing and validating input data from keyword arguments.\ntools.zapier.tool.ZapierNLAListActions\nReturns a list of all exposed (enabled) actions associated with\ntools.zapier.tool.ZapierNLARunAction\nExecutes an action that is identified by action_id, must be exposed\nFunctions\u00b6\ntools.azure_cognitive_services.utils.detect_file_src_type(...)\nDetect if the file is local or remote.\ntools.azure_cognitive_services.utils.download_audio_from_url(...)\nDownload audio from url to local.\ntools.base.create_schema_from_function(...)\nCreate a pydantic schema from a function's signature.\ntools.base.tool(*args[,\u00a0return_direct,\u00a0...])\nMake tools out of functions, can be used with or without arguments.\ntools.convert_to_openai.format_tool_to_openai_function(tool)\nFormat tool into the OpenAI function API.\ntools.ddg_search.tool.DuckDuckGoSearchTool(...)\nDeprecated.\ntools.file_management.utils.get_validated_relative_path(...)\nResolve a relative path, raising an error if not within the root directory.\ntools.file_management.utils.is_relative_to(...)\nCheck if path is relative to root.\ntools.gmail.utils.build_resource_service([...])\nBuild a Gmail service.\ntools.gmail.utils.clean_email_body(body)\nClean email body.\ntools.gmail.utils.get_gmail_credentials([...])\nGet credentials.\ntools.gmail.utils.import_google()\nImport google libraries.", "source": "https://api.python.langchain.com/en/latest/api_reference.html"} {"id": "38cccc443c5d-63", "text": "Get credentials.\ntools.gmail.utils.import_google()\nImport google libraries.\ntools.gmail.utils.import_googleapiclient_resource_builder()\nImport googleapiclient.discovery.build function.\ntools.gmail.utils.import_installed_app_flow()\nImport InstalledAppFlow class.\ntools.interaction.tool.StdInInquireTool(...)\nTool for asking the user for input.\ntools.office365.utils.authenticate()\nAuthenticate using the Microsoft Graph API.\ntools.office365.utils.clean_body(body)\nClean body of a message or event.\ntools.playwright.base.lazy_import_playwright_browsers()\nLazy import playwright browsers.\ntools.playwright.utils.create_async_playwright_browser([...])\nCreate an async playwright browser.\ntools.playwright.utils.create_sync_playwright_browser([...])\nCreate a playwright browser.\ntools.playwright.utils.get_current_page(browser)\nGet the current page of the browser.\ntools.playwright.utils.run_async(coro)\nRun an async coroutine.\ntools.plugin.marshal_spec(txt)\nConvert the yaml or json serialized spec to a dict.\ntools.python.tool.sanitize_input(query)\nSanitize input to the python REPL.\ntools.steamship_image_generation.utils.make_image_public(...)\nUpload a block to a signed URL and return the public URL.\nlangchain.utilities: Utilities\u00b6\nGeneral utilities.\nClasses\u00b6\nutilities.apify.ApifyWrapper\nWrapper around Apify.\nutilities.arxiv.ArxivAPIWrapper\nWrapper around ArxivAPI.\nutilities.awslambda.LambdaWrapper\nWrapper for AWS Lambda SDK.\nutilities.bibtex.BibtexparserWrapper\nWrapper around bibtexparser.\nutilities.bing_search.BingSearchAPIWrapper\nWrapper for Bing Search API.\nutilities.brave_search.BraveSearchWrapper\nCreate a new model by parsing and validating input data from keyword arguments.\nutilities.dataforseo_api_search.DataForSeoAPIWrapper", "source": "https://api.python.langchain.com/en/latest/api_reference.html"}
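Illustrative sketch (added) of the tools.base.tool decorator listed above; the function name and logic are hypothetical.\nfrom langchain.tools import tool\n@tool\ndef word_count(text: str) -> str:\n    \"\"\"Count the words in a piece of text.\"\"\"  # the docstring becomes the tool description\n    return str(len(text.split()))\nprint(word_count.run(\"one two three\"))  # -> \"3\"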
"https://api.python.langchain.com/en/latest/api_reference.html"} {"id": "38cccc443c5d-64", "text": "utilities.dataforseo_api_search.DataForSeoAPIWrapper\nCreate a new model by parsing and validating input data from keyword arguments.\nutilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper\nWrapper for DuckDuckGo Search API.\nutilities.google_places_api.GooglePlacesAPIWrapper\nWrapper around Google Places API.\nutilities.google_search.GoogleSearchAPIWrapper\nWrapper for Google Search API.\nutilities.google_serper.GoogleSerperAPIWrapper\nWrapper around the Serper.dev Google Search API.\nutilities.graphql.GraphQLAPIWrapper\nWrapper around GraphQL API.\nutilities.jira.JiraAPIWrapper\nWrapper for Jira API.\nutilities.metaphor_search.MetaphorSearchAPIWrapper\nWrapper for Metaphor Search API.\nutilities.openapi.HTTPVerb(value[,\u00a0names,\u00a0...])\nEnumerator of the HTTP verbs.\nutilities.openapi.OpenAPISpec\nOpenAPI Model that removes misformatted parts of the spec.\nutilities.openweathermap.OpenWeatherMapAPIWrapper\nWrapper for OpenWeatherMap API using PyOWM.\nutilities.powerbi.PowerBIDataset\nCreate PowerBI engine from dataset ID and credential or token.\nutilities.pupmed.PubMedAPIWrapper\nWrapper around PubMed API.\nutilities.python.PythonREPL\nSimulates a standalone Python REPL.\nutilities.scenexplain.SceneXplainAPIWrapper\nWrapper for SceneXplain API.\nutilities.searx_search.SearxResults(data)\nDict like wrapper around search api results.\nutilities.searx_search.SearxSearchWrapper\nWrapper for Searx API.\nutilities.serpapi.SerpAPIWrapper\nWrapper around SerpAPI.\nutilities.twilio.TwilioAPIWrapper\nMessaging Client using Twilio.\nutilities.wikipedia.WikipediaAPIWrapper\nWrapper around WikipediaAPI.\nutilities.wolfram_alpha.WolframAlphaAPIWrapper\nWrapper for Wolfram Alpha.", "source": "https://api.python.langchain.com/en/latest/api_reference.html"} {"id": "38cccc443c5d-65", "text": "utilities.wolfram_alpha.WolframAlphaAPIWrapper\nWrapper for Wolfram Alpha.\nutilities.zapier.ZapierNLAWrapper\nWrapper for Zapier NLA.\nFunctions\u00b6\nutilities.loading.try_load_from_hub(path,\u00a0...)\nLoad configuration from hub.\nutilities.powerbi.fix_table_name(table)\nAdd single quotes around table names that contain spaces.\nutilities.powerbi.json_to_md(json_contents)\nConverts a JSON object to a markdown table.\nutilities.vertexai.init_vertexai([project,\u00a0...])\nInit vertexai.\nutilities.vertexai.raise_vertex_import_error()\nRaise ImportError related to Vertex SDK being not available.\nlangchain.utils: Utils\u00b6\nGeneric utility functions.\nFunctions\u00b6\nutils.check_package_version(package[,\u00a0...])\nCheck the version of a package.\nutils.comma_list(items)\nutils.get_from_dict_or_env(data,\u00a0key,\u00a0env_key)\nGet a value from a dictionary or an environment variable.\nutils.get_from_env(key,\u00a0env_key[,\u00a0default])\nGet a value from a dictionary or an environment variable.\nutils.guard_import(module_name,\u00a0*[,\u00a0...])\nDynamically imports a module and raises a helpful exception if the module is not installed.\nutils.mock_now(dt_value)\nContext manager for mocking out datetime.now() in unit tests.\nutils.raise_for_status_with_text(response)\nRaise an error with the response text.\nutils.stringify_dict(data)\nStringify a dictionary.\nutils.stringify_value(val)\nStringify a value.\nutils.xor_args(*arg_groups)\nValidate specified keyword args are mutually exclusive.\nlangchain.vectorstores: Vectorstores\u00b6\nWrappers on top of vector 
langchain.vectorstores: Vectorstores\u00b6\nWrappers on top of vector stores.\nClasses\u00b6\nvectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearch(...)\nAlibaba Cloud OpenSearch Vector Store.", "source": "https://api.python.langchain.com/en/latest/api_reference.html"} {"id": "38cccc443c5d-66", "text": "Alibaba Cloud OpenSearch Vector Store.\nvectorstores.analyticdb.AnalyticDB(...[,\u00a0...])\nVectorStore implementation using AnalyticDB.\nvectorstores.annoy.Annoy(embedding_function,\u00a0...)\nWrapper around Annoy vector database.\nvectorstores.atlas.AtlasDB(name[,\u00a0...])\nWrapper around Atlas: Nomic's neural database and rhizomatic instrument.\nvectorstores.awadb.AwaDB([table_name,\u00a0...])\nInterface implemented by AwaDB vector stores.\nvectorstores.azuresearch.AzureSearch(...[,\u00a0...])\nInitialize with necessary components.\nvectorstores.azuresearch.AzureSearchVectorStoreRetriever\nCreate a new model by parsing and validating input data from keyword arguments.\nvectorstores.base.VectorStore()\nInterface for vector stores.\nvectorstores.base.VectorStoreRetriever\nCreate a new model by parsing and validating input data from keyword arguments.\nvectorstores.cassandra.Cassandra(embedding,\u00a0...)\nWrapper around Cassandra embeddings platform.\nvectorstores.chroma.Chroma([...])\nWrapper around ChromaDB embeddings platform.\nvectorstores.clarifai.Clarifai([user_id,\u00a0...])\nWrapper around Clarifai AI platform's vector store.\nvectorstores.clickhouse.Clickhouse(embedding)\nWrapper around ClickHouse vector database.\nvectorstores.clickhouse.ClickhouseSettings\nClickHouse Client Configuration.\nvectorstores.deeplake.DeepLake([...])\nWrapper around Deep Lake, a data lake for deep learning applications.\nvectorstores.docarray.base.DocArrayIndex(...)\nInitialize a vector store from DocArray's DocIndex.\nvectorstores.docarray.hnsw.DocArrayHnswSearch(...)\nWrapper around HnswLib storage.\nvectorstores.docarray.in_memory.DocArrayInMemorySearch(...)\nWrapper around in-memory storage for exact search.\nvectorstores.elastic_vector_search.ElasticKnnSearch(...)", "source": "https://api.python.langchain.com/en/latest/api_reference.html"} {"id": "38cccc443c5d-67", "text": "vectorstores.elastic_vector_search.ElasticKnnSearch(...)\nA class for performing k-Nearest Neighbors (k-NN) search on an Elasticsearch index.\nvectorstores.elastic_vector_search.ElasticVectorSearch(...)\nWrapper around Elasticsearch as a vector database.\nvectorstores.faiss.FAISS(embedding_function,\u00a0...)\nWrapper around FAISS vector database.\nvectorstores.hologres.Hologres(...[,\u00a0ndims,\u00a0...])\nVectorStore implementation using Hologres.\nvectorstores.lancedb.LanceDB(connection,\u00a0...)\nWrapper around LanceDB vector database.\nvectorstores.marqo.Marqo(client,\u00a0index_name)\nWrapper around Marqo database.\nvectorstores.matching_engine.MatchingEngine(...)\nVertex Matching Engine implementation of the vector store.\nvectorstores.milvus.Milvus(embedding_function)\nInitialize wrapper around the milvus vector database.\nvectorstores.mongodb_atlas.MongoDBAtlasVectorSearch(...)\nWrapper around MongoDB Atlas Vector Search.\nvectorstores.myscale.MyScale(embedding[,\u00a0config])\nWrapper around MyScale vector database.\nvectorstores.myscale.MyScaleSettings\nMyScale Client Configuration.\nvectorstores.opensearch_vector_search.OpenSearchVectorSearch(...)\nWrapper around OpenSearch as a vector database.\nvectorstores.pgembedding.BaseModel(**kwargs)\nA simple constructor that allows initialization from kwargs.\nvectorstores.pgembedding.CollectionStore(...)\nA simple constructor that 
allows initialization from kwargs.\nvectorstores.pgembedding.EmbeddingStore(**kwargs)\nA simple constructor that allows initialization from kwargs.\nvectorstores.pgembedding.PGEmbedding(...[,\u00a0...])", "source": "https://api.python.langchain.com/en/latest/api_reference.html"} {"id": "38cccc443c5d-68", "text": "vectorstores.pgembedding.PGEmbedding(...[,\u00a0...])\nVectorStore implementation using Postgres and the pg_embedding extension. pg_embedding uses sequential scan by default, but you can create an HNSW index using the create_hnsw_index method.\n- connection_string is a postgres connection string.\n- embedding_function is any embedding function implementing the langchain.embeddings.base.Embeddings interface.\n- collection_name is the name of the collection to use (default: langchain). NOTE: This is not the name of the table, but the name of the collection. The tables will be created when initializing the store (if not exists), so make sure the user has the right permissions to create tables.\n- distance_strategy is the distance strategy to use (default: EUCLIDEAN). EUCLIDEAN is the euclidean distance.\n- pre_delete_collection: if True, will delete the collection if it exists (default: False). Useful for testing.\nvectorstores.pgvector.BaseModel(**kwargs)\nA simple constructor that allows initialization from kwargs.\nvectorstores.pgvector.CollectionStore(**kwargs)\nA simple constructor that allows initialization from kwargs.\nvectorstores.pgvector.DistanceStrategy(value)\nEnumerator of the Distance strategies.\nvectorstores.pgvector.PGVector(...[,\u00a0...])\nVectorStore implementation using Postgres and pgvector.\nvectorstores.pinecone.Pinecone(index,\u00a0...[,\u00a0...])\nWrapper around Pinecone vector database.\nvectorstores.qdrant.Qdrant(client,\u00a0...[,\u00a0...])\nWrapper around Qdrant vector database.\nvectorstores.redis.Redis(redis_url,\u00a0...)\nWrapper around Redis vector database.\nvectorstores.redis.RedisVectorStoreRetriever\nCreate a new model by parsing and validating input data from keyword arguments.\nvectorstores.rocksetdb.Rockset(client,\u00a0...)\nWrapper around Rockset vector database.", "source": "https://api.python.langchain.com/en/latest/api_reference.html"} {"id": "38cccc443c5d-69", "text": "Wrapper around Rockset vector database.\nvectorstores.singlestoredb.DistanceStrategy(value)\nEnumerator of the Distance strategies for SingleStoreDB.\nvectorstores.singlestoredb.SingleStoreDB(...)\nThis class serves as a Pythonic interface to the SingleStoreDB database.\nvectorstores.singlestoredb.SingleStoreDBRetriever\nRetriever for SingleStoreDB vector stores.\nvectorstores.sklearn.BaseSerializer(persist_path)\nAbstract base class for saving and loading data.\nvectorstores.sklearn.BsonSerializer(persist_path)\nSerializes data in binary json using the bson python package.\nvectorstores.sklearn.JsonSerializer(persist_path)\nSerializes data in json using the json package from python standard library.\nvectorstores.sklearn.ParquetSerializer(...)\nSerializes data in Apache Parquet format using the pyarrow package.\nvectorstores.sklearn.SKLearnVectorStore(...)\nA simple in-memory vector store based on the scikit-learn library NearestNeighbors implementation.\nvectorstores.sklearn.SKLearnVectorStoreException\nException raised by SKLearnVectorStore.\nvectorstores.starrocks.StarRocks(embedding)\nWrapper around StarRocks vector database.\nvectorstores.starrocks.StarRocksSettings\nStarRocks Client Configuration.\nvectorstores.supabase.SupabaseVectorStore(...)\nVectorStore for a Supabase postgres database.
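Illustrative sketch (added) for the pgvector-backed store listed above; the connection string, collection name, and the FakeEmbeddings stand-in are placeholders.\nfrom langchain.embeddings import FakeEmbeddings\nfrom langchain.vectorstores.pgvector import PGVector\nstore = PGVector.from_texts(\n texts=[\"pg_embedding and pgvector are Postgres extensions\"],\n embedding=FakeEmbeddings(size=128),  # stand-in for a real embedding model\n collection_name=\"demo\",\n connection_string=\"postgresql+psycopg2://user:password@localhost:5432/postgres\",\n)\nprint(store.similarity_search(\"postgres\", k=1))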
vectorstores.tair.Tair(embedding_function,\u00a0...)\nWrapper around Tair vector store.\nvectorstores.tigris.Tigris(client,\u00a0...)\nInitialize Tigris vector store.\nvectorstores.typesense.Typesense(...[,\u00a0...])\nWrapper around Typesense vector search.\nvectorstores.vectara.Vectara([...])\nImplementation of Vector Store using Vectara.\nvectorstores.vectara.VectaraRetriever\nCreate a new model by parsing and validating input data from keyword arguments.", "source": "https://api.python.langchain.com/en/latest/api_reference.html"} {"id": "38cccc443c5d-70", "text": "Create a new model by parsing and validating input data from keyword arguments.\nvectorstores.weaviate.Weaviate(client,\u00a0...)\nWrapper around Weaviate vector database.\nvectorstores.zilliz.Zilliz(embedding_function)\nInitialize wrapper around the Zilliz vector database.\nFunctions\u00b6\nvectorstores.alibabacloud_opensearch.create_metadata(fields)\nCreate metadata from fields.\nvectorstores.annoy.dependable_annoy_import()\nImport annoy if available, otherwise raise error.\nvectorstores.clickhouse.has_mul_sub_str(s,\u00a0*args)\nCheck if a string contains multiple substrings.\nvectorstores.faiss.dependable_faiss_import([...])\nImport faiss if available, otherwise raise error.\nvectorstores.myscale.has_mul_sub_str(s,\u00a0*args)\nCheck if a string contains multiple substrings.\nvectorstores.starrocks.debug_output(s)\nPrint a debug message if DEBUG is True.\nvectorstores.starrocks.get_named_result(...)\nGet a named result from a query.\nvectorstores.starrocks.has_mul_sub_str(s,\u00a0*args)\nCheck if a string has multiple substrings.\nvectorstores.utils.maximal_marginal_relevance(...)\nCalculate maximal marginal relevance.", "source": "https://api.python.langchain.com/en/latest/api_reference.html"} {"id": "e7dbc5d007f9-0", "text": "langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings\u00b6\nclass langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings(*, cache: ~typing.Optional[bool] = None, verbose: bool = None, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, pipeline_ref: ~typing.Any = None, client: ~typing.Any = None, inference_fn: ~typing.Callable = , hardware: ~typing.Any = None, model_load_fn: ~typing.Callable = , load_fn_kwargs: ~typing.Optional[dict] = None, model_reqs: ~typing.List[str] = ['./', 'InstructorEmbedding', 'torch'], inference_kwargs: ~typing.Any = None, model_id: str = 'hkunlp/instructor-large', embed_instruction: str = 'Represent the document for retrieval: ', query_instruction: str = 'Represent the question for retrieving supporting documents: ')[source]\u00b6\nBases: SelfHostedHuggingFaceEmbeddings\nRuns InstructorEmbedding embedding models on self-hosted remote hardware.\nSupported hardware includes auto-launched instances on AWS, GCP, Azure,\nand Lambda, as well as servers specified\nby IP address and SSH credentials (such as on-prem, or another\ncloud like Paperspace, Coreweave, etc.).\nTo use, you should have the runhouse python package installed.\nExample", "source": 
"https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings.html"} {"id": "e7dbc5d007f9-1", "text": "To use, you should have the runhouse python package installed.\nExample\nfrom langchain.embeddings import SelfHostedHuggingFaceInstructEmbeddings\nimport runhouse as rh\nmodel_name = \"hkunlp/instructor-large\"\ngpu = rh.cluster(name='rh-a10x', instance_type='A100:1')\nhf = SelfHostedHuggingFaceInstructEmbeddings(\n model_name=model_name, hardware=gpu)\nInitialize the remote inference function.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam embed_instruction: str = 'Represent the document for retrieval: '\u00b6\nInstruction to use for embedding documents.\nparam hardware: Any = None\u00b6\nRemote hardware to send the inference function to.\nparam inference_fn: Callable = \u00b6\nInference function to extract the embeddings.\nparam inference_kwargs: Any = None\u00b6\nAny kwargs to pass to the model\u2019s inference function.\nparam load_fn_kwargs: Optional[dict] = None\u00b6\nKey word arguments to pass to the model load function.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam model_id: str = 'hkunlp/instructor-large'\u00b6\nModel name to use.\nparam model_load_fn: Callable = \u00b6\nFunction to load the model remotely on the server.\nparam model_reqs: List[str] = ['./', 'InstructorEmbedding', 'torch']\u00b6\nRequirements to install on hardware to inference the model.\nparam pipeline_ref: Any = None\u00b6\nparam query_instruction: str = 'Represent the question for retrieving supporting documents: '\u00b6\nInstruction to use for embedding query.\nparam tags: Optional[List[str]] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings.html"} {"id": "e7dbc5d007f9-2", "text": "Instruction to use for embedding query.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. 
A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings.html"} {"id": "e7dbc5d007f9-3", "text": "text generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the top candidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the top candidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings.html"} {"id": "e7dbc5d007f9-4", "text": "Returns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]\u00b6\nCompute doc embeddings using a HuggingFace instruct model.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]\u00b6\nCompute query embeddings using a HuggingFace instruct model.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nclassmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) \u2192 LLM\u00b6\nInit the SelfHostedPipeline from a pipeline object or string.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings.html"} {"id": "e7dbc5d007f9-5", "text": "Parameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in the order they occur in the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings.html"} {"id": "e7dbc5d007f9-6", "text": "Pass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text, use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. 
code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings.html"} {"id": "e7dbc5d007f9-7", "text": "to_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\ne.g. [\"langchain\", \"llms\", \"openai\"]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\ne.g. {\"openai_api_key\": \"OPENAI_API_KEY\"}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings.html"} {"id": "162c53aa0c60-0", "text": "langchain.embeddings.embaas.EmbaasEmbeddingsPayload\u00b6\nclass langchain.embeddings.embaas.EmbaasEmbeddingsPayload[source]\u00b6\nBases: TypedDict\nPayload for the embaas embeddings API.\nMethods\n__init__(*args,\u00a0**kwargs)\nclear()\ncopy()\nfromkeys([value])\nCreate a new dictionary with keys from iterable and values set to value.\nget(key[,\u00a0default])\nReturn the value for key if key is in the dictionary, else default.\nitems()\nkeys()\npop(k[,d])\nIf the key is not found, return the default if given; otherwise, raise a KeyError.\npopitem()\nRemove and return a (key, value) pair as a 2-tuple.\nsetdefault(key[,\u00a0default])\nInsert key with a value of default if key is not in the dictionary.\nupdate([E,\u00a0]**F)\nIf E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]\nvalues()\nAttributes\nmodel\ntexts\ninstruction\nclear() \u2192 None.\u00a0 Remove all items from D.\u00b6\ncopy() \u2192 a shallow copy of D\u00b6\nfromkeys(value=None, /)\u00b6\nCreate a new dictionary with keys from iterable and values set to value.\nget(key, default=None, /)\u00b6\nReturn the value for key if key is in the dictionary, else default.\nitems() \u2192 a set-like object providing a view on D's items\u00b6\nkeys() \u2192 a set-like object providing a view on D's keys\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.embaas.EmbaasEmbeddingsPayload.html"} {"id": "162c53aa0c60-1", "text": "keys() \u2192 a set-like object providing a view on D's keys\u00b6\npop(k[, d]) \u2192 v, remove specified key and return the corresponding value.\u00b6\nIf the key is not found, return the default if given; otherwise,\nraise a KeyError.\npopitem()\u00b6\nRemove and return a (key, value) pair as a 2-tuple.\nPairs are returned in LIFO (last-in, first-out) order.\nRaises KeyError if the dict is empty.\nsetdefault(key, default=None, 
/)\u00b6\nInsert key with a value of default if key is not in the dictionary.\nReturn the value for key if key is in the dictionary, else default.\nupdate([E, ]**F) \u2192 None.\u00a0 Update D from dict/iterable E and F.\u00b6\nIf E is present and has a .keys() method, then does: for k in E: D[k] = E[k]\nIf E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v\nIn either case, this is followed by: for k in F: D[k] = F[k]\nvalues() \u2192 an object providing a view on D's values\u00b6\ninstruction: str\u00b6\nmodel: str\u00b6\ntexts: List[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.embaas.EmbaasEmbeddingsPayload.html"} {"id": "097789db18e6-0", "text": "langchain.embeddings.google_palm.embed_with_retry\u00b6\nlangchain.embeddings.google_palm.embed_with_retry(embeddings: GooglePalmEmbeddings, *args: Any, **kwargs: Any) \u2192 Any[source]\u00b6\nUse tenacity to retry the completion call.", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.google_palm.embed_with_retry.html"} {"id": "51c6c4014f23-0", "text": "langchain.embeddings.minimax.embed_with_retry\u00b6\nlangchain.embeddings.minimax.embed_with_retry(embeddings: MiniMaxEmbeddings, *args: Any, **kwargs: Any) \u2192 Any[source]\u00b6\nUse tenacity to retry the completion call.", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.minimax.embed_with_retry.html"} {"id": "ec8d6b3e9d64-0", "text": "langchain.embeddings.jina.JinaEmbeddings\u00b6\nclass langchain.embeddings.jina.JinaEmbeddings(*, client: Any = None, model_name: str = 'ViT-B-32::openai', jina_auth_token: Optional[str] = None, jina_api_url: str = 'https://api.clip.jina.ai/api/v1/models/', request_headers: Optional[dict] = None)[source]\u00b6\nBases: BaseModel, Embeddings\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam jina_api_url: str = 'https://api.clip.jina.ai/api/v1/models/'\u00b6\nparam jina_auth_token: Optional[str] = None\u00b6\nparam model_name: str = 'ViT-B-32::openai'\u00b6\nModel name to use.\nparam request_headers: Optional[dict] = None\u00b6\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]\u00b6\nCall out to Jina\u2019s embedding endpoint.\n:param texts: The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]\u00b6\nCall out to Jina\u2019s embedding endpoint.\n:param text: The text to embed.\nReturns\nEmbeddings for the text.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that auth token exists in environment.", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.jina.JinaEmbeddings.html"} {"id": "cd7b9bf0befc-0", "text": "langchain.embeddings.elasticsearch.ElasticsearchEmbeddings\u00b6\nclass langchain.embeddings.elasticsearch.ElasticsearchEmbeddings(client: MlClient, model_id: str, *, input_field: str = 'text_field')[source]\u00b6\nBases: Embeddings\nWrapper around Elasticsearch embedding models.\nThis class provides an interface to generate embeddings using a model deployed\nin an Elasticsearch cluster. 
It requires an Elasticsearch connection object\nand the model_id of the model deployed in the cluster.\nIn Elasticsearch you need to have an embedding model loaded and deployed.\n- https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-trained-model.html\n- https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-deploy-models.html\nInitialize the ElasticsearchEmbeddings instance.\nParameters\nclient (MlClient) \u2013 An Elasticsearch ML client object.\nmodel_id (str) \u2013 The model_id of the model deployed in the Elasticsearch\ncluster.\ninput_field (str) \u2013 The name of the key for the input text field in the\ndocument. Defaults to \u2018text_field\u2019.\nMethods\n__init__(client,\u00a0model_id,\u00a0*[,\u00a0input_field])\nInitialize the ElasticsearchEmbeddings instance.\naembed_documents(texts)\nEmbed search docs.\naembed_query(text)\nEmbed query text.\nembed_documents(texts)\nGenerate embeddings for a list of documents.\nembed_query(text)\nGenerate an embedding for a single query text.\nfrom_credentials(model_id,\u00a0*[,\u00a0es_cloud_id,\u00a0...])\nInstantiate embeddings from Elasticsearch credentials.\nfrom_es_connection(model_id,\u00a0es_connection)\nInstantiate embeddings from an existing Elasticsearch connection.\nasync aembed_documents(texts: List[str]) \u2192 List[List[float]]\u00b6\nEmbed search docs.\nasync aembed_query(text: str) \u2192 List[float]\u00b6\nEmbed query text.", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.elasticsearch.ElasticsearchEmbeddings.html"} {"id": "cd7b9bf0befc-1", "text": "async aembed_query(text: str) \u2192 List[float]\u00b6\nEmbed query text.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]\u00b6\nGenerate embeddings for a list of documents.\nParameters\ntexts (List[str]) \u2013 A list of document text strings to generate embeddings\nfor.\nReturns\nA list of embeddings, one for each document in the inputlist.\nReturn type\nList[List[float]]\nembed_query(text: str) \u2192 List[float][source]\u00b6\nGenerate an embedding for a single query text.\nParameters\ntext (str) \u2013 The query text to generate an embedding for.\nReturns\nThe embedding for the input query text.\nReturn type\nList[float]\nclassmethod from_credentials(model_id: str, *, es_cloud_id: Optional[str] = None, es_user: Optional[str] = None, es_password: Optional[str] = None, input_field: str = 'text_field') \u2192 ElasticsearchEmbeddings[source]\u00b6\nInstantiate embeddings from Elasticsearch credentials.\nParameters\nmodel_id (str) \u2013 The model_id of the model deployed in the Elasticsearch\ncluster.\ninput_field (str) \u2013 The name of the key for the input text field in the\ndocument. Defaults to \u2018text_field\u2019.\nes_cloud_id \u2013 (str, optional): The Elasticsearch cloud ID to connect to.\nes_user \u2013 (str, optional): Elasticsearch username.\nes_password \u2013 (str, optional): Elasticsearch password.\nExample\nfrom langchain.embeddings import ElasticsearchEmbeddings\n# Define the model ID and input field name (if different from default)\nmodel_id = \"your_model_id\"\n# Optional, only if different from 'text_field'\ninput_field = \"your_input_field\"\n# Credentials can be passed in two ways. 
Either set the env vars\n# ES_CLOUD_ID, ES_USER, ES_PASSWORD and they will be automatically", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.elasticsearch.ElasticsearchEmbeddings.html"} {"id": "cd7b9bf0befc-2", "text": "# ES_CLOUD_ID, ES_USER, ES_PASSWORD and they will be automatically\n# pulled in, or pass them in directly as kwargs.\nembeddings = ElasticsearchEmbeddings.from_credentials(\n model_id,\n input_field=input_field,\n # es_cloud_id=\"foo\",\n # es_user=\"bar\",\n # es_password=\"baz\",\n)\ndocuments = [\n \"This is an example document.\",\n \"Another example document to generate embeddings for.\",\n]\nembeddings.embed_documents(documents)\nclassmethod from_es_connection(model_id: str, es_connection: Elasticsearch, input_field: str = 'text_field') \u2192 ElasticsearchEmbeddings[source]\u00b6\nInstantiate embeddings from an existing Elasticsearch connection.\nThis method provides a way to create an instance of the ElasticsearchEmbeddings\nclass using an existing Elasticsearch connection. The connection object is used\nto create an MlClient, which is then used to initialize the\nElasticsearchEmbeddings instance.\nArgs:\nmodel_id (str): The model_id of the model deployed in the Elasticsearch cluster.\nes_connection (elasticsearch.Elasticsearch): An existing Elasticsearch\nconnection object.\ninput_field (str, optional): The name of the key for the\ninput text field in the document. Defaults to 'text_field'.\nReturns:\nElasticsearchEmbeddings: An instance of the ElasticsearchEmbeddings class.\nExample\nfrom elasticsearch import Elasticsearch\nfrom langchain.embeddings import ElasticsearchEmbeddings\n# Define the model ID and input field name (if different from default)\nmodel_id = \"your_model_id\"\n# Optional, only if different from 'text_field'\ninput_field = \"your_input_field\"\n# Create Elasticsearch connection\nes_connection = Elasticsearch(\n hosts=[\"localhost:9200\"], http_auth=(\"user\", \"password\")\n)\n# Instantiate ElasticsearchEmbeddings using the existing connection", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.elasticsearch.ElasticsearchEmbeddings.html"} {"id": "cd7b9bf0befc-3", "text": ")\n# Instantiate ElasticsearchEmbeddings using the existing connection\nembeddings = ElasticsearchEmbeddings.from_es_connection(\n model_id,\n es_connection,\n input_field=input_field,\n)\ndocuments = [\n \"This is an example document.\",\n \"Another example document to generate embeddings for.\",\n]\nembeddings.embed_documents(documents)", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.elasticsearch.ElasticsearchEmbeddings.html"} {"id": "bbcdfc6d4e24-0", "text": "langchain.embeddings.openai.embed_with_retry\u00b6\nlangchain.embeddings.openai.embed_with_retry(embeddings: OpenAIEmbeddings, **kwargs: Any) \u2192 Any[source]\u00b6\nUse tenacity to retry the embedding call.", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.openai.embed_with_retry.html"} {"id": "412ef3582522-0", "text": "langchain.embeddings.tensorflow_hub.TensorflowHubEmbeddings\u00b6\nclass langchain.embeddings.tensorflow_hub.TensorflowHubEmbeddings(*, embed: Any = None, model_url: str = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual/3')[source]\u00b6\nBases: BaseModel, Embeddings\nWrapper around tensorflow_hub embedding models.\nTo use, you should have the tensorflow_text python package installed.\nExample\nfrom 
langchain.embeddings import TensorflowHubEmbeddings\nurl = \"https://tfhub.dev/google/universal-sentence-encoder-multilingual/3\"\ntf = TensorflowHubEmbeddings(model_url=url)\nInitialize the tensorflow_hub and tensorflow_text.\nparam model_url: str = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual/3'\u00b6\nModel name to use.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]\u00b6\nCompute doc embeddings using a TensorflowHub embedding model.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]\u00b6\nCompute query embeddings using a TensorflowHub embedding model.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.tensorflow_hub.TensorflowHubEmbeddings.html"} {"id": "836040cc28d8-0", "text": "langchain.embeddings.llamacpp.LlamaCppEmbeddings\u00b6\nclass langchain.embeddings.llamacpp.LlamaCppEmbeddings(*, client: Any = None, model_path: str, n_ctx: int = 512, n_parts: int = - 1, seed: int = - 1, f16_kv: bool = False, logits_all: bool = False, vocab_only: bool = False, use_mlock: bool = False, n_threads: Optional[int] = None, n_batch: Optional[int] = 8, n_gpu_layers: Optional[int] = None)[source]\u00b6\nBases: BaseModel, Embeddings\nWrapper around llama.cpp embedding models.\nTo use, you should have the llama-cpp-python library installed, and provide the\npath to the Llama model as a named parameter to the constructor.\nCheck out: https://github.com/abetlen/llama-cpp-python\nExample\nfrom langchain.embeddings import LlamaCppEmbeddings\nllama = LlamaCppEmbeddings(model_path=\"/path/to/model.bin\")\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam f16_kv: bool = False\u00b6\nUse half-precision for key/value cache.\nparam logits_all: bool = False\u00b6\nReturn logits for all tokens, not just the last token.\nparam model_path: str [Required]\u00b6\nparam n_batch: Optional[int] = 8\u00b6\nNumber of tokens to process in parallel.\nShould be a number between 1 and n_ctx.\nparam n_ctx: int = 512\u00b6\nToken context window.\nparam n_gpu_layers: Optional[int] = None\u00b6\nNumber of layers to be loaded into gpu memory. Default None.\nparam n_parts: int = -1\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.llamacpp.LlamaCppEmbeddings.html"} {"id": "836040cc28d8-1", "text": "param n_parts: int = -1\u00b6\nNumber of parts to split the model into.\nIf -1, the number of parts is automatically determined.\nparam n_threads: Optional[int] = None\u00b6\nNumber of threads to use. If None, the number\nof threads is automatically determined.\nparam seed: int = -1\u00b6\nSeed. 
If -1, a random seed is used.\nparam use_mlock: bool = False\u00b6\nForce system to keep model in RAM.\nparam vocab_only: bool = False\u00b6\nOnly load the vocabulary, no weights.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]\u00b6\nEmbed a list of documents using the Llama model.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]\u00b6\nEmbed a query using the Llama model.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that llama-cpp-python library is installed.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.llamacpp.LlamaCppEmbeddings.html"} {"id": "7ff68c7a4aad-0", "text": "langchain.embeddings.fake.FakeEmbeddings\u00b6\nclass langchain.embeddings.fake.FakeEmbeddings(*, size: int)[source]\u00b6\nBases: Embeddings, BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam size: int [Required]\u00b6\nasync aembed_documents(texts: List[str]) \u2192 List[List[float]]\u00b6\nEmbed search docs.\nasync aembed_query(text: str) \u2192 List[float]\u00b6\nEmbed query text.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]\u00b6\nEmbed search docs.\nembed_query(text: str) \u2192 List[float][source]\u00b6\nEmbed query text.", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.fake.FakeEmbeddings.html"} {"id": "9472dabe824a-0", "text": "langchain.embeddings.huggingface.HuggingFaceInstructEmbeddings\u00b6\nclass langchain.embeddings.huggingface.HuggingFaceInstructEmbeddings(*, client: Any = None, model_name: str = 'hkunlp/instructor-large', cache_folder: Optional[str] = None, model_kwargs: Dict[str, Any] = None, encode_kwargs: Dict[str, Any] = None, embed_instruction: str = 'Represent the document for retrieval: ', query_instruction: str = 'Represent the question for retrieving supporting documents: ')[source]\u00b6\nBases: BaseModel, Embeddings\nWrapper around sentence_transformers embedding models.\nTo use, you should have the sentence_transformers\nand InstructorEmbedding python packages installed.\nExample\nfrom langchain.embeddings import HuggingFaceInstructEmbeddings\nmodel_name = \"hkunlp/instructor-large\"\nmodel_kwargs = {'device': 'cpu'}\nencode_kwargs = {'normalize_embeddings': True}\nhf = HuggingFaceInstructEmbeddings(\n model_name=model_name,\n model_kwargs=model_kwargs,\n encode_kwargs=encode_kwargs\n)\nInitialize the sentence_transformer.\nparam cache_folder: Optional[str] = None\u00b6\nPath to store models.\nCan be also set by SENTENCE_TRANSFORMERS_HOME environment variable.\nparam embed_instruction: str = 'Represent the document for retrieval: '\u00b6\nInstruction to use for embedding documents.\nparam encode_kwargs: Dict[str, Any] [Optional]\u00b6\nKey word arguments to pass when calling the encode method of the model.\nparam model_kwargs: Dict[str, Any] [Optional]\u00b6\nKey word arguments to pass to the model.\nparam model_name: str = 'hkunlp/instructor-large'\u00b6\nModel name to use.\nparam query_instruction: str = 'Represent the question for retrieving supporting documents: '\u00b6\nInstruction to use for embedding 
query.", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.huggingface.HuggingFaceInstructEmbeddings.html"} {"id": "9472dabe824a-1", "text": "Instruction to use for embedding query.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]\u00b6\nCompute doc embeddings using a HuggingFace instruct model.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]\u00b6\nCompute query embeddings using a HuggingFace instruct model.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.huggingface.HuggingFaceInstructEmbeddings.html"} {"id": "7c4a109a6b6a-0", "text": "langchain.embeddings.vertexai.VertexAIEmbeddings\u00b6\nclass langchain.embeddings.vertexai.VertexAIEmbeddings(*, client: '_LanguageModel' = None, model_name: str = 'textembedding-gecko', temperature: float = 0.0, max_output_tokens: int = 128, top_p: float = 0.95, top_k: int = 40, stop: Optional[List[str]] = None, project: Optional[str] = None, location: str = 'us-central1', credentials: Any = None, request_parallelism: int = 5, max_retries: int = 6)[source]\u00b6\nBases: _VertexAICommon, Embeddings\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam credentials: Any = None\u00b6\nThe default custom credentials (google.auth.credentials.Credentials) to use\nparam location: str = 'us-central1'\u00b6\nThe default location to use when making API calls.\nparam max_output_tokens: int = 128\u00b6\nToken limit determines the maximum amount of text output from one prompt.\nparam max_retries: int = 6\u00b6\nThe maximum number of retries to make when generating.\nparam model_name: str = 'textembedding-gecko'\u00b6\nModel name to use.\nparam project: Optional[str] = None\u00b6\nThe default GCP project to use when making Vertex API calls.\nparam request_parallelism: int = 5\u00b6\nThe amount of parallelism allowed for requests issued to VertexAI models.\nparam stop: Optional[List[str]] = None\u00b6\nOptional list of stop words to use when generating.\nparam temperature: float = 0.0\u00b6\nSampling temperature, it controls the degree of randomness in token selection.\nparam top_k: int = 40\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.vertexai.VertexAIEmbeddings.html"} {"id": "7c4a109a6b6a-1", "text": "param top_k: int = 40\u00b6\nHow the model selects tokens for output, the next token is selected from\nparam top_p: float = 0.95\u00b6\nTokens are selected from most probable to least until the sum of their\nembed_documents(texts: List[str], batch_size: int = 5) \u2192 List[List[float]][source]\u00b6\nEmbed a list of strings. 
Vertex AI currently\nsets a max batch size of 5 strings.\nParameters\ntexts \u2013 List[str] The list of strings to embed.\nbatch_size \u2013 [int] The batch size of embeddings to send to the model.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]\u00b6\nEmbed a text.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbedding for the text.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidates that the python package exists in environment.\nproperty is_codey_model: bool\u00b6\ntask_executor: ClassVar[Optional[Executor]] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.vertexai.VertexAIEmbeddings.html"} {"id": "d003133aaab0-0", "text": "langchain.embeddings.aleph_alpha.AlephAlphaSymmetricSemanticEmbedding\u00b6\nclass langchain.embeddings.aleph_alpha.AlephAlphaSymmetricSemanticEmbedding(*, client: Any = None, model: Optional[str] = 'luminous-base', hosting: Optional[str] = 'https://api.aleph-alpha.com', normalize: Optional[bool] = True, compress_to_size: Optional[int] = 128, contextual_control_threshold: Optional[int] = None, control_log_additive: Optional[bool] = True, aleph_alpha_api_key: Optional[str] = None)[source]\u00b6\nBases: AlephAlphaAsymmetricSemanticEmbedding\nThe symmetric version of the Aleph Alpha\u2019s semantic embeddings.\nThe main difference is that here, both the documents and\nqueries are embedded with a SemanticRepresentation.Symmetric.\n.. rubric:: Example\nfrom langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding\nembeddings = AlephAlphaSymmetricSemanticEmbedding()\ntext = \"This is a test text\"\ndoc_result = embeddings.embed_documents([text])\nquery_result = embeddings.embed_query(text)\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam aleph_alpha_api_key: Optional[str] = None\u00b6\nAPI key for Aleph Alpha API.\nparam client: Any = None\u00b6\nparam compress_to_size: Optional[int] = 128\u00b6\nShould the returned embeddings come back as an original 5120-dim vector,\nor should it be compressed to 128-dim.\nparam contextual_control_threshold: Optional[int] = None\u00b6\nAttention control parameters only apply to those tokens that have\nexplicitly been set in the request.\nparam control_log_additive: Optional[bool] = True\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.aleph_alpha.AlephAlphaSymmetricSemanticEmbedding.html"} {"id": "d003133aaab0-1", "text": "param control_log_additive: Optional[bool] = True\u00b6\nApply controls on prompt items by adding the log(control_factor)\nto attention scores.\nparam hosting: Optional[str] = 'https://api.aleph-alpha.com'\u00b6\nOptional parameter that specifies which datacenters may process the request.\nparam model: Optional[str] = 'luminous-base'\u00b6\nModel name to use.\nparam normalize: Optional[bool] = True\u00b6\nShould returned embeddings be normalized.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]\u00b6\nCall out to Aleph Alpha\u2019s Document endpoint.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]\u00b6\nCall out to Aleph Alpha\u2019s symmetric query embedding endpoint.\n:param text: The text to embed.\nReturns\nEmbeddings for the text.\nvalidator validate_environment\u00a0 \u00bb\u00a0 
all fields\u00b6\nValidate that api key and python package exist in environment.", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.aleph_alpha.AlephAlphaSymmetricSemanticEmbedding.html"} {"id": "7cba0af7fead-0", "text": "langchain.embeddings.spacy_embeddings.SpacyEmbeddings\u00b6\nclass langchain.embeddings.spacy_embeddings.SpacyEmbeddings(*, nlp: Any = None)[source]\u00b6\nBases: BaseModel, Embeddings\nSpacyEmbeddings is a class for generating embeddings using the Spacy library.\nIt only supports the \u2018en_core_web_sm\u2019 model.\nnlp\u00b6\nThe Spacy model loaded into memory.\nType\nAny\nembed_documents(texts: List[str]) -> List[List[float]]\nGenerates embeddings for a list of documents.\nembed_query(text: str) -> List[float]\nGenerates an embedding for a single piece of text.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam nlp: Any = None\u00b6\nasync aembed_documents(texts: List[str]) \u2192 List[List[float]][source]\u00b6\nAsynchronously generates embeddings for a list of documents.\nThis method is not implemented and raises a NotImplementedError.\nParameters\ntexts (List[str]) \u2013 The documents to generate embeddings for.\nRaises\nNotImplementedError \u2013 This method is not implemented.\nasync aembed_query(text: str) \u2192 List[float][source]\u00b6\nAsynchronously generates an embedding for a single piece of text.\nThis method is not implemented and raises a NotImplementedError.\nParameters\ntext (str) \u2013 The text to generate an embedding for.\nRaises\nNotImplementedError \u2013 This method is not implemented.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]\u00b6\nGenerates embeddings for a list of documents.\nParameters\ntexts (List[str]) \u2013 The documents to generate embeddings for.\nReturns\nA list of embeddings, one for each document.\nembed_query(text: str) \u2192 List[float][source]\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.spacy_embeddings.SpacyEmbeddings.html"} {"id": "7cba0af7fead-1", "text": "embed_query(text: str) \u2192 List[float][source]\u00b6\nGenerates an embedding for a single piece of text.\nParameters\ntext (str) \u2013 The text to generate an embedding for.\nReturns\nThe embedding for the text.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidates that the Spacy package and the \u2018en_core_web_sm\u2019 model are installed.\nParameters\nvalues (Dict) \u2013 The values provided to the class constructor.\nReturns\nThe validated values.\nRaises\nValueError \u2013 If the Spacy package or the \u2018en_core_web_sm\u2019\nmodel are not installed. 
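Illustrative sketch (added) for the class above; it assumes the top-level langchain.embeddings export and that spacy plus its en_core_web_sm model are installed.\nfrom langchain.embeddings import SpacyEmbeddings\nembedder = SpacyEmbeddings()\ndoc_vectors = embedder.embed_documents([\"first document\", \"second document\"])\nquery_vector = embedder.embed_query(\"first\")\nprint(len(doc_vectors), len(query_vector))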
{"id": "6704695e37d4-0", "text": "langchain.embeddings.deepinfra.DeepInfraEmbeddings\u00b6\nclass langchain.embeddings.deepinfra.DeepInfraEmbeddings(*, model_id: str = 'sentence-transformers/clip-ViT-B-32', normalize: bool = False, embed_instruction: str = 'passage: ', query_instruction: str = 'query: ', model_kwargs: Optional[dict] = None, deepinfra_api_token: Optional[str] = None)[source]\u00b6\nBases: BaseModel, Embeddings\nWrapper around Deep Infra\u2019s embedding inference service.\nTo use, you should have the\nenvironment variable DEEPINFRA_API_TOKEN set with your API token, or pass\nit as a named parameter to the constructor.\nThere are multiple embedding models available;\nsee https://deepinfra.com/models?type=embeddings.\nExample\nfrom langchain.embeddings import DeepInfraEmbeddings\ndeepinfra_emb = DeepInfraEmbeddings(\n model_id=\"sentence-transformers/clip-ViT-B-32\",\n deepinfra_api_token=\"my-api-key\"\n)\nr1 = deepinfra_emb.embed_documents(\n [\n \"Alpha is the first letter of Greek alphabet\",\n \"Beta is the second letter of Greek alphabet\",\n ]\n)\nr2 = deepinfra_emb.embed_query(\n \"What is the second letter of Greek alphabet\"\n)\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam deepinfra_api_token: Optional[str] = None\u00b6\nparam embed_instruction: str = 'passage: '\u00b6\nInstruction used to embed documents.\nparam model_id: str = 'sentence-transformers/clip-ViT-B-32'\u00b6\nEmbeddings model to use.\nparam model_kwargs: Optional[dict] = None\u00b6\nOther model keyword args.", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.deepinfra.DeepInfraEmbeddings.html"} {"id": "6704695e37d4-1", "text": "param model_kwargs: Optional[dict] = None\u00b6\nOther model keyword args.\nparam normalize: bool = False\u00b6\nWhether to normalize the computed embeddings.\nparam query_instruction: str = 'query: '\u00b6\nInstruction used to embed the query.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]\u00b6\nEmbed documents using a Deep Infra deployed embedding model.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]\u00b6\nEmbed a query using a Deep Infra deployed embedding model.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the API key and python package exist in the environment.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.deepinfra.DeepInfraEmbeddings.html"} {"id": "8af2a2acc7e6-0", "text": "langchain.embeddings.dashscope.embed_with_retry\u00b6\nlangchain.embeddings.dashscope.embed_with_retry(embeddings: DashScopeEmbeddings, **kwargs: Any) \u2192 Any[source]\u00b6\nUse tenacity to retry the embedding call.", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.dashscope.embed_with_retry.html"}
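The Embeddings interface documented next is the abstract base class that every wrapper in this reference implements: a concrete provider only has to supply embed_documents and embed_query. A minimal illustrative subclass (hypothetical ZeroEmbeddings, not part of the library) showing the required shape:
from typing import List
from langchain.embeddings.base import Embeddings

class ZeroEmbeddings(Embeddings):
    """Toy implementation: maps every text to a fixed-size zero vector."""

    def __init__(self, size: int = 8) -> None:
        self.size = size

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        # One fixed-size vector per input text.
        return [[0.0] * self.size for _ in texts]

    def embed_query(self, text: str) -> List[float]:
        return [0.0] * self.size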
"langchain.embeddings.base.Embeddings\u00b6\nclass langchain.embeddings.base.Embeddings[source]\u00b6\nBases: ABC\nInterface for embedding models.\nMethods\n__init__()\naembed_documents(texts)\nEmbed search docs.\naembed_query(text)\nEmbed query text.\nembed_documents(texts)\nEmbed search docs.\nembed_query(text)\nEmbed query text.\nasync aembed_documents(texts: List[str]) \u2192 List[List[float]][source]\u00b6\nEmbed search docs.\nasync aembed_query(text: str) \u2192 List[float][source]\u00b6\nEmbed query text.\nabstract embed_documents(texts: List[str]) \u2192 List[List[float]][source]\u00b6\nEmbed search docs.\nabstract embed_query(text: str) \u2192 List[float][source]\u00b6\nEmbed query text.", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.base.Embeddings.html"} {"id": "9bf660541009-0", "text": "langchain.embeddings.openai.OpenAIEmbeddings\u00b6\nclass langchain.embeddings.openai.OpenAIEmbeddings(*, client: Any = None, model: str = 'text-embedding-ada-002', deployment: str = 'text-embedding-ada-002', openai_api_version: Optional[str] = None, openai_api_base: Optional[str] = None, openai_api_type: Optional[str] = None, openai_proxy: Optional[str] = None, embedding_ctx_length: int = 8191, openai_api_key: Optional[str] = None, openai_organization: Optional[str] = None, allowed_special: Union[Literal['all'], Set[str]] = {}, disallowed_special: Union[Literal['all'], Set[str], Sequence[str]] = 'all', chunk_size: int = 1000, max_retries: int = 6, request_timeout: Optional[Union[float, Tuple[float, float]]] = None, headers: Any = None, tiktoken_model_name: Optional[str] = None, show_progress_bar: bool = False)[source]\u00b6\nBases: BaseModel, Embeddings\nWrapper around OpenAI embedding models.\nTo use, you should have the openai python package installed, and the\nenvironment variable OPENAI_API_KEY set with your API key or pass it\nas a named parameter to the constructor.\nExample\nfrom langchain.embeddings import OpenAIEmbeddings\nopenai = OpenAIEmbeddings(openai_api_key=\"my-api-key\")\nIn order to use the library with Microsoft Azure endpoints, you need to set\nthe OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and OPENAI_API_VERSION.\nThe OPENAI_API_TYPE must be set to \u2018azure\u2019 and the others correspond to\nthe properties of your endpoint.\nIn addition, the deployment name must be passed as the model parameter.\nExample\nimport os", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.openai.OpenAIEmbeddings.html"} {"id": "9bf660541009-1", "text": "In addition, the deployment name must be passed as the model parameter.\nExample\nimport os\nos.environ[\"OPENAI_API_TYPE\"] = \"azure\"\nos.environ[\"OPENAI_API_BASE\"] = \"https://, hardware: ~typing.Any = None, model_load_fn: ~typing.Callable = , load_fn_kwargs: ~typing.Optional[dict] = None, model_reqs: ~typing.List[str] = ['./', 'sentence_transformers', 'torch'], inference_kwargs: ~typing.Any = None, model_id: str = 'sentence-transformers/all-mpnet-base-v2')[source]\u00b6\nBases: SelfHostedEmbeddings\nRuns sentence_transformers embedding models on self-hosted remote hardware.\nSupported hardware includes auto-launched instances on AWS, GCP, Azure,\nand Lambda, as well as servers specified\nby IP address and SSH credentials (such as on-prem, or another cloud\nlike Paperspace, Coreweave, etc.).\nTo use, you should have the runhouse python package installed.\nExample\nfrom langchain.embeddings import SelfHostedHuggingFaceEmbeddings\nimport runhouse as 
rh\nmodel_name = \"sentence-transformers/all-mpnet-base-v2\"", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings.html"} {"id": "58973ddbaf50-1", "text": "model_name = \"sentence-transformers/all-mpnet-base-v2\"\ngpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\nhf = SelfHostedHuggingFaceEmbeddings(model_name=model_name, hardware=gpu)\nInitialize the remote inference function.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam hardware: Any = None\u00b6\nRemote hardware to send the inference function to.\nparam inference_fn: Callable = <function _embed_documents>\u00b6\nInference function to extract the embeddings.\nparam inference_kwargs: Any = None\u00b6\nAny kwargs to pass to the model\u2019s inference function.\nparam load_fn_kwargs: Optional[dict] = None\u00b6\nKeyword arguments to pass to the model load function.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam model_id: str = 'sentence-transformers/all-mpnet-base-v2'\u00b6\nModel name to use.\nparam model_load_fn: Callable = <function load_embedding_model>\u00b6\nFunction to load the model remotely on the server.\nparam model_reqs: List[str] = ['./', 'sentence_transformers', 'torch']\u00b6\nRequirements to install on the hardware to run inference with the model.\nparam pipeline_ref: Any = None\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings.html"} {"id": "58973ddbaf50-2", "text": "param verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck the cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. 
Model output is cut off at the\nfirst occurrence of any of these substrings.", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings.html"} {"id": "58973ddbaf50-3", "text": "first occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the top candidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the top candidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\nembed_documents(texts: List[str]) \u2192 List[List[float]]\u00b6\nCompute doc embeddings using a HuggingFace transformer model.", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings.html"} {"id": "58973ddbaf50-4", "text": "Compute doc embeddings using a HuggingFace transformer model.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float]\u00b6\nCompute query embeddings using a HuggingFace transformer model.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nclassmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) \u2192 LLM\u00b6\nInit the SelfHostedPipeline from a pipeline object or string.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings.html"} {"id": "58973ddbaf50-5", "text": "text generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in the order they occur in the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings.html"} {"id": "58973ddbaf50-6", "text": "stop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text, use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise a deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\nllm.save(file_path=\"path/llm.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. 
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings.html"} {"id": "58973ddbaf50-7", "text": "property lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\ne.g. [\"langchain\", \"llms\", \"openai\"]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\ne.g. {\"openai_api_key\": \"OPENAI_API_KEY\"}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings.html"} {"id": "bfd0c04d34bf-0", "text": "langchain.embeddings.self_hosted_hugging_face.load_embedding_model\u00b6\nlangchain.embeddings.self_hosted_hugging_face.load_embedding_model(model_id: str, instruct: bool = False, device: int = 0) \u2192 Any[source]\u00b6\nLoad the embedding model.", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.load_embedding_model.html"} {"id": "d1994434d818-0", "text": "langchain.embeddings.embaas.EmbaasEmbeddings\u00b6\nclass langchain.embeddings.embaas.EmbaasEmbeddings(*, model: str = 'e5-large-v2', instruction: Optional[str] = None, api_url: str = 'https://api.embaas.io/v1/embeddings/', embaas_api_key: Optional[str] = None)[source]\u00b6\nBases: BaseModel, Embeddings\nWrapper around embaas\u2019s embedding service.\nTo use, you should have the\nenvironment variable EMBAAS_API_KEY set with your API key, or pass\nit as a named parameter to the constructor.\nExample\n# Initialise with default model and instruction\nfrom langchain.embeddings import EmbaasEmbeddings\nemb = EmbaasEmbeddings()\n# Initialise with custom model and instruction\nfrom langchain.embeddings import EmbaasEmbeddings\nemb_model = \"instructor-large\"\nemb_inst = \"Represent the Wikipedia document for retrieval\"\nemb = EmbaasEmbeddings(\n model=emb_model,\n instruction=emb_inst\n)\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_url: str = 'https://api.embaas.io/v1/embeddings/'\u00b6\nThe URL for the embaas embeddings API.\nparam embaas_api_key: Optional[str] = None\u00b6\nparam instruction: Optional[str] = None\u00b6\nInstruction used for domain-specific embeddings.\nparam model: str = 'e5-large-v2'\u00b6\nThe model used for embeddings.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]\u00b6\nGet embeddings for a list of texts.\nParameters\ntexts \u2013 The list of texts to get embeddings for.\nReturns\nList of embeddings, one for each text.", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.embaas.EmbaasEmbeddings.html"} {"id": "d1994434d818-1", "text": "Returns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]\u00b6\nGet embeddings for a single text.\nParameters\ntext \u2013 The text to get embeddings for.\nReturns\nThe embedding for the text.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the API key and python package exist in the environment.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.embaas.EmbaasEmbeddings.html"}
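The EmbaasEmbeddings examples above only construct the client; the calls themselves follow the shared Embeddings interface (illustrative input strings, assuming EMBAAS_API_KEY is set):
from langchain.embeddings import EmbaasEmbeddings
emb = EmbaasEmbeddings()
doc_vectors = emb.embed_documents(["A short passage to embed."])
query_vector = emb.embed_query("A short query to embed.")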
{"id": "de601927bbc3-0", "text": "langchain.embeddings.dashscope.DashScopeEmbeddings\u00b6\nclass langchain.embeddings.dashscope.DashScopeEmbeddings(*, client: Any = None, model: str = 'text-embedding-v1', dashscope_api_key: Optional[str] = None, max_retries: int = 5)[source]\u00b6\nBases: BaseModel, Embeddings\nWrapper around DashScope embedding models.\nTo use, you should have the dashscope python package installed, and the\nenvironment variable DASHSCOPE_API_KEY set with your API key or pass it\nas a named parameter to the constructor.\nExample\nfrom langchain.embeddings import DashScopeEmbeddings\nembeddings = DashScopeEmbeddings(dashscope_api_key=\"my-api-key\")\nExample\nimport os\nos.environ[\"DASHSCOPE_API_KEY\"] = \"your DashScope API KEY\"\nfrom langchain.embeddings.dashscope import DashScopeEmbeddings\nembeddings = DashScopeEmbeddings(\n model=\"text-embedding-v1\",\n)\ntext = \"This is a test query.\"\nquery_result = embeddings.embed_query(text)\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam dashscope_api_key: Optional[str] = None\u00b6\nparam max_retries: int = 5\u00b6\nMaximum number of retries to make when generating.\nparam model: str = 'text-embedding-v1'\u00b6\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]\u00b6\nCall out to DashScope\u2019s embedding endpoint for embedding search docs.\nParameters\ntexts \u2013 The list of texts to embed.\nchunk_size \u2013 The chunk size of embeddings. 
If None, will use the chunk size\nspecified by the class.\nReturns\nList of embeddings, one for each text.", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.dashscope.DashScopeEmbeddings.html"} {"id": "de601927bbc3-1", "text": "specified by the class.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]\u00b6\nCall out to DashScope\u2019s embedding endpoint for embedding query text.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbedding for the text.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.dashscope.DashScopeEmbeddings.html"} {"id": "2f3c083275ab-0", "text": "langchain.embeddings.minimax.MiniMaxEmbeddings\u00b6\nclass langchain.embeddings.minimax.MiniMaxEmbeddings(*, endpoint_url: str = 'https://api.minimax.chat/v1/embeddings', model: str = 'embo-01', embed_type_db: str = 'db', embed_type_query: str = 'query', minimax_group_id: Optional[str] = None, minimax_api_key: Optional[str] = None)[source]\u00b6\nBases: BaseModel, Embeddings\nWrapper around MiniMax\u2019s embedding inference service.\nTo use, you should have the environment variables MINIMAX_GROUP_ID and\nMINIMAX_API_KEY set with your API token, or pass them as named parameters to\nthe constructor.\nExample\nfrom langchain.embeddings import MiniMaxEmbeddings\nembeddings = MiniMaxEmbeddings()\nquery_text = \"This is a test query.\"\nquery_result = embeddings.embed_query(query_text)\ndocument_text = \"This is a test document.\"\ndocument_result = embeddings.embed_documents([document_text])\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam embed_type_db: str = 'db'\u00b6\nFor embed_documents\nparam embed_type_query: str = 'query'\u00b6\nFor embed_query\nparam endpoint_url: str = 'https://api.minimax.chat/v1/embeddings'\u00b6\nEndpoint URL to use.\nparam minimax_api_key: Optional[str] = None\u00b6\nAPI Key for MiniMax API.\nparam minimax_group_id: Optional[str] = None\u00b6\nGroup ID for MiniMax API.\nparam model: str = 'embo-01'\u00b6\nEmbeddings model name to use.\nembed(texts: List[str], embed_type: str) \u2192 List[List[float]][source]\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.minimax.MiniMaxEmbeddings.html"} {"id": "2f3c083275ab-1", "text": "embed_documents(texts: List[str]) \u2192 List[List[float]][source]\u00b6\nEmbed documents using a MiniMax embedding endpoint.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]\u00b6\nEmbed a query using a MiniMax embedding endpoint.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the group id and API key exist in the environment.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.minimax.MiniMaxEmbeddings.html"}
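The two public MiniMax methods differ only in the embed_type they forward to the shared embed call (per the params above: 'db' for documents, 'query' for queries). A hedged sketch of the equivalence, assuming MINIMAX_GROUP_ID and MINIMAX_API_KEY are set:
from langchain.embeddings import MiniMaxEmbeddings
emb = MiniMaxEmbeddings()
# embed_documents(texts) routes through embed(texts, embed_type="db")
doc_vectors = emb.embed(["a test document"], embed_type="db")
# embed_query(text) routes through embed([text], embed_type="query")
query_vector = emb.embed(["a test query"], embed_type="query")[0]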
{"id": "d31f9fd9e3bf-0", "text": "langchain.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding\u00b6\nclass langchain.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding(*, client: Any = None, model: Optional[str] = 'luminous-base', hosting: Optional[str] = 'https://api.aleph-alpha.com', normalize: Optional[bool] = True, compress_to_size: Optional[int] = 128, contextual_control_threshold: Optional[int] = None, control_log_additive: Optional[bool] = True, aleph_alpha_api_key: Optional[str] = None)[source]\u00b6\nBases: BaseModel, Embeddings\nWrapper for Aleph Alpha\u2019s asymmetric embeddings.\nAleph Alpha provides you with an endpoint to embed a document and a query.\nThe models were optimized to make the embeddings of a document and\nthe query for that document as similar as possible.\nTo learn more, check out: https://docs.aleph-alpha.com/docs/tasks/semantic_embed/\nExample\nfrom langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding\nembeddings = AlephAlphaAsymmetricSemanticEmbedding()\ndocument = \"This is the content of the document\"\nquery = \"What is the content of the document?\"\ndoc_result = embeddings.embed_documents([document])\nquery_result = embeddings.embed_query(query)\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam aleph_alpha_api_key: Optional[str] = None\u00b6\nAPI key for Aleph Alpha API.\nparam compress_to_size: Optional[int] = 128\u00b6\nWhether the returned embeddings should be the original 5120-dimensional vector\nor compressed to 128 dimensions.\nparam contextual_control_threshold: Optional[int] = None\u00b6\nAttention control parameters only apply to those tokens that have", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding.html"} {"id": "d31f9fd9e3bf-1", "text": "Attention control parameters only apply to those tokens that have\nexplicitly been set in the request.\nparam control_log_additive: Optional[bool] = True\u00b6\nApply controls on prompt items by adding the log(control_factor)\nto attention scores.\nparam hosting: Optional[str] = 'https://api.aleph-alpha.com'\u00b6\nOptional parameter that specifies which datacenters may process the request.\nparam model: Optional[str] = 'luminous-base'\u00b6\nModel name to use.\nparam normalize: Optional[bool] = True\u00b6\nWhether the returned embeddings should be normalized.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]\u00b6\nCall out to Aleph Alpha\u2019s asymmetric Document endpoint.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]\u00b6\nCall out to Aleph Alpha\u2019s asymmetric, query embedding endpoint.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the API key and python package exist in the environment.", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding.html"} {"id": "8b97d480fe0a-0", "text": "langchain.embeddings.modelscope_hub.ModelScopeEmbeddings\u00b6\nclass langchain.embeddings.modelscope_hub.ModelScopeEmbeddings(*, embed: Any = None, model_id: str = 'damo/nlp_corom_sentence-embedding_english-base')[source]\u00b6\nBases: BaseModel, Embeddings\nWrapper around modelscope_hub embedding models.\nTo use, you should have the modelscope python package installed.\nExample\nfrom langchain.embeddings import 
ModelScopeEmbeddings\nmodel_id = \"damo/nlp_corom_sentence-embedding_english-base\"\nembed = ModelScopeEmbeddings(model_id=model_id)\nInitialize the modelscope embedding model.\nparam embed: Any = None\u00b6\nparam model_id: str = 'damo/nlp_corom_sentence-embedding_english-base'\u00b6\nModel name to use.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]\u00b6\nCompute doc embeddings using a modelscope embedding model.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]\u00b6\nCompute query embeddings using a modelscope embedding model.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.modelscope_hub.ModelScopeEmbeddings.html"} {"id": "8fc4e78a51ca-0", "text": "langchain.embeddings.bedrock.BedrockEmbeddings\u00b6\nclass langchain.embeddings.bedrock.BedrockEmbeddings(*, client: Any = None, region_name: Optional[str] = None, credentials_profile_name: Optional[str] = None, model_id: str = 'amazon.titan-e1t-medium', model_kwargs: Optional[Dict] = None)[source]\u00b6\nBases: BaseModel, Embeddings\nEmbeddings provider to invoke Bedrock embedding models.\nTo authenticate, the AWS client uses the following methods to\nautomatically load credentials:\nhttps://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\nIf a specific credential profile should be used, you must pass\nthe name of the profile from the ~/.aws/credentials file that is to be used.\nMake sure the credentials / roles used have the required policies to\naccess the Bedrock service.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam credentials_profile_name: Optional[str] = None\u00b6\nThe name of the profile in the ~/.aws/credentials or ~/.aws/config files, which\nhas either access keys or role information specified.\nIf not specified, the default credential profile or, if on an EC2 instance,\ncredentials from IMDS will be used.\nSee: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\nparam model_id: str = 'amazon.titan-e1t-medium'\u00b6\nId of the model to call, e.g., amazon.titan-e1t-medium; this is\nequivalent to the modelId property in the list-foundation-models API.\nparam model_kwargs: Optional[Dict] = None\u00b6\nKeyword arguments to pass to the model.\nparam region_name: Optional[str] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.bedrock.BedrockEmbeddings.html"}
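BedrockEmbeddings ships without a usage example in its docstring; a minimal hedged sketch in the style of the other wrappers (profile, region, and input strings are placeholders):
from langchain.embeddings import BedrockEmbeddings
embeddings = BedrockEmbeddings(
    credentials_profile_name="default",
    region_name="us-west-2",
)
vector = embeddings.embed_query("This is a test document.")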
{"id": "8fc4e78a51ca-1", "text": "param region_name: Optional[str] = None\u00b6\nThe AWS region, e.g., us-west-2. Falls back to the AWS_DEFAULT_REGION env variable\nor the region specified in ~/.aws/config in case it is not provided here.\nembed_documents(texts: List[str], chunk_size: int = 1) \u2192 List[List[float]][source]\u00b6\nCompute doc embeddings using a Bedrock model.\nParameters\ntexts \u2013 The list of texts to embed.\nchunk_size \u2013 Bedrock currently only allows single string\ninputs, so chunk size is always 1. This input is here\nonly for compatibility with the embeddings interface.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]\u00b6\nCompute query embeddings using a Bedrock model.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that AWS credentials and the python package exist in the environment.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.bedrock.BedrockEmbeddings.html"} {"id": "9c50c40801d0-0", "text": "langchain.embeddings.cohere.CohereEmbeddings\u00b6\nclass langchain.embeddings.cohere.CohereEmbeddings(*, client: Any = None, model: str = 'embed-english-v2.0', truncate: Optional[str] = None, cohere_api_key: Optional[str] = None)[source]\u00b6\nBases: BaseModel, Embeddings\nWrapper around Cohere embedding models.\nTo use, you should have the cohere python package installed, and the\nenvironment variable COHERE_API_KEY set with your API key or pass it\nas a named parameter to the constructor.\nExample\nfrom langchain.embeddings import CohereEmbeddings\ncohere = CohereEmbeddings(\n model=\"embed-english-light-v2.0\", cohere_api_key=\"my-api-key\"\n)\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam cohere_api_key: Optional[str] = None\u00b6\nparam model: str = 'embed-english-v2.0'\u00b6\nModel name to use.\nparam truncate: Optional[str] = None\u00b6\nTruncate embeddings that are too long from start or end (\"NONE\"|\"START\"|\"END\")\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]\u00b6\nCall out to Cohere\u2019s embedding endpoint.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]\u00b6\nCall out to Cohere\u2019s embedding endpoint.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the API key and python package exist in the environment.\nmodel Config[source]\u00b6\nBases: object", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.cohere.CohereEmbeddings.html"} {"id": "9c50c40801d0-1", "text": "model Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.cohere.CohereEmbeddings.html"}
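EmbeddingsContentHandler, documented next, is abstract: transform_input must serialize the input texts into the request body, and transform_output must parse the response back into vectors. A hedged sketch for a hypothetical JSON endpoint (the payload keys text_inputs and embedding are assumptions, not part of the reference):
import json
from typing import Dict, List
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler

class JSONContentHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: List[str], model_kwargs: Dict) -> bytes:
        # Serialize the input texts (plus any model kwargs) into the request body.
        return json.dumps({"text_inputs": prompt, **model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> List[List[float]]:
        # Parse the endpoint response back into one vector per input text.
        return json.loads(output.decode("utf-8"))["embedding"]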
{"id": "5393ca2544d6-0", "text": "langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler\u00b6\nclass langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler[source]\u00b6\nBases: ContentHandlerBase[List[str], List[List[float]]]\nContent handler for embedding models.\nMethods\n__init__()\ntransform_input(prompt,\u00a0model_kwargs)\nTransforms the input to a format that the model can accept as the request Body.\ntransform_output(output)\nTransforms the output from the model to the string that the LLM class expects.\nAttributes\naccepts\nThe MIME type of the response data returned from the endpoint\ncontent_type\nThe MIME type of the input data passed to the endpoint\nabstract transform_input(prompt: INPUT_TYPE, model_kwargs: Dict) \u2192 bytes\u00b6\nTransforms the input to a format that the model can accept\nas the request Body. Should return bytes or a seekable file-like\nobject in the format specified in the content_type\nrequest header.\nabstract transform_output(output: bytes) \u2192 OUTPUT_TYPE\u00b6\nTransforms the output from the model to the string that\nthe LLM class expects.\naccepts: Optional[str] = 'text/plain'\u00b6\nThe MIME type of the response data returned from the endpoint\ncontent_type: Optional[str] = 'text/plain'\u00b6\nThe MIME type of the input data passed to the endpoint", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler.html"} {"id": "c2f9bc9d3f9d-0", "text": "langchain.embeddings.self_hosted.SelfHostedEmbeddings\u00b6\nclass langchain.embeddings.self_hosted.SelfHostedEmbeddings(*, cache: ~typing.Optional[bool] = None, verbose: bool = None, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, pipeline_ref: ~typing.Any = None, client: ~typing.Any = None, inference_fn: ~typing.Callable = <function _embed_documents>, hardware: ~typing.Any = None, model_load_fn: ~typing.Callable, load_fn_kwargs: ~typing.Optional[dict] = None, model_reqs: ~typing.List[str] = ['./', 'torch'], inference_kwargs: ~typing.Any = None)[source]\u00b6\nBases: SelfHostedPipeline, Embeddings\nRuns custom embedding models on self-hosted remote hardware.\nSupported hardware includes auto-launched instances on AWS, GCP, Azure,\nand Lambda, as well as servers specified\nby IP address and SSH credentials (such as on-prem, or another\ncloud like Paperspace, Coreweave, etc.).\nTo use, you should have the runhouse python package installed.\nExample using a model load function:\nfrom langchain.embeddings import SelfHostedEmbeddings\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\nimport runhouse as rh\ngpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\ndef get_pipeline():\n model_id = \"facebook/bart-large\"", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted.SelfHostedEmbeddings.html"} {"id": "c2f9bc9d3f9d-1", "text": "def get_pipeline():\n model_id = \"facebook/bart-large\"\n tokenizer = AutoTokenizer.from_pretrained(model_id)\n model = AutoModelForCausalLM.from_pretrained(model_id)\n return pipeline(\"feature-extraction\", model=model, tokenizer=tokenizer)\nembeddings = SelfHostedEmbeddings(\n model_load_fn=get_pipeline,\n hardware=gpu,\n model_reqs=[\"./\", \"torch\", \"transformers\"],\n)\nExample passing in a pipeline path:\nimport pickle\nfrom langchain.embeddings import SelfHostedHuggingFaceEmbeddings\nimport runhouse as rh\nfrom transformers import pipeline\ngpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\npipeline = pipeline(model=\"bert-base-uncased\", task=\"feature-extraction\")\nrh.blob(pickle.dumps(pipeline),\n path=\"models/pipeline.pkl\").save().to(gpu, path=\"models\")\nembeddings = SelfHostedHuggingFaceEmbeddings.from_pipeline(\n pipeline=\"models/pipeline.pkl\",\n hardware=gpu,\n model_reqs=[\"./\", \"torch\", \"transformers\"],\n)\nInit the pipeline with an auxiliary function.\nThe load function must be in global scope to be imported\nand run on the server, i.e. 
in a module and not a REPL or closure.\nThen, initialize the remote inference function.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam hardware: Any = None\u00b6\nRemote hardware to send the inference function to.\nparam inference_fn: Callable = <function _embed_documents>\u00b6\nInference function to extract the embeddings on the remote hardware.\nparam inference_kwargs: Any = None\u00b6\nAny kwargs to pass to the model\u2019s inference function.", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted.SelfHostedEmbeddings.html"} {"id": "c2f9bc9d3f9d-2", "text": "Any kwargs to pass to the model\u2019s inference function.\nparam load_fn_kwargs: Optional[dict] = None\u00b6\nKeyword arguments to pass to the model load function.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam model_load_fn: Callable [Required]\u00b6\nFunction to load the model remotely on the server.\nparam model_reqs: List[str] = ['./', 'torch']\u00b6\nRequirements to install on the hardware to run inference with the model.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck the cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted.SelfHostedEmbeddings.html"} {"id": "c2f9bc9d3f9d-3", "text": "API.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the top candidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted.SelfHostedEmbeddings.html"} {"id": "c2f9bc9d3f9d-4", "text": "Asynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the top candidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]\u00b6\nCompute doc embeddings using a HuggingFace transformer model.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]\u00b6\nCompute query embeddings using a HuggingFace transformer model.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nclassmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) \u2192 LLM\u00b6\nInit the SelfHostedPipeline from a pipeline object or string.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted.SelfHostedEmbeddings.html"} {"id": "c2f9bc9d3f9d-5", "text": "Run the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., 
pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in the order they occur in the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted.SelfHostedEmbeddings.html"} {"id": "c2f9bc9d3f9d-6", "text": "stop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text, use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted.SelfHostedEmbeddings.html"} {"id": "c2f9bc9d3f9d-7", "text": "to the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise a deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\nllm.save(file_path=\"path/llm.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\ne.g. [\"langchain\", \"llms\", \"openai\"]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\ne.g. {\"openai_api_key\": \"OPENAI_API_KEY\"}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted.SelfHostedEmbeddings.html"} {"id": "0ff9b74a6633-0", "text": "langchain.embeddings.huggingface_hub.HuggingFaceHubEmbeddings\u00b6\nclass langchain.embeddings.huggingface_hub.HuggingFaceHubEmbeddings(*, client: Any = None, repo_id: str = 'sentence-transformers/all-mpnet-base-v2', task: Optional[str] = 'feature-extraction', model_kwargs: Optional[dict] = None, huggingfacehub_api_token: Optional[str] = None)[source]\u00b6\nBases: BaseModel, Embeddings\nWrapper around HuggingFaceHub embedding models.\nTo use, you should have the huggingface_hub python package installed, and the\nenvironment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass\nit as a named parameter to the constructor.\nExample\nfrom langchain.embeddings import HuggingFaceHubEmbeddings\nrepo_id = \"sentence-transformers/all-mpnet-base-v2\"\nhf = HuggingFaceHubEmbeddings(\n repo_id=repo_id,\n task=\"feature-extraction\",\n huggingfacehub_api_token=\"my-api-key\",\n)\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam huggingfacehub_api_token: Optional[str] = None\u00b6\nparam model_kwargs: Optional[dict] = None\u00b6\nKeyword arguments to pass to the model.\nparam repo_id: str = 'sentence-transformers/all-mpnet-base-v2'\u00b6\nModel name to use.\nparam task: Optional[str] = 'feature-extraction'\u00b6\nTask to call the model with.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]\u00b6\nCall out to HuggingFaceHub\u2019s embedding endpoint for embedding search docs.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns", "source": 
"https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.huggingface_hub.HuggingFaceHubEmbeddings.html"} {"id": "0ff9b74a6633-1", "text": "Parameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]\u00b6\nCall out to HuggingFaceHub\u2019s embedding endpoint for embedding query text.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that api key and python package exists in environment.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.huggingface_hub.HuggingFaceHubEmbeddings.html"} {"id": "6818466a460b-0", "text": "langchain.embeddings.sagemaker_endpoint.SagemakerEndpointEmbeddings\u00b6\nclass langchain.embeddings.sagemaker_endpoint.SagemakerEndpointEmbeddings(*, client: Any = None, endpoint_name: str = '', region_name: str = '', credentials_profile_name: Optional[str] = None, content_handler: EmbeddingsContentHandler, model_kwargs: Optional[Dict] = None, endpoint_kwargs: Optional[Dict] = None)[source]\u00b6\nBases: BaseModel, Embeddings\nWrapper around custom Sagemaker Inference Endpoints.\nTo use, you must supply the endpoint name from your deployed\nSagemaker model & the region where it is deployed.\nTo authenticate, the AWS client uses the following methods to\nautomatically load credentials:\nhttps://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\nIf a specific credential profile should be used, you must pass\nthe name of the profile from the ~/.aws/credentials file that is to be used.\nMake sure the credentials / roles used have the required policies to\naccess the Sagemaker endpoint.\nSee: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam content_handler: langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler [Required]\u00b6\nThe content handler class that provides an input and\noutput transform functions to handle formats between LLM\nand the endpoint.\nparam credentials_profile_name: Optional[str] = None\u00b6\nThe name of the profile in the ~/.aws/credentials or ~/.aws/config files, which\nhas either access keys or role information specified.\nIf not specified, the default credential profile or, if on an EC2 instance,\ncredentials from IMDS will be used.", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.sagemaker_endpoint.SagemakerEndpointEmbeddings.html"} {"id": "6818466a460b-1", "text": "credentials from IMDS will be used.\nSee: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\nparam endpoint_kwargs: Optional[Dict] = None\u00b6\nOptional attributes passed to the invoke_endpoint\nfunction. See `boto3`_. docs for more info.\n.. _boto3: \nparam endpoint_name: str = ''\u00b6\nThe name of the endpoint from the deployed Sagemaker model.\nMust be unique within an AWS Region.\nparam model_kwargs: Optional[Dict] = None\u00b6\nKey word arguments to pass to the model.\nparam region_name: str = ''\u00b6\nThe aws region where the Sagemaker model is deployed, eg. 
embed_documents(texts: List[str], chunk_size: int = 64) \u2192 List[List[float]][source]\u00b6\nCompute doc embeddings using a SageMaker Inference Endpoint.\nParameters\ntexts \u2013 The list of texts to embed.\nchunk_size \u2013 The chunk size defines how many input texts will\nbe grouped together as one request. If None, will use the\nchunk size specified by the class.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]\u00b6\nCompute query embeddings using a SageMaker inference endpoint.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that AWS credentials and the python package exist in the environment.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.sagemaker_endpoint.SagemakerEndpointEmbeddings.html"} {"id": "d0e9a454038d-0", "text": "langchain.embeddings.huggingface.HuggingFaceEmbeddings\u00b6\nclass langchain.embeddings.huggingface.HuggingFaceEmbeddings(*, client: Any = None, model_name: str = 'sentence-transformers/all-mpnet-base-v2', cache_folder: Optional[str] = None, model_kwargs: Dict[str, Any] = None, encode_kwargs: Dict[str, Any] = None)[source]\u00b6\nBases: BaseModel, Embeddings\nWrapper around sentence_transformers embedding models.\nTo use, you should have the sentence_transformers python package installed.\nExample\nfrom langchain.embeddings import HuggingFaceEmbeddings\nmodel_name = \"sentence-transformers/all-mpnet-base-v2\"\nmodel_kwargs = {'device': 'cpu'}\nencode_kwargs = {'normalize_embeddings': False}\nhf = HuggingFaceEmbeddings(\n model_name=model_name,\n model_kwargs=model_kwargs,\n encode_kwargs=encode_kwargs\n)\nInitialize the sentence_transformer.\nparam cache_folder: Optional[str] = None\u00b6\nPath to store models.\nCan also be set by the SENTENCE_TRANSFORMERS_HOME environment variable.\nparam encode_kwargs: Dict[str, Any] [Optional]\u00b6\nKey word arguments to pass when calling the encode method of the model.\nparam model_kwargs: Dict[str, Any] [Optional]\u00b6\nKey word arguments to pass to the model.\nparam model_name: str = 'sentence-transformers/all-mpnet-base-v2'\u00b6\nModel name to use.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]\u00b6\nCompute doc embeddings using a HuggingFace transformer model.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]\u00b6\nCompute query embeddings using a HuggingFace transformer model.\nParameters", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.huggingface.HuggingFaceEmbeddings.html"} {"id": "d0e9a454038d-1", "text": "Compute query embeddings using a HuggingFace transformer model.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.huggingface.HuggingFaceEmbeddings.html"}
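The HuggingFaceEmbeddings class documented above runs fully locally; a short sketch of the two documented methods (inputs illustrative; requires the sentence_transformers package):

from langchain.embeddings import HuggingFaceEmbeddings

hf = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-mpnet-base-v2",
    model_kwargs={"device": "cpu"},
    encode_kwargs={"normalize_embeddings": False},
)

# One vector (a plain list of floats) per document.
doc_vectors = hf.embed_documents(["hello world", "goodbye world"])
query_vector = hf.embed_query("a greeting")

{"id": "7afc995f8f1a-0", "text": "langchain.embeddings.clarifai.ClarifaiEmbeddings\u00b6\nclass langchain.embeddings.clarifai.ClarifaiEmbeddings(*, stub: Any = None, userDataObject: Any = None, model_id: Optional[str] = 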
None, model_version_id: Optional[str] = None, app_id: Optional[str] = None, user_id: Optional[str] = None, pat: Optional[str] = None, api_base: str = 'https://api.clarifai.com')[source]\u00b6\nBases: BaseModel, Embeddings\nWrapper around Clarifai embedding models.\nTo use, you should have the clarifai python package installed, and the\nenvironment variable CLARIFAI_PAT set with your personal access token or pass it\nas a named parameter to the constructor.\nExample\nfrom langchain.embeddings import ClarifaiEmbeddings\nclarifai = ClarifaiEmbeddings(\n model_id=\"embed-english-light-v2.0\", pat=\"my-personal-access-token\"\n)\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_base: str = 'https://api.clarifai.com'\u00b6\nparam app_id: Optional[str] = None\u00b6\nClarifai application id to use.\nparam model_id: Optional[str] = None\u00b6\nModel id to use.\nparam model_version_id: Optional[str] = None\u00b6\nModel version id to use.\nparam pat: Optional[str] = None\u00b6\nparam userDataObject: Any = None\u00b6\nparam user_id: Optional[str] = None\u00b6\nClarifai user id to use.\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]\u00b6\nCall out to Clarifai\u2019s embedding models.\nParameters", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.clarifai.ClarifaiEmbeddings.html"} {"id": "7afc995f8f1a-1", "text": "Call out to Clarifai\u2019s embedding models.\nParameters\ntexts \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nembed_query(text: str) \u2192 List[float][source]\u00b6\nCall out to Clarifai\u2019s embedding models.\nParameters\ntext \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the api key and python package exist in the environment.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.clarifai.ClarifaiEmbeddings.html"} {"id": "309042aacc4c-0", "text": "langchain.env.get_runtime_environment\u00b6\nlangchain.env.get_runtime_environment() \u2192 dict[source]\u00b6\nGet information about the environment.", "source": "https://api.python.langchain.com/en/latest/env/langchain.env.get_runtime_environment.html"}
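get_runtime_environment takes no arguments; a one-line sketch (the exact keys of the returned dict depend on your installation):

from langchain.env import get_runtime_environment

runtime_info = get_runtime_environment()  # plain dict describing the runtime
print(runtime_info)

{"id": "f9856cfcd351-0", "text": "All modules for which code is 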
available\nlangchain.agents.agent\nlangchain.agents.agent_toolkits.azure_cognitive_services.toolkit\nlangchain.agents.agent_toolkits.base\nlangchain.agents.agent_toolkits.csv.base\nlangchain.agents.agent_toolkits.file_management.toolkit\nlangchain.agents.agent_toolkits.gmail.toolkit\nlangchain.agents.agent_toolkits.jira.toolkit\nlangchain.agents.agent_toolkits.json.base\nlangchain.agents.agent_toolkits.json.toolkit\nlangchain.agents.agent_toolkits.nla.tool\nlangchain.agents.agent_toolkits.nla.toolkit\nlangchain.agents.agent_toolkits.office365.toolkit\nlangchain.agents.agent_toolkits.openapi.base\nlangchain.agents.agent_toolkits.openapi.planner\nlangchain.agents.agent_toolkits.openapi.spec\nlangchain.agents.agent_toolkits.openapi.toolkit\nlangchain.agents.agent_toolkits.pandas.base\nlangchain.agents.agent_toolkits.playwright.toolkit\nlangchain.agents.agent_toolkits.powerbi.base\nlangchain.agents.agent_toolkits.powerbi.chat_base\nlangchain.agents.agent_toolkits.powerbi.toolkit\nlangchain.agents.agent_toolkits.python.base\nlangchain.agents.agent_toolkits.spark.base\nlangchain.agents.agent_toolkits.spark_sql.base\nlangchain.agents.agent_toolkits.spark_sql.toolkit\nlangchain.agents.agent_toolkits.sql.base\nlangchain.agents.agent_toolkits.sql.toolkit\nlangchain.agents.agent_toolkits.vectorstore.base\nlangchain.agents.agent_toolkits.vectorstore.toolkit\nlangchain.agents.agent_toolkits.zapier.toolkit\nlangchain.agents.agent_types\nlangchain.agents.chat.base\nlangchain.agents.chat.output_parser\nlangchain.agents.conversational.base\nlangchain.agents.conversational.output_parser\nlangchain.agents.conversational_chat.base\nlangchain.agents.conversational_chat.output_parser", "source": "https://api.python.langchain.com/en/latest/_modules/index.html"} {"id": "f9856cfcd351-1", "text": 
"langchain.agents.conversational_chat.output_parser\nlangchain.agents.initialize\nlangchain.agents.load_tools\nlangchain.agents.loading\nlangchain.agents.mrkl.base\nlangchain.agents.mrkl.output_parser\nlangchain.agents.openai_functions_agent.base\nlangchain.agents.openai_functions_multi_agent.base\nlangchain.agents.react.base\nlangchain.agents.react.output_parser\nlangchain.agents.schema\nlangchain.agents.self_ask_with_search.base\nlangchain.agents.self_ask_with_search.output_parser\nlangchain.agents.structured_chat.base\nlangchain.agents.structured_chat.output_parser\nlangchain.agents.tools\nlangchain.agents.utils\nlangchain.cache\nlangchain.callbacks.aim_callback\nlangchain.callbacks.argilla_callback\nlangchain.callbacks.arize_callback\nlangchain.callbacks.arthur_callback\nlangchain.callbacks.base\nlangchain.callbacks.clearml_callback\nlangchain.callbacks.comet_ml_callback\nlangchain.callbacks.context_callback\nlangchain.callbacks.file\nlangchain.callbacks.flyte_callback\nlangchain.callbacks.human\nlangchain.callbacks.infino_callback\nlangchain.callbacks.manager\nlangchain.callbacks.mlflow_callback\nlangchain.callbacks.openai_info\nlangchain.callbacks.promptlayer_callback\nlangchain.callbacks.stdout\nlangchain.callbacks.streaming_aiter\nlangchain.callbacks.streaming_aiter_final_only\nlangchain.callbacks.streaming_stdout\nlangchain.callbacks.streaming_stdout_final_only\nlangchain.callbacks.streamlit.__init__\nlangchain.callbacks.streamlit.mutable_expander\nlangchain.callbacks.streamlit.streamlit_callback_handler\nlangchain.callbacks.tracers.base\nlangchain.callbacks.tracers.evaluation\nlangchain.callbacks.tracers.langchain\nlangchain.callbacks.tracers.langchain_v1\nlangchain.callbacks.tracers.run_collector\nlangchain.callbacks.tracers.schemas\nlangchain.callbacks.tracers.stdout\nlangchain.callbacks.tracers.wandb\nlangchain.callbacks.utils\nlangchain.callbacks.wandb_callback", "source": "https://api.python.langchain.com/en/latest/_modules/index.html"} {"id": "f9856cfcd351-2", "text": 
"langchain.callbacks.utils\nlangchain.callbacks.wandb_callback\nlangchain.callbacks.whylabs_callback\nlangchain.chains.api.base\nlangchain.chains.api.openapi.chain\nlangchain.chains.api.openapi.requests_chain\nlangchain.chains.api.openapi.response_chain\nlangchain.chains.base\nlangchain.chains.combine_documents.base\nlangchain.chains.combine_documents.map_reduce\nlangchain.chains.combine_documents.map_rerank\nlangchain.chains.combine_documents.reduce\nlangchain.chains.combine_documents.refine\nlangchain.chains.combine_documents.stuff\nlangchain.chains.constitutional_ai.base\nlangchain.chains.constitutional_ai.models\nlangchain.chains.conversation.base\nlangchain.chains.conversational_retrieval.base\nlangchain.chains.flare.base\nlangchain.chains.flare.prompts\nlangchain.chains.graph_qa.base\nlangchain.chains.graph_qa.cypher\nlangchain.chains.graph_qa.hugegraph\nlangchain.chains.graph_qa.kuzu\nlangchain.chains.graph_qa.nebulagraph\nlangchain.chains.graph_qa.sparql\nlangchain.chains.hyde.base\nlangchain.chains.llm\nlangchain.chains.llm_bash.base\nlangchain.chains.llm_bash.prompt\nlangchain.chains.llm_checker.base\nlangchain.chains.llm_math.base\nlangchain.chains.llm_requests\nlangchain.chains.llm_summarization_checker.base\nlangchain.chains.loading\nlangchain.chains.mapreduce\nlangchain.chains.moderation\nlangchain.chains.natbot.base\nlangchain.chains.natbot.crawler\nlangchain.chains.openai_functions.base\nlangchain.chains.openai_functions.citation_fuzzy_match\nlangchain.chains.openai_functions.extraction\nlangchain.chains.openai_functions.openapi\nlangchain.chains.openai_functions.qa_with_structure", "source": "https://api.python.langchain.com/en/latest/_modules/index.html"} {"id": "f9856cfcd351-3", "text": "langchain.chains.openai_functions.qa_with_structure\nlangchain.chains.openai_functions.tagging\nlangchain.chains.openai_functions.utils\nlangchain.chains.pal.base\nlangchain.chains.prompt_selector\nlangchain.chains.qa_generation.base\nlangchain.chains.qa_with_sources.base\nlangchain.chains.qa_with_sources.loading\nlangchain.chains.qa_with_sources.retrieval\nlangchain.chains.qa_with_sources.vector_db\nlangchain.chains.query_constructor.base\nlangchain.chains.query_constructor.ir\nlangchain.chains.query_constructor.parser\nlangchain.chains.query_constructor.schema\nlangchain.chains.question_answering.__init__\nlangchain.chains.retrieval_qa.base\nlangchain.chains.router.base\nlangchain.chains.router.embedding_router\nlangchain.chains.router.llm_router\nlangchain.chains.router.multi_prompt\nlangchain.chains.router.multi_retrieval_qa\nlangchain.chains.sequential\nlangchain.chains.sql_database.base\nlangchain.chains.summarize.__init__\nlangchain.chains.transform\nlangchain.chat_models.anthropic\nlangchain.chat_models.azure_openai\nlangchain.chat_models.base\nlangchain.chat_models.fake\nlangchain.chat_models.google_palm\nlangchain.chat_models.human\nlangchain.chat_models.jinachat\nlangchain.chat_models.openai\nlangchain.chat_models.promptlayer_openai\nlangchain.chat_models.vertexai\nlangchain.client.runner_utils\nlangchain.docstore.arbitrary_fn\nlangchain.docstore.base\nlangchain.docstore.in_memory\nlangchain.docstore.wikipedia\nlangchain.document_loaders.acreom\nlangchain.document_loaders.airbyte_json\nlangchain.document_loaders.airtable\nlangchain.document_loaders.apify_dataset\nlangchain.document_loaders.arxiv\nlangchain.document_loaders.azlyrics\nlangchain.document_loaders.azure_blob_storage_container", "source": "https://api.python.langchain.com/en/latest/_modules/index.html"} {"id": 
"f9856cfcd351-4", "text": "langchain.document_loaders.azlyrics\nlangchain.document_loaders.azure_blob_storage_container\nlangchain.document_loaders.azure_blob_storage_file\nlangchain.document_loaders.base\nlangchain.document_loaders.bibtex\nlangchain.document_loaders.bigquery\nlangchain.document_loaders.bilibili\nlangchain.document_loaders.blackboard\nlangchain.document_loaders.blob_loaders.file_system\nlangchain.document_loaders.blob_loaders.schema\nlangchain.document_loaders.blob_loaders.youtube_audio\nlangchain.document_loaders.blockchain\nlangchain.document_loaders.brave_search\nlangchain.document_loaders.chatgpt\nlangchain.document_loaders.college_confidential\nlangchain.document_loaders.confluence\nlangchain.document_loaders.conllu\nlangchain.document_loaders.csv_loader\nlangchain.document_loaders.cube_semantic\nlangchain.document_loaders.dataframe\nlangchain.document_loaders.diffbot\nlangchain.document_loaders.directory\nlangchain.document_loaders.discord\nlangchain.document_loaders.docugami\nlangchain.document_loaders.duckdb_loader\nlangchain.document_loaders.email\nlangchain.document_loaders.embaas\nlangchain.document_loaders.epub\nlangchain.document_loaders.evernote\nlangchain.document_loaders.excel\nlangchain.document_loaders.facebook_chat\nlangchain.document_loaders.fauna\nlangchain.document_loaders.figma\nlangchain.document_loaders.gcs_directory\nlangchain.document_loaders.gcs_file\nlangchain.document_loaders.generic\nlangchain.document_loaders.git\nlangchain.document_loaders.gitbook\nlangchain.document_loaders.github\nlangchain.document_loaders.googledrive\nlangchain.document_loaders.gutenberg\nlangchain.document_loaders.helpers\nlangchain.document_loaders.hn\nlangchain.document_loaders.html\nlangchain.document_loaders.html_bs\nlangchain.document_loaders.hugging_face_dataset", "source": "https://api.python.langchain.com/en/latest/_modules/index.html"} {"id": "f9856cfcd351-5", "text": 
"langchain.document_loaders.html_bs\nlangchain.document_loaders.hugging_face_dataset\nlangchain.document_loaders.ifixit\nlangchain.document_loaders.image\nlangchain.document_loaders.image_captions\nlangchain.document_loaders.imsdb\nlangchain.document_loaders.iugu\nlangchain.document_loaders.joplin\nlangchain.document_loaders.json_loader\nlangchain.document_loaders.larksuite\nlangchain.document_loaders.markdown\nlangchain.document_loaders.mastodon\nlangchain.document_loaders.max_compute\nlangchain.document_loaders.mediawikidump\nlangchain.document_loaders.merge\nlangchain.document_loaders.mhtml\nlangchain.document_loaders.modern_treasury\nlangchain.document_loaders.notebook\nlangchain.document_loaders.notion\nlangchain.document_loaders.notiondb\nlangchain.document_loaders.obsidian\nlangchain.document_loaders.odt\nlangchain.document_loaders.onedrive\nlangchain.document_loaders.onedrive_file\nlangchain.document_loaders.open_city_data\nlangchain.document_loaders.org_mode\nlangchain.document_loaders.parsers.audio\nlangchain.document_loaders.parsers.generic\nlangchain.document_loaders.parsers.grobid\nlangchain.document_loaders.parsers.html.bs4\nlangchain.document_loaders.parsers.language.code_segmenter\nlangchain.document_loaders.parsers.language.javascript\nlangchain.document_loaders.parsers.language.language_parser\nlangchain.document_loaders.parsers.language.python\nlangchain.document_loaders.parsers.pdf\nlangchain.document_loaders.parsers.registry\nlangchain.document_loaders.parsers.txt\nlangchain.document_loaders.pdf\nlangchain.document_loaders.powerpoint\nlangchain.document_loaders.psychic\nlangchain.document_loaders.pyspark_dataframe\nlangchain.document_loaders.python\nlangchain.document_loaders.readthedocs\nlangchain.document_loaders.recursive_url_loader\nlangchain.document_loaders.reddit", "source": "https://api.python.langchain.com/en/latest/_modules/index.html"} {"id": "f9856cfcd351-6", "text": 
"langchain.document_loaders.recursive_url_loader\nlangchain.document_loaders.reddit\nlangchain.document_loaders.roam\nlangchain.document_loaders.rst\nlangchain.document_loaders.rtf\nlangchain.document_loaders.s3_directory\nlangchain.document_loaders.s3_file\nlangchain.document_loaders.sitemap\nlangchain.document_loaders.slack_directory\nlangchain.document_loaders.snowflake_loader\nlangchain.document_loaders.spreedly\nlangchain.document_loaders.srt\nlangchain.document_loaders.stripe\nlangchain.document_loaders.telegram\nlangchain.document_loaders.tencent_cos_directory\nlangchain.document_loaders.tencent_cos_file\nlangchain.document_loaders.text\nlangchain.document_loaders.tomarkdown\nlangchain.document_loaders.toml\nlangchain.document_loaders.trello\nlangchain.document_loaders.twitter\nlangchain.document_loaders.unstructured\nlangchain.document_loaders.url\nlangchain.document_loaders.url_playwright\nlangchain.document_loaders.url_selenium\nlangchain.document_loaders.weather\nlangchain.document_loaders.web_base\nlangchain.document_loaders.whatsapp_chat\nlangchain.document_loaders.wikipedia\nlangchain.document_loaders.word_document\nlangchain.document_loaders.xml\nlangchain.document_loaders.youtube\nlangchain.document_transformers\nlangchain.embeddings.aleph_alpha\nlangchain.embeddings.base\nlangchain.embeddings.bedrock\nlangchain.embeddings.clarifai\nlangchain.embeddings.cohere\nlangchain.embeddings.dashscope\nlangchain.embeddings.deepinfra\nlangchain.embeddings.elasticsearch\nlangchain.embeddings.embaas\nlangchain.embeddings.fake\nlangchain.embeddings.google_palm\nlangchain.embeddings.huggingface\nlangchain.embeddings.huggingface_hub\nlangchain.embeddings.jina\nlangchain.embeddings.llamacpp\nlangchain.embeddings.minimax\nlangchain.embeddings.modelscope_hub", "source": "https://api.python.langchain.com/en/latest/_modules/index.html"} {"id": "f9856cfcd351-7", "text": 
"langchain.embeddings.minimax\nlangchain.embeddings.modelscope_hub\nlangchain.embeddings.mosaicml\nlangchain.embeddings.octoai_embeddings\nlangchain.embeddings.openai\nlangchain.embeddings.sagemaker_endpoint\nlangchain.embeddings.self_hosted\nlangchain.embeddings.self_hosted_hugging_face\nlangchain.embeddings.spacy_embeddings\nlangchain.embeddings.tensorflow_hub\nlangchain.embeddings.vertexai\nlangchain.env\nlangchain.evaluation.agents.trajectory_eval_chain\nlangchain.evaluation.comparison.eval_chain\nlangchain.evaluation.criteria.eval_chain\nlangchain.evaluation.embedding_distance.base\nlangchain.evaluation.loading\nlangchain.evaluation.qa.eval_chain\nlangchain.evaluation.qa.generate_chain\nlangchain.evaluation.run_evaluators.base\nlangchain.evaluation.run_evaluators.implementations\nlangchain.evaluation.run_evaluators.loading\nlangchain.evaluation.run_evaluators.string_run_evaluator\nlangchain.evaluation.schema\nlangchain.evaluation.string_distance.base\nlangchain.example_generator\nlangchain.experimental.autonomous_agents.autogpt.memory\nlangchain.experimental.autonomous_agents.autogpt.output_parser\nlangchain.experimental.autonomous_agents.autogpt.prompt\nlangchain.experimental.autonomous_agents.autogpt.prompt_generator\nlangchain.experimental.autonomous_agents.baby_agi.baby_agi\nlangchain.experimental.autonomous_agents.baby_agi.task_creation\nlangchain.experimental.autonomous_agents.baby_agi.task_execution\nlangchain.experimental.autonomous_agents.baby_agi.task_prioritization\nlangchain.experimental.generative_agents.generative_agent\nlangchain.experimental.generative_agents.memory\nlangchain.experimental.llms.jsonformer_decoder\nlangchain.experimental.llms.rellm_decoder\nlangchain.experimental.plan_and_execute.agent_executor\nlangchain.experimental.plan_and_execute.executors.agent_executor\nlangchain.experimental.plan_and_execute.executors.base\nlangchain.experimental.plan_and_execute.planners.base", "source": "https://api.python.langchain.com/en/latest/_modules/index.html"} {"id": "f9856cfcd351-8", "text": "langchain.experimental.plan_and_execute.executors.base\nlangchain.experimental.plan_and_execute.planners.base\nlangchain.experimental.plan_and_execute.planners.chat_planner\nlangchain.experimental.plan_and_execute.schema\nlangchain.formatting\nlangchain.graphs.networkx_graph\nlangchain.indexes.graph\nlangchain.indexes.vectorstore\nlangchain.input\nlangchain.llms.ai21\nlangchain.llms.aleph_alpha\nlangchain.llms.amazon_api_gateway\nlangchain.llms.anthropic\nlangchain.llms.anyscale\nlangchain.llms.aviary\nlangchain.llms.azureml_endpoint\nlangchain.llms.bananadev\nlangchain.llms.base\nlangchain.llms.baseten\nlangchain.llms.beam\nlangchain.llms.bedrock\nlangchain.llms.cerebriumai\nlangchain.llms.clarifai\nlangchain.llms.cohere\nlangchain.llms.ctransformers\nlangchain.llms.databricks\nlangchain.llms.deepinfra\nlangchain.llms.fake\nlangchain.llms.forefrontai\nlangchain.llms.google_palm\nlangchain.llms.gooseai\nlangchain.llms.gpt4all\nlangchain.llms.huggingface_endpoint\nlangchain.llms.huggingface_hub\nlangchain.llms.huggingface_pipeline\nlangchain.llms.huggingface_text_gen_inference\nlangchain.llms.human\nlangchain.llms.llamacpp\nlangchain.llms.loading\nlangchain.llms.manifest\nlangchain.llms.modal\nlangchain.llms.mosaicml\nlangchain.llms.nlpcloud\nlangchain.llms.octoai_endpoint\nlangchain.llms.openai\nlangchain.llms.openllm\nlangchain.llms.openlm\nlangchain.llms.petals\nlangchain.llms.pipelineai\nlangchain.llms.predictionguard", "source": 
"https://api.python.langchain.com/en/latest/_modules/index.html"} {"id": "f9856cfcd351-9", "text": "langchain.llms.pipelineai\nlangchain.llms.predictionguard\nlangchain.llms.promptlayer_openai\nlangchain.llms.replicate\nlangchain.llms.rwkv\nlangchain.llms.sagemaker_endpoint\nlangchain.llms.self_hosted\nlangchain.llms.self_hosted_hugging_face\nlangchain.llms.stochasticai\nlangchain.llms.textgen\nlangchain.llms.utils\nlangchain.llms.vertexai\nlangchain.llms.writer\nlangchain.load.dump\nlangchain.load.load\nlangchain.load.serializable\nlangchain.math_utils\nlangchain.memory.buffer\nlangchain.memory.buffer_window\nlangchain.memory.chat_memory\nlangchain.memory.chat_message_histories.cassandra\nlangchain.memory.chat_message_histories.cosmos_db\nlangchain.memory.chat_message_histories.dynamodb\nlangchain.memory.chat_message_histories.file\nlangchain.memory.chat_message_histories.firestore\nlangchain.memory.chat_message_histories.in_memory\nlangchain.memory.chat_message_histories.momento\nlangchain.memory.chat_message_histories.mongodb\nlangchain.memory.chat_message_histories.postgres\nlangchain.memory.chat_message_histories.redis\nlangchain.memory.chat_message_histories.sql\nlangchain.memory.chat_message_histories.zep\nlangchain.memory.combined\nlangchain.memory.entity\nlangchain.memory.kg\nlangchain.memory.motorhead_memory\nlangchain.memory.readonly\nlangchain.memory.simple\nlangchain.memory.summary\nlangchain.memory.summary_buffer\nlangchain.memory.token_buffer\nlangchain.memory.utils\nlangchain.memory.vectorstore\nlangchain.output_parsers.boolean\nlangchain.output_parsers.combining\nlangchain.output_parsers.datetime\nlangchain.output_parsers.enum\nlangchain.output_parsers.fix\nlangchain.output_parsers.json\nlangchain.output_parsers.list\nlangchain.output_parsers.loading\nlangchain.output_parsers.openai_functions\nlangchain.output_parsers.pydantic", "source": "https://api.python.langchain.com/en/latest/_modules/index.html"} {"id": "f9856cfcd351-10", "text": 
"langchain.output_parsers.openai_functions\nlangchain.output_parsers.pydantic\nlangchain.output_parsers.rail_parser\nlangchain.output_parsers.regex\nlangchain.output_parsers.regex_dict\nlangchain.output_parsers.retry\nlangchain.output_parsers.structured\nlangchain.prompts.base\nlangchain.prompts.chat\nlangchain.prompts.example_selector.base\nlangchain.prompts.example_selector.length_based\nlangchain.prompts.example_selector.ngram_overlap\nlangchain.prompts.example_selector.semantic_similarity\nlangchain.prompts.few_shot\nlangchain.prompts.few_shot_with_templates\nlangchain.prompts.loading\nlangchain.prompts.pipeline\nlangchain.prompts.prompt\nlangchain.requests\nlangchain.retrievers.arxiv\nlangchain.retrievers.azure_cognitive_search\nlangchain.retrievers.chaindesk\nlangchain.retrievers.chatgpt_plugin_retriever\nlangchain.retrievers.contextual_compression\nlangchain.retrievers.databerry\nlangchain.retrievers.docarray\nlangchain.retrievers.document_compressors.base\nlangchain.retrievers.document_compressors.chain_extract\nlangchain.retrievers.document_compressors.chain_filter\nlangchain.retrievers.document_compressors.cohere_rerank\nlangchain.retrievers.document_compressors.embeddings_filter\nlangchain.retrievers.elastic_search_bm25\nlangchain.retrievers.kendra\nlangchain.retrievers.knn\nlangchain.retrievers.llama_index\nlangchain.retrievers.merger_retriever\nlangchain.retrievers.metal\nlangchain.retrievers.milvus\nlangchain.retrievers.multi_query\nlangchain.retrievers.pinecone_hybrid_search\nlangchain.retrievers.pubmed\nlangchain.retrievers.remote_retriever\nlangchain.retrievers.self_query.base\nlangchain.retrievers.self_query.chroma", "source": "https://api.python.langchain.com/en/latest/_modules/index.html"} {"id": "f9856cfcd351-11", "text": "langchain.retrievers.self_query.base\nlangchain.retrievers.self_query.chroma\nlangchain.retrievers.self_query.myscale\nlangchain.retrievers.self_query.pinecone\nlangchain.retrievers.self_query.qdrant\nlangchain.retrievers.self_query.weaviate\nlangchain.retrievers.svm\nlangchain.retrievers.tfidf\nlangchain.retrievers.time_weighted_retriever\nlangchain.retrievers.vespa_retriever\nlangchain.retrievers.weaviate_hybrid_search\nlangchain.retrievers.wikipedia\nlangchain.retrievers.zep\nlangchain.retrievers.zilliz\nlangchain.schema.agent\nlangchain.schema.document\nlangchain.schema.language_model\nlangchain.schema.memory\nlangchain.schema.messages\nlangchain.schema.output\nlangchain.schema.output_parser\nlangchain.schema.prompt\nlangchain.schema.prompt_template\nlangchain.schema.retriever\nlangchain.server\nlangchain.sql_database\nlangchain.text_splitter\nlangchain.tools.arxiv.tool\nlangchain.tools.azure_cognitive_services.form_recognizer\nlangchain.tools.azure_cognitive_services.image_analysis\nlangchain.tools.azure_cognitive_services.speech2text\nlangchain.tools.azure_cognitive_services.text2speech\nlangchain.tools.azure_cognitive_services.utils\nlangchain.tools.base\nlangchain.tools.bing_search.tool\nlangchain.tools.brave_search.tool\nlangchain.tools.convert_to_openai\nlangchain.tools.dataforseo_api_search.tool\nlangchain.tools.ddg_search.tool\nlangchain.tools.file_management.copy\nlangchain.tools.file_management.delete\nlangchain.tools.file_management.file_search\nlangchain.tools.file_management.list_dir\nlangchain.tools.file_management.move\nlangchain.tools.file_management.read\nlangchain.tools.file_management.utils\nlangchain.tools.file_management.write\nlangchain.tools.gmail.base\nlangchain.tools.gmail.create_draft\nlangchain.tools.gmail.get_message", 
"source": "https://api.python.langchain.com/en/latest/_modules/index.html"} {"id": "f9856cfcd351-12", "text": "langchain.tools.gmail.base\nlangchain.tools.gmail.create_draft\nlangchain.tools.gmail.get_message\nlangchain.tools.gmail.get_thread\nlangchain.tools.gmail.search\nlangchain.tools.gmail.send_message\nlangchain.tools.gmail.utils\nlangchain.tools.google_places.tool\nlangchain.tools.google_search.tool\nlangchain.tools.google_serper.tool\nlangchain.tools.graphql.tool\nlangchain.tools.human.tool\nlangchain.tools.ifttt\nlangchain.tools.interaction.tool\nlangchain.tools.jira.tool\nlangchain.tools.json.tool\nlangchain.tools.metaphor_search.tool\nlangchain.tools.office365.base\nlangchain.tools.office365.create_draft_message\nlangchain.tools.office365.events_search\nlangchain.tools.office365.messages_search\nlangchain.tools.office365.send_event\nlangchain.tools.office365.send_message\nlangchain.tools.office365.utils\nlangchain.tools.openapi.utils.api_models\nlangchain.tools.openweathermap.tool\nlangchain.tools.playwright.base\nlangchain.tools.playwright.click\nlangchain.tools.playwright.current_page\nlangchain.tools.playwright.extract_hyperlinks\nlangchain.tools.playwright.extract_text\nlangchain.tools.playwright.get_elements\nlangchain.tools.playwright.navigate\nlangchain.tools.playwright.navigate_back\nlangchain.tools.playwright.utils\nlangchain.tools.plugin\nlangchain.tools.powerbi.tool\nlangchain.tools.pubmed.tool\nlangchain.tools.python.tool\nlangchain.tools.requests.tool\nlangchain.tools.scenexplain.tool\nlangchain.tools.searx_search.tool\nlangchain.tools.shell.tool\nlangchain.tools.sleep.tool\nlangchain.tools.spark_sql.tool\nlangchain.tools.sql_database.tool\nlangchain.tools.steamship_image_generation.tool\nlangchain.tools.steamship_image_generation.utils\nlangchain.tools.vectorstore.tool\nlangchain.tools.wikipedia.tool\nlangchain.tools.wolfram_alpha.tool\nlangchain.tools.youtube.search\nlangchain.tools.zapier.tool\nlangchain.utilities.apify\nlangchain.utilities.arxiv", "source": "https://api.python.langchain.com/en/latest/_modules/index.html"} {"id": "f9856cfcd351-13", "text": 
"langchain.tools.zapier.tool\nlangchain.utilities.apify\nlangchain.utilities.arxiv\nlangchain.utilities.awslambda\nlangchain.utilities.bibtex\nlangchain.utilities.bing_search\nlangchain.utilities.brave_search\nlangchain.utilities.dataforseo_api_search\nlangchain.utilities.duckduckgo_search\nlangchain.utilities.google_places_api\nlangchain.utilities.google_search\nlangchain.utilities.google_serper\nlangchain.utilities.graphql\nlangchain.utilities.jira\nlangchain.utilities.loading\nlangchain.utilities.metaphor_search\nlangchain.utilities.openapi\nlangchain.utilities.openweathermap\nlangchain.utilities.powerbi\nlangchain.utilities.pupmed\nlangchain.utilities.python\nlangchain.utilities.scenexplain\nlangchain.utilities.searx_search\nlangchain.utilities.serpapi\nlangchain.utilities.twilio\nlangchain.utilities.vertexai\nlangchain.utilities.wikipedia\nlangchain.utilities.wolfram_alpha\nlangchain.utilities.zapier\nlangchain.utils\nlangchain.vectorstores.alibabacloud_opensearch\nlangchain.vectorstores.analyticdb\nlangchain.vectorstores.annoy\nlangchain.vectorstores.atlas\nlangchain.vectorstores.awadb\nlangchain.vectorstores.azuresearch\nlangchain.vectorstores.base\nlangchain.vectorstores.cassandra\nlangchain.vectorstores.chroma\nlangchain.vectorstores.clarifai\nlangchain.vectorstores.clickhouse\nlangchain.vectorstores.deeplake\nlangchain.vectorstores.docarray.base\nlangchain.vectorstores.docarray.hnsw\nlangchain.vectorstores.docarray.in_memory\nlangchain.vectorstores.elastic_vector_search\nlangchain.vectorstores.faiss\nlangchain.vectorstores.hologres\nlangchain.vectorstores.lancedb\nlangchain.vectorstores.marqo\nlangchain.vectorstores.matching_engine\nlangchain.vectorstores.milvus\nlangchain.vectorstores.mongodb_atlas\nlangchain.vectorstores.myscale\nlangchain.vectorstores.opensearch_vector_search", "source": "https://api.python.langchain.com/en/latest/_modules/index.html"} {"id": "f9856cfcd351-14", "text": "langchain.vectorstores.myscale\nlangchain.vectorstores.opensearch_vector_search\nlangchain.vectorstores.pgembedding\nlangchain.vectorstores.pgvector\nlangchain.vectorstores.pinecone\nlangchain.vectorstores.qdrant\nlangchain.vectorstores.redis\nlangchain.vectorstores.rocksetdb\nlangchain.vectorstores.singlestoredb\nlangchain.vectorstores.sklearn\nlangchain.vectorstores.starrocks\nlangchain.vectorstores.supabase\nlangchain.vectorstores.tair\nlangchain.vectorstores.tigris\nlangchain.vectorstores.typesense\nlangchain.vectorstores.utils\nlangchain.vectorstores.vectara\nlangchain.vectorstores.weaviate\nlangchain.vectorstores.zilliz\npydantic.config\npydantic.env_settings\npydantic.utils", "source": "https://api.python.langchain.com/en/latest/_modules/index.html"} {"id": "ee466c2e94b0-0", "text": "Source code for langchain.formatting\n\"\"\"Utilities for formatting strings.\"\"\"\nfrom string import Formatter\nfrom typing import Any, List, Mapping, Sequence, Union\n[docs]class StrictFormatter(Formatter):\n \"\"\"A subclass of formatter that checks for extra keys.\"\"\"\n[docs] def check_unused_args(\n self,\n used_args: Sequence[Union[int, str]],\n args: Sequence,\n kwargs: Mapping[str, Any],\n ) -> None:\n \"\"\"Check to see if extra parameters are passed.\"\"\"\n extra = set(kwargs).difference(used_args)\n if extra:\n raise KeyError(extra)\n[docs] def vformat(\n self, format_string: str, args: Sequence, kwargs: Mapping[str, Any]\n ) -> str:\n \"\"\"Check that no arguments are provided.\"\"\"\n if len(args) > 0:\n raise ValueError(\n \"No arguments should be provided, \"\n \"everything should be 
passed as keyword arguments.\"\n )\n return super().vformat(format_string, args, kwargs)\n[docs] def validate_input_variables(\n self, format_string: str, input_variables: List[str]\n ) -> None:\n dummy_inputs = {input_variable: \"foo\" for input_variable in input_variables}\n super().format(format_string, **dummy_inputs)\nformatter = StrictFormatter()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/formatting.html"} {"id": "aea7c4a66cf0-0", "text": "Source code for langchain.server\n\"\"\"Script to run langchain-server locally using docker-compose.\"\"\"\nimport subprocess\nfrom pathlib import Path\nfrom langchainplus_sdk.cli.main import get_docker_compose_command\n[docs]def main() -> None:\n \"\"\"Run the langchain server locally.\"\"\"\n p = Path(__file__).absolute().parent / \"docker-compose.yaml\"\n docker_compose_command = get_docker_compose_command()\n subprocess.run([*docker_compose_command, \"-f\", str(p), \"pull\"])\n subprocess.run([*docker_compose_command, \"-f\", str(p), \"up\"])\nif __name__ == \"__main__\":\n main()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/server.html"} {"id": "f006edbd588c-0", "text": "Source code for langchain.document_transformers\n\"\"\"Transform documents\"\"\"\nfrom typing import Any, Callable, List, Sequence\nimport numpy as np\nfrom pydantic import BaseModel, Field\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.math_utils import cosine_similarity\nfrom langchain.schema import BaseDocumentTransformer, Document\nclass _DocumentWithState(Document):\n \"\"\"Wrapper for a document that includes arbitrary state.\"\"\"\n state: dict = Field(default_factory=dict)\n \"\"\"State associated with the document.\"\"\"\n def to_document(self) -> Document:\n \"\"\"Convert the DocumentWithState to a Document.\"\"\"\n return Document(page_content=self.page_content, metadata=self.metadata)\n @classmethod\n def from_document(cls, doc: Document) -> \"_DocumentWithState\":\n \"\"\"Create a DocumentWithState from a Document.\"\"\"\n if isinstance(doc, cls):\n return doc\n return cls(page_content=doc.page_content, metadata=doc.metadata)\n[docs]def get_stateful_documents(\n documents: Sequence[Document],\n) -> Sequence[_DocumentWithState]:\n \"\"\"Convert a list of documents to a list of documents with state.\n Args:\n documents: The documents to convert.\n Returns:\n A list of documents with state.\n \"\"\"\n return [_DocumentWithState.from_document(doc) for doc in documents]\ndef _filter_similar_embeddings(\n embedded_documents: List[List[float]], similarity_fn: Callable, threshold: float\n) -> List[int]:\n \"\"\"Filter redundant documents based on the similarity of their embeddings.\"\"\"\n similarity = np.tril(similarity_fn(embedded_documents, embedded_documents), k=-1)\n redundant = np.where(similarity > threshold)\n redundant_stacked = np.column_stack(redundant)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_transformers.html"} {"id": "f006edbd588c-1", "text": "redundant_stacked = np.column_stack(redundant)\n redundant_sorted = np.argsort(similarity[redundant])[::-1]\n included_idxs = set(range(len(embedded_documents)))\n for first_idx, second_idx in redundant_stacked[redundant_sorted]:\n if first_idx in included_idxs and second_idx in included_idxs:\n # Default to dropping the second document of any highly similar pair.\n included_idxs.remove(second_idx)\n return list(sorted(included_idxs))\ndef _get_embeddings_from_stateful_docs(\n embeddings: 
Embeddings, documents: Sequence[_DocumentWithState]\n) -> List[List[float]]:\n if len(documents) and \"embedded_doc\" in documents[0].state:\n embedded_documents = [doc.state[\"embedded_doc\"] for doc in documents]\n else:\n embedded_documents = embeddings.embed_documents(\n [d.page_content for d in documents]\n )\n for doc, embedding in zip(documents, embedded_documents):\n doc.state[\"embedded_doc\"] = embedding\n return embedded_documents\ndef _filter_cluster_embeddings(\n embedded_documents: List[List[float]],\n num_clusters: int,\n num_closest: int,\n random_state: int,\n remove_duplicates: bool,\n) -> List[int]:\n \"\"\"Filter documents based on proximity of their embeddings to clusters.\"\"\"\n try:\n from sklearn.cluster import KMeans\n except ImportError:\n raise ValueError(\n \"sklearn package not found, please install it with \"\n \"`pip install scikit-learn`\"\n )\n kmeans = KMeans(n_clusters=num_clusters, random_state=random_state).fit(\n embedded_documents\n )\n closest_indices = []\n # Loop through the number of clusters you have", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_transformers.html"} {"id": "f006edbd588c-2", "text": ")\n closest_indices = []\n # Loop through the number of clusters you have\n for i in range(num_clusters):\n # Get the list of distances from that particular cluster center\n distances = np.linalg.norm(\n embedded_documents - kmeans.cluster_centers_[i], axis=1\n )\n # Find the indices of the two unique closest ones\n # (using argsort to find the smallest 2 distances)\n if remove_duplicates:\n # Only add not duplicated vectors.\n closest_indices_sorted = [\n x\n for x in np.argsort(distances)[:num_closest]\n if x not in closest_indices\n ]\n else:\n # Skip duplicates and add the next closest vector.\n closest_indices_sorted = [\n x for x in np.argsort(distances) if x not in closest_indices\n ][:num_closest]\n # Append that position closest indices list\n closest_indices.extend(closest_indices_sorted)\n return closest_indices\n[docs]class EmbeddingsRedundantFilter(BaseDocumentTransformer, BaseModel):\n \"\"\"Filter that drops redundant documents by comparing their embeddings.\"\"\"\n embeddings: Embeddings\n \"\"\"Embeddings to use for embedding document contents.\"\"\"\n similarity_fn: Callable = cosine_similarity\n \"\"\"Similarity function for comparing documents. 
Function expected to take as input\n two matrices (List[List[float]]) and return a matrix of scores where higher values\n indicate greater similarity.\"\"\"\n similarity_threshold: float = 0.95\n \"\"\"Threshold for determining when two documents are similar enough\n to be considered redundant.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] def transform_documents(\n self, documents: Sequence[Document], **kwargs: Any", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_transformers.html"} {"id": "f006edbd588c-3", "text": "self, documents: Sequence[Document], **kwargs: Any\n ) -> Sequence[Document]:\n \"\"\"Filter down documents.\"\"\"\n stateful_documents = get_stateful_documents(documents)\n embedded_documents = _get_embeddings_from_stateful_docs(\n self.embeddings, stateful_documents\n )\n included_idxs = _filter_similar_embeddings(\n embedded_documents, self.similarity_fn, self.similarity_threshold\n )\n return [stateful_documents[i] for i in sorted(included_idxs)]\n[docs] async def atransform_documents(\n self, documents: Sequence[Document], **kwargs: Any\n ) -> Sequence[Document]:\n raise NotImplementedError\n[docs]class EmbeddingsClusteringFilter(BaseDocumentTransformer, BaseModel):\n \"\"\"Perform K-means clustering on document vectors.\n Returns an arbitrary number of documents closest to center.\"\"\"\n embeddings: Embeddings\n \"\"\"Embeddings to use for embedding document contents.\"\"\"\n num_clusters: int = 5\n \"\"\"Number of clusters. Groups of documents with similar meaning.\"\"\"\n num_closest: int = 1\n \"\"\"The number of closest vectors to return for each cluster center.\"\"\"\n random_state: int = 42\n \"\"\"Controls the random number generator used to initialize the cluster centroids.\n If you set the random_state parameter to None, the KMeans algorithm will use a \n random number generator that is seeded with the current time. This means \n that the results of the KMeans algorithm will be different each time you \n run it.\"\"\"\n sorted: bool = False\n \"\"\"By default results are re-ordered, \"grouping\" them by cluster. If sorted is true,\n results will be ordered by their original position from the retriever.\"\"\"\n remove_duplicates = False", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_transformers.html"} {"id": "f006edbd588c-4", "text": "remove_duplicates = False\n \"\"\" By default duplicated results are skipped and replaced by the next closest \n vector in the cluster. If remove_duplicates is true, no replacement will be done;\n this could dramatically reduce results when there is a lot of overlap between \n clusters.\n \"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] def transform_documents(\n self, documents: Sequence[Document], **kwargs: Any\n ) -> Sequence[Document]:\n \"\"\"Filter down documents.\"\"\"\n stateful_documents = get_stateful_documents(documents)\n embedded_documents = _get_embeddings_from_stateful_docs(\n self.embeddings, stateful_documents\n )\n included_idxs = _filter_cluster_embeddings(\n embedded_documents,\n self.num_clusters,\n self.num_closest,\n self.random_state,\n self.remove_duplicates,\n )\n results = sorted(included_idxs) if self.sorted else included_idxs\n return [stateful_documents[i] for i in results]\n[docs] async def atransform_documents(\n self, documents: Sequence[Document], **kwargs: Any\n ) -> Sequence[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_transformers.html"} {"id": "88e6d621d2c5-0", "text": "Source code for langchain.requests\n\"\"\"Lightweight wrapper around requests library, with async support.\"\"\"\nfrom contextlib import asynccontextmanager\nfrom typing import Any, AsyncGenerator, Dict, Optional\nimport aiohttp\nimport requests\nfrom pydantic import BaseModel, Extra\n[docs]class Requests(BaseModel):\n \"\"\"Wrapper around requests to handle auth and async.\n The main purpose of this wrapper is to handle authentication (by saving\n headers) and enable easy async methods on the same base object.\n \"\"\"\n headers: Optional[Dict[str, str]] = None\n aiosession: Optional[aiohttp.ClientSession] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n[docs] def get(self, url: str, **kwargs: Any) -> requests.Response:\n \"\"\"GET the URL and return the text.\"\"\"\n return requests.get(url, headers=self.headers, **kwargs)\n[docs] def post(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response:\n \"\"\"POST to the URL and return the text.\"\"\"\n return requests.post(url, json=data, headers=self.headers, **kwargs)\n[docs] def patch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response:\n \"\"\"PATCH the URL and return the text.\"\"\"\n return requests.patch(url, json=data, headers=self.headers, **kwargs)\n[docs] def put(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response:\n \"\"\"PUT the URL and return the text.\"\"\"\n return requests.put(url, json=data, headers=self.headers, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/requests.html"} {"id": "88e6d621d2c5-1", "text": "return requests.put(url, json=data, headers=self.headers, **kwargs)\n[docs] def delete(self, url: str, **kwargs: Any) -> requests.Response:\n \"\"\"DELETE the URL and return the text.\"\"\"\n return requests.delete(url, headers=self.headers, **kwargs)\n @asynccontextmanager\n async def _arequest(\n self, method: str, url: str, **kwargs: Any\n ) -> AsyncGenerator[aiohttp.ClientResponse, None]:\n \"\"\"Make an async request.\"\"\"\n if not self.aiosession:\n async with aiohttp.ClientSession() as session:\n async with session.request(\n method, url, headers=self.headers, **kwargs\n ) as response:\n yield response\n else:\n async with self.aiosession.request(\n method, url, headers=self.headers, **kwargs\n ) as 
response:\n yield response\n[docs] @asynccontextmanager\n async def aget(\n self, url: str, **kwargs: Any\n ) -> AsyncGenerator[aiohttp.ClientResponse, None]:\n \"\"\"GET the URL and return the text asynchronously.\"\"\"\n async with self._arequest(\"GET\", url, **kwargs) as response:\n yield response\n[docs] @asynccontextmanager\n async def apost(\n self, url: str, data: Dict[str, Any], **kwargs: Any\n ) -> AsyncGenerator[aiohttp.ClientResponse, None]:\n \"\"\"POST to the URL and return the text asynchronously.\"\"\"\n async with self._arequest(\"POST\", url, json=data, **kwargs) as response:\n yield response\n[docs] @asynccontextmanager\n async def apatch(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/requests.html"} {"id": "88e6d621d2c5-2", "text": "yield response\n[docs] @asynccontextmanager\n async def apatch(\n self, url: str, data: Dict[str, Any], **kwargs: Any\n ) -> AsyncGenerator[aiohttp.ClientResponse, None]:\n \"\"\"PATCH the URL and return the text asynchronously.\"\"\"\n async with self._arequest(\"PATCH\", url, json=data, **kwargs) as response:\n yield response\n[docs] @asynccontextmanager\n async def aput(\n self, url: str, data: Dict[str, Any], **kwargs: Any\n ) -> AsyncGenerator[aiohttp.ClientResponse, None]:\n \"\"\"PUT the URL and return the text asynchronously.\"\"\"\n async with self._arequest(\"PUT\", url, json=data, **kwargs) as response:\n yield response\n[docs] @asynccontextmanager\n async def adelete(\n self, url: str, **kwargs: Any\n ) -> AsyncGenerator[aiohttp.ClientResponse, None]:\n \"\"\"DELETE the URL and return the text asynchronously.\"\"\"\n async with self._arequest(\"DELETE\", url, **kwargs) as response:\n yield response\n[docs]class TextRequestsWrapper(BaseModel):\n \"\"\"Lightweight wrapper around requests library.\n The main purpose of this wrapper is to always return a text output.\n \"\"\"\n headers: Optional[Dict[str, str]] = None\n aiosession: Optional[aiohttp.ClientSession] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def requests(self) -> Requests:\n return Requests(headers=self.headers, aiosession=self.aiosession)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/requests.html"} {"id": "88e6d621d2c5-3", "text": "return Requests(headers=self.headers, aiosession=self.aiosession)\n[docs] def get(self, url: str, **kwargs: Any) -> str:\n \"\"\"GET the URL and return the text.\"\"\"\n return self.requests.get(url, **kwargs).text\n[docs] def post(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:\n \"\"\"POST to the URL and return the text.\"\"\"\n return self.requests.post(url, data, **kwargs).text\n[docs] def patch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:\n \"\"\"PATCH the URL and return the text.\"\"\"\n return self.requests.patch(url, data, **kwargs).text\n[docs] def put(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:\n \"\"\"PUT the URL and return the text.\"\"\"\n return self.requests.put(url, data, **kwargs).text\n[docs] def delete(self, url: str, **kwargs: Any) -> str:\n \"\"\"DELETE the URL and return the text.\"\"\"\n return self.requests.delete(url, **kwargs).text\n[docs] async def aget(self, url: str, **kwargs: Any) -> str:\n \"\"\"GET the URL and return the text asynchronously.\"\"\"\n async with self.requests.aget(url, **kwargs) as response:\n return await response.text()\n[docs] async def apost(self, url: str, data: Dict[str, Any], 
**kwargs: Any) -> str:\n \"\"\"POST to the URL and return the text asynchronously.\"\"\"\n async with self.requests.apost(url, data, **kwargs) as response:\n return await response.text()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/requests.html"} {"id": "88e6d621d2c5-4", "text": "return await response.text()\n[docs] async def apatch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:\n \"\"\"PATCH the URL and return the text asynchronously.\"\"\"\n async with self.requests.apatch(url, data, **kwargs) as response:\n return await response.text()\n[docs] async def aput(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:\n \"\"\"PUT the URL and return the text asynchronously.\"\"\"\n async with self.requests.aput(url, data, **kwargs) as response:\n return await response.text()\n[docs] async def adelete(self, url: str, **kwargs: Any) -> str:\n \"\"\"DELETE the URL and return the text asynchronously.\"\"\"\n async with self.requests.adelete(url, **kwargs) as response:\n return await response.text()\n# For backwards compatibility\nRequestsWrapper = TextRequestsWrapper", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/requests.html"} {"id": "ca5dfb8d2c94-0", "text": "Source code for langchain.utils\n\"\"\"Generic utility functions.\"\"\"\nimport contextlib\nimport datetime\nimport importlib\nimport os\nfrom importlib.metadata import version\nfrom typing import Any, Callable, Dict, List, Optional, Tuple\nfrom packaging.version import parse\nfrom requests import HTTPError, Response\n[docs]def get_from_dict_or_env(\n data: Dict[str, Any], key: str, env_key: str, default: Optional[str] = None\n) -> str:\n \"\"\"Get a value from a dictionary or an environment variable.\"\"\"\n if key in data and data[key]:\n return data[key]\n else:\n return get_from_env(key, env_key, default=default)\n[docs]def get_from_env(key: str, env_key: str, default: Optional[str] = None) -> str:\n \"\"\"Get a value from a dictionary or an environment variable.\"\"\"\n if env_key in os.environ and os.environ[env_key]:\n return os.environ[env_key]\n elif default is not None:\n return default\n else:\n raise ValueError(\n f\"Did not find {key}, please add an environment variable\"\n f\" `{env_key}` which contains it, or pass\"\n f\" `{key}` as a named parameter.\"\n )\n[docs]def xor_args(*arg_groups: Tuple[str, ...]) -> Callable:\n \"\"\"Validate specified keyword args are mutually exclusive.\"\"\"\n def decorator(func: Callable) -> Callable:\n def wrapper(*args: Any, **kwargs: Any) -> Callable:\n \"\"\"Validate exactly one arg in each group is not None.\"\"\"\n counts = [\n sum(1 for arg in arg_group if kwargs.get(arg) is not None)\n for arg_group in arg_groups\n ]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utils.html"} {"id": "ca5dfb8d2c94-1", "text": "for arg_group in arg_groups\n ]\n invalid_groups = [i for i, count in enumerate(counts) if count != 1]\n if invalid_groups:\n invalid_group_names = [\", \".join(arg_groups[i]) for i in invalid_groups]\n raise ValueError(\n \"Exactly one argument in each of the following\"\n \" groups must be defined:\"\n f\" {', '.join(invalid_group_names)}\"\n )\n return func(*args, **kwargs)\n return wrapper\n return decorator\n[docs]def raise_for_status_with_text(response: Response) -> None:\n \"\"\"Raise an error with the response text.\"\"\"\n try:\n response.raise_for_status()\n except HTTPError as e:\n raise ValueError(response.text) from e\n[docs]def stringify_value(val: Any) -> str:\n 
\"\"\"Stringify a value.\n Args:\n val: The value to stringify.\n Returns:\n str: The stringified value.\n \"\"\"\n if isinstance(val, str):\n return val\n elif isinstance(val, dict):\n return \"\\n\" + stringify_dict(val)\n elif isinstance(val, list):\n return \"\\n\".join(stringify_value(v) for v in val)\n else:\n return str(val)\n[docs]def stringify_dict(data: dict) -> str:\n \"\"\"Stringify a dictionary.\n Args:\n data: The dictionary to stringify.\n Returns:\n str: The stringified dictionary.\n \"\"\"\n text = \"\"\n for key, value in data.items():\n text += key + \": \" + stringify_value(value) + \"\\n\"\n return text\n[docs]def comma_list(items: List[Any]) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utils.html"} {"id": "ca5dfb8d2c94-2", "text": "return text\n[docs]def comma_list(items: List[Any]) -> str:\n return \", \".join(str(item) for item in items)\n[docs]@contextlib.contextmanager\ndef mock_now(dt_value): # type: ignore\n \"\"\"Context manager for mocking out datetime.now() in unit tests.\n Example:\n with mock_now(datetime.datetime(2011, 2, 3, 10, 11)):\n assert datetime.datetime.now() == datetime.datetime(2011, 2, 3, 10, 11)\n \"\"\"\n class MockDateTime(datetime.datetime):\n \"\"\"Mock datetime.datetime.now() with a fixed datetime.\"\"\"\n @classmethod\n def now(cls): # type: ignore\n # Create a copy of dt_value.\n return datetime.datetime(\n dt_value.year,\n dt_value.month,\n dt_value.day,\n dt_value.hour,\n dt_value.minute,\n dt_value.second,\n dt_value.microsecond,\n dt_value.tzinfo,\n )\n real_datetime = datetime.datetime\n datetime.datetime = MockDateTime\n try:\n yield datetime.datetime\n finally:\n datetime.datetime = real_datetime\n[docs]def guard_import(\n module_name: str, *, pip_name: Optional[str] = None, package: Optional[str] = None\n) -> Any:\n \"\"\"Dynamically imports a module and raises a helpful exception if the module is not\n installed.\"\"\"\n try:\n module = importlib.import_module(module_name, package)\n except ImportError:\n raise ImportError(\n f\"Could not import {module_name} python package. \"\n f\"Please install it with `pip install {pip_name or module_name}`.\"\n )\n return module", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utils.html"} {"id": "ca5dfb8d2c94-3", "text": ")\n return module\n[docs]def check_package_version(\n package: str,\n lt_version: Optional[str] = None,\n lte_version: Optional[str] = None,\n gt_version: Optional[str] = None,\n gte_version: Optional[str] = None,\n) -> None:\n \"\"\"Check the version of a package.\"\"\"\n imported_version = parse(version(package))\n if lt_version is not None and imported_version >= parse(lt_version):\n raise ValueError(\n f\"Expected {package} version to be < {lt_version}. Received \"\n f\"{imported_version}.\"\n )\n if lte_version is not None and imported_version > parse(lte_version):\n raise ValueError(\n f\"Expected {package} version to be <= {lte_version}. Received \"\n f\"{imported_version}.\"\n )\n if gt_version is not None and imported_version <= parse(gt_version):\n raise ValueError(\n f\"Expected {package} version to be > {gt_version}. Received \"\n f\"{imported_version}.\"\n )\n if gte_version is not None and imported_version < parse(gte_version):\n raise ValueError(\n f\"Expected {package} version to be >= {gte_version}. 
Received \"\n f\"{imported_version}.\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utils.html"} {"id": "9c0a702e9113-0", "text": "Source code for langchain.text_splitter\n\"\"\"Functionality for splitting text.\"\"\"\nfrom __future__ import annotations\nimport copy\nimport logging\nimport re\nfrom abc import ABC, abstractmethod\nfrom dataclasses import dataclass\nfrom enum import Enum\nfrom typing import (\n AbstractSet,\n Any,\n Callable,\n Collection,\n Dict,\n Iterable,\n List,\n Literal,\n Optional,\n Sequence,\n Tuple,\n Type,\n TypedDict,\n TypeVar,\n Union,\n cast,\n)\nfrom langchain.docstore.document import Document\nfrom langchain.schema import BaseDocumentTransformer\nlogger = logging.getLogger(__name__)\nTS = TypeVar(\"TS\", bound=\"TextSplitter\")\ndef _split_text_with_regex(\n text: str, separator: str, keep_separator: bool\n) -> List[str]:\n # Now that we have the separator, split the text\n if separator:\n if keep_separator:\n # The parentheses in the pattern keep the delimiters in the result.\n _splits = re.split(f\"({separator})\", text)\n splits = [_splits[i] + _splits[i + 1] for i in range(1, len(_splits), 2)]\n if len(_splits) % 2 == 0:\n splits += _splits[-1:]\n splits = [_splits[0]] + splits\n else:\n splits = re.split(separator, text)\n else:\n splits = list(text)\n return [s for s in splits if s != \"\"]\n[docs]class TextSplitter(BaseDocumentTransformer, ABC):\n \"\"\"Interface for splitting text into chunks.\"\"\"\n def __init__(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} {"id": "9c0a702e9113-1", "text": "\"\"\"Interface for splitting text into chunks.\"\"\"\n def __init__(\n self,\n chunk_size: int = 4000,\n chunk_overlap: int = 200,\n length_function: Callable[[str], int] = len,\n keep_separator: bool = False,\n add_start_index: bool = False,\n ) -> None:\n \"\"\"Create a new TextSplitter.\n Args:\n chunk_size: Maximum size of chunks to return\n chunk_overlap: Overlap in characters between chunks\n length_function: Function that measures the length of given chunks\n keep_separator: Whether to keep the separator in the chunks\n add_start_index: If `True`, includes chunk's start index in metadata\n \"\"\"\n if chunk_overlap > chunk_size:\n raise ValueError(\n f\"Got a larger chunk overlap ({chunk_overlap}) than chunk size \"\n f\"({chunk_size}), should be smaller.\"\n )\n self._chunk_size = chunk_size\n self._chunk_overlap = chunk_overlap\n self._length_function = length_function\n self._keep_separator = keep_separator\n self._add_start_index = add_start_index\n[docs] @abstractmethod\n def split_text(self, text: str) -> List[str]:\n \"\"\"Split text into multiple components.\"\"\"\n[docs] def create_documents(\n self, texts: List[str], metadatas: Optional[List[dict]] = None\n ) -> List[Document]:\n \"\"\"Create documents from a list of texts.\"\"\"\n _metadatas = metadatas or [{}] * len(texts)\n documents = []\n for i, text in enumerate(texts):\n index = -1\n for chunk in self.split_text(text):\n metadata = copy.deepcopy(_metadatas[i])", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} {"id": "9c0a702e9113-2", "text": "metadata = copy.deepcopy(_metadatas[i])\n if self._add_start_index:\n index = text.find(chunk, index + 1)\n metadata[\"start_index\"] = index\n new_doc = Document(page_content=chunk, metadata=metadata)\n documents.append(new_doc)\n return documents\n[docs] def split_documents(self, documents: Iterable[Document]) 
-> List[Document]:\n \"\"\"Split documents.\"\"\"\n texts, metadatas = [], []\n for doc in documents:\n texts.append(doc.page_content)\n metadatas.append(doc.metadata)\n return self.create_documents(texts, metadatas=metadatas)\n def _join_docs(self, docs: List[str], separator: str) -> Optional[str]:\n text = separator.join(docs)\n text = text.strip()\n if text == \"\":\n return None\n else:\n return text\n def _merge_splits(self, splits: Iterable[str], separator: str) -> List[str]:\n # We now want to combine these smaller pieces into medium size\n # chunks to send to the LLM.\n separator_len = self._length_function(separator)\n docs = []\n current_doc: List[str] = []\n total = 0\n for d in splits:\n _len = self._length_function(d)\n if (\n total + _len + (separator_len if len(current_doc) > 0 else 0)\n > self._chunk_size\n ):\n if total > self._chunk_size:\n logger.warning(\n f\"Created a chunk of size {total}, \"\n f\"which is longer than the specified {self._chunk_size}\"\n )\n if len(current_doc) > 0:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} {"id": "9c0a702e9113-3", "text": ")\n if len(current_doc) > 0:\n doc = self._join_docs(current_doc, separator)\n if doc is not None:\n docs.append(doc)\n # Keep on popping if:\n # - we have a larger chunk than in the chunk overlap\n # - or if we still have any chunks and the length is long\n while total > self._chunk_overlap or (\n total + _len + (separator_len if len(current_doc) > 0 else 0)\n > self._chunk_size\n and total > 0\n ):\n total -= self._length_function(current_doc[0]) + (\n separator_len if len(current_doc) > 1 else 0\n )\n current_doc = current_doc[1:]\n current_doc.append(d)\n total += _len + (separator_len if len(current_doc) > 1 else 0)\n doc = self._join_docs(current_doc, separator)\n if doc is not None:\n docs.append(doc)\n return docs\n[docs] @classmethod\n def from_huggingface_tokenizer(cls, tokenizer: Any, **kwargs: Any) -> TextSplitter:\n \"\"\"Text splitter that uses HuggingFace tokenizer to count length.\"\"\"\n try:\n from transformers import PreTrainedTokenizerBase\n if not isinstance(tokenizer, PreTrainedTokenizerBase):\n raise ValueError(\n \"Tokenizer received was not an instance of PreTrainedTokenizerBase\"\n )\n def _huggingface_tokenizer_length(text: str) -> int:\n return len(tokenizer.encode(text))\n except ImportError:\n raise ValueError(\n \"Could not import transformers python package. \"\n \"Please install it with `pip install transformers`.\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} {"id": "9c0a702e9113-4", "text": "\"Please install it with `pip install transformers`.\"\n )\n return cls(length_function=_huggingface_tokenizer_length, **kwargs)\n[docs] @classmethod\n def from_tiktoken_encoder(\n cls: Type[TS],\n encoding_name: str = \"gpt2\",\n model_name: Optional[str] = None,\n allowed_special: Union[Literal[\"all\"], AbstractSet[str]] = set(),\n disallowed_special: Union[Literal[\"all\"], Collection[str]] = \"all\",\n **kwargs: Any,\n ) -> TS:\n \"\"\"Text splitter that uses tiktoken encoder to count length.\"\"\"\n try:\n import tiktoken\n except ImportError:\n raise ImportError(\n \"Could not import tiktoken python package. \"\n \"This is needed in order to calculate max_tokens_for_prompt. 
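The merging loop above packs small splits into chunks and re-seeds each new chunk with the tail of the previous one. A standalone sketch of that greedy windowing idea (not the library code; separator lengths are ignored for brevity):

# Standalone sketch of the greedy merge-with-overlap strategy used above;
# names and the simplified length accounting are illustrative only.
from typing import List

def merge_with_overlap(pieces: List[str], chunk_size: int, overlap: int) -> List[str]:
    chunks: List[str] = []
    window: List[str] = []
    total = 0
    for piece in pieces:
        if total + len(piece) > chunk_size and window:
            chunks.append(" ".join(window))
            # Drop from the front until the kept tail fits in the overlap budget.
            while total > overlap and window:
                total -= len(window.pop(0))
        window.append(piece)
        total += len(piece)
    if window:
        chunks.append(" ".join(window))
    return chunks

print(merge_with_overlap(["aa", "bb", "cc", "dd"], chunk_size=5, overlap=2))
# -> ['aa bb', 'bb cc', 'cc dd']: each chunk re-uses the tail of the last one.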
\"\n \"Please install it with `pip install tiktoken`.\"\n )\n if model_name is not None:\n enc = tiktoken.encoding_for_model(model_name)\n else:\n enc = tiktoken.get_encoding(encoding_name)\n def _tiktoken_encoder(text: str) -> int:\n return len(\n enc.encode(\n text,\n allowed_special=allowed_special,\n disallowed_special=disallowed_special,\n )\n )\n if issubclass(cls, TokenTextSplitter):\n extra_kwargs = {\n \"encoding_name\": encoding_name,\n \"model_name\": model_name,\n \"allowed_special\": allowed_special,\n \"disallowed_special\": disallowed_special,\n }\n kwargs = {**kwargs, **extra_kwargs}\n return cls(length_function=_tiktoken_encoder, **kwargs)\n[docs] def transform_documents(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} {"id": "9c0a702e9113-5", "text": "[docs] def transform_documents(\n self, documents: Sequence[Document], **kwargs: Any\n ) -> Sequence[Document]:\n \"\"\"Transform sequence of documents by splitting them.\"\"\"\n return self.split_documents(list(documents))\n[docs] async def atransform_documents(\n self, documents: Sequence[Document], **kwargs: Any\n ) -> Sequence[Document]:\n \"\"\"Asynchronously transform a sequence of documents by splitting them.\"\"\"\n raise NotImplementedError\n[docs]class CharacterTextSplitter(TextSplitter):\n \"\"\"Implementation of splitting text that looks at characters.\"\"\"\n def __init__(self, separator: str = \"\\n\\n\", **kwargs: Any) -> None:\n \"\"\"Create a new TextSplitter.\"\"\"\n super().__init__(**kwargs)\n self._separator = separator\n[docs] def split_text(self, text: str) -> List[str]:\n \"\"\"Split incoming text and return chunks.\"\"\"\n # First we naively split the large input into a bunch of smaller ones.\n splits = _split_text_with_regex(text, self._separator, self._keep_separator)\n _separator = \"\" if self._keep_separator else self._separator\n return self._merge_splits(splits, _separator)\n[docs]class LineType(TypedDict):\n \"\"\"Line type as typed dict.\"\"\"\n metadata: Dict[str, str]\n content: str\n[docs]class HeaderType(TypedDict):\n \"\"\"Header type as typed dict.\"\"\"\n level: int\n name: str\n data: str\nclass MarkdownHeaderTextSplitter:\n \"\"\"Implementation of splitting markdown files based on specified headers.\"\"\"\n def __init__(\n self, headers_to_split_on: List[Tuple[str, str]], return_each_line: bool = False\n ):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} {"id": "9c0a702e9113-6", "text": "):\n \"\"\"Create a new MarkdownHeaderTextSplitter.\n Args:\n headers_to_split_on: Headers we want to track\n return_each_line: Return each line w/ associated headers\n \"\"\"\n # Output line-by-line or aggregated into chunks w/ common headers\n self.return_each_line = return_each_line\n # Given the headers we want to split on,\n # (e.g., \"#, ##, etc\") order by length\n self.headers_to_split_on = sorted(\n headers_to_split_on, key=lambda split: len(split[0]), reverse=True\n )\n def aggregate_lines_to_chunks(self, lines: List[LineType]) -> List[Document]:\n \"\"\"Combine lines with common metadata into chunks\n Args:\n lines: Line of text / associated header metadata\n \"\"\"\n aggregated_chunks: List[LineType] = []\n for line in lines:\n if (\n aggregated_chunks\n and aggregated_chunks[-1][\"metadata\"] == line[\"metadata\"]\n ):\n # If the last line in the aggregated list\n # has the same metadata as the current line,\n # append the current content to the last lines's content\n 
aggregated_chunks[-1][\"content\"] += \" \\n\" + line[\"content\"]\n else:\n # Otherwise, append the current line to the aggregated list\n aggregated_chunks.append(line)\n return [\n Document(page_content=chunk[\"content\"], metadata=chunk[\"metadata\"])\n for chunk in aggregated_chunks\n ]\n def split_text(self, text: str) -> List[Document]:\n \"\"\"Split markdown file\n Args:\n text: Markdown file\"\"\"\n # Split the input text by newline character (\"\\n\").\n lines = text.split(\"\\n\")\n # Final output", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} {"id": "9c0a702e9113-7", "text": "lines = text.split(\"\\n\")\n # Final output\n lines_with_metadata: List[LineType] = []\n # Content and metadata of the chunk currently being processed\n current_content: List[str] = []\n current_metadata: Dict[str, str] = {}\n # Keep track of the nested header structure\n # header_stack: List[Dict[str, Union[int, str]]] = []\n header_stack: List[HeaderType] = []\n initial_metadata: Dict[str, str] = {}\n for line in lines:\n stripped_line = line.strip()\n # Check each line against each of the header types (e.g., #, ##)\n for sep, name in self.headers_to_split_on:\n # Check if line starts with a header that we intend to split on\n if stripped_line.startswith(sep) and (\n # Header with no text OR header is followed by space\n # Both are valid conditions that sep is being used a header\n len(stripped_line) == len(sep)\n or stripped_line[len(sep)] == \" \"\n ):\n # Ensure we are tracking the header as metadata\n if name is not None:\n # Get the current header level\n current_header_level = sep.count(\"#\")\n # Pop out headers of lower or same level from the stack\n while (\n header_stack\n and header_stack[-1][\"level\"] >= current_header_level\n ):\n # We have encountered a new header\n # at the same or higher level\n popped_header = header_stack.pop()\n # Clear the metadata for the\n # popped header in initial_metadata\n if popped_header[\"name\"] in initial_metadata:\n initial_metadata.pop(popped_header[\"name\"])\n # Push the current header to the stack", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} {"id": "9c0a702e9113-8", "text": "# Push the current header to the stack\n header: HeaderType = {\n \"level\": current_header_level,\n \"name\": name,\n \"data\": stripped_line[len(sep) :].strip(),\n }\n header_stack.append(header)\n # Update initial_metadata with the current header\n initial_metadata[name] = header[\"data\"]\n # Add the previous line to the lines_with_metadata\n # only if current_content is not empty\n if current_content:\n lines_with_metadata.append(\n {\n \"content\": \"\\n\".join(current_content),\n \"metadata\": current_metadata.copy(),\n }\n )\n current_content.clear()\n break\n else:\n if stripped_line:\n current_content.append(stripped_line)\n elif current_content:\n lines_with_metadata.append(\n {\n \"content\": \"\\n\".join(current_content),\n \"metadata\": current_metadata.copy(),\n }\n )\n current_content.clear()\n current_metadata = initial_metadata.copy()\n if current_content:\n lines_with_metadata.append(\n {\"content\": \"\\n\".join(current_content), \"metadata\": current_metadata}\n )\n # lines_with_metadata has each line with associated header metadata\n # aggregate these into chunks based on common metadata\n if not self.return_each_line:\n return self.aggregate_lines_to_chunks(lines_with_metadata)\n else:\n return [\n Document(page_content=chunk[\"content\"], 
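A minimal sketch of driving the header-splitting logic above; the header tuples and markdown snippet are illustrative:

# Illustrative usage of MarkdownHeaderTextSplitter from this module.
from langchain.text_splitter import MarkdownHeaderTextSplitter

headers_to_split_on = [("#", "Header 1"), ("##", "Header 2")]
md_splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
docs = md_splitter.split_text("# Title\n\nIntro text\n\n## Section\n\nBody text")
# Each returned Document's metadata maps "Header 1" / "Header 2" to the
# heading text that was in scope for that chunk.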
metadata=chunk[\"metadata\"])\n for chunk in lines_with_metadata\n ]\n# should be in newer Python versions (3.10+)\n# @dataclass(frozen=True, kw_only=True, slots=True)\n@dataclass(frozen=True)\nclass Tokenizer:\n chunk_overlap: int", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} {"id": "9c0a702e9113-9", "text": "@dataclass(frozen=True)\nclass Tokenizer:\n chunk_overlap: int\n tokens_per_chunk: int\n decode: Callable[[list[int]], str]\n encode: Callable[[str], List[int]]\n[docs]def split_text_on_tokens(*, text: str, tokenizer: Tokenizer) -> List[str]:\n \"\"\"Split incoming text and return chunks.\"\"\"\n splits: List[str] = []\n input_ids = tokenizer.encode(text)\n start_idx = 0\n cur_idx = min(start_idx + tokenizer.tokens_per_chunk, len(input_ids))\n chunk_ids = input_ids[start_idx:cur_idx]\n while start_idx < len(input_ids):\n splits.append(tokenizer.decode(chunk_ids))\n start_idx += tokenizer.tokens_per_chunk - tokenizer.chunk_overlap\n cur_idx = min(start_idx + tokenizer.tokens_per_chunk, len(input_ids))\n chunk_ids = input_ids[start_idx:cur_idx]\n return splits\n[docs]class TokenTextSplitter(TextSplitter):\n \"\"\"Implementation of splitting text that looks at tokens.\"\"\"\n def __init__(\n self,\n encoding_name: str = \"gpt2\",\n model_name: Optional[str] = None,\n allowed_special: Union[Literal[\"all\"], AbstractSet[str]] = set(),\n disallowed_special: Union[Literal[\"all\"], Collection[str]] = \"all\",\n **kwargs: Any,\n ) -> None:\n \"\"\"Create a new TextSplitter.\"\"\"\n super().__init__(**kwargs)\n try:\n import tiktoken\n except ImportError:\n raise ImportError(\n \"Could not import tiktoken python package. \"\n \"This is needed in order to for TokenTextSplitter. \"\n \"Please install it with `pip install tiktoken`.\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} {"id": "9c0a702e9113-10", "text": "\"Please install it with `pip install tiktoken`.\"\n )\n if model_name is not None:\n enc = tiktoken.encoding_for_model(model_name)\n else:\n enc = tiktoken.get_encoding(encoding_name)\n self._tokenizer = enc\n self._allowed_special = allowed_special\n self._disallowed_special = disallowed_special\n[docs] def split_text(self, text: str) -> List[str]:\n def _encode(_text: str) -> List[int]:\n return self._tokenizer.encode(\n _text,\n allowed_special=self._allowed_special,\n disallowed_special=self._disallowed_special,\n )\n tokenizer = Tokenizer(\n chunk_overlap=self._chunk_overlap,\n tokens_per_chunk=self._chunk_size,\n decode=self._tokenizer.decode,\n encode=_encode,\n )\n return split_text_on_tokens(text=text, tokenizer=tokenizer)\n[docs]class SentenceTransformersTokenTextSplitter(TextSplitter):\n \"\"\"Implementation of splitting text that looks at tokens.\"\"\"\n def __init__(\n self,\n chunk_overlap: int = 50,\n model_name: str = \"sentence-transformers/all-mpnet-base-v2\",\n tokens_per_chunk: Optional[int] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Create a new TextSplitter.\"\"\"\n super().__init__(**kwargs, chunk_overlap=chunk_overlap)\n try:\n from sentence_transformers import SentenceTransformer\n except ImportError:\n raise ImportError(\n \"Could not import sentence_transformer python package. \"\n \"This is needed in order to for SentenceTransformersTokenTextSplitter. 
\"\n \"Please install it with `pip install sentence-transformers`.\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} {"id": "9c0a702e9113-11", "text": "\"Please install it with `pip install sentence-transformers`.\"\n )\n self.model_name = model_name\n self._model = SentenceTransformer(self.model_name)\n self.tokenizer = self._model.tokenizer\n self._initialize_chunk_configuration(tokens_per_chunk=tokens_per_chunk)\n def _initialize_chunk_configuration(\n self, *, tokens_per_chunk: Optional[int]\n ) -> None:\n self.maximum_tokens_per_chunk = cast(int, self._model.max_seq_length)\n if tokens_per_chunk is None:\n self.tokens_per_chunk = self.maximum_tokens_per_chunk\n else:\n self.tokens_per_chunk = tokens_per_chunk\n if self.tokens_per_chunk > self.maximum_tokens_per_chunk:\n raise ValueError(\n f\"The token limit of the models '{self.model_name}'\"\n f\" is: {self.maximum_tokens_per_chunk}.\"\n f\" Argument tokens_per_chunk={self.tokens_per_chunk}\"\n f\" > maximum token limit.\"\n )\n[docs] def split_text(self, text: str) -> List[str]:\n def encode_strip_start_and_stop_token_ids(text: str) -> List[int]:\n return self._encode(text)[1:-1]\n tokenizer = Tokenizer(\n chunk_overlap=self._chunk_overlap,\n tokens_per_chunk=self.tokens_per_chunk,\n decode=self.tokenizer.decode,\n encode=encode_strip_start_and_stop_token_ids,\n )\n return split_text_on_tokens(text=text, tokenizer=tokenizer)\n[docs] def count_tokens(self, *, text: str) -> int:\n return len(self._encode(text))\n _max_length_equal_32_bit_integer = 2**32\n def _encode(self, text: str) -> List[int]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} {"id": "9c0a702e9113-12", "text": "def _encode(self, text: str) -> List[int]:\n token_ids_with_start_and_end_token_ids = self.tokenizer.encode(\n text,\n max_length=self._max_length_equal_32_bit_integer,\n truncation=\"do_not_truncate\",\n )\n return token_ids_with_start_and_end_token_ids\n[docs]class Language(str, Enum):\n \"\"\"Enum of the programming languages.\"\"\"\n CPP = \"cpp\"\n GO = \"go\"\n JAVA = \"java\"\n JS = \"js\"\n PHP = \"php\"\n PROTO = \"proto\"\n PYTHON = \"python\"\n RST = \"rst\"\n RUBY = \"ruby\"\n RUST = \"rust\"\n SCALA = \"scala\"\n SWIFT = \"swift\"\n MARKDOWN = \"markdown\"\n LATEX = \"latex\"\n HTML = \"html\"\n SOL = \"sol\"\n[docs]class RecursiveCharacterTextSplitter(TextSplitter):\n \"\"\"Implementation of splitting text that looks at characters.\n Recursively tries to split by different characters to find one\n that works.\n \"\"\"\n def __init__(\n self,\n separators: Optional[List[str]] = None,\n keep_separator: bool = True,\n **kwargs: Any,\n ) -> None:\n \"\"\"Create a new TextSplitter.\"\"\"\n super().__init__(keep_separator=keep_separator, **kwargs)\n self._separators = separators or [\"\\n\\n\", \"\\n\", \" \", \"\"]\n def _split_text(self, text: str, separators: List[str]) -> List[str]:\n \"\"\"Split incoming text and return chunks.\"\"\"\n final_chunks = []\n # Get appropriate separator to use", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} {"id": "9c0a702e9113-13", "text": "final_chunks = []\n # Get appropriate separator to use\n separator = separators[-1]\n new_separators = []\n for i, _s in enumerate(separators):\n if _s == \"\":\n separator = _s\n break\n if re.search(_s, text):\n separator = _s\n new_separators = separators[i + 1 :]\n break\n splits = _split_text_with_regex(text, separator, 
self._keep_separator)\n # Now go merging things, recursively splitting longer texts.\n _good_splits = []\n _separator = \"\" if self._keep_separator else separator\n for s in splits:\n if self._length_function(s) < self._chunk_size:\n _good_splits.append(s)\n else:\n if _good_splits:\n merged_text = self._merge_splits(_good_splits, _separator)\n final_chunks.extend(merged_text)\n _good_splits = []\n if not new_separators:\n final_chunks.append(s)\n else:\n other_info = self._split_text(s, new_separators)\n final_chunks.extend(other_info)\n if _good_splits:\n merged_text = self._merge_splits(_good_splits, _separator)\n final_chunks.extend(merged_text)\n return final_chunks\n[docs] def split_text(self, text: str) -> List[str]:\n return self._split_text(text, self._separators)\n[docs] @classmethod\n def from_language(\n cls, language: Language, **kwargs: Any\n ) -> RecursiveCharacterTextSplitter:\n separators = cls.get_separators_for_language(language)\n return cls(separators=separators, **kwargs)\n[docs] @staticmethod", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} {"id": "9c0a702e9113-14", "text": "[docs] @staticmethod\n def get_separators_for_language(language: Language) -> List[str]:\n if language == Language.CPP:\n return [\n # Split along class definitions\n \"\\nclass \",\n # Split along function definitions\n \"\\nvoid \",\n \"\\nint \",\n \"\\nfloat \",\n \"\\ndouble \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nfor \",\n \"\\nwhile \",\n \"\\nswitch \",\n \"\\ncase \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.GO:\n return [\n # Split along function definitions\n \"\\nfunc \",\n \"\\nvar \",\n \"\\nconst \",\n \"\\ntype \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nfor \",\n \"\\nswitch \",\n \"\\ncase \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.JAVA:\n return [\n # Split along class definitions\n \"\\nclass \",\n # Split along method definitions\n \"\\npublic \",\n \"\\nprotected \",\n \"\\nprivate \",\n \"\\nstatic \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nfor \",\n \"\\nwhile \",\n \"\\nswitch \",\n \"\\ncase \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.JS:\n return [", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} {"id": "9c0a702e9113-15", "text": "\"\",\n ]\n elif language == Language.JS:\n return [\n # Split along function definitions\n \"\\nfunction \",\n \"\\nconst \",\n \"\\nlet \",\n \"\\nvar \",\n \"\\nclass \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nfor \",\n \"\\nwhile \",\n \"\\nswitch \",\n \"\\ncase \",\n \"\\ndefault \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.PHP:\n return [\n # Split along function definitions\n \"\\nfunction \",\n # Split along class definitions\n \"\\nclass \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nforeach \",\n \"\\nwhile \",\n \"\\ndo \",\n \"\\nswitch \",\n \"\\ncase \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.PROTO:\n return [\n # Split along message definitions\n \"\\nmessage \",\n # Split along service definitions\n \"\\nservice \",\n # Split along enum definitions\n \"\\nenum \",\n # Split 
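The recursion above is typically configured through from_language, which simply looks up the per-language separator lists shown below; chunk sizes here are illustrative:

# Illustrative: build a Python-aware splitter from the Language enum.
from langchain.text_splitter import Language, RecursiveCharacterTextSplitter

py_splitter = RecursiveCharacterTextSplitter.from_language(
    Language.PYTHON, chunk_size=200, chunk_overlap=0
)
chunks = py_splitter.split_text("def foo():\n    return 1\n\ndef bar():\n    return 2\n")
# Splitting is attempted on "\nclass ", "\ndef ", ... before falling back to
# blank lines, newlines, spaces, and finally single characters.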
along option definitions\n \"\\noption \",\n # Split along import statements\n \"\\nimport \",\n # Split along syntax declarations\n \"\\nsyntax \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.PYTHON:\n return [\n # First, try to split along class definitions\n \"\\nclass \",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} {"id": "9c0a702e9113-16", "text": "# First, try to split along class definitions\n \"\\nclass \",\n \"\\ndef \",\n \"\\n\\tdef \",\n # Now split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.RST:\n return [\n # Split along section titles\n \"\\n=+\\n\",\n \"\\n-+\\n\",\n \"\\n\\*+\\n\",\n # Split along directive markers\n \"\\n\\n.. *\\n\\n\",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.RUBY:\n return [\n # Split along method definitions\n \"\\ndef \",\n \"\\nclass \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nunless \",\n \"\\nwhile \",\n \"\\nfor \",\n \"\\ndo \",\n \"\\nbegin \",\n \"\\nrescue \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.RUST:\n return [\n # Split along function definitions\n \"\\nfn \",\n \"\\nconst \",\n \"\\nlet \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nwhile \",\n \"\\nfor \",\n \"\\nloop \",\n \"\\nmatch \",\n \"\\nconst \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.SCALA:\n return [\n # Split along class definitions", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} {"id": "9c0a702e9113-17", "text": "return [\n # Split along class definitions\n \"\\nclass \",\n \"\\nobject \",\n # Split along method definitions\n \"\\ndef \",\n \"\\nval \",\n \"\\nvar \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nfor \",\n \"\\nwhile \",\n \"\\nmatch \",\n \"\\ncase \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.SWIFT:\n return [\n # Split along function definitions\n \"\\nfunc \",\n # Split along class definitions\n \"\\nclass \",\n \"\\nstruct \",\n \"\\nenum \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nfor \",\n \"\\nwhile \",\n \"\\ndo \",\n \"\\nswitch \",\n \"\\ncase \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.MARKDOWN:\n return [\n # First, try to split along Markdown headings (starting with level 2)\n \"\\n#{1,6} \",\n # Note the alternative syntax for headings (below) is not handled here\n # Heading level 2\n # ---------------\n # End of code block\n \"```\\n\",\n # Horizontal lines\n \"\\n\\*\\*\\*+\\n\",\n \"\\n---+\\n\",\n \"\\n___+\\n\",\n # Note that this splitter doesn't handle horizontal lines defined\n # by *three or more* of ***, ---, or ___, but this is not handled\n \"\\n\\n\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} {"id": "9c0a702e9113-18", "text": "\"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.LATEX:\n return [\n # First, try to split along Latex sections\n \"\\n\\\\\\chapter{\",\n \"\\n\\\\\\section{\",\n \"\\n\\\\\\subsection{\",\n \"\\n\\\\\\subsubsection{\",\n # Now split by environments\n 
\"\\n\\\\\\begin{enumerate}\",\n \"\\n\\\\\\begin{itemize}\",\n \"\\n\\\\\\begin{description}\",\n \"\\n\\\\\\begin{list}\",\n \"\\n\\\\\\begin{quote}\",\n \"\\n\\\\\\begin{quotation}\",\n \"\\n\\\\\\begin{verse}\",\n \"\\n\\\\\\begin{verbatim}\",\n # Now split by math environments\n \"\\n\\\\\\begin{align}\",\n \"$$\",\n \"$\",\n # Now split by the normal type of lines\n \" \",\n \"\",\n ]\n elif language == Language.HTML:\n return [\n # First, try to split along HTML tags\n \" None:\n \"\"\"Initialize the NLTK splitter.\"\"\"\n super().__init__(**kwargs)\n try:\n from nltk.tokenize import sent_tokenize\n self._tokenizer = sent_tokenize\n except ImportError:\n raise ImportError(\n \"NLTK is not installed, please install it with `pip install nltk`.\"\n )\n self._separator = separator\n[docs] def split_text(self, text: str) -> List[str]:\n \"\"\"Split incoming text and return chunks.\"\"\"\n # First we naively split the large input into a bunch of smaller ones.\n splits = self._tokenizer(text)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} {"id": "9c0a702e9113-20", "text": "splits = self._tokenizer(text)\n return self._merge_splits(splits, self._separator)\n[docs]class SpacyTextSplitter(TextSplitter):\n \"\"\"Implementation of splitting text that looks at sentences using Spacy.\"\"\"\n def __init__(\n self, separator: str = \"\\n\\n\", pipeline: str = \"en_core_web_sm\", **kwargs: Any\n ) -> None:\n \"\"\"Initialize the spacy text splitter.\"\"\"\n super().__init__(**kwargs)\n try:\n import spacy\n except ImportError:\n raise ImportError(\n \"Spacy is not installed, please install it with `pip install spacy`.\"\n )\n self._tokenizer = spacy.load(pipeline)\n self._separator = separator\n[docs] def split_text(self, text: str) -> List[str]:\n \"\"\"Split incoming text and return chunks.\"\"\"\n splits = (str(s) for s in self._tokenizer(text).sents)\n return self._merge_splits(splits, self._separator)\n# For backwards compatibility\n[docs]class PythonCodeTextSplitter(RecursiveCharacterTextSplitter):\n \"\"\"Attempts to split the text along Python syntax.\"\"\"\n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Initialize a PythonCodeTextSplitter.\"\"\"\n separators = self.get_separators_for_language(Language.PYTHON)\n super().__init__(separators=separators, **kwargs)\n[docs]class MarkdownTextSplitter(RecursiveCharacterTextSplitter):\n \"\"\"Attempts to split the text along Markdown-formatted headings.\"\"\"\n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Initialize a MarkdownTextSplitter.\"\"\"\n separators = self.get_separators_for_language(Language.MARKDOWN)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} {"id": "9c0a702e9113-21", "text": "separators = self.get_separators_for_language(Language.MARKDOWN)\n super().__init__(separators=separators, **kwargs)\n[docs]class LatexTextSplitter(RecursiveCharacterTextSplitter):\n \"\"\"Attempts to split the text along Latex-formatted layout elements.\"\"\"\n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Initialize a LatexTextSplitter.\"\"\"\n separators = self.get_separators_for_language(Language.LATEX)\n super().__init__(separators=separators, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} {"id": "f1ea40e26720-0", "text": "Source code for langchain.cache\n\"\"\"Beta Feature: base interface for cache.\"\"\"\nfrom __future__ import annotations\nimport hashlib\nimport inspect\nimport 
json\nimport logging\nfrom abc import ABC, abstractmethod\nfrom datetime import timedelta\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Dict,\n Optional,\n Sequence,\n Tuple,\n Type,\n Union,\n cast,\n)\nfrom sqlalchemy import Column, Integer, String, create_engine, select\nfrom sqlalchemy.engine.base import Engine\nfrom sqlalchemy.orm import Session\nfrom langchain.utils import get_from_env\ntry:\n from sqlalchemy.orm import declarative_base\nexcept ImportError:\n from sqlalchemy.ext.declarative import declarative_base\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.load.dump import dumps\nfrom langchain.load.load import loads\nfrom langchain.schema import Generation\nfrom langchain.vectorstores.redis import Redis as RedisVectorstore\nlogger = logging.getLogger(__file__)\nif TYPE_CHECKING:\n import momento\nRETURN_VAL_TYPE = Sequence[Generation]\ndef _hash(_input: str) -> str:\n \"\"\"Use a deterministic hashing approach.\"\"\"\n return hashlib.md5(_input.encode()).hexdigest()\ndef _dump_generations_to_json(generations: RETURN_VAL_TYPE) -> str:\n \"\"\"Dump generations to json.\n Args:\n generations (RETURN_VAL_TYPE): A list of language model generations.\n Returns:\n str: Json representing a list of generations.\n \"\"\"\n return json.dumps([generation.dict() for generation in generations])\ndef _load_generations_from_json(generations_json: str) -> RETURN_VAL_TYPE:\n \"\"\"Load generations from json.\n Args:\n generations_json (str): A string of json representing a list of generations.\n Raises:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/cache.html"} {"id": "f1ea40e26720-1", "text": "Raises:\n ValueError: Could not decode json string to list of generations.\n Returns:\n RETURN_VAL_TYPE: A list of generations.\n \"\"\"\n try:\n results = json.loads(generations_json)\n return [Generation(**generation_dict) for generation_dict in results]\n except json.JSONDecodeError:\n raise ValueError(\n f\"Could not decode json to list of generations: {generations_json}\"\n )\n[docs]class BaseCache(ABC):\n \"\"\"Base interface for cache.\"\"\"\n[docs] @abstractmethod\n def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:\n \"\"\"Look up based on prompt and llm_string.\"\"\"\n[docs] @abstractmethod\n def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:\n \"\"\"Update cache based on prompt and llm_string.\"\"\"\n[docs] @abstractmethod\n def clear(self, **kwargs: Any) -> None:\n \"\"\"Clear cache that can take additional keyword arguments.\"\"\"\n[docs]class InMemoryCache(BaseCache):\n \"\"\"Cache that stores things in memory.\"\"\"\n def __init__(self) -> None:\n \"\"\"Initialize with empty cache.\"\"\"\n self._cache: Dict[Tuple[str, str], RETURN_VAL_TYPE] = {}\n[docs] def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:\n \"\"\"Look up based on prompt and llm_string.\"\"\"\n return self._cache.get((prompt, llm_string), None)\n[docs] def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:\n \"\"\"Update cache based on prompt and llm_string.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/cache.html"} {"id": "f1ea40e26720-2", "text": "\"\"\"Update cache based on prompt and llm_string.\"\"\"\n self._cache[(prompt, llm_string)] = return_val\n[docs] def clear(self, **kwargs: Any) -> None:\n \"\"\"Clear cache.\"\"\"\n self._cache = {}\nBase = declarative_base()\n[docs]class FullLLMCache(Base): # type: 
ignore\n \"\"\"SQLite table for full LLM Cache (all generations).\"\"\"\n __tablename__ = \"full_llm_cache\"\n prompt = Column(String, primary_key=True)\n llm = Column(String, primary_key=True)\n idx = Column(Integer, primary_key=True)\n response = Column(String)\n[docs]class SQLAlchemyCache(BaseCache):\n \"\"\"Cache that uses SQAlchemy as a backend.\"\"\"\n def __init__(self, engine: Engine, cache_schema: Type[FullLLMCache] = FullLLMCache):\n \"\"\"Initialize by creating all tables.\"\"\"\n self.engine = engine\n self.cache_schema = cache_schema\n self.cache_schema.metadata.create_all(self.engine)\n[docs] def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:\n \"\"\"Look up based on prompt and llm_string.\"\"\"\n stmt = (\n select(self.cache_schema.response)\n .where(self.cache_schema.prompt == prompt) # type: ignore\n .where(self.cache_schema.llm == llm_string)\n .order_by(self.cache_schema.idx)\n )\n with Session(self.engine) as session:\n rows = session.execute(stmt).fetchall()\n if rows:\n try:\n return [loads(row[0]) for row in rows]\n except Exception:\n logger.warning(\n \"Retrieving a cache value that could not be deserialized \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/cache.html"} {"id": "f1ea40e26720-3", "text": "logger.warning(\n \"Retrieving a cache value that could not be deserialized \"\n \"properly. This is likely due to the cache being in an \"\n \"older format. Please recreate your cache to avoid this \"\n \"error.\"\n )\n # In a previous life we stored the raw text directly\n # in the table, so assume it's in that format.\n return [Generation(text=row[0]) for row in rows]\n return None\n[docs] def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:\n \"\"\"Update based on prompt and llm_string.\"\"\"\n items = [\n self.cache_schema(prompt=prompt, llm=llm_string, response=dumps(gen), idx=i)\n for i, gen in enumerate(return_val)\n ]\n with Session(self.engine) as session, session.begin():\n for item in items:\n session.merge(item)\n[docs] def clear(self, **kwargs: Any) -> None:\n \"\"\"Clear cache.\"\"\"\n with Session(self.engine) as session:\n session.query(self.cache_schema).delete()\n[docs]class SQLiteCache(SQLAlchemyCache):\n \"\"\"Cache that uses SQLite as a backend.\"\"\"\n def __init__(self, database_path: str = \".langchain.db\"):\n \"\"\"Initialize by creating the engine and all tables.\"\"\"\n engine = create_engine(f\"sqlite:///{database_path}\")\n super().__init__(engine)\n[docs]class RedisCache(BaseCache):\n \"\"\"Cache that uses Redis as a backend.\"\"\"\n # TODO - implement a TTL policy in Redis\n def __init__(self, redis_: Any):\n \"\"\"Initialize by passing in Redis instance.\"\"\"\n try:\n from redis import Redis", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/cache.html"} {"id": "f1ea40e26720-4", "text": "\"\"\"Initialize by passing in Redis instance.\"\"\"\n try:\n from redis import Redis\n except ImportError:\n raise ValueError(\n \"Could not import redis python package. 
\"\n \"Please install it with `pip install redis`.\"\n )\n if not isinstance(redis_, Redis):\n raise ValueError(\"Please pass in Redis object.\")\n self.redis = redis_\n def _key(self, prompt: str, llm_string: str) -> str:\n \"\"\"Compute key from prompt and llm_string\"\"\"\n return _hash(prompt + llm_string)\n[docs] def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:\n \"\"\"Look up based on prompt and llm_string.\"\"\"\n generations = []\n # Read from a Redis HASH\n results = self.redis.hgetall(self._key(prompt, llm_string))\n if results:\n for _, text in results.items():\n generations.append(Generation(text=text))\n return generations if generations else None\n[docs] def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:\n \"\"\"Update cache based on prompt and llm_string.\"\"\"\n for gen in return_val:\n if not isinstance(gen, Generation):\n raise ValueError(\n \"RedisCache only supports caching of normal LLM generations, \"\n f\"got {type(gen)}\"\n )\n # Write to a Redis HASH\n key = self._key(prompt, llm_string)\n self.redis.hset(\n key,\n mapping={\n str(idx): generation.text for idx, generation in enumerate(return_val)\n },\n )\n[docs] def clear(self, **kwargs: Any) -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/cache.html"} {"id": "f1ea40e26720-5", "text": ")\n[docs] def clear(self, **kwargs: Any) -> None:\n \"\"\"Clear cache. If `asynchronous` is True, flush asynchronously.\"\"\"\n asynchronous = kwargs.get(\"asynchronous\", False)\n self.redis.flushdb(asynchronous=asynchronous, **kwargs)\n[docs]class RedisSemanticCache(BaseCache):\n \"\"\"Cache that uses Redis as a vector-store backend.\"\"\"\n # TODO - implement a TTL policy in Redis\n def __init__(\n self, redis_url: str, embedding: Embeddings, score_threshold: float = 0.2\n ):\n \"\"\"Initialize by passing in the `init` GPTCache func\n Args:\n redis_url (str): URL to connect to Redis.\n embedding (Embedding): Embedding provider for semantic encoding and search.\n score_threshold (float, 0.2):\n Example:\n .. 
code-block:: python\n import langchain\n from langchain.cache import RedisSemanticCache\n from langchain.embeddings import OpenAIEmbeddings\n langchain.llm_cache = RedisSemanticCache(\n redis_url=\"redis://localhost:6379\",\n embedding=OpenAIEmbeddings()\n )\n \"\"\"\n self._cache_dict: Dict[str, RedisVectorstore] = {}\n self.redis_url = redis_url\n self.embedding = embedding\n self.score_threshold = score_threshold\n def _index_name(self, llm_string: str) -> str:\n hashed_index = _hash(llm_string)\n return f\"cache:{hashed_index}\"\n def _get_llm_cache(self, llm_string: str) -> RedisVectorstore:\n index_name = self._index_name(llm_string)\n # return vectorstore client for the specific llm string", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/cache.html"} {"id": "f1ea40e26720-6", "text": "# return vectorstore client for the specific llm string\n if index_name in self._cache_dict:\n return self._cache_dict[index_name]\n # create new vectorstore client for the specific llm string\n try:\n self._cache_dict[index_name] = RedisVectorstore.from_existing_index(\n embedding=self.embedding,\n index_name=index_name,\n redis_url=self.redis_url,\n )\n except ValueError:\n redis = RedisVectorstore(\n embedding_function=self.embedding.embed_query,\n index_name=index_name,\n redis_url=self.redis_url,\n )\n _embedding = self.embedding.embed_query(text=\"test\")\n redis._create_index(dim=len(_embedding))\n self._cache_dict[index_name] = redis\n return self._cache_dict[index_name]\n[docs] def clear(self, **kwargs: Any) -> None:\n \"\"\"Clear semantic cache for a given llm_string.\"\"\"\n index_name = self._index_name(kwargs[\"llm_string\"])\n if index_name in self._cache_dict:\n self._cache_dict[index_name].drop_index(\n index_name=index_name, delete_documents=True, redis_url=self.redis_url\n )\n del self._cache_dict[index_name]\n[docs] def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:\n \"\"\"Look up based on prompt and llm_string.\"\"\"\n llm_cache = self._get_llm_cache(llm_string)\n generations = []\n # Read from a Hash\n results = llm_cache.similarity_search_limit_score(\n query=prompt,\n k=1,\n score_threshold=self.score_threshold,\n )\n if results:\n for document in results:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/cache.html"} {"id": "f1ea40e26720-7", "text": ")\n if results:\n for document in results:\n for text in document.metadata[\"return_val\"]:\n generations.append(Generation(text=text))\n return generations if generations else None\n[docs] def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:\n \"\"\"Update cache based on prompt and llm_string.\"\"\"\n for gen in return_val:\n if not isinstance(gen, Generation):\n raise ValueError(\n \"RedisSemanticCache only supports caching of \"\n f\"normal LLM generations, got {type(gen)}\"\n )\n llm_cache = self._get_llm_cache(llm_string)\n # Write to vectorstore\n metadata = {\n \"llm_string\": llm_string,\n \"prompt\": prompt,\n \"return_val\": [generation.text for generation in return_val],\n }\n llm_cache.add_texts(texts=[prompt], metadatas=[metadata])\n[docs]class GPTCache(BaseCache):\n \"\"\"Cache that uses GPTCache as a backend.\"\"\"\n def __init__(\n self,\n init_func: Union[\n Callable[[Any, str], None], Callable[[Any], None], None\n ] = None,\n ):\n \"\"\"Initialize by passing in init function (default: `None`).\n Args:\n init_func (Optional[Callable[[Any], None]]): init `GPTCache` function\n (default: `None`)\n Example:\n .. 
code-block:: python\n # Initialize GPTCache with a custom init function\n import gptcache\n from gptcache.processor.pre import get_prompt\n from gptcache.manager.factory import manager_factory\n # Avoid multiple caches using the same file,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/cache.html"} {"id": "f1ea40e26720-8", "text": "# Avoid multiple caches using the same file,\n # causing different llm model caches to affect each other\n def init_gptcache(cache_obj: gptcache.Cache, llm: str):\n cache_obj.init(\n pre_embedding_func=get_prompt,\n data_manager=manager_factory(\n manager=\"map\",\n data_dir=f\"map_cache_{llm}\"\n ),\n )\n langchain.llm_cache = GPTCache(init_gptcache)\n \"\"\"\n try:\n import gptcache # noqa: F401\n except ImportError:\n raise ImportError(\n \"Could not import gptcache python package. \"\n \"Please install it with `pip install gptcache`.\"\n )\n self.init_gptcache_func: Union[\n Callable[[Any, str], None], Callable[[Any], None], None\n ] = init_func\n self.gptcache_dict: Dict[str, Any] = {}\n def _new_gptcache(self, llm_string: str) -> Any:\n \"\"\"New gptcache object\"\"\"\n from gptcache import Cache\n from gptcache.manager.factory import get_data_manager\n from gptcache.processor.pre import get_prompt\n _gptcache = Cache()\n if self.init_gptcache_func is not None:\n sig = inspect.signature(self.init_gptcache_func)\n if len(sig.parameters) == 2:\n self.init_gptcache_func(_gptcache, llm_string) # type: ignore[call-arg]\n else:\n self.init_gptcache_func(_gptcache) # type: ignore[call-arg]\n else:\n _gptcache.init(\n pre_embedding_func=get_prompt,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/cache.html"} {"id": "f1ea40e26720-9", "text": "else:\n _gptcache.init(\n pre_embedding_func=get_prompt,\n data_manager=get_data_manager(data_path=llm_string),\n )\n self.gptcache_dict[llm_string] = _gptcache\n return _gptcache\n def _get_gptcache(self, llm_string: str) -> Any:\n \"\"\"Get a cache object.\n When the corresponding llm model cache does not exist, it will be created.\"\"\"\n _gptcache = self.gptcache_dict.get(llm_string)\n if _gptcache is None:\n _gptcache = self._new_gptcache(llm_string)\n return _gptcache\n[docs] def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:\n \"\"\"Look up the cache data.\n First, retrieve the corresponding cache object using the `llm_string` parameter,\n and then retrieve the data from the cache based on the `prompt`.\n \"\"\"\n from gptcache.adapter.api import get\n _gptcache = self.gptcache_dict.get(llm_string, None)\n if _gptcache is None:\n return None\n res = get(prompt, cache_obj=_gptcache)\n if res:\n return [\n Generation(**generation_dict) for generation_dict in json.loads(res)\n ]\n return None\n[docs] def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:\n \"\"\"Update cache.\n First, retrieve the corresponding cache object using the `llm_string` parameter,\n and then store the `prompt` and `return_val` in the cache object.\n \"\"\"\n for gen in return_val:\n if not isinstance(gen, Generation):\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/cache.html"} {"id": "f1ea40e26720-10", "text": "if not isinstance(gen, Generation):\n raise ValueError(\n \"GPTCache only supports caching of normal LLM generations, \"\n f\"got {type(gen)}\"\n )\n from gptcache.adapter.api import put\n _gptcache = self._get_gptcache(llm_string)\n handled_data = json.dumps([generation.dict() for generation in return_val])\n put(prompt, handled_data, 
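Mirroring the docstring above, a minimal sketch of enabling GPTCache with the default per-model initialization (requires the gptcache package):

# Illustrative: cache LLM calls through GPTCache with default settings.
import langchain
from langchain.cache import GPTCache

langchain.llm_cache = GPTCache()  # init_func=None: a data manager is created per llm_string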
cache_obj=_gptcache)\n return None\n[docs] def clear(self, **kwargs: Any) -> None:\n \"\"\"Clear cache.\"\"\"\n from gptcache import Cache\n for gptcache_instance in self.gptcache_dict.values():\n gptcache_instance = cast(Cache, gptcache_instance)\n gptcache_instance.flush()\n self.gptcache_dict.clear()\ndef _ensure_cache_exists(cache_client: momento.CacheClient, cache_name: str) -> None:\n \"\"\"Create cache if it doesn't exist.\n Raises:\n SdkException: Momento service or network error\n Exception: Unexpected response\n \"\"\"\n from momento.responses import CreateCache\n create_cache_response = cache_client.create_cache(cache_name)\n if isinstance(create_cache_response, CreateCache.Success) or isinstance(\n create_cache_response, CreateCache.CacheAlreadyExists\n ):\n return None\n elif isinstance(create_cache_response, CreateCache.Error):\n raise create_cache_response.inner_exception\n else:\n raise Exception(f\"Unexpected response on cache creation: {create_cache_response}\")\ndef _validate_ttl(ttl: Optional[timedelta]) -> None:\n if ttl is not None and ttl <= timedelta(seconds=0):\n raise ValueError(f\"ttl must be positive but was {ttl}.\")\n[docs]class MomentoCache(BaseCache):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/cache.html"} {"id": "f1ea40e26720-11", "text": "[docs]class MomentoCache(BaseCache):\n \"\"\"Cache that uses Momento as a backend. See https://gomomento.com/\"\"\"\n def __init__(\n self,\n cache_client: momento.CacheClient,\n cache_name: str,\n *,\n ttl: Optional[timedelta] = None,\n ensure_cache_exists: bool = True,\n ):\n \"\"\"Instantiate a prompt cache using Momento as a backend.\n Note: to instantiate the cache client passed to MomentoCache,\n you must have a Momento account. See https://gomomento.com/.\n Args:\n cache_client (CacheClient): The Momento cache client.\n cache_name (str): The name of the cache to use to store the data.\n ttl (Optional[timedelta], optional): The time to live for the cache items.\n Defaults to None, i.e. use the client default TTL.\n ensure_cache_exists (bool, optional): Create the cache if it doesn't\n exist. Defaults to True.\n Raises:\n ImportError: Momento python package is not installed.\n TypeError: cache_client is not of type momento.CacheClient\n ValueError: ttl is non-null and non-positive\n \"\"\"\n try:\n from momento import CacheClient\n except ImportError:\n raise ImportError(\n \"Could not import momento python package. 
\"\n \"Please install it with `pip install momento`.\"\n )\n if configuration is None:\n configuration = Configurations.Laptop.v1()\n auth_token = auth_token or get_from_env(\"auth_token\", \"MOMENTO_AUTH_TOKEN\")\n credentials = CredentialProvider.from_string(auth_token)\n cache_client = CacheClient(configuration, credentials, default_ttl=ttl)\n return cls(cache_client, cache_name, ttl=ttl, **kwargs)\n def __key(self, prompt: str, llm_string: str) -> str:\n \"\"\"Compute cache key from prompt and associated model and settings.\n Args:\n prompt (str): The prompt run through the language model.\n llm_string (str): The language model version and settings.\n Returns:\n str: The cache key.\n \"\"\"\n return _hash(prompt + llm_string)\n[docs] def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:\n \"\"\"Lookup llm generations in cache by prompt and associated model and settings.\n Args:\n prompt (str): The prompt run through the language model.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/cache.html"} {"id": "f1ea40e26720-13", "text": "Args:\n prompt (str): The prompt run through the language model.\n llm_string (str): The language model version and settings.\n Raises:\n SdkException: Momento service or network error\n Returns:\n Optional[RETURN_VAL_TYPE]: A list of language model generations.\n \"\"\"\n from momento.responses import CacheGet\n generations: RETURN_VAL_TYPE = []\n get_response = self.cache_client.get(\n self.cache_name, self.__key(prompt, llm_string)\n )\n if isinstance(get_response, CacheGet.Hit):\n value = get_response.value_string\n generations = _load_generations_from_json(value)\n elif isinstance(get_response, CacheGet.Miss):\n pass\n elif isinstance(get_response, CacheGet.Error):\n raise get_response.inner_exception\n return generations if generations else None\n[docs] def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:\n \"\"\"Store llm generations in cache.\n Args:\n prompt (str): The prompt run through the language model.\n llm_string (str): The language model string.\n return_val (RETURN_VAL_TYPE): A list of language model generations.\n Raises:\n SdkException: Momento service or network error\n Exception: Unexpected response\n \"\"\"\n for gen in return_val:\n if not isinstance(gen, Generation):\n raise ValueError(\n \"Momento only supports caching of normal LLM generations, \"\n f\"got {type(gen)}\"\n )\n key = self.__key(prompt, llm_string)\n value = _dump_generations_to_json(return_val)\n set_response = self.cache_client.set(self.cache_name, key, value, self.ttl)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/cache.html"} {"id": "f1ea40e26720-14", "text": "from momento.responses import CacheSet\n if isinstance(set_response, CacheSet.Success):\n pass\n elif isinstance(set_response, CacheSet.Error):\n raise set_response.inner_exception\n else:\n raise Exception(f\"Unexpected response: {set_response}\")\n[docs] def clear(self, **kwargs: Any) -> None:\n \"\"\"Clear the cache.\n Raises:\n SdkException: Momento service or network error\n \"\"\"\n from momento.responses import CacheFlush\n flush_response = self.cache_client.flush_cache(self.cache_name)\n if isinstance(flush_response, CacheFlush.Success):\n pass\n elif isinstance(flush_response, CacheFlush.Error):\n raise flush_response.inner_exception", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/cache.html"} {"id": "2a58acf5491c-0", "text": "Source code for 
langchain.math_utils\n\"\"\"Math utils.\"\"\"\nfrom typing import List, Optional, Tuple, Union\nimport numpy as np\nMatrix = Union[List[List[float]], List[np.ndarray], np.ndarray]\n[docs]def cosine_similarity(X: Matrix, Y: Matrix) -> np.ndarray:\n \"\"\"Row-wise cosine similarity between two equal-width matrices.\"\"\"\n if len(X) == 0 or len(Y) == 0:\n return np.array([])\n X = np.array(X)\n Y = np.array(Y)\n if X.shape[1] != Y.shape[1]:\n raise ValueError(\n f\"Number of columns in X and Y must be the same. X has shape {X.shape} \"\n f\"and Y has shape {Y.shape}.\"\n )\n X_norm = np.linalg.norm(X, axis=1)\n Y_norm = np.linalg.norm(Y, axis=1)\n similarity = np.dot(X, Y.T) / np.outer(X_norm, Y_norm)\n similarity[np.isnan(similarity) | np.isinf(similarity)] = 0.0\n return similarity\n[docs]def cosine_similarity_top_k(\n X: Matrix,\n Y: Matrix,\n top_k: Optional[int] = 5,\n score_threshold: Optional[float] = None,\n) -> Tuple[List[Tuple[int, int]], List[float]]:\n \"\"\"Row-wise cosine similarity with optional top-k and score threshold filtering.\n Args:\n X: Matrix.\n Y: Matrix, same width as X.\n top_k: Max number of results to return.\n score_threshold: Minimum cosine similarity of results.\n Returns:\n Tuple of two lists. First contains two-tuples of indices (X_idx, Y_idx),\n second contains corresponding cosine similarities.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/math_utils.html"} {"id": "2a58acf5491c-1", "text": "second contains corresponding cosine similarities.\n \"\"\"\n if len(X) == 0 or len(Y) == 0:\n return [], []\n score_array = cosine_similarity(X, Y)\n sorted_idxs = score_array.flatten().argsort()[::-1]\n top_k = top_k or len(sorted_idxs)\n top_idxs = sorted_idxs[:top_k]\n score_threshold = score_threshold or -1.0\n top_idxs = top_idxs[score_array.flatten()[top_idxs] > score_threshold]\n ret_idxs = [(x // score_array.shape[1], x % score_array.shape[1]) for x in top_idxs]\n scores = score_array.flatten()[top_idxs].tolist()\n return ret_idxs, scores", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/math_utils.html"} {"id": "7dd79e270ede-0", "text": "Source code for langchain.sql_database\n\"\"\"SQLAlchemy wrapper around a database.\"\"\"\nfrom __future__ import annotations\nimport warnings\nfrom typing import Any, Iterable, List, Optional\nimport sqlalchemy\nfrom sqlalchemy import MetaData, Table, create_engine, inspect, select, text\nfrom sqlalchemy.engine import Engine\nfrom sqlalchemy.exc import ProgrammingError, SQLAlchemyError\nfrom sqlalchemy.schema import CreateTable\nfrom langchain import utils\ndef _format_index(index: sqlalchemy.engine.interfaces.ReflectedIndex) -> str:\n return (\n f'Name: {index[\"name\"]}, Unique: {index[\"unique\"]},'\n f' Columns: {str(index[\"column_names\"])}'\n )\n[docs]def truncate_word(content: Any, *, length: int, suffix: str = \"...\") -> str:\n \"\"\"\n Truncate a string to a certain number of words, based on the max string\n length.\n \"\"\"\n if not isinstance(content, str) or length <= 0:\n return content\n if len(content) <= length:\n return content\n return content[: length - len(suffix)].rsplit(\" \", 1)[0] + suffix\nclass SQLDatabase:\n \"\"\"SQLAlchemy wrapper around a database.\"\"\"\n def __init__(\n self,\n engine: Engine,\n schema: Optional[str] = None,\n metadata: Optional[MetaData] = None,\n ignore_tables: Optional[List[str]] = None,\n include_tables: Optional[List[str]] = None,\n sample_rows_in_table_info: int = 3,\n indexes_in_table_info: bool = False,\n 
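A small worked example of the two helpers above, with values chosen so the numbers are easy to check by hand (2/sqrt(5) is roughly 0.894):

# Worked example for cosine_similarity / cosine_similarity_top_k above.
from langchain.math_utils import cosine_similarity, cosine_similarity_top_k

X = [[1.0, 0.0], [0.0, 1.0]]
Y = [[1.0, 0.0], [2.0, 1.0]]
print(cosine_similarity(X, Y))
# approximately [[1.0, 0.894], [0.0, 0.447]]
idxs, scores = cosine_similarity_top_k(X, Y, top_k=2)
print(idxs)    # [(0, 0), (0, 1)]: row 0 of X matches both rows of Y best
print(scores)  # [1.0, 0.894...]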
custom_table_info: Optional[dict] = None,\n view_support: bool = False,\n max_string_length: int = 300,\n ):\n \"\"\"Create engine from database URI.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/sql_database.html"} {"id": "7dd79e270ede-1", "text": "):\n \"\"\"Create engine from database URI.\"\"\"\n self._engine = engine\n self._schema = schema\n if include_tables and ignore_tables:\n raise ValueError(\"Cannot specify both include_tables and ignore_tables\")\n self._inspector = inspect(self._engine)\n # including view support by adding the views as well as tables to the all\n # tables list if view_support is True\n self._all_tables = set(\n self._inspector.get_table_names(schema=schema)\n + (self._inspector.get_view_names(schema=schema) if view_support else [])\n )\n self._include_tables = set(include_tables) if include_tables else set()\n if self._include_tables:\n missing_tables = self._include_tables - self._all_tables\n if missing_tables:\n raise ValueError(\n f\"include_tables {missing_tables} not found in database\"\n )\n self._ignore_tables = set(ignore_tables) if ignore_tables else set()\n if self._ignore_tables:\n missing_tables = self._ignore_tables - self._all_tables\n if missing_tables:\n raise ValueError(\n f\"ignore_tables {missing_tables} not found in database\"\n )\n usable_tables = self.get_usable_table_names()\n self._usable_tables = set(usable_tables) if usable_tables else self._all_tables\n if not isinstance(sample_rows_in_table_info, int):\n raise TypeError(\"sample_rows_in_table_info must be an integer\")\n self._sample_rows_in_table_info = sample_rows_in_table_info\n self._indexes_in_table_info = indexes_in_table_info\n self._custom_table_info = custom_table_info\n if self._custom_table_info:\n if not isinstance(self._custom_table_info, dict):\n raise TypeError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/sql_database.html"} {"id": "7dd79e270ede-2", "text": "if not isinstance(self._custom_table_info, dict):\n raise TypeError(\n \"table_info must be a dictionary with table names as keys and the \"\n \"desired table info as values\"\n )\n # only keep the tables that are also present in the database\n intersection = set(self._custom_table_info).intersection(self._all_tables)\n self._custom_table_info = dict(\n (table, self._custom_table_info[table])\n for table in self._custom_table_info\n if table in intersection\n )\n self._max_string_length = max_string_length\n self._metadata = metadata or MetaData()\n # including view support if view_support = true\n self._metadata.reflect(\n views=view_support,\n bind=self._engine,\n only=list(self._usable_tables),\n schema=self._schema,\n )\n @classmethod\n def from_uri(\n cls, database_uri: str, engine_args: Optional[dict] = None, **kwargs: Any\n ) -> SQLDatabase:\n \"\"\"Construct a SQLAlchemy engine from URI.\"\"\"\n _engine_args = engine_args or {}\n return cls(create_engine(database_uri, **_engine_args), **kwargs)\n @classmethod\n def from_databricks(\n cls,\n catalog: str,\n schema: str,\n host: Optional[str] = None,\n api_token: Optional[str] = None,\n warehouse_id: Optional[str] = None,\n cluster_id: Optional[str] = None,\n engine_args: Optional[dict] = None,\n **kwargs: Any,\n ) -> SQLDatabase:\n \"\"\"\n Class method to create an SQLDatabase instance from a Databricks connection.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/sql_database.html"} {"id": "7dd79e270ede-3", "text": "\"\"\"\n Class method to create an SQLDatabase 
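A minimal sketch of constructing the wrapper above from a SQLAlchemy URL; the sqlite path and table name are illustrative and assume such a table exists:

# Illustrative: wrap a local SQLite database and inspect what the LLM sees.
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri(
    "sqlite:///./example.db",
    include_tables=["users"],      # restrict what the wrapper exposes
    sample_rows_in_table_info=2,   # rows appended to each table description
)
print(db.dialect)
print(db.get_usable_table_names())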
instance from a Databricks connection.\n This method requires the 'databricks-sql-connector' package. If not installed,\n it can be added using `pip install databricks-sql-connector`.\n Args:\n catalog (str): The catalog name in the Databricks database.\n schema (str): The schema name in the catalog.\n host (Optional[str]): The Databricks workspace hostname, excluding\n 'https://' part. If not provided, it attempts to fetch from the\n environment variable 'DATABRICKS_HOST'. If still unavailable and if\n running in a Databricks notebook, it defaults to the current workspace\n hostname. Defaults to None.\n api_token (Optional[str]): The Databricks personal access token for\n accessing the Databricks SQL warehouse or the cluster. If not provided,\n it attempts to fetch from 'DATABRICKS_TOKEN'. If still unavailable\n and running in a Databricks notebook, a temporary token for the current\n user is generated. Defaults to None.\n warehouse_id (Optional[str]): The warehouse ID in the Databricks SQL. If\n provided, the method configures the connection to use this warehouse.\n Cannot be used with 'cluster_id'. Defaults to None.\n cluster_id (Optional[str]): The cluster ID in the Databricks Runtime. If\n provided, the method configures the connection to use this cluster.\n Cannot be used with 'warehouse_id'. If running in a Databricks notebook\n and both 'warehouse_id' and 'cluster_id' are None, it uses the ID of the\n cluster the notebook is attached to. Defaults to None.\n engine_args (Optional[dict]): The arguments to be used when connecting", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/sql_database.html"} {"id": "7dd79e270ede-4", "text": "engine_args (Optional[dict]): The arguments to be used when connecting\n Databricks. Defaults to None.\n **kwargs (Any): Additional keyword arguments for the `from_uri` method.\n Returns:\n SQLDatabase: An instance of SQLDatabase configured with the provided\n Databricks connection details.\n Raises:\n ValueError: If 'databricks-sql-connector' is not found, or if both\n 'warehouse_id' and 'cluster_id' are provided, or if neither\n 'warehouse_id' nor 'cluster_id' are provided and it's not executing\n inside a Databricks notebook.\n \"\"\"\n try:\n from databricks import sql # noqa: F401\n except ImportError:\n raise ValueError(\n \"databricks-sql-connector package not found, please install with\"\n \" `pip install databricks-sql-connector`\"\n )\n context = None\n try:\n from dbruntime.databricks_repl_context import get_context\n context = get_context()\n except ImportError:\n pass\n default_host = context.browserHostName if context else None\n if host is None:\n host = utils.get_from_env(\"host\", \"DATABRICKS_HOST\", default_host)\n default_api_token = context.apiToken if context else None\n if api_token is None:\n api_token = utils.get_from_env(\n \"api_token\", \"DATABRICKS_TOKEN\", default_api_token\n )\n if warehouse_id is None and cluster_id is None:\n if context:\n cluster_id = context.clusterId\n else:\n raise ValueError(\n \"Need to provide either 'warehouse_id' or 'cluster_id'.\"\n )\n if warehouse_id and cluster_id:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/sql_database.html"} {"id": "7dd79e270ede-5", "text": ")\n if warehouse_id and cluster_id:\n raise ValueError(\"Can't have both 'warehouse_id' or 'cluster_id'.\")\n if warehouse_id:\n http_path = f\"/sql/1.0/warehouses/{warehouse_id}\"\n else:\n http_path = f\"/sql/protocolv1/o/0/{cluster_id}\"\n uri = (\n 
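A hedged call sketch for the Databricks constructor documented above (every identifier is a placeholder; set exactly one of warehouse_id or cluster_id):

.. code-block:: python

    db = SQLDatabase.from_databricks(
        catalog="samples",
        schema="default",
        host="my-workspace.cloud.databricks.com",  # placeholder, no 'https://'
        api_token="my-personal-access-token",      # placeholder
        warehouse_id="my-warehouse-id",            # placeholder; or cluster_id=...
    )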
f\"databricks://token:{api_token}@{host}?\"\n f\"http_path={http_path}&catalog={catalog}&schema={schema}\"\n )\n return cls.from_uri(database_uri=uri, engine_args=engine_args, **kwargs)\n @classmethod\n def from_cnosdb(\n cls,\n url: str = \"127.0.0.1:8902\",\n user: str = \"root\",\n password: str = \"\",\n tenant: str = \"cnosdb\",\n database: str = \"public\",\n ) -> SQLDatabase:\n \"\"\"\n Class method to create an SQLDatabase instance from a CnosDB connection.\n This method requires the 'cnos-connector' package. If not installed, it\n can be added using `pip install cnos-connector`.\n Args:\n url (str): The HTTP connection host name and port number of the CnosDB\n service, excluding \"http://\" or \"https://\", with a default value\n of \"127.0.0.1:8902\".\n user (str): The username used to connect to the CnosDB service, with a\n default value of \"root\".\n password (str): The password of the user connecting to the CnosDB service,\n with a default value of \"\".", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/sql_database.html"} {"id": "7dd79e270ede-6", "text": "with a default value of \"\".\n tenant (str): The name of the tenant used to connect to the CnosDB service,\n with a default value of \"cnosdb\".\n database (str): The name of the database in the CnosDB tenant.\n Returns:\n SQLDatabase: An instance of SQLDatabase configured with the provided\n CnosDB connection details.\n \"\"\"\n try:\n from cnosdb_connector import make_cnosdb_langchain_uri\n uri = make_cnosdb_langchain_uri(url, user, password, tenant, database)\n return cls.from_uri(database_uri=uri)\n except ImportError:\n raise ValueError(\n \"cnos-connector package not found, please install with\"\n \" `pip install cnos-connector`\"\n )\n @property\n def dialect(self) -> str:\n \"\"\"Return string representation of dialect to use.\"\"\"\n return self._engine.dialect.name\n def get_usable_table_names(self) -> Iterable[str]:\n \"\"\"Get names of tables available.\"\"\"\n if self._include_tables:\n return sorted(self._include_tables)\n return sorted(self._all_tables - self._ignore_tables)\n def get_table_names(self) -> Iterable[str]:\n \"\"\"Get names of tables available.\"\"\"\n warnings.warn(\n \"This method is deprecated - please use `get_usable_table_names`.\"\n )\n return self.get_usable_table_names()\n @property\n def table_info(self) -> str:\n \"\"\"Information about all tables in the database.\"\"\"\n return self.get_table_info()\n def get_table_info(self, table_names: Optional[List[str]] = None) -> str:\n \"\"\"Get information about specified tables.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/sql_database.html"} {"id": "7dd79e270ede-7", "text": "\"\"\"Get information about specified tables.\n Follows best practices as specified in: Rajkumar et al, 2022\n (https://arxiv.org/abs/2204.00498)\n If `sample_rows_in_table_info`, the specified number of sample rows will be\n appended to each table description. 
This can increase performance as\n demonstrated in the paper.\n \"\"\"\n all_table_names = self.get_usable_table_names()\n if table_names is not None:\n missing_tables = set(table_names).difference(all_table_names)\n if missing_tables:\n raise ValueError(f\"table_names {missing_tables} not found in database\")\n all_table_names = table_names\n meta_tables = [\n tbl\n for tbl in self._metadata.sorted_tables\n if tbl.name in set(all_table_names)\n and not (self.dialect == \"sqlite\" and tbl.name.startswith(\"sqlite_\"))\n ]\n tables = []\n for table in meta_tables:\n if self._custom_table_info and table.name in self._custom_table_info:\n tables.append(self._custom_table_info[table.name])\n continue\n # add create table command\n create_table = str(CreateTable(table).compile(self._engine))\n table_info = f\"{create_table.rstrip()}\"\n has_extra_info = (\n self._indexes_in_table_info or self._sample_rows_in_table_info\n )\n if has_extra_info:\n table_info += \"\\n\\n/*\"\n if self._indexes_in_table_info:\n table_info += f\"\\n{self._get_table_indexes(table)}\\n\"\n if self._sample_rows_in_table_info:\n table_info += f\"\\n{self._get_sample_rows(table)}\\n\"\n if has_extra_info:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/sql_database.html"} {"id": "7dd79e270ede-8", "text": "if has_extra_info:\n table_info += \"*/\"\n tables.append(table_info)\n tables.sort()\n final_str = \"\\n\\n\".join(tables)\n return final_str\n def _get_table_indexes(self, table: Table) -> str:\n indexes = self._inspector.get_indexes(table.name)\n indexes_formatted = \"\\n\".join(map(_format_index, indexes))\n return f\"Table Indexes:\\n{indexes_formatted}\"\n def _get_sample_rows(self, table: Table) -> str:\n # build the select command\n command = select(table).limit(self._sample_rows_in_table_info)\n # save the columns in string format\n columns_str = \"\\t\".join([col.name for col in table.columns])\n try:\n # get the sample rows\n with self._engine.connect() as connection:\n sample_rows_result = connection.execute(command) # type: ignore\n # shorten values in the sample rows\n sample_rows = list(\n map(lambda ls: [str(i)[:100] for i in ls], sample_rows_result)\n )\n # save the sample rows in string format\n sample_rows_str = \"\\n\".join([\"\\t\".join(row) for row in sample_rows])\n # in some dialects when there are no rows in the table a\n # 'ProgrammingError' is returned\n except ProgrammingError:\n sample_rows_str = \"\"\n return (\n f\"{self._sample_rows_in_table_info} rows from {table.name} table:\\n\"\n f\"{columns_str}\\n\"\n f\"{sample_rows_str}\"\n )\n def run(self, command: str, fetch: str = \"all\") -> str:\n \"\"\"Execute a SQL command and return a string representing the results.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/sql_database.html"} {"id": "7dd79e270ede-9", "text": "\"\"\"Execute a SQL command and return a string representing the results.\n If the statement returns rows, a string of the results is returned.\n If the statement returns no rows, an empty string is returned.\n \"\"\"\n with self._engine.begin() as connection:\n if self._schema is not None:\n if self.dialect == \"snowflake\":\n connection.exec_driver_sql(\n f\"ALTER SESSION SET search_path='{self._schema}'\"\n )\n elif self.dialect == \"bigquery\":\n connection.exec_driver_sql(f\"SET @@dataset_id='{self._schema}'\")\n else:\n connection.exec_driver_sql(f\"SET search_path TO {self._schema}\")\n cursor = connection.execute(text(command))\n if cursor.returns_rows:\n if fetch 
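Given the assembly above, the rendered description for a hypothetical `users` table with sample_rows_in_table_info=2 looks roughly like:

.. code-block:: python

    info = db.get_table_info(["users"])
    # CREATE TABLE users (
    #     id INTEGER NOT NULL,
    #     name VARCHAR,
    #     PRIMARY KEY (id)
    # )
    #
    # /*
    # 2 rows from users table:
    # id    name
    # 1     alice
    # 2     bob
    # */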
== \"all\":\n result = cursor.fetchall()\n elif fetch == \"one\":\n result = cursor.fetchone() # type: ignore\n else:\n raise ValueError(\"Fetch parameter must be either 'one' or 'all'\")\n # Convert columns values to string to avoid issues with sqlalchmey\n # trunacating text\n if isinstance(result, list):\n return str(\n [\n tuple(\n truncate_word(c, length=self._max_string_length)\n for c in r\n )\n for r in result\n ]\n )\n return str(\n tuple(\n truncate_word(c, length=self._max_string_length) for c in result\n )\n )\n return \"\"\n def get_table_info_no_throw(self, table_names: Optional[List[str]] = None) -> str:\n \"\"\"Get information about specified tables.\n Follows best practices as specified in: Rajkumar et al, 2022", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/sql_database.html"} {"id": "7dd79e270ede-10", "text": "Follows best practices as specified in: Rajkumar et al, 2022\n (https://arxiv.org/abs/2204.00498)\n If `sample_rows_in_table_info`, the specified number of sample rows will be\n appended to each table description. This can increase performance as\n demonstrated in the paper.\n \"\"\"\n try:\n return self.get_table_info(table_names)\n except ValueError as e:\n \"\"\"Format the error message\"\"\"\n return f\"Error: {e}\"\n def run_no_throw(self, command: str, fetch: str = \"all\") -> str:\n \"\"\"Execute a SQL command and return a string representing the results.\n If the statement returns rows, a string of the results is returned.\n If the statement returns no rows, an empty string is returned.\n If the statement throws an error, the error message is returned.\n \"\"\"\n try:\n return self.run(command, fetch)\n except SQLAlchemyError as e:\n \"\"\"Format the error message\"\"\"\n return f\"Error: {e}\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/sql_database.html"} {"id": "ad53f5087877-0", "text": "Source code for langchain.input\n\"\"\"Handle chained inputs.\"\"\"\nfrom typing import Dict, List, Optional, TextIO\n_TEXT_COLOR_MAPPING = {\n \"blue\": \"36;1\",\n \"yellow\": \"33;1\",\n \"pink\": \"38;5;200\",\n \"green\": \"32;1\",\n \"red\": \"31;1\",\n}\n[docs]def get_color_mapping(\n items: List[str], excluded_colors: Optional[List] = None\n) -> Dict[str, str]:\n \"\"\"Get mapping for items to a support color.\"\"\"\n colors = list(_TEXT_COLOR_MAPPING.keys())\n if excluded_colors is not None:\n colors = [c for c in colors if c not in excluded_colors]\n color_mapping = {item: colors[i % len(colors)] for i, item in enumerate(items)}\n return color_mapping\n[docs]def get_colored_text(text: str, color: str) -> str:\n \"\"\"Get colored text.\"\"\"\n color_str = _TEXT_COLOR_MAPPING[color]\n return f\"\\u001b[{color_str}m\\033[1;3m{text}\\u001b[0m\"\n[docs]def get_bolded_text(text: str) -> str:\n \"\"\"Get bolded text.\"\"\"\n return f\"\\033[1m{text}\\033[0m\"\n[docs]def print_text(\n text: str, color: Optional[str] = None, end: str = \"\", file: Optional[TextIO] = None\n) -> None:\n \"\"\"Print text with highlighting and no end characters.\"\"\"\n text_to_print = get_colored_text(text, color) if color else text\n print(text_to_print, end=end, file=file)\n if file:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/input.html"} {"id": "ad53f5087877-1", "text": "print(text_to_print, end=end, file=file)\n if file:\n file.flush() # ensure all printed content are written to file", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/input.html"} {"id": "168087c11ead-0", 
"text": "Source code for langchain.env\nimport platform\nfrom functools import lru_cache\n[docs]@lru_cache(maxsize=1)\ndef get_runtime_environment() -> dict:\n \"\"\"Get information about the environment.\"\"\"\n # Lazy import to avoid circular imports\n from langchain import __version__\n return {\n \"library_version\": __version__,\n \"library\": \"langchain\",\n \"platform\": platform.platform(),\n \"runtime\": \"python\",\n \"runtime_version\": platform.python_version(),\n }", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/env.html"} {"id": "02050d9d0f21-0", "text": "Source code for langchain.example_generator\n\"\"\"Utility functions for working with prompts.\"\"\"\nfrom typing import List\nfrom langchain.chains.llm import LLMChain\nfrom langchain.prompts.few_shot import FewShotPromptTemplate\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\nTEST_GEN_TEMPLATE_SUFFIX = \"Add another example.\"\n[docs]def generate_example(\n examples: List[dict], llm: BaseLanguageModel, prompt_template: PromptTemplate\n) -> str:\n \"\"\"Return another example given a list of examples for a prompt.\"\"\"\n prompt = FewShotPromptTemplate(\n examples=examples,\n suffix=TEST_GEN_TEMPLATE_SUFFIX,\n input_variables=[],\n example_prompt=prompt_template,\n )\n chain = LLMChain(llm=llm, prompt=prompt)\n return chain.predict()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/example_generator.html"} {"id": "118827f75f18-0", "text": "Source code for langchain.embeddings.deepinfra\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nDEFAULT_MODEL_ID = \"sentence-transformers/clip-ViT-B-32\"\n[docs]class DeepInfraEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around Deep Infra's embedding inference service.\n To use, you should have the\n environment variable ``DEEPINFRA_API_TOKEN`` set with your API token, or pass\n it as a named parameter to the constructor.\n There are multiple embeddings models available,\n see https://deepinfra.com/models?type=embeddings.\n Example:\n .. 
code-block:: python\n from langchain.embeddings import DeepInfraEmbeddings\n deepinfra_emb = DeepInfraEmbeddings(\n model_id=\"sentence-transformers/clip-ViT-B-32\",\n deepinfra_api_token=\"my-api-key\"\n )\n r1 = deepinfra_emb.embed_documents(\n [\n \"Alpha is the first letter of Greek alphabet\",\n \"Beta is the second letter of Greek alphabet\",\n ]\n )\n r2 = deepinfra_emb.embed_query(\n \"What is the second letter of Greek alphabet\"\n )\n \"\"\"\n model_id: str = DEFAULT_MODEL_ID\n \"\"\"Embeddings model to use.\"\"\"\n normalize: bool = False\n \"\"\"Whether to normalize the computed embeddings.\"\"\"\n embed_instruction: str = \"passage: \"\n \"\"\"Instruction used to embed documents.\"\"\"\n query_instruction: str = \"query: \"\n \"\"\"Instruction used to embed the query.\"\"\"\n model_kwargs: Optional[dict] = None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/deepinfra.html"} {"id": "118827f75f18-1", "text": "model_kwargs: Optional[dict] = None\n \"\"\"Other model keyword args\"\"\"\n deepinfra_api_token: Optional[str] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exist in the environment.\"\"\"\n deepinfra_api_token = get_from_dict_or_env(\n values, \"deepinfra_api_token\", \"DEEPINFRA_API_TOKEN\"\n )\n values[\"deepinfra_api_token\"] = deepinfra_api_token\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\"model_id\": self.model_id}\n def _embed(self, input: List[str]) -> List[List[float]]:\n _model_kwargs = self.model_kwargs or {}\n # HTTP headers for authorization\n headers = {\n \"Authorization\": f\"bearer {self.deepinfra_api_token}\",\n \"Content-Type\": \"application/json\",\n }\n # send request\n try:\n res = requests.post(\n f\"https://api.deepinfra.com/v1/inference/{self.model_id}\",\n headers=headers,\n json={\"inputs\": input, \"normalize\": self.normalize, **_model_kwargs},\n )\n except requests.exceptions.RequestException as e:\n raise ValueError(f\"Error raised by inference endpoint: {e}\")\n if res.status_code != 200:\n raise ValueError(\n \"Error raised by inference API HTTP code: %s, %s\"\n % (res.status_code, res.text)\n )\n try:\n t = res.json()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/deepinfra.html"} {"id": "118827f75f18-2", "text": ")\n try:\n t = res.json()\n embeddings = t[\"embeddings\"]\n except requests.exceptions.JSONDecodeError as e:\n raise ValueError(\n f\"Error raised by inference API: {e}.\\nResponse: {res.text}\"\n )\n return embeddings\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Embed documents using a Deep Infra deployed embedding model.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n # documents get the document instruction, not the query instruction\n instruction_pairs = [f\"{self.embed_instruction}{text}\" for text in texts]\n embeddings = self._embed(instruction_pairs)\n return embeddings\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Embed a query using a Deep Infra deployed embedding model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n instruction_pair = f\"{self.query_instruction}{text}\"\n embedding = self._embed([instruction_pair])[0]\n return embedding", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/deepinfra.html"} {"id": "5bf956fdef61-0", "text": "Source code for langchain.embeddings.sagemaker_endpoint\n\"\"\"Wrapper around Sagemaker InvokeEndpoint API.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.llms.sagemaker_endpoint import ContentHandlerBase\n[docs]class EmbeddingsContentHandler(ContentHandlerBase[List[str], List[List[float]]]):\n \"\"\"Content handler for LLM class.\"\"\"\n[docs]class SagemakerEndpointEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around custom Sagemaker Inference Endpoints.\n To use, you must supply the endpoint name from your deployed\n Sagemaker model & the region where it is deployed.\n To authenticate, the AWS client uses the following methods to\n automatically load credentials:\n https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\n If a specific credential profile should be used, you must pass\n the name of the profile from the ~/.aws/credentials file that is to be used.\n Make sure the credentials / roles used have the required policies to\n access the Sagemaker endpoint.\n See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html\n \"\"\"\n \"\"\"\n Example:\n .. code-block:: python\n from langchain.embeddings import SagemakerEndpointEmbeddings\n endpoint_name = (\n \"my-endpoint-name\"\n )\n region_name = (\n \"us-west-2\"\n )\n credentials_profile_name = (\n \"default\"\n )\n se = SagemakerEndpointEmbeddings(\n endpoint_name=endpoint_name,\n region_name=region_name,\n credentials_profile_name=credentials_profile_name\n )\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/sagemaker_endpoint.html"} {"id": "5bf956fdef61-1", "text": "credentials_profile_name=credentials_profile_name\n )\n \"\"\"\n client: Any #: :meta private:\n endpoint_name: str = \"\"\n \"\"\"The name of the endpoint from the deployed Sagemaker model.\n Must be unique within an AWS Region.\"\"\"\n region_name: str = \"\"\n \"\"\"The aws region where the Sagemaker model is deployed, eg. `us-west-2`.\"\"\"\n credentials_profile_name: Optional[str] = None\n \"\"\"The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which\n has either access keys or role information specified.\n If not specified, the default credential profile or, if on an EC2 instance,\n credentials from IMDS will be used.\n See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\n \"\"\"\n content_handler: EmbeddingsContentHandler\n \"\"\"The content handler class that provides an input and\n output transform functions to handle formats between LLM\n and the endpoint.\n \"\"\"\n \"\"\"\n Example:\n .. 
code-block:: python\n from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler\n class ContentHandler(EmbeddingsContentHandler):\n content_type = \"application/json\"\n accepts = \"application/json\"\n def transform_input(self, prompts: List[str], model_kwargs: Dict) -> bytes:\n input_str = json.dumps({\"inputs\": prompts, **model_kwargs})\n return input_str.encode('utf-8')\n def transform_output(self, output: bytes) -> List[List[float]]:\n response_json = json.loads(output.read().decode(\"utf-8\"))\n return response_json[\"vectors\"]\n \"\"\" # noqa: E501\n model_kwargs: Optional[Dict] = None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/sagemaker_endpoint.html"} {"id": "5bf956fdef61-2", "text": "\"\"\" # noqa: E501\n model_kwargs: Optional[Dict] = None\n \"\"\"Keyword arguments to pass to the model.\"\"\"\n endpoint_kwargs: Optional[Dict] = None\n \"\"\"Optional attributes passed to the invoke_endpoint\n function. See the boto3 docs for more info.\n \"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that AWS credentials and the boto3 python package exist in the environment.\"\"\"\n try:\n import boto3\n try:\n if values[\"credentials_profile_name\"] is not None:\n session = boto3.Session(\n profile_name=values[\"credentials_profile_name\"]\n )\n else:\n # use default credentials\n session = boto3.Session()\n values[\"client\"] = session.client(\n \"sagemaker-runtime\", region_name=values[\"region_name\"]\n )\n except Exception as e:\n raise ValueError(\n \"Could not load credentials to authenticate with AWS client. \"\n \"Please check that credentials in the specified \"\n \"profile name are valid.\"\n ) from e\n except ImportError:\n raise ValueError(\n \"Could not import boto3 python package. \"\n \"Please install it with `pip install boto3`.\"\n )\n return values\n def _embedding_func(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Call out to SageMaker Inference embedding endpoint.\"\"\"\n # replace newlines, which can negatively affect performance.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/sagemaker_endpoint.html"} {"id": "5bf956fdef61-3", "text": "# replace newlines, which can negatively affect performance.\n texts = list(map(lambda x: x.replace(\"\\n\", \" \"), texts))\n _model_kwargs = self.model_kwargs or {}\n _endpoint_kwargs = self.endpoint_kwargs or {}\n body = self.content_handler.transform_input(texts, _model_kwargs)\n content_type = self.content_handler.content_type\n accepts = self.content_handler.accepts\n # send request\n try:\n response = self.client.invoke_endpoint(\n EndpointName=self.endpoint_name,\n Body=body,\n ContentType=content_type,\n Accept=accepts,\n **_endpoint_kwargs,\n )\n except Exception as e:\n raise ValueError(f\"Error raised by inference endpoint: {e}\")\n return self.content_handler.transform_output(response[\"Body\"])\n[docs] def embed_documents(\n self, texts: List[str], chunk_size: int = 64\n ) -> List[List[float]]:\n \"\"\"Compute doc embeddings using a SageMaker Inference Endpoint.\n Args:\n texts: The list of texts to embed.\n chunk_size: The chunk size defines how many input texts will\n be grouped together as request. 
If None, will use the\n chunk size specified by the class.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n results = []\n _chunk_size = len(texts) if chunk_size > len(texts) else chunk_size\n for i in range(0, len(texts), _chunk_size):\n response = self._embedding_func(texts[i : i + _chunk_size])\n results.extend(response)\n return results\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Compute query embeddings using a SageMaker inference endpoint.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/sagemaker_endpoint.html"} {"id": "5bf956fdef61-4", "text": "\"\"\"Compute query embeddings using a SageMaker inference endpoint.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n return self._embedding_func([text])[0]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/sagemaker_endpoint.html"} {"id": "dd97fc1e26b4-0", "text": "Source code for langchain.embeddings.llamacpp\n\"\"\"Wrapper around llama.cpp embedding models.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, Field, root_validator\nfrom langchain.embeddings.base import Embeddings\n[docs]class LlamaCppEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around llama.cpp embedding models.\n To use, you should have the llama-cpp-python library installed, and provide the\n path to the Llama model as a named parameter to the constructor.\n Check out: https://github.com/abetlen/llama-cpp-python\n Example:\n .. code-block:: python\n from langchain.embeddings import LlamaCppEmbeddings\n llama = LlamaCppEmbeddings(model_path=\"/path/to/model.bin\")\n \"\"\"\n client: Any #: :meta private:\n model_path: str\n n_ctx: int = Field(512, alias=\"n_ctx\")\n \"\"\"Token context window.\"\"\"\n n_parts: int = Field(-1, alias=\"n_parts\")\n \"\"\"Number of parts to split the model into. \n If -1, the number of parts is automatically determined.\"\"\"\n seed: int = Field(-1, alias=\"seed\")\n \"\"\"Seed. If -1, a random seed is used.\"\"\"\n f16_kv: bool = Field(False, alias=\"f16_kv\")\n \"\"\"Use half-precision for key/value cache.\"\"\"\n logits_all: bool = Field(False, alias=\"logits_all\")\n \"\"\"Return logits for all tokens, not just the last token.\"\"\"\n vocab_only: bool = Field(False, alias=\"vocab_only\")\n \"\"\"Only load the vocabulary, no weights.\"\"\"\n use_mlock: bool = Field(False, alias=\"use_mlock\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/llamacpp.html"} {"id": "dd97fc1e26b4-1", "text": "use_mlock: bool = Field(False, alias=\"use_mlock\")\n \"\"\"Force system to keep model in RAM.\"\"\"\n n_threads: Optional[int] = Field(None, alias=\"n_threads\")\n \"\"\"Number of threads to use. If None, the number \n of threads is automatically determined.\"\"\"\n n_batch: Optional[int] = Field(8, alias=\"n_batch\")\n \"\"\"Number of tokens to process in parallel.\n Should be a number between 1 and n_ctx.\"\"\"\n n_gpu_layers: Optional[int] = Field(None, alias=\"n_gpu_layers\")\n \"\"\"Number of layers to be loaded into gpu memory. 
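The SageMaker embed_documents chunking in miniature (reusing the `se` instance from the class docstring example; note chunk_size is an int defaulting to 64, and a value larger than the list is clamped to its length):

.. code-block:: python

    texts = ["doc one", "doc two", "doc three"]
    vectors = se.embed_documents(texts, chunk_size=2)
    # -> endpoint invoked twice: texts[0:2], then texts[2:3]
    # -> len(vectors) == 3, one embedding per input text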
Default None.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that llama-cpp-python library is installed.\"\"\"\n model_path = values[\"model_path\"]\n model_param_names = [\n \"n_ctx\",\n \"n_parts\",\n \"seed\",\n \"f16_kv\",\n \"logits_all\",\n \"vocab_only\",\n \"use_mlock\",\n \"n_threads\",\n \"n_batch\",\n ]\n model_params = {k: values[k] for k in model_param_names}\n # For backwards compatibility, only include if non-null.\n if values[\"n_gpu_layers\"] is not None:\n model_params[\"n_gpu_layers\"] = values[\"n_gpu_layers\"]\n try:\n from llama_cpp import Llama\n values[\"client\"] = Llama(model_path, embedding=True, **model_params)\n except ImportError:\n raise ModuleNotFoundError(\n \"Could not import llama-cpp-python library. \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/llamacpp.html"} {"id": "dd97fc1e26b4-2", "text": "raise ModuleNotFoundError(\n \"Could not import llama-cpp-python library. \"\n \"Please install the llama-cpp-python library to \"\n \"use this embedding model: pip install llama-cpp-python\"\n )\n except Exception as e:\n raise ValueError(\n f\"Could not load Llama model from path: {model_path}. \"\n f\"Received error {e}\"\n )\n return values\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Embed a list of documents using the Llama model.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n embeddings = [self.client.embed(text) for text in texts]\n return [list(map(float, e)) for e in embeddings]\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Embed a query using the Llama model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n embedding = self.client.embed(text)\n return list(map(float, embedding))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/llamacpp.html"} {"id": "e37f1cf7730b-0", "text": "Source code for langchain.embeddings.openai\n\"\"\"Wrapper around OpenAI embedding models.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import (\n Any,\n Callable,\n Dict,\n List,\n Literal,\n Optional,\n Sequence,\n Set,\n Tuple,\n Union,\n)\nimport numpy as np\nfrom pydantic import BaseModel, Extra, root_validator\nfrom tenacity import (\n AsyncRetrying,\n before_sleep_log,\n retry,\n retry_if_exception_type,\n stop_after_attempt,\n wait_exponential,\n)\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\ndef _create_retry_decorator(embeddings: OpenAIEmbeddings) -> Callable[[Any], Any]:\n import openai\n min_seconds = 4\n max_seconds = 10\n # Wait 2^x * 1 second between each retry starting with\n # 4 seconds, then up to 10 seconds, then 10 seconds afterwards\n return retry(\n reraise=True,\n stop=stop_after_attempt(embeddings.max_retries),\n wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),\n retry=(\n retry_if_exception_type(openai.error.Timeout)\n | retry_if_exception_type(openai.error.APIError)\n | retry_if_exception_type(openai.error.APIConnectionError)\n | retry_if_exception_type(openai.error.RateLimitError)\n | retry_if_exception_type(openai.error.ServiceUnavailableError)\n ),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )\ndef 
_async_retry_decorator(embeddings: OpenAIEmbeddings) -> Any:\n import openai", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"} {"id": "e37f1cf7730b-1", "text": "import openai\n min_seconds = 4\n max_seconds = 10\n # Wait 2^x * 1 second between each retry starting with\n # 4 seconds, then up to 10 seconds, then 10 seconds afterwards\n async_retrying = AsyncRetrying(\n reraise=True,\n stop=stop_after_attempt(embeddings.max_retries),\n wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),\n retry=(\n retry_if_exception_type(openai.error.Timeout)\n | retry_if_exception_type(openai.error.APIError)\n | retry_if_exception_type(openai.error.APIConnectionError)\n | retry_if_exception_type(openai.error.RateLimitError)\n | retry_if_exception_type(openai.error.ServiceUnavailableError)\n ),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )\n def wrap(func: Callable) -> Callable:\n async def wrapped_f(*args: Any, **kwargs: Any) -> Callable:\n async for _ in async_retrying:\n return await func(*args, **kwargs)\n raise AssertionError(\"this is unreachable\")\n return wrapped_f\n return wrap\n# https://stackoverflow.com/questions/76469415/getting-embeddings-of-length-1-from-langchain-openaiembeddings\ndef _check_response(response: dict) -> dict:\n if any(len(d[\"embedding\"]) == 1 for d in response[\"data\"]):\n import openai\n raise openai.error.APIError(\"OpenAI API returned an empty embedding\")\n return response\n[docs]def embed_with_retry(embeddings: OpenAIEmbeddings, **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the embedding call.\"\"\"\n retry_decorator = _create_retry_decorator(embeddings)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"} {"id": "e37f1cf7730b-2", "text": "retry_decorator = _create_retry_decorator(embeddings)\n @retry_decorator\n def _embed_with_retry(**kwargs: Any) -> Any:\n response = embeddings.client.create(**kwargs)\n return _check_response(response)\n return _embed_with_retry(**kwargs)\nasync def async_embed_with_retry(embeddings: OpenAIEmbeddings, **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the embedding call.\"\"\"\n @_async_retry_decorator(embeddings)\n async def _async_embed_with_retry(**kwargs: Any) -> Any:\n response = await embeddings.client.acreate(**kwargs)\n return _check_response(response)\n return await _async_embed_with_retry(**kwargs)\n[docs]class OpenAIEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around OpenAI embedding models.\n To use, you should have the ``openai`` python package installed, and the\n environment variable ``OPENAI_API_KEY`` set with your API key or pass it\n as a named parameter to the constructor.\n Example:\n .. code-block:: python\n from langchain.embeddings import OpenAIEmbeddings\n openai = OpenAIEmbeddings(openai_api_key=\"my-api-key\")\n In order to use the library with Microsoft Azure endpoints, you need to set\n the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and OPENAI_API_VERSION.\n The OPENAI_API_TYPE must be set to 'azure' and the others correspond to\n the properties of your endpoint.\n In addition, the deployment name must be passed as the model parameter.\n Example:\n .. 
code-block:: python\n import os\n os.environ[\"OPENAI_API_TYPE\"] = \"azure\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"} {"id": "e37f1cf7730b-3", "text": "import os\n os.environ[\"OPENAI_API_TYPE\"] = \"azure\"\n os.environ[\"OPENAI_API_BASE\"] = \"https://your-endpoint.openai.azure.com/\"\n \"\"\"\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exist in the environment.\"\"\"\n values[\"openai_api_key\"] = get_from_dict_or_env(\n values, \"openai_api_key\", \"OPENAI_API_KEY\"\n )\n values[\"openai_api_base\"] = get_from_dict_or_env(\n values,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"} {"id": "e37f1cf7730b-5", "text": "values[\"openai_api_base\"] = get_from_dict_or_env(\n values,\n \"openai_api_base\",\n \"OPENAI_API_BASE\",\n default=\"\",\n )\n values[\"openai_api_type\"] = get_from_dict_or_env(\n values,\n \"openai_api_type\",\n \"OPENAI_API_TYPE\",\n default=\"\",\n )\n values[\"openai_proxy\"] = get_from_dict_or_env(\n values,\n \"openai_proxy\",\n \"OPENAI_PROXY\",\n default=\"\",\n )\n if values[\"openai_api_type\"] in (\"azure\", \"azure_ad\", \"azuread\"):\n default_api_version = \"2022-12-01\"\n else:\n default_api_version = \"\"\n values[\"openai_api_version\"] = get_from_dict_or_env(\n values,\n \"openai_api_version\",\n \"OPENAI_API_VERSION\",\n default=default_api_version,\n )\n values[\"openai_organization\"] = get_from_dict_or_env(\n values,\n \"openai_organization\",\n \"OPENAI_ORGANIZATION\",\n default=\"\",\n )\n try:\n import openai\n values[\"client\"] = openai.Embedding\n except ImportError:\n raise ImportError(\n \"Could not import openai python package. \"\n \"Please install it with `pip install openai`.\"\n )\n return values\n @property\n def _invocation_params(self) -> Dict:\n openai_args = {\n \"engine\": self.deployment,\n \"request_timeout\": self.request_timeout,\n \"headers\": self.headers,\n \"api_key\": self.openai_api_key,\n \"organization\": self.openai_organization,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"} {"id": "e37f1cf7730b-6", "text": "\"organization\": self.openai_organization,\n \"api_base\": self.openai_api_base,\n \"api_type\": self.openai_api_type,\n \"api_version\": self.openai_api_version,\n }\n if self.openai_proxy:\n import openai\n openai.proxy = {\n \"http\": self.openai_proxy,\n \"https\": self.openai_proxy,\n } # type: ignore[assignment] # noqa: E501\n return openai_args\n # please refer to\n # https://github.com/openai/openai-cookbook/blob/main/examples/Embedding_long_inputs.ipynb\n def _get_len_safe_embeddings(\n self, texts: List[str], *, engine: str, chunk_size: Optional[int] = None\n ) -> List[List[float]]:\n embeddings: List[List[float]] = [[] for _ in range(len(texts))]\n try:\n import tiktoken\n except ImportError:\n raise ImportError(\n \"Could not import tiktoken python package. \"\n \"This is needed in order to use OpenAIEmbeddings. \"\n \"Please install it with `pip install tiktoken`.\"\n )\n tokens = []\n indices = []\n model_name = self.tiktoken_model_name or self.model\n try:\n encoding = tiktoken.encoding_for_model(model_name)\n except KeyError:\n logger.warning(\"Warning: model not found. 
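A hedged sketch of the Azure setup the docstring above describes (endpoint, key, and deployment name are placeholders; "2022-12-01" is the default api version validate_environment applies for the azure api type):

.. code-block:: python

    import os
    from langchain.embeddings import OpenAIEmbeddings

    os.environ["OPENAI_API_TYPE"] = "azure"
    os.environ["OPENAI_API_BASE"] = "https://your-endpoint.openai.azure.com/"
    os.environ["OPENAI_API_KEY"] = "your-azure-openai-key"
    os.environ["OPENAI_API_VERSION"] = "2022-12-01"

    # The deployment name is passed as the model parameter.
    embeddings = OpenAIEmbeddings(model="your-embeddings-deployment-name")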
Using cl100k_base encoding.\")\n model = \"cl100k_base\"\n encoding = tiktoken.get_encoding(model)\n for i, text in enumerate(texts):\n if self.model.endswith(\"001\"):\n # See: https://github.com/openai/openai-python/issues/418#issuecomment-1525939500", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"} {"id": "e37f1cf7730b-7", "text": "# replace newlines, which can negatively affect performance.\n text = text.replace(\"\\n\", \" \")\n token = encoding.encode(\n text,\n allowed_special=self.allowed_special,\n disallowed_special=self.disallowed_special,\n )\n for j in range(0, len(token), self.embedding_ctx_length):\n tokens += [token[j : j + self.embedding_ctx_length]]\n indices += [i]\n batched_embeddings = []\n _chunk_size = chunk_size or self.chunk_size\n if self.show_progress_bar:\n try:\n import tqdm\n _iter = tqdm.tqdm(range(0, len(tokens), _chunk_size))\n except ImportError:\n _iter = range(0, len(tokens), _chunk_size)\n else:\n _iter = range(0, len(tokens), _chunk_size)\n for i in _iter:\n response = embed_with_retry(\n self,\n input=tokens[i : i + _chunk_size],\n **self._invocation_params,\n )\n batched_embeddings += [r[\"embedding\"] for r in response[\"data\"]]\n results: List[List[List[float]]] = [[] for _ in range(len(texts))]\n num_tokens_in_batch: List[List[int]] = [[] for _ in range(len(texts))]\n for i in range(len(indices)):\n results[indices[i]].append(batched_embeddings[i])\n num_tokens_in_batch[indices[i]].append(len(tokens[i]))\n for i in range(len(texts)):\n _result = results[i]\n if len(_result) == 0:\n average = embed_with_retry(\n self,\n input=\"\",\n **self._invocation_params,\n )[\n \"data\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"} {"id": "e37f1cf7730b-8", "text": "**self._invocation_params,\n )[\n \"data\"\n ][0][\"embedding\"]\n else:\n average = np.average(_result, axis=0, weights=num_tokens_in_batch[i])\n embeddings[i] = (average / np.linalg.norm(average)).tolist()\n return embeddings\n # please refer to\n # https://github.com/openai/openai-cookbook/blob/main/examples/Embedding_long_inputs.ipynb\n async def _aget_len_safe_embeddings(\n self, texts: List[str], *, engine: str, chunk_size: Optional[int] = None\n ) -> List[List[float]]:\n embeddings: List[List[float]] = [[] for _ in range(len(texts))]\n try:\n import tiktoken\n except ImportError:\n raise ImportError(\n \"Could not import tiktoken python package. \"\n \"This is needed in order to use OpenAIEmbeddings. \"\n \"Please install it with `pip install tiktoken`.\"\n )\n tokens = []\n indices = []\n model_name = self.tiktoken_model_name or self.model\n try:\n encoding = tiktoken.encoding_for_model(model_name)\n except KeyError:\n logger.warning(\"Warning: model not found. 
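The tail of the length-safe routine in isolation: chunk embeddings of one long text are averaged with token-count weights, then L2-normalised (illustrative numbers):

.. code-block:: python

    import numpy as np

    chunk_embs = [[1.0, 0.0], [0.0, 1.0]]  # embeddings of two chunks of one text
    weights = [300, 100]                   # tokens per chunk
    average = np.average(chunk_embs, axis=0, weights=weights)  # [0.75, 0.25]
    final = (average / np.linalg.norm(average)).tolist()
    # -> [~0.9487, ~0.3162], a unit-length vector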
Using cl100k_base encoding.\")\n model = \"cl100k_base\"\n encoding = tiktoken.get_encoding(model)\n for i, text in enumerate(texts):\n if self.model.endswith(\"001\"):\n # See: https://github.com/openai/openai-python/issues/418#issuecomment-1525939500\n # replace newlines, which can negatively affect performance.\n text = text.replace(\"\\n\", \" \")\n token = encoding.encode(\n text,\n allowed_special=self.allowed_special,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"} {"id": "e37f1cf7730b-9", "text": "token = encoding.encode(\n text,\n allowed_special=self.allowed_special,\n disallowed_special=self.disallowed_special,\n )\n for j in range(0, len(token), self.embedding_ctx_length):\n tokens += [token[j : j + self.embedding_ctx_length]]\n indices += [i]\n batched_embeddings = []\n _chunk_size = chunk_size or self.chunk_size\n for i in range(0, len(tokens), _chunk_size):\n response = await async_embed_with_retry(\n self,\n input=tokens[i : i + _chunk_size],\n **self._invocation_params,\n )\n batched_embeddings += [r[\"embedding\"] for r in response[\"data\"]]\n results: List[List[List[float]]] = [[] for _ in range(len(texts))]\n num_tokens_in_batch: List[List[int]] = [[] for _ in range(len(texts))]\n for i in range(len(indices)):\n results[indices[i]].append(batched_embeddings[i])\n num_tokens_in_batch[indices[i]].append(len(tokens[i]))\n for i in range(len(texts)):\n _result = results[i]\n if len(_result) == 0:\n average = (\n await async_embed_with_retry(\n self,\n input=\"\",\n **self._invocation_params,\n )\n )[\"data\"][0][\"embedding\"]\n else:\n average = np.average(_result, axis=0, weights=num_tokens_in_batch[i])\n embeddings[i] = (average / np.linalg.norm(average)).tolist()\n return embeddings\n def _embedding_func(self, text: str, *, engine: str) -> List[float]:\n \"\"\"Call out to OpenAI's embedding endpoint.\"\"\"\n # handle large input text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"} {"id": "e37f1cf7730b-10", "text": "\"\"\"Call out to OpenAI's embedding endpoint.\"\"\"\n # handle large input text\n if len(text) > self.embedding_ctx_length:\n return self._get_len_safe_embeddings([text], engine=engine)[0]\n else:\n if self.model.endswith(\"001\"):\n # See: https://github.com/openai/openai-python/issues/418#issuecomment-1525939500\n # replace newlines, which can negatively affect performance.\n text = text.replace(\"\\n\", \" \")\n return embed_with_retry(\n self,\n input=[text],\n **self._invocation_params,\n )[\n \"data\"\n ][0][\"embedding\"]\n async def _aembedding_func(self, text: str, *, engine: str) -> List[float]:\n \"\"\"Call out to OpenAI's embedding endpoint.\"\"\"\n # handle large input text\n if len(text) > self.embedding_ctx_length:\n return (await self._aget_len_safe_embeddings([text], engine=engine))[0]\n else:\n if self.model.endswith(\"001\"):\n # See: https://github.com/openai/openai-python/issues/418#issuecomment-1525939500\n # replace newlines, which can negatively affect performance.\n text = text.replace(\"\\n\", \" \")\n return (\n await async_embed_with_retry(\n self,\n input=[text],\n **self._invocation_params,\n )\n )[\"data\"][0][\"embedding\"]\n[docs] def embed_documents(\n self, texts: List[str], chunk_size: Optional[int] = 0\n ) -> List[List[float]]:\n \"\"\"Call out to OpenAI's embedding endpoint for embedding search docs.\n Args:\n texts: The list of texts to embed.", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"} {"id": "e37f1cf7730b-11", "text": "Args:\n texts: The list of texts to embed.\n chunk_size: The chunk size of embeddings. If None, will use the chunk size\n specified by the class.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n # NOTE: to keep things simple, we assume the list may contain texts longer\n # than the maximum context and use length-safe embedding function.\n return self._get_len_safe_embeddings(texts, engine=self.deployment)\n[docs] async def aembed_documents(\n self, texts: List[str], chunk_size: Optional[int] = 0\n ) -> List[List[float]]:\n \"\"\"Call out to OpenAI's embedding endpoint async for embedding search docs.\n Args:\n texts: The list of texts to embed.\n chunk_size: The chunk size of embeddings. If None, will use the chunk size\n specified by the class.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n # NOTE: to keep things simple, we assume the list may contain texts longer\n # than the maximum context and use length-safe embedding function.\n return await self._aget_len_safe_embeddings(texts, engine=self.deployment)\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Call out to OpenAI's embedding endpoint for embedding query text.\n Args:\n text: The text to embed.\n Returns:\n Embedding for the text.\n \"\"\"\n embedding = self._embedding_func(text, engine=self.deployment)\n return embedding\n[docs] async def aembed_query(self, text: str) -> List[float]:\n \"\"\"Call out to OpenAI's embedding endpoint async for embedding query text.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"} {"id": "e37f1cf7730b-12", "text": "Args:\n text: The text to embed.\n Returns:\n Embedding for the text.\n \"\"\"\n embedding = await self._aembedding_func(text, engine=self.deployment)\n return embedding", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"} {"id": "8fa7e58f0592-0", "text": "Source code for langchain.embeddings.vertexai\n\"\"\"Wrapper around Google VertexAI embedding models.\"\"\"\nfrom typing import Dict, List\nfrom pydantic import root_validator\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.llms.vertexai import _VertexAICommon\nfrom langchain.utilities.vertexai import raise_vertex_import_error\n[docs]class VertexAIEmbeddings(_VertexAICommon, Embeddings):\n model_name: str = \"textembedding-gecko\"\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validates that the python package exists in environment.\"\"\"\n cls._try_init_vertexai(values)\n try:\n from vertexai.preview.language_models import TextEmbeddingModel\n except ImportError:\n raise_vertex_import_error()\n values[\"client\"] = TextEmbeddingModel.from_pretrained(values[\"model_name\"])\n return values\n[docs] def embed_documents(\n self, texts: List[str], batch_size: int = 5\n ) -> List[List[float]]:\n \"\"\"Embed a list of strings. 
Vertex AI currently\n sets a max batch size of 5 strings.\n Args:\n texts: List[str] The list of strings to embed.\n batch_size: [int] The batch size of embeddings to send to the model\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n embeddings = []\n for batch in range(0, len(texts), batch_size):\n text_batch = texts[batch : batch + batch_size]\n embeddings_batch = self.client.get_embeddings(text_batch)\n embeddings.extend([el.values for el in embeddings_batch])\n return embeddings\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Embed a text.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/vertexai.html"} {"id": "8fa7e58f0592-1", "text": "\"\"\"Embed a text.\n Args:\n text: The text to embed.\n Returns:\n Embedding for the text.\n \"\"\"\n embeddings = self.client.get_embeddings([text])\n return embeddings[0].values", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/vertexai.html"} {"id": "a8e51979ae60-0", "text": "Source code for langchain.embeddings.huggingface_hub\n\"\"\"Wrapper around HuggingFace Hub embedding models.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nDEFAULT_REPO_ID = \"sentence-transformers/all-mpnet-base-v2\"\nVALID_TASKS = (\"feature-extraction\",)\n[docs]class HuggingFaceHubEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around HuggingFaceHub embedding models.\n To use, you should have the ``huggingface_hub`` python package installed, and the\n environment variable ``HUGGINGFACEHUB_API_TOKEN`` set with your API token, or pass\n it as a named parameter to the constructor.\n Example:\n .. code-block:: python\n from langchain.embeddings import HuggingFaceHubEmbeddings\n repo_id = \"sentence-transformers/all-mpnet-base-v2\"\n hf = HuggingFaceHubEmbeddings(\n repo_id=repo_id,\n task=\"feature-extraction\",\n huggingfacehub_api_token=\"my-api-key\",\n )\n \"\"\"\n client: Any #: :meta private:\n repo_id: str = DEFAULT_REPO_ID\n \"\"\"Model name to use.\"\"\"\n task: Optional[str] = \"feature-extraction\"\n \"\"\"Task to call the model with.\"\"\"\n model_kwargs: Optional[dict] = None\n \"\"\"Key word arguments to pass to the model.\"\"\"\n huggingfacehub_api_token: Optional[str] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/huggingface_hub.html"} {"id": "a8e51979ae60-1", "text": "extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n huggingfacehub_api_token = get_from_dict_or_env(\n values, \"huggingfacehub_api_token\", \"HUGGINGFACEHUB_API_TOKEN\"\n )\n try:\n from huggingface_hub.inference_api import InferenceApi\n repo_id = values[\"repo_id\"]\n if not repo_id.startswith(\"sentence-transformers\"):\n raise ValueError(\n \"Currently only 'sentence-transformers' embedding models \"\n f\"are supported. 
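The Vertex AI batching arithmetic in miniature (pure Python; the service caps a batch at 5 strings):

.. code-block:: python

    texts = [f"doc {i}" for i in range(12)]
    batches = [texts[i : i + 5] for i in range(0, len(texts), 5)]
    # -> 3 model calls with 5, 5, and 2 strings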
Got invalid 'repo_id' {repo_id}.\"\n )\n client = InferenceApi(\n repo_id=repo_id,\n token=huggingfacehub_api_token,\n task=values.get(\"task\"),\n )\n if client.task not in VALID_TASKS:\n raise ValueError(\n f\"Got invalid task {client.task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n values[\"client\"] = client\n except ImportError:\n raise ValueError(\n \"Could not import huggingface_hub python package. \"\n \"Please install it with `pip install huggingface_hub`.\"\n )\n return values\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Call out to HuggingFaceHub's embedding endpoint for embedding search docs.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n # replace newlines, which can negatively affect performance.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/huggingface_hub.html"} {"id": "a8e51979ae60-2", "text": "\"\"\"\n # replace newlines, which can negatively affect performance.\n texts = [text.replace(\"\\n\", \" \") for text in texts]\n _model_kwargs = self.model_kwargs or {}\n responses = self.client(inputs=texts, params=_model_kwargs)\n return responses\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Call out to HuggingFaceHub's embedding endpoint for embedding query text.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n response = self.embed_documents([text])[0]\n return response", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/huggingface_hub.html"} {"id": "6945f72787a3-0", "text": "Source code for langchain.embeddings.google_palm\n\"\"\"Wrapper around Google's PaLM Embeddings APIs.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, Callable, Dict, List, Optional\nfrom pydantic import BaseModel, root_validator\nfrom tenacity import (\n before_sleep_log,\n retry,\n retry_if_exception_type,\n stop_after_attempt,\n wait_exponential,\n)\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\ndef _create_retry_decorator() -> Callable[[Any], Any]:\n \"\"\"Returns a tenacity retry decorator, preconfigured to handle PaLM exceptions\"\"\"\n import google.api_core.exceptions\n multiplier = 2\n min_seconds = 1\n max_seconds = 60\n max_retries = 10\n return retry(\n reraise=True,\n stop=stop_after_attempt(max_retries),\n wait=wait_exponential(multiplier=multiplier, min=min_seconds, max=max_seconds),\n retry=(\n retry_if_exception_type(google.api_core.exceptions.ResourceExhausted)\n | retry_if_exception_type(google.api_core.exceptions.ServiceUnavailable)\n | retry_if_exception_type(google.api_core.exceptions.GoogleAPIError)\n ),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )\n[docs]def embed_with_retry(\n embeddings: GooglePalmEmbeddings, *args: Any, **kwargs: Any\n) -> Any:\n \"\"\"Use tenacity to retry the embedding call.\"\"\"\n retry_decorator = _create_retry_decorator()\n @retry_decorator\n def _embed_with_retry(*args: Any, **kwargs: Any) -> Any:\n return embeddings.client.generate_embeddings(*args, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/google_palm.html"} {"id": "6945f72787a3-1", "text": "return embeddings.client.generate_embeddings(*args, **kwargs)\n return _embed_with_retry(*args, **kwargs)\n[docs]class GooglePalmEmbeddings(BaseModel, Embeddings):\n client: Any\n 
google_api_key: Optional[str]\n model_name: str = \"models/embedding-gecko-001\"\n \"\"\"Model name to use.\"\"\"\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate the api key and that the python package exists.\"\"\"\n google_api_key = get_from_dict_or_env(\n values, \"google_api_key\", \"GOOGLE_API_KEY\"\n )\n try:\n import google.generativeai as genai\n genai.configure(api_key=google_api_key)\n except ImportError:\n raise ImportError(\"Could not import google.generativeai python package.\")\n values[\"client\"] = genai\n return values\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n return [self.embed_query(text) for text in texts]\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Embed query text.\"\"\"\n embedding = embed_with_retry(self, self.model_name, text)\n return embedding[\"embedding\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/google_palm.html"} {"id": "529bc3adc71e-0", "text": "Source code for langchain.embeddings.clarifai\n\"\"\"Wrapper around Clarifai embedding models.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class ClarifaiEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around Clarifai embedding models.\n To use, you should have the ``clarifai`` python package installed, and the\n environment variable ``CLARIFAI_PAT`` set with your personal access token or pass it\n as a named parameter to the constructor.\n Example:\n .. code-block:: python\n from langchain.embeddings import ClarifaiEmbeddings\n clarifai = ClarifaiEmbeddings(\n user_id=\"my-user-id\", app_id=\"my-app-id\", model_id=\"my-model-id\"\n )\n \"\"\"\n stub: Any #: :meta private:\n userDataObject: Any\n model_id: Optional[str] = None\n \"\"\"Model id to use.\"\"\"\n model_version_id: Optional[str] = None\n \"\"\"Model version id to use.\"\"\"\n app_id: Optional[str] = None\n \"\"\"Clarifai application id to use.\"\"\"\n user_id: Optional[str] = None\n \"\"\"Clarifai user id to use.\"\"\"\n pat: Optional[str] = None\n api_base: str = \"https://api.clarifai.com\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/clarifai.html"} {"id": "529bc3adc71e-1", "text": "def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exist in the environment.\"\"\"\n values[\"pat\"] = get_from_dict_or_env(values, \"pat\", \"CLARIFAI_PAT\")\n user_id = values.get(\"user_id\")\n app_id = values.get(\"app_id\")\n model_id = values.get(\"model_id\")\n if values[\"pat\"] is None:\n raise ValueError(\"Please provide a pat.\")\n if user_id is None:\n raise ValueError(\"Please provide a user_id.\")\n if app_id is None:\n raise ValueError(\"Please provide an app_id.\")\n if model_id is None:\n raise ValueError(\"Please provide a model_id.\")\n try:\n from clarifai.auth.helper import ClarifaiAuthHelper\n from clarifai.client import create_stub\n except ImportError:\n raise ImportError(\n \"Could not import clarifai python package. 
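A minimal GooglePalmEmbeddings sketch (the API key is a placeholder; requires `pip install google-generativeai`):

.. code-block:: python

    from langchain.embeddings import GooglePalmEmbeddings

    palm = GooglePalmEmbeddings(google_api_key="your-palm-api-key")  # placeholder
    vec = palm.embed_query("hello")
    # embed_documents simply loops embed_query, one API call per text:
    vecs = palm.embed_documents(["alpha", "beta"])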
\"\n \"Please install it with `pip install clarifai`.\"\n )\n auth = ClarifaiAuthHelper(\n user_id=user_id,\n app_id=app_id,\n pat=values[\"pat\"],\n base=values[\"api_base\"],\n )\n values[\"userDataObject\"] = auth.get_user_app_id_proto()\n values[\"stub\"] = create_stub(auth)\n return values\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Call out to Clarifai's embedding models.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n try:\n from clarifai_grpc.grpc.api import (\n resources_pb2,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/clarifai.html"} {"id": "529bc3adc71e-2", "text": "from clarifai_grpc.grpc.api import (\n resources_pb2,\n service_pb2,\n )\n from clarifai_grpc.grpc.api.status import status_code_pb2\n except ImportError:\n raise ImportError(\n \"Could not import clarifai python package. \"\n \"Please install it with `pip install clarifai`.\"\n )\n post_model_outputs_request = service_pb2.PostModelOutputsRequest(\n user_app_id=self.userDataObject,\n model_id=self.model_id,\n version_id=self.model_version_id,\n inputs=[\n resources_pb2.Input(\n data=resources_pb2.Data(text=resources_pb2.Text(raw=t))\n )\n for t in texts\n ],\n )\n post_model_outputs_response = self.stub.PostModelOutputs(\n post_model_outputs_request\n )\n if post_model_outputs_response.status.code != status_code_pb2.SUCCESS:\n logger.error(post_model_outputs_response.status)\n first_output_failure = (\n post_model_outputs_response.outputs[0].status\n if len(post_model_outputs_response.outputs[0])\n else None\n )\n raise Exception(\n f\"Post model outputs failed, status: \"\n f\"{post_model_outputs_response.status}, first output failure: \"\n f\"{first_output_failure}\"\n )\n embeddings = [\n list(o.data.embeddings[0].vector)\n for o in post_model_outputs_response.outputs\n ]\n return embeddings\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Call out to Clarifai's embedding models.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/clarifai.html"} {"id": "529bc3adc71e-3", "text": "Returns:\n Embeddings for the text.\n \"\"\"\n try:\n from clarifai_grpc.grpc.api import (\n resources_pb2,\n service_pb2,\n )\n from clarifai_grpc.grpc.api.status import status_code_pb2\n except ImportError:\n raise ImportError(\n \"Could not import clarifai python package. 
\"\n \"Please install it with `pip install clarifai`.\"\n )\n post_model_outputs_request = service_pb2.PostModelOutputsRequest(\n user_app_id=self.userDataObject,\n model_id=self.model_id,\n version_id=self.model_version_id,\n inputs=[\n resources_pb2.Input(\n data=resources_pb2.Data(text=resources_pb2.Text(raw=text))\n )\n ],\n )\n post_model_outputs_response = self.stub.PostModelOutputs(\n post_model_outputs_request\n )\n if post_model_outputs_response.status.code != status_code_pb2.SUCCESS:\n logger.error(post_model_outputs_response.status)\n first_output_failure = (\n post_model_outputs_response.outputs[0].status\n if len(post_model_outputs_response.outputs[0])\n else None\n )\n raise Exception(\n f\"Post model outputs failed, status: \"\n f\"{post_model_outputs_response.status}, first output failure: \"\n f\"{first_output_failure}\"\n )\n embeddings = [\n list(o.data.embeddings[0].vector)\n for o in post_model_outputs_response.outputs\n ]\n return embeddings[0]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/clarifai.html"} {"id": "1e5a18d66200-0", "text": "Source code for langchain.embeddings.cohere\n\"\"\"Wrapper around Cohere embedding models.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\n[docs]class CohereEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around Cohere embedding models.\n To use, you should have the ``cohere`` python package installed, and the\n environment variable ``COHERE_API_KEY`` set with your API key or pass it\n as a named parameter to the constructor.\n Example:\n .. code-block:: python\n from langchain.embeddings import CohereEmbeddings\n cohere = CohereEmbeddings(\n model=\"embed-english-light-v2.0\", cohere_api_key=\"my-api-key\"\n )\n \"\"\"\n client: Any #: :meta private:\n model: str = \"embed-english-v2.0\"\n \"\"\"Model name to use.\"\"\"\n truncate: Optional[str] = None\n \"\"\"Truncate embeddings that are too long from start or end (\"NONE\"|\"START\"|\"END\")\"\"\"\n cohere_api_key: Optional[str] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n cohere_api_key = get_from_dict_or_env(\n values, \"cohere_api_key\", \"COHERE_API_KEY\"\n )\n try:\n import cohere\n values[\"client\"] = cohere.Client(cohere_api_key)\n except ImportError:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/cohere.html"} {"id": "1e5a18d66200-1", "text": "values[\"client\"] = cohere.Client(cohere_api_key)\n except ImportError:\n raise ValueError(\n \"Could not import cohere python package. 
\"\n \"Please install it with `pip install cohere`.\"\n )\n return values\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Call out to Cohere's embedding endpoint.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n embeddings = self.client.embed(\n model=self.model, texts=texts, truncate=self.truncate\n ).embeddings\n return [list(map(float, e)) for e in embeddings]\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Call out to Cohere's embedding endpoint.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n embedding = self.client.embed(\n model=self.model, texts=[text], truncate=self.truncate\n ).embeddings[0]\n return list(map(float, embedding))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/cohere.html"} {"id": "d7d1c475e68a-0", "text": "Source code for langchain.embeddings.bedrock\nimport json\nimport os\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.embeddings.base import Embeddings\n[docs]class BedrockEmbeddings(BaseModel, Embeddings):\n \"\"\"Embeddings provider to invoke Bedrock embedding models.\n To authenticate, the AWS client uses the following methods to\n automatically load credentials:\n https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\n If a specific credential profile should be used, you must pass\n the name of the profile from the ~/.aws/credentials file that is to be used.\n Make sure the credentials / roles used have the required policies to\n access the Bedrock service.\n \"\"\"\n \"\"\"\n Example:\n .. code-block:: python\n from langchain.bedrock_embeddings import BedrockEmbeddings\n \n region_name =\"us-east-1\"\n credentials_profile_name = \"default\"\n model_id = \"amazon.titan-e1t-medium\"\n be = BedrockEmbeddings(\n credentials_profile_name=credentials_profile_name,\n region_name=region_name,\n model_id=model_id\n )\n \"\"\"\n client: Any #: :meta private:\n region_name: Optional[str] = None\n \"\"\"The aws region e.g., `us-west-2`. 
Falls back to AWS_DEFAULT_REGION env variable\n    or region specified in ~/.aws/config in case it is not provided here.\n    \"\"\"\n    credentials_profile_name: Optional[str] = None\n    \"\"\"The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which\n    has either access keys or role information specified.\n    If not specified, the default credential profile or, if on an EC2 instance,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/bedrock.html"} {"id": "d7d1c475e68a-1", "text": "If not specified, the default credential profile or, if on an EC2 instance,\n    credentials from IMDS will be used.\n    See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\n    \"\"\"\n    model_id: str = \"amazon.titan-e1t-medium\"\n    \"\"\"Id of the model to call, e.g., amazon.titan-e1t-medium, this is\n    equivalent to the modelId property in the list-foundation-models api\"\"\"\n    model_kwargs: Optional[Dict] = None\n    \"\"\"Key word arguments to pass to the model.\"\"\"\n[docs]    class Config:\n        \"\"\"Configuration for this pydantic object.\"\"\"\n        extra = Extra.forbid\n[docs]    @root_validator()\n    def validate_environment(cls, values: Dict) -> Dict:\n        \"\"\"Validate that AWS credentials and python package exist in environment.\"\"\"\n        if values[\"client\"] is not None:\n            return values\n        try:\n            import boto3\n            if values[\"credentials_profile_name\"] is not None:\n                session = boto3.Session(profile_name=values[\"credentials_profile_name\"])\n            else:\n                # use default credentials\n                session = boto3.Session()\n            client_params = {}\n            if values[\"region_name\"]:\n                client_params[\"region_name\"] = values[\"region_name\"]\n            values[\"client\"] = session.client(\"bedrock\", **client_params)\n        except ImportError:\n            raise ModuleNotFoundError(\n                \"Could not import boto3 python package. \"\n                \"Please install it with `pip install boto3`.\"\n            )\n        except Exception as e:\n            raise ValueError(\n                \"Could not load credentials to authenticate with AWS client. \"\n                \"Please check that credentials in the specified \"\n                \"profile name are valid.\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/bedrock.html"} {"id": "d7d1c475e68a-2", "text": "\"Please check that credentials in the specified \"\n                \"profile name are valid.\"\n            ) from e\n        return values\n    def _embedding_func(self, text: str) -> List[float]:\n        \"\"\"Call out to Bedrock embedding endpoint.\"\"\"\n        # replace newlines, which can negatively affect performance.\n        text = text.replace(os.linesep, \" \")\n        _model_kwargs = self.model_kwargs or {}\n        input_body = {**_model_kwargs, \"inputText\": text}\n        body = json.dumps(input_body)\n        try:\n            response = self.client.invoke_model(\n                body=body,\n                modelId=self.model_id,\n                accept=\"application/json\",\n                contentType=\"application/json\",\n            )\n            response_body = json.loads(response.get(\"body\").read())\n            return response_body.get(\"embedding\")\n        except Exception as e:\n            raise ValueError(f\"Error raised by inference endpoint: {e}\")\n[docs]    def embed_documents(\n        self, texts: List[str], chunk_size: int = 1\n    ) -> List[List[float]]:\n        \"\"\"Compute doc embeddings using a Bedrock model.\n        Args:\n            texts: The list of texts to embed.\n            chunk_size: Bedrock currently only allows single string\n                inputs, so chunk size is always 1. 
This input is here\n                only for compatibility with the embeddings interface.\n        Returns:\n            List of embeddings, one for each text.\n        \"\"\"\n        results = []\n        for text in texts:\n            response = self._embedding_func(text)\n            results.append(response)\n        return results\n[docs]    def embed_query(self, text: str) -> List[float]:\n        \"\"\"Compute query embeddings using a Bedrock model.\n        Args:\n            text: The text to embed.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/bedrock.html"} {"id": "d7d1c475e68a-3", "text": "Args:\n            text: The text to embed.\n        Returns:\n            Embeddings for the text.\n        \"\"\"\n        return self._embedding_func(text)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/bedrock.html"} {"id": "943b8f632cea-0", "text": "Source code for langchain.embeddings.self_hosted_hugging_face\n\"\"\"Wrapper around HuggingFace embedding models for self-hosted remote hardware.\"\"\"\nimport importlib\nimport logging\nfrom typing import Any, Callable, List, Optional\nfrom langchain.embeddings.self_hosted import SelfHostedEmbeddings\nDEFAULT_MODEL_NAME = \"sentence-transformers/all-mpnet-base-v2\"\nDEFAULT_INSTRUCT_MODEL = \"hkunlp/instructor-large\"\nDEFAULT_EMBED_INSTRUCTION = \"Represent the document for retrieval: \"\nDEFAULT_QUERY_INSTRUCTION = (\n    \"Represent the question for retrieving supporting documents: \"\n)\nlogger = logging.getLogger(__name__)\ndef _embed_documents(client: Any, *args: Any, **kwargs: Any) -> List[List[float]]:\n    \"\"\"Inference function to send to the remote hardware.\n    Accepts a sentence_transformer model_id and\n    returns a list of embeddings for each document in the batch.\n    \"\"\"\n    return client.encode(*args, **kwargs)\n[docs]def load_embedding_model(model_id: str, instruct: bool = False, device: int = 0) -> Any:\n    \"\"\"Load the embedding model.\"\"\"\n    if not instruct:\n        import sentence_transformers\n        client = sentence_transformers.SentenceTransformer(model_id)\n    else:\n        from InstructorEmbedding import INSTRUCTOR\n        client = INSTRUCTOR(model_id)\n    if importlib.util.find_spec(\"torch\") is not None:\n        import torch\n        cuda_device_count = torch.cuda.device_count()\n        if device < -1 or (device >= cuda_device_count):\n            raise ValueError(\n                f\"Got device=={device}, \"\n                f\"device is required to be within [-1, {cuda_device_count})\"\n            )\n        if device < 0 and cuda_device_count > 0:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/self_hosted_hugging_face.html"} {"id": "943b8f632cea-1", "text": ")\n        if device < 0 and cuda_device_count > 0:\n            logger.warning(\n                \"Device has %d GPUs available. \"\n                \"Provide device={deviceId} to `from_model_id` to use available \"\n                \"GPUs for execution. deviceId is -1 for CPU and \"\n                \"can be a positive integer associated with CUDA device id.\",\n                cuda_device_count,\n            )\n        client = client.to(device)\n    return client\n[docs]class SelfHostedHuggingFaceEmbeddings(SelfHostedEmbeddings):\n    \"\"\"Runs sentence_transformers embedding models on self-hosted remote hardware.\n    Supported hardware includes auto-launched instances on AWS, GCP, Azure,\n    and Lambda, as well as servers specified\n    by IP address and SSH credentials (such as on-prem, or another cloud\n    like Paperspace, Coreweave, etc.).\n    To use, you should have the ``runhouse`` python package installed.\n    Example:\n        .. 
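# Putting the BedrockEmbeddings pieces together: _embedding_func merges
# model_kwargs into the JSON body next to "inputText" and issues one
# invoke_model call per text. Profile, region, and model id below are
# placeholders taken from the class defaults and docstring.
from langchain.embeddings import BedrockEmbeddings

embeddings = BedrockEmbeddings(
    credentials_profile_name="default",
    region_name="us-east-1",
    model_id="amazon.titan-e1t-medium",
)
query_vector = embeddings.embed_query("hello world")
doc_vectors = embeddings.embed_documents(["hello world"])  # one call per text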
code-block:: python\n            from langchain.embeddings import SelfHostedHuggingFaceEmbeddings\n            import runhouse as rh\n            model_id = \"sentence-transformers/all-mpnet-base-v2\"\n            gpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\n            hf = SelfHostedHuggingFaceEmbeddings(model_id=model_id, hardware=gpu)\n    \"\"\"\n    client: Any  #: :meta private:\n    model_id: str = DEFAULT_MODEL_NAME\n    \"\"\"Model name to use.\"\"\"\n    model_reqs: List[str] = [\"./\", \"sentence_transformers\", \"torch\"]\n    \"\"\"Requirements to install on hardware to inference the model.\"\"\"\n    hardware: Any\n    \"\"\"Remote hardware to send the inference function to.\"\"\"\n    model_load_fn: Callable = load_embedding_model", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/self_hosted_hugging_face.html"} {"id": "943b8f632cea-2", "text": "model_load_fn: Callable = load_embedding_model\n    \"\"\"Function to load the model remotely on the server.\"\"\"\n    load_fn_kwargs: Optional[dict] = None\n    \"\"\"Key word arguments to pass to the model load function.\"\"\"\n    inference_fn: Callable = _embed_documents\n    \"\"\"Inference function to extract the embeddings.\"\"\"\n    def __init__(self, **kwargs: Any):\n        \"\"\"Initialize the remote inference function.\"\"\"\n        load_fn_kwargs = kwargs.pop(\"load_fn_kwargs\", {})\n        load_fn_kwargs[\"model_id\"] = load_fn_kwargs.get(\"model_id\", DEFAULT_MODEL_NAME)\n        load_fn_kwargs[\"instruct\"] = load_fn_kwargs.get(\"instruct\", False)\n        load_fn_kwargs[\"device\"] = load_fn_kwargs.get(\"device\", 0)\n        super().__init__(load_fn_kwargs=load_fn_kwargs, **kwargs)\n[docs]class SelfHostedHuggingFaceInstructEmbeddings(SelfHostedHuggingFaceEmbeddings):\n    \"\"\"Runs InstructorEmbedding embedding models on self-hosted remote hardware.\n    Supported hardware includes auto-launched instances on AWS, GCP, Azure,\n    and Lambda, as well as servers specified\n    by IP address and SSH credentials (such as on-prem, or another\n    cloud like Paperspace, Coreweave, etc.).\n    To use, you should have the ``runhouse`` python package installed.\n    Example:\n        .. 
code-block:: python\n            from langchain.embeddings import SelfHostedHuggingFaceInstructEmbeddings\n            import runhouse as rh\n            model_id = \"hkunlp/instructor-large\"\n            gpu = rh.cluster(name='rh-a10x', instance_type='A100:1')\n            hf = SelfHostedHuggingFaceInstructEmbeddings(\n                model_id=model_id, hardware=gpu)\n    \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/self_hosted_hugging_face.html"} {"id": "943b8f632cea-3", "text": "model_id=model_id, hardware=gpu)\n    \"\"\"\n    model_id: str = DEFAULT_INSTRUCT_MODEL\n    \"\"\"Model name to use.\"\"\"\n    embed_instruction: str = DEFAULT_EMBED_INSTRUCTION\n    \"\"\"Instruction to use for embedding documents.\"\"\"\n    query_instruction: str = DEFAULT_QUERY_INSTRUCTION\n    \"\"\"Instruction to use for embedding query.\"\"\"\n    model_reqs: List[str] = [\"./\", \"InstructorEmbedding\", \"torch\"]\n    \"\"\"Requirements to install on hardware to inference the model.\"\"\"\n    def __init__(self, **kwargs: Any):\n        \"\"\"Initialize the remote inference function.\"\"\"\n        load_fn_kwargs = kwargs.pop(\"load_fn_kwargs\", {})\n        load_fn_kwargs[\"model_id\"] = load_fn_kwargs.get(\n            \"model_id\", DEFAULT_INSTRUCT_MODEL\n        )\n        load_fn_kwargs[\"instruct\"] = load_fn_kwargs.get(\"instruct\", True)\n        load_fn_kwargs[\"device\"] = load_fn_kwargs.get(\"device\", 0)\n        super().__init__(load_fn_kwargs=load_fn_kwargs, **kwargs)\n[docs]    def embed_documents(self, texts: List[str]) -> List[List[float]]:\n        \"\"\"Compute doc embeddings using a HuggingFace instruct model.\n        Args:\n            texts: The list of texts to embed.\n        Returns:\n            List of embeddings, one for each text.\n        \"\"\"\n        instruction_pairs = []\n        for text in texts:\n            instruction_pairs.append([self.embed_instruction, text])\n        embeddings = self.client(self.pipeline_ref, instruction_pairs)\n        return embeddings.tolist()\n[docs]    def embed_query(self, text: str) -> List[float]:\n        \"\"\"Compute query embeddings using a HuggingFace instruct model.\n        Args:\n            text: The text to embed.\n        Returns:\n            Embeddings for the text.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/self_hosted_hugging_face.html"} {"id": "943b8f632cea-4", "text": "Returns:\n            Embeddings for the text.\n        \"\"\"\n        instruction_pair = [self.query_instruction, text]\n        embedding = self.client(self.pipeline_ref, [instruction_pair])[0]\n        return embedding.tolist()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/self_hosted_hugging_face.html"} {"id": "a16b728dec6f-0", "text": "Source code for langchain.embeddings.base\n\"\"\"Interface for embedding models.\"\"\"\nfrom abc import ABC, abstractmethod\nfrom typing import List\n[docs]class Embeddings(ABC):\n    \"\"\"Interface for embedding models.\"\"\"\n[docs]    @abstractmethod\n    def embed_documents(self, texts: List[str]) -> List[List[float]]:\n        \"\"\"Embed search docs.\"\"\"\n[docs]    @abstractmethod\n    def embed_query(self, text: str) -> List[float]:\n        \"\"\"Embed query text.\"\"\"\n[docs]    async def aembed_documents(self, texts: List[str]) -> List[List[float]]:\n        \"\"\"Embed search docs.\"\"\"\n        raise NotImplementedError\n[docs]    async def aembed_query(self, text: str) -> List[float]:\n        \"\"\"Embed query text.\"\"\"\n        raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/base.html"} {"id": "95978c8df685-0", "text": "Source code for langchain.embeddings.tensorflow_hub\n\"\"\"Wrapper around TensorflowHub embedding models.\"\"\"\nfrom typing import Any, List\nfrom pydantic import 
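# The Embeddings interface above requires only the two synchronous
# methods; the async variants default to NotImplementedError. A toy
# subclass, purely illustrative and not part of the library:
from typing import List

from langchain.embeddings.base import Embeddings


class CharCodeEmbeddings(Embeddings):
    """Toy implementation showing the two required methods."""

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        return [self.embed_query(text) for text in texts]

    def embed_query(self, text: str) -> List[float]:
        # Deterministic stand-in for a real model: a fixed-width vector
        # built from character codes, padded with zeros.
        codes = [float(ord(c) % 31) for c in text[:8]]
        return codes + [0.0] * (8 - len(codes))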
BaseModel, Extra\nfrom langchain.embeddings.base import Embeddings\nDEFAULT_MODEL_URL = \"https://tfhub.dev/google/universal-sentence-encoder-multilingual/3\"\n[docs]class TensorflowHubEmbeddings(BaseModel, Embeddings):\n    \"\"\"Wrapper around tensorflow_hub embedding models.\n    To use, you should have the ``tensorflow_text`` python package installed.\n    Example:\n        .. code-block:: python\n            from langchain.embeddings import TensorflowHubEmbeddings\n            url = \"https://tfhub.dev/google/universal-sentence-encoder-multilingual/3\"\n            tf = TensorflowHubEmbeddings(model_url=url)\n    \"\"\"\n    embed: Any  #: :meta private:\n    model_url: str = DEFAULT_MODEL_URL\n    \"\"\"Model name to use.\"\"\"\n    def __init__(self, **kwargs: Any):\n        \"\"\"Initialize the tensorflow_hub and tensorflow_text.\"\"\"\n        super().__init__(**kwargs)\n        try:\n            import tensorflow_hub\n        except ImportError:\n            raise ImportError(\n                \"Could not import tensorflow-hub python package. \"\n                \"Please install it with `pip install tensorflow-hub`.\"\n            )\n        try:\n            import tensorflow_text  # noqa\n        except ImportError:\n            raise ImportError(\n                \"Could not import tensorflow_text python package. \"\n                \"Please install it with `pip install tensorflow_text`.\"\n            )\n        self.embed = tensorflow_hub.load(self.model_url)\n[docs]    class Config:\n        \"\"\"Configuration for this pydantic object.\"\"\"\n        extra = Extra.forbid\n[docs]    def embed_documents(self, texts: List[str]) -> List[List[float]]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/tensorflow_hub.html"} {"id": "95978c8df685-1", "text": "\"\"\"Compute doc embeddings using a TensorflowHub embedding model.\n        Args:\n            texts: The list of texts to embed.\n        Returns:\n            List of embeddings, one for each text.\n        \"\"\"\n        texts = list(map(lambda x: x.replace(\"\\n\", \" \"), texts))\n        embeddings = self.embed(texts).numpy()\n        return embeddings.tolist()\n[docs]    def embed_query(self, text: str) -> List[float]:\n        \"\"\"Compute query embeddings using a TensorflowHub embedding model.\n        Args:\n            text: The text to embed.\n        Returns:\n            Embeddings for the text.\n        \"\"\"\n        text = text.replace(\"\\n\", \" \")\n        embedding = self.embed([text]).numpy()[0]\n        return embedding.tolist()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/tensorflow_hub.html"} {"id": "de5260d8c6e1-0", "text": "Source code for langchain.embeddings.spacy_embeddings\nimport importlib.util\nfrom typing import Any, Dict, List\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.embeddings.base import Embeddings\n[docs]class SpacyEmbeddings(BaseModel, Embeddings):\n    \"\"\"\n    SpacyEmbeddings is a class for generating embeddings using the Spacy library.\n    It only supports the 'en_core_web_sm' model.\n    Attributes:\n        nlp (Any): The Spacy model loaded into memory.\n    Methods:\n        embed_documents(texts: List[str]) -> List[List[float]]:\n            Generates embeddings for a list of documents.\n        embed_query(text: str) -> List[float]:\n            Generates an embedding for a single piece of text.\n    \"\"\"\n    nlp: Any  # The Spacy model loaded into memory\n[docs]    class Config:\n        \"\"\"Configuration for this pydantic object.\"\"\"\n        extra = Extra.forbid  # Forbid extra attributes during model initialization\n[docs]    @root_validator(pre=True)\n    def validate_environment(cls, values: Dict) -> Dict:\n        \"\"\"\n        Validates that the Spacy package and the 'en_core_web_sm' model are installed.\n        Args:\n            values (Dict): The values provided to the class constructor.\n        Returns:\n            The validated values.\n        Raises:\n            ValueError: If the 
Spacy package or the 'en_core_web_sm'\n model are not installed.\n \"\"\"\n # Check if the Spacy package is installed\n if importlib.util.find_spec(\"spacy\") is None:\n raise ValueError(\n \"Spacy package not found. \"\n \"Please install it with `pip install spacy`.\"\n )\n try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/spacy_embeddings.html"} {"id": "de5260d8c6e1-1", "text": ")\n try:\n # Try to load the 'en_core_web_sm' Spacy model\n import spacy\n values[\"nlp\"] = spacy.load(\"en_core_web_sm\")\n except OSError:\n # If the model is not found, raise a ValueError\n raise ValueError(\n \"Spacy model 'en_core_web_sm' not found. \"\n \"Please install it with\"\n \" `python -m spacy download en_core_web_sm`.\"\n )\n return values # Return the validated values\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"\n Generates embeddings for a list of documents.\n Args:\n texts (List[str]): The documents to generate embeddings for.\n Returns:\n A list of embeddings, one for each document.\n \"\"\"\n return [self.nlp(text).vector.tolist() for text in texts]\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"\n Generates an embedding for a single piece of text.\n Args:\n text (str): The text to generate an embedding for.\n Returns:\n The embedding for the text.\n \"\"\"\n return self.nlp(text).vector.tolist()\n[docs] async def aembed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"\n Asynchronously generates embeddings for a list of documents.\n This method is not implemented and raises a NotImplementedError.\n Args:\n texts (List[str]): The documents to generate embeddings for.\n Raises:\n NotImplementedError: This method is not implemented.\n \"\"\"\n raise NotImplementedError(\"Asynchronous embedding generation is not supported.\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/spacy_embeddings.html"} {"id": "de5260d8c6e1-2", "text": "\"\"\"\n raise NotImplementedError(\"Asynchronous embedding generation is not supported.\")\n[docs] async def aembed_query(self, text: str) -> List[float]:\n \"\"\"\n Asynchronously generates an embedding for a single piece of text.\n This method is not implemented and raises a NotImplementedError.\n Args:\n text (str): The text to generate an embedding for.\n Raises:\n NotImplementedError: This method is not implemented.\n \"\"\"\n raise NotImplementedError(\"Asynchronous embedding generation is not supported.\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/spacy_embeddings.html"} {"id": "b17c2e1e9477-0", "text": "Source code for langchain.embeddings.self_hosted\n\"\"\"Running custom embedding models on self-hosted remote hardware.\"\"\"\nfrom typing import Any, Callable, List\nfrom pydantic import Extra\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.llms import SelfHostedPipeline\ndef _embed_documents(pipeline: Any, *args: Any, **kwargs: Any) -> List[List[float]]:\n \"\"\"Inference function to send to the remote hardware.\n Accepts a sentence_transformer model_id and\n returns a list of embeddings for each document in the batch.\n \"\"\"\n return pipeline(*args, **kwargs)\n[docs]class SelfHostedEmbeddings(SelfHostedPipeline, Embeddings):\n \"\"\"Runs custom embedding models on self-hosted remote hardware.\n Supported hardware includes auto-launched instances on AWS, GCP, Azure,\n and Lambda, as well as servers specified\n by IP address and SSH credentials (such 
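# SpacyEmbeddings above has no docstring example; a minimal sketch,
# assuming `pip install spacy` and
# `python -m spacy download en_core_web_sm` have been run.
from langchain.embeddings.spacy_embeddings import SpacyEmbeddings

embedder = SpacyEmbeddings()
doc_vectors = embedder.embed_documents(["first text", "second text"])
query_vector = embedder.embed_query("first text")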
as on-prem, or another\n    cloud like Paperspace, Coreweave, etc.).\n    To use, you should have the ``runhouse`` python package installed.\n    Example using a model load function:\n        .. code-block:: python\n            from langchain.embeddings import SelfHostedEmbeddings\n            from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\n            import runhouse as rh\n            gpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\n            def get_pipeline():\n                model_id = \"facebook/bart-large\"\n                tokenizer = AutoTokenizer.from_pretrained(model_id)\n                model = AutoModelForCausalLM.from_pretrained(model_id)\n                return pipeline(\"feature-extraction\", model=model, tokenizer=tokenizer)\n            embeddings = SelfHostedEmbeddings(\n                model_load_fn=get_pipeline,\n                hardware=gpu,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/self_hosted.html"} {"id": "b17c2e1e9477-1", "text": "model_load_fn=get_pipeline,\n                hardware=gpu,\n                model_reqs=[\"./\", \"torch\", \"transformers\"],\n            )\n    Example passing in a pipeline path:\n        .. code-block:: python\n            import pickle\n            from langchain.embeddings import SelfHostedEmbeddings\n            import runhouse as rh\n            from transformers import pipeline\n            gpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\n            pipeline = pipeline(model=\"bert-base-uncased\", task=\"feature-extraction\")\n            rh.blob(pickle.dumps(pipeline),\n                path=\"models/pipeline.pkl\").save().to(gpu, path=\"models\")\n            embeddings = SelfHostedEmbeddings.from_pipeline(\n                pipeline=\"models/pipeline.pkl\",\n                hardware=gpu,\n                model_reqs=[\"./\", \"torch\", \"transformers\"],\n            )\n    \"\"\"\n    inference_fn: Callable = _embed_documents\n    \"\"\"Inference function to extract the embeddings on the remote hardware.\"\"\"\n    inference_kwargs: Any = None\n    \"\"\"Any kwargs to pass to the model's inference function.\"\"\"\n[docs]    class Config:\n        \"\"\"Configuration for this pydantic object.\"\"\"\n        extra = Extra.forbid\n[docs]    def embed_documents(self, texts: List[str]) -> List[List[float]]:\n        \"\"\"Compute doc embeddings using a HuggingFace transformer model.\n        Args:\n            texts: The list of texts to embed.\n        Returns:\n            List of embeddings, one for each text.\n        \"\"\"\n        texts = list(map(lambda x: x.replace(\"\\n\", \" \"), texts))\n        embeddings = self.client(self.pipeline_ref, texts)\n        if not isinstance(embeddings, list):\n            return embeddings.tolist()\n        return embeddings", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/self_hosted.html"} {"id": "b17c2e1e9477-2", "text": "if not isinstance(embeddings, list):\n            return embeddings.tolist()\n        return embeddings\n[docs]    def embed_query(self, text: str) -> List[float]:\n        \"\"\"Compute query embeddings using a HuggingFace transformer model.\n        Args:\n            text: The text to embed.\n        Returns:\n            Embeddings for the text.\n        \"\"\"\n        text = text.replace(\"\\n\", \" \")\n        embeddings = self.client(self.pipeline_ref, text)\n        if not isinstance(embeddings, list):\n            return embeddings.tolist()\n        return embeddings", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/self_hosted.html"} {"id": "511b4d66447b-0", "text": "Source code for langchain.embeddings.dashscope\n\"\"\"Wrapper around DashScope embedding models.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import (\n    Any,\n    Callable,\n    Dict,\n    List,\n    Optional,\n)\nfrom pydantic import BaseModel, Extra, root_validator\nfrom requests.exceptions import HTTPError\nfrom tenacity import (\n    before_sleep_log,\n    retry,\n    retry_if_exception_type,\n    stop_after_attempt,\n    
wait_exponential,\n)\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\ndef _create_retry_decorator(embeddings: DashScopeEmbeddings) -> Callable[[Any], Any]:\n    multiplier = 1\n    min_seconds = 1\n    max_seconds = 4\n    # Wait 2^x * 1 second between each retry, starting with\n    # 1 second, then up to 4 seconds, then 4 seconds afterwards\n    return retry(\n        reraise=True,\n        stop=stop_after_attempt(embeddings.max_retries),\n        wait=wait_exponential(multiplier, min=min_seconds, max=max_seconds),\n        retry=(retry_if_exception_type(HTTPError)),\n        before_sleep=before_sleep_log(logger, logging.WARNING),\n    )\n[docs]def embed_with_retry(embeddings: DashScopeEmbeddings, **kwargs: Any) -> Any:\n    \"\"\"Use tenacity to retry the embedding call.\"\"\"\n    retry_decorator = _create_retry_decorator(embeddings)\n    @retry_decorator\n    def _embed_with_retry(**kwargs: Any) -> Any:\n        resp = embeddings.client.call(**kwargs)\n        if resp.status_code == 200:\n            return resp.output[\"embeddings\"]\n        elif resp.status_code in [400, 401]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/dashscope.html"} {"id": "511b4d66447b-1", "text": "elif resp.status_code in [400, 401]:\n            raise ValueError(\n                f\"status_code: {resp.status_code} \\n \"\n                f\"code: {resp.code} \\n message: {resp.message}\"\n            )\n        else:\n            raise HTTPError(\n                f\"HTTP error occurred: status_code: {resp.status_code} \\n \"\n                f\"code: {resp.code} \\n message: {resp.message}\"\n            )\n    return _embed_with_retry(**kwargs)\n[docs]class DashScopeEmbeddings(BaseModel, Embeddings):\n    \"\"\"Wrapper around DashScope embedding models.\n    To use, you should have the ``dashscope`` python package installed, and the\n    environment variable ``DASHSCOPE_API_KEY`` set with your API key or pass it\n    as a named parameter to the constructor.\n    Example:\n        .. code-block:: python\n            from langchain.embeddings import DashScopeEmbeddings\n            embeddings = DashScopeEmbeddings(dashscope_api_key=\"my-api-key\")\n    Example:\n        .. code-block:: python\n            import os\n            os.environ[\"DASHSCOPE_API_KEY\"] = \"your DashScope API KEY\"\n            from langchain.embeddings.dashscope import DashScopeEmbeddings\n            embeddings = DashScopeEmbeddings(\n                model=\"text-embedding-v1\",\n            )\n            text = \"This is a test query.\"\n            query_result = embeddings.embed_query(text)\n    \"\"\"\n    client: Any  #: :meta private:\n    model: str = \"text-embedding-v1\"\n    dashscope_api_key: Optional[str] = None\n    max_retries: int = 5\n    \"\"\"Maximum number of retries to make when generating.\"\"\"\n[docs]    class Config:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/dashscope.html"} {"id": "511b4d66447b-2", "text": "max_retries: int = 5\n    \"\"\"Maximum number of retries to make when generating.\"\"\"\n[docs]    class Config:\n        \"\"\"Configuration for this pydantic object.\"\"\"\n        extra = Extra.forbid\n[docs]    @root_validator()\n    def validate_environment(cls, values: Dict) -> Dict:\n        \"\"\"Validate that api key and python package exists in environment.\"\"\"\n        values[\"dashscope_api_key\"] = get_from_dict_or_env(\n            values, \"dashscope_api_key\", \"DASHSCOPE_API_KEY\"\n        )\n        try:\n            import dashscope\n            dashscope.api_key = values[\"dashscope_api_key\"]\n            values[\"client\"] = dashscope.TextEmbedding\n        except ImportError:\n            raise ImportError(\n                \"Could not import dashscope python package. 
\"\n \"Please install it with `pip install dashscope`.\"\n )\n return values\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Call out to DashScope's embedding endpoint for embedding search docs.\n Args:\n texts: The list of texts to embed.\n chunk_size: The chunk size of embeddings. If None, will use the chunk size\n specified by the class.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n embeddings = embed_with_retry(\n self, input=texts, text_type=\"document\", model=self.model\n )\n embedding_list = [item[\"embedding\"] for item in embeddings]\n return embedding_list\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Call out to DashScope's embedding endpoint for embedding query text.\n Args:\n text: The text to embed.\n Returns:\n Embedding for the text.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/dashscope.html"} {"id": "511b4d66447b-3", "text": "Returns:\n Embedding for the text.\n \"\"\"\n embedding = embed_with_retry(\n self, input=text, text_type=\"query\", model=self.model\n )[0][\"embedding\"]\n return embedding", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/dashscope.html"} {"id": "468d25d9526a-0", "text": "Source code for langchain.embeddings.octoai_embeddings\n\"\"\"Module providing a wrapper around OctoAI Compute Service embedding models.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import BaseModel, Extra, Field, root_validator\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nDEFAULT_EMBED_INSTRUCTION = \"Represent this input: \"\nDEFAULT_QUERY_INSTRUCTION = \"Represent the question for retrieving similar documents: \"\n[docs]class OctoAIEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around OctoAI Compute Service embedding models.\n The environment variable ``OCTOAI_API_TOKEN`` should be set\n with your API token, or it can be passed\n as a named parameter to the constructor.\n \"\"\"\n endpoint_url: Optional[str] = Field(None, description=\"Endpoint URL to use.\")\n model_kwargs: Optional[dict] = Field(\n None, description=\"Keyword arguments to pass to the model.\"\n )\n octoai_api_token: Optional[str] = Field(None, description=\"OCTOAI API Token\")\n embed_instruction: str = Field(\n DEFAULT_EMBED_INSTRUCTION,\n description=\"Instruction to use for embedding documents.\",\n )\n query_instruction: str = Field(\n DEFAULT_QUERY_INSTRUCTION, description=\"Instruction to use for embedding query.\"\n )\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator(allow_reuse=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Ensure that the API key and python package exist in environment.\"\"\"\n values[\"octoai_api_token\"] = get_from_dict_or_env(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/octoai_embeddings.html"} {"id": "468d25d9526a-1", "text": "values[\"octoai_api_token\"] = get_from_dict_or_env(\n values, \"octoai_api_token\", \"OCTOAI_API_TOKEN\"\n )\n values[\"endpoint_url\"] = get_from_dict_or_env(\n values, \"endpoint_url\", \"ENDPOINT_URL\"\n )\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Return the identifying parameters.\"\"\"\n return {\n \"endpoint_url\": self.endpoint_url,\n \"model_kwargs\": self.model_kwargs or {},\n }\n def _compute_embeddings(\n self, texts: 
List[str], instruction: str\n    ) -> List[List[float]]:\n        \"\"\"Compute embeddings using an OctoAI instruct model.\"\"\"\n        from octoai import client\n        embeddings = []\n        octoai_client = client.Client(token=self.octoai_api_token)\n        for text in texts:\n            parameter_payload = {\n                \"sentence\": str([text]),\n                \"instruction\": str([instruction]),\n                \"parameters\": self.model_kwargs or {},\n            }\n            try:\n                resp_json = octoai_client.infer(self.endpoint_url, parameter_payload)\n                embedding = resp_json[\"embeddings\"]\n            except Exception as e:\n                raise ValueError(f\"Error raised by the inference endpoint: {e}\") from e\n            embeddings.append(embedding)\n        return embeddings\n[docs]    def embed_documents(self, texts: List[str]) -> List[List[float]]:\n        \"\"\"Compute document embeddings using an OctoAI instruct model.\"\"\"\n        texts = list(map(lambda x: x.replace(\"\\n\", \" \"), texts))\n        return self._compute_embeddings(texts, self.embed_instruction)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/octoai_embeddings.html"} {"id": "468d25d9526a-2", "text": "return self._compute_embeddings(texts, self.embed_instruction)\n[docs]    def embed_query(self, text: str) -> List[float]:\n        \"\"\"Compute query embedding using an OctoAI instruct model.\"\"\"\n        text = text.replace(\"\\n\", \" \")\n        return self._compute_embeddings([text], self.query_instruction)[0]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/octoai_embeddings.html"} {"id": "4996b9518d7d-0", "text": "Source code for langchain.embeddings.mosaicml\n\"\"\"Wrapper around MosaicML APIs.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional, Tuple\nimport requests\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\n[docs]class MosaicMLInstructorEmbeddings(BaseModel, Embeddings):\n    \"\"\"Wrapper around MosaicML's embedding inference service.\n    To use, you should have the\n    environment variable ``MOSAICML_API_TOKEN`` set with your API token, or pass\n    it as a named parameter to the constructor.\n    Example:\n        .. 
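# A minimal usage sketch for OctoAIEmbeddings above. Both values may
# instead come from the OCTOAI_API_TOKEN and ENDPOINT_URL environment
# variables; the URL shown is a made-up placeholder.
from langchain.embeddings.octoai_embeddings import OctoAIEmbeddings

embeddings = OctoAIEmbeddings(
    endpoint_url="https://instructor.example.octoai.cloud/predict",
    octoai_api_token="my-api-token",
)
vectors = embeddings.embed_documents(["hello world"])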
code-block:: python\n            from langchain.embeddings import MosaicMLInstructorEmbeddings\n            endpoint_url = (\n                \"https://models.hosted-on.mosaicml.hosting/instructor-large/v1/predict\"\n            )\n            mosaic_llm = MosaicMLInstructorEmbeddings(\n                endpoint_url=endpoint_url,\n                mosaicml_api_token=\"my-api-key\"\n            )\n    \"\"\"\n    endpoint_url: str = (\n        \"https://models.hosted-on.mosaicml.hosting/instructor-xl/v1/predict\"\n    )\n    \"\"\"Endpoint URL to use.\"\"\"\n    embed_instruction: str = \"Represent the document for retrieval: \"\n    \"\"\"Instruction used to embed documents.\"\"\"\n    query_instruction: str = (\n        \"Represent the question for retrieving supporting documents: \"\n    )\n    \"\"\"Instruction used to embed the query.\"\"\"\n    retry_sleep: float = 1.0\n    \"\"\"How long to try sleeping for if a rate limit is encountered\"\"\"\n    mosaicml_api_token: Optional[str] = None\n[docs]    class Config:\n        \"\"\"Configuration for this pydantic object.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/mosaicml.html"} {"id": "4996b9518d7d-1", "text": "[docs]    class Config:\n        \"\"\"Configuration for this pydantic object.\"\"\"\n        extra = Extra.forbid\n[docs]    @root_validator()\n    def validate_environment(cls, values: Dict) -> Dict:\n        \"\"\"Validate that api key and python package exists in environment.\"\"\"\n        mosaicml_api_token = get_from_dict_or_env(\n            values, \"mosaicml_api_token\", \"MOSAICML_API_TOKEN\"\n        )\n        values[\"mosaicml_api_token\"] = mosaicml_api_token\n        return values\n    @property\n    def _identifying_params(self) -> Mapping[str, Any]:\n        \"\"\"Get the identifying parameters.\"\"\"\n        return {\"endpoint_url\": self.endpoint_url}\n    def _embed(\n        self, input: List[Tuple[str, str]], is_retry: bool = False\n    ) -> List[List[float]]:\n        payload = {\"input_strings\": input}\n        # HTTP headers for authorization\n        headers = {\n            \"Authorization\": f\"{self.mosaicml_api_token}\",\n            \"Content-Type\": \"application/json\",\n        }\n        # send request\n        try:\n            response = requests.post(self.endpoint_url, headers=headers, json=payload)\n        except requests.exceptions.RequestException as e:\n            raise ValueError(f\"Error raised by inference endpoint: {e}\")\n        try:\n            parsed_response = response.json()\n            if \"error\" in parsed_response:\n                # if we get rate limited, try sleeping for 1 second\n                if (\n                    not is_retry\n                    and \"rate limit exceeded\" in parsed_response[\"error\"].lower()\n                ):\n                    import time\n                    time.sleep(self.retry_sleep)\n                    return self._embed(input, is_retry=True)\n                raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/mosaicml.html"} {"id": "4996b9518d7d-2", "text": "return self._embed(input, is_retry=True)\n                raise ValueError(\n                    f\"Error raised by inference API: {parsed_response['error']}\"\n                )\n            # The inference API has changed a couple of times, so we add some handling\n            # to be robust to multiple response formats.\n            if isinstance(parsed_response, dict):\n                if \"data\" in parsed_response:\n                    output_item = parsed_response[\"data\"]\n                elif \"output\" in parsed_response:\n                    output_item = parsed_response[\"output\"]\n                else:\n                    raise ValueError(\n                        f\"No key data or output in response: {parsed_response}\"\n                    )\n                if isinstance(output_item, list) and isinstance(output_item[0], list):\n                    embeddings = output_item\n                else:\n                    embeddings = [output_item]\n            elif isinstance(parsed_response, list):\n                first_item = parsed_response[0]\n                if isinstance(first_item, list):\n                    embeddings = parsed_response\n                elif isinstance(first_item, dict):\n                    if \"output\" in first_item:\n                        embeddings = 
[item[\"output\"] for item in parsed_response]\n else:\n raise ValueError(\n f\"No key data or output in response: {parsed_response}\"\n )\n else:\n raise ValueError(f\"Unexpected response format: {parsed_response}\")\n else:\n raise ValueError(f\"Unexpected response type: {parsed_response}\")\n except requests.exceptions.JSONDecodeError as e:\n raise ValueError(\n f\"Error raised by inference API: {e}.\\nResponse: {response.text}\"\n )\n return embeddings\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Embed documents using a MosaicML deployed instructor embedding model.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/mosaicml.html"} {"id": "4996b9518d7d-3", "text": "\"\"\"Embed documents using a MosaicML deployed instructor embedding model.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n instruction_pairs = [(self.embed_instruction, text) for text in texts]\n embeddings = self._embed(instruction_pairs)\n return embeddings\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Embed a query using a MosaicML deployed instructor embedding model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n instruction_pair = (self.query_instruction, text)\n embedding = self._embed([instruction_pair])[0]\n return embedding", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/mosaicml.html"} {"id": "5651c122d035-0", "text": "Source code for langchain.embeddings.embaas\n\"\"\"Wrapper around embaas embeddings API.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import BaseModel, Extra, root_validator\nfrom typing_extensions import NotRequired, TypedDict\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\n# Currently supported maximum batch size for embedding requests\nMAX_BATCH_SIZE = 256\nEMBAAS_API_URL = \"https://api.embaas.io/v1/embeddings/\"\n[docs]class EmbaasEmbeddingsPayload(TypedDict):\n \"\"\"Payload for the embaas embeddings API.\"\"\"\n model: str\n texts: List[str]\n instruction: NotRequired[str]\n[docs]class EmbaasEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around embaas's embedding service.\n To use, you should have the\n environment variable ``EMBAAS_API_KEY`` set with your API key, or pass\n it as a named parameter to the constructor.\n Example:\n .. 
code-block:: python\n # Initialise with default model and instruction\n from langchain.embeddings import EmbaasEmbeddings\n emb = EmbaasEmbeddings()\n # Initialise with custom model and instruction\n from langchain.embeddings import EmbaasEmbeddings\n emb_model = \"instructor-large\"\n emb_inst = \"Represent the Wikipedia document for retrieval\"\n emb = EmbaasEmbeddings(\n model=emb_model,\n instruction=emb_inst\n )\n \"\"\"\n model: str = \"e5-large-v2\"\n \"\"\"The model used for embeddings.\"\"\"\n instruction: Optional[str] = None\n \"\"\"Instruction used for domain-specific embeddings.\"\"\"\n api_url: str = EMBAAS_API_URL", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/embaas.html"} {"id": "5651c122d035-1", "text": "api_url: str = EMBAAS_API_URL\n \"\"\"The URL for the embaas embeddings API.\"\"\"\n embaas_api_key: Optional[str] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n embaas_api_key = get_from_dict_or_env(\n values, \"embaas_api_key\", \"EMBAAS_API_KEY\"\n )\n values[\"embaas_api_key\"] = embaas_api_key\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying params.\"\"\"\n return {\"model\": self.model, \"instruction\": self.instruction}\n def _generate_payload(self, texts: List[str]) -> EmbaasEmbeddingsPayload:\n \"\"\"Generates payload for the API request.\"\"\"\n payload = EmbaasEmbeddingsPayload(texts=texts, model=self.model)\n if self.instruction:\n payload[\"instruction\"] = self.instruction\n return payload\n def _handle_request(self, payload: EmbaasEmbeddingsPayload) -> List[List[float]]:\n \"\"\"Sends a request to the Embaas API and handles the response.\"\"\"\n headers = {\n \"Authorization\": f\"Bearer {self.embaas_api_key}\",\n \"Content-Type\": \"application/json\",\n }\n response = requests.post(self.api_url, headers=headers, json=payload)\n response.raise_for_status()\n parsed_response = response.json()\n embeddings = [item[\"embedding\"] for item in parsed_response[\"data\"]]\n return embeddings", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/embaas.html"} {"id": "5651c122d035-2", "text": "return embeddings\n def _generate_embeddings(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Generate embeddings using the Embaas API.\"\"\"\n payload = self._generate_payload(texts)\n try:\n return self._handle_request(payload)\n except requests.exceptions.RequestException as e:\n if e.response is None or not e.response.text:\n raise ValueError(f\"Error raised by embaas embeddings API: {e}\")\n parsed_response = e.response.json()\n if \"message\" in parsed_response:\n raise ValueError(\n \"Validation Error raised by embaas embeddings API:\"\n f\"{parsed_response['message']}\"\n )\n raise\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Get embeddings for a list of texts.\n Args:\n texts: The list of texts to get embeddings for.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n batches = [\n texts[i : i + MAX_BATCH_SIZE] for i in range(0, len(texts), MAX_BATCH_SIZE)\n ]\n embeddings = [self._generate_embeddings(batch) for batch in batches]\n # flatten the list of lists into a single list\n return [embedding for batch in embeddings for embedding in batch]\n[docs] def 
embed_query(self, text: str) -> List[float]:\n        \"\"\"Get embeddings for a single text.\n        Args:\n            text: The text to get embeddings for.\n        Returns:\n            Embeddings for the text.\n        \"\"\"\n        return self.embed_documents([text])[0]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/embaas.html"} {"id": "73b1580c8ab7-0", "text": "Source code for langchain.embeddings.fake\nfrom typing import List\nimport numpy as np\nfrom pydantic import BaseModel\nfrom langchain.embeddings.base import Embeddings\n[docs]class FakeEmbeddings(Embeddings, BaseModel):\n    size: int\n    def _get_embedding(self) -> List[float]:\n        return list(np.random.normal(size=self.size))\n[docs]    def embed_documents(self, texts: List[str]) -> List[List[float]]:\n        return [self._get_embedding() for _ in texts]\n[docs]    def embed_query(self, text: str) -> List[float]:\n        return self._get_embedding()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/fake.html"} {"id": "cd1cf8981eac-0", "text": "Source code for langchain.embeddings.aleph_alpha\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, root_validator\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\n[docs]class AlephAlphaAsymmetricSemanticEmbedding(BaseModel, Embeddings):\n    \"\"\"\n    Wrapper for Aleph Alpha's Asymmetric Embeddings\n    AA provides you with an endpoint to embed a document and a query.\n    The models were optimized to make the embeddings of documents and\n    the query for a document as similar as possible.\n    To learn more, check out: https://docs.aleph-alpha.com/docs/tasks/semantic_embed/\n    Example:\n        .. code-block:: python\n            from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding\n            embeddings = AlephAlphaAsymmetricSemanticEmbedding()\n            document = \"This is the content of the document\"\n            query = \"What is the content of the document?\"\n            doc_result = embeddings.embed_documents([document])\n            query_result = embeddings.embed_query(query)\n    \"\"\"\n    client: Any  #: :meta private:\n    model: Optional[str] = \"luminous-base\"\n    \"\"\"Model name to use.\"\"\"\n    hosting: Optional[str] = \"https://api.aleph-alpha.com\"\n    \"\"\"Optional parameter that specifies which datacenters may process the request.\"\"\"\n    normalize: Optional[bool] = True\n    \"\"\"Should returned embeddings be normalized\"\"\"\n    compress_to_size: Optional[int] = 128\n    \"\"\"Should the returned embeddings come back as an original 5120-dim vector, \n    or should it be compressed to 128-dim.\"\"\"\n    contextual_control_threshold: Optional[int] = None\n    \"\"\"Attention control parameters only apply to those tokens that have", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/aleph_alpha.html"} {"id": "cd1cf8981eac-1", "text": "\"\"\"Attention control parameters only apply to those tokens that have \n    explicitly been set in the request.\"\"\"\n    control_log_additive: Optional[bool] = True\n    \"\"\"Apply controls on prompt items by adding the log(control_factor) \n    to attention scores.\"\"\"\n    aleph_alpha_api_key: Optional[str] = None\n    \"\"\"API key for Aleph Alpha API.\"\"\"\n[docs]    @root_validator()\n    def validate_environment(cls, values: Dict) -> Dict:\n        \"\"\"Validate that api key and python package exists in environment.\"\"\"\n        aleph_alpha_api_key = get_from_dict_or_env(\n            values, \"aleph_alpha_api_key\", \"ALEPH_ALPHA_API_KEY\"\n        )\n        try:\n            from aleph_alpha_client import Client\n        except ImportError:\n            raise ValueError(\n                \"Could not import 
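# FakeEmbeddings above draws vectors from a standard normal distribution,
# so the output carries no information about the input text; it is meant
# for tests. A short sketch:
from langchain.embeddings.fake import FakeEmbeddings

fake = FakeEmbeddings(size=1536)
assert len(fake.embed_query("anything")) == 1536
assert len(fake.embed_documents(["a", "b"])) == 2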
aleph_alpha_client python package. \"\n \"Please install it with `pip install aleph_alpha_client`.\"\n )\n values[\"client\"] = Client(token=aleph_alpha_api_key)\n return values\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Call out to Aleph Alpha's asymmetric Document endpoint.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n try:\n from aleph_alpha_client import (\n Prompt,\n SemanticEmbeddingRequest,\n SemanticRepresentation,\n )\n except ImportError:\n raise ValueError(\n \"Could not import aleph_alpha_client python package. \"\n \"Please install it with `pip install aleph_alpha_client`.\"\n )\n document_embeddings = []\n for text in texts:\n document_params = {", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/aleph_alpha.html"} {"id": "cd1cf8981eac-2", "text": "document_embeddings = []\n for text in texts:\n document_params = {\n \"prompt\": Prompt.from_text(text),\n \"representation\": SemanticRepresentation.Document,\n \"compress_to_size\": self.compress_to_size,\n \"normalize\": self.normalize,\n \"contextual_control_threshold\": self.contextual_control_threshold,\n \"control_log_additive\": self.control_log_additive,\n }\n document_request = SemanticEmbeddingRequest(**document_params)\n document_response = self.client.semantic_embed(\n request=document_request, model=self.model\n )\n document_embeddings.append(document_response.embedding)\n return document_embeddings\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Call out to Aleph Alpha's asymmetric, query embedding endpoint\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n try:\n from aleph_alpha_client import (\n Prompt,\n SemanticEmbeddingRequest,\n SemanticRepresentation,\n )\n except ImportError:\n raise ValueError(\n \"Could not import aleph_alpha_client python package. \"\n \"Please install it with `pip install aleph_alpha_client`.\"\n )\n symmetric_params = {\n \"prompt\": Prompt.from_text(text),\n \"representation\": SemanticRepresentation.Query,\n \"compress_to_size\": self.compress_to_size,\n \"normalize\": self.normalize,\n \"contextual_control_threshold\": self.contextual_control_threshold,\n \"control_log_additive\": self.control_log_additive,\n }\n symmetric_request = SemanticEmbeddingRequest(**symmetric_params)\n symmetric_response = self.client.semantic_embed(\n request=symmetric_request, model=self.model\n )\n return symmetric_response.embedding", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/aleph_alpha.html"} {"id": "cd1cf8981eac-3", "text": "request=symmetric_request, model=self.model\n )\n return symmetric_response.embedding\n[docs]class AlephAlphaSymmetricSemanticEmbedding(AlephAlphaAsymmetricSemanticEmbedding):\n \"\"\"The symmetric version of the Aleph Alpha's semantic embeddings.\n The main difference is that here, both the documents and\n queries are embedded with a SemanticRepresentation.Symmetric\n Example:\n .. 
code-block:: python\n            from langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding\n            embeddings = AlephAlphaSymmetricSemanticEmbedding()\n            text = \"This is a test text\"\n            doc_result = embeddings.embed_documents([text])\n            query_result = embeddings.embed_query(text)\n    \"\"\"\n    def _embed(self, text: str) -> List[float]:\n        try:\n            from aleph_alpha_client import (\n                Prompt,\n                SemanticEmbeddingRequest,\n                SemanticRepresentation,\n            )\n        except ImportError:\n            raise ValueError(\n                \"Could not import aleph_alpha_client python package. \"\n                \"Please install it with `pip install aleph_alpha_client`.\"\n            )\n        query_params = {\n            \"prompt\": Prompt.from_text(text),\n            \"representation\": SemanticRepresentation.Symmetric,\n            \"compress_to_size\": self.compress_to_size,\n            \"normalize\": self.normalize,\n            \"contextual_control_threshold\": self.contextual_control_threshold,\n            \"control_log_additive\": self.control_log_additive,\n        }\n        query_request = SemanticEmbeddingRequest(**query_params)\n        query_response = self.client.semantic_embed(\n            request=query_request, model=self.model\n        )\n        return query_response.embedding\n[docs]    def embed_documents(self, texts: List[str]) -> List[List[float]]:\n        \"\"\"Call out to Aleph Alpha's Document endpoint.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/aleph_alpha.html"} {"id": "cd1cf8981eac-4", "text": "\"\"\"Call out to Aleph Alpha's Document endpoint.\n        Args:\n            texts: The list of texts to embed.\n        Returns:\n            List of embeddings, one for each text.\n        \"\"\"\n        document_embeddings = []\n        for text in texts:\n            document_embeddings.append(self._embed(text))\n        return document_embeddings\n[docs]    def embed_query(self, text: str) -> List[float]:\n        \"\"\"Call out to Aleph Alpha's symmetric query embedding endpoint\n        Args:\n            text: The text to embed.\n        Returns:\n            Embeddings for the text.\n        \"\"\"\n        return self._embed(text)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/aleph_alpha.html"} {"id": "2c23367b6e6c-0", "text": "Source code for langchain.embeddings.modelscope_hub\n\"\"\"Wrapper around ModelScopeHub embedding models.\"\"\"\nfrom typing import Any, List\nfrom pydantic import BaseModel, Extra\nfrom langchain.embeddings.base import Embeddings\n[docs]class ModelScopeEmbeddings(BaseModel, Embeddings):\n    \"\"\"Wrapper around modelscope_hub embedding models.\n    To use, you should have the ``modelscope`` python package installed.\n    Example:\n        .. 
{"id": "2c23367b6e6c-0", "text": "Source code for langchain.embeddings.modelscope_hub\n\"\"\"Wrapper around ModelScopeHub embedding models.\"\"\"\nfrom typing import Any, List\nfrom pydantic import BaseModel, Extra\nfrom langchain.embeddings.base import Embeddings\n[docs]class ModelScopeEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around modelscope_hub embedding models.\n To use, you should have the ``modelscope`` python package installed.\n Example:\n .. code-block:: python\n from langchain.embeddings import ModelScopeEmbeddings\n model_id = \"damo/nlp_corom_sentence-embedding_english-base\"\n embed = ModelScopeEmbeddings(model_id=model_id)\n \"\"\"\n embed: Any\n model_id: str = \"damo/nlp_corom_sentence-embedding_english-base\"\n \"\"\"Model name to use.\"\"\"\n def __init__(self, **kwargs: Any):\n \"\"\"Initialize the modelscope pipeline.\"\"\"\n super().__init__(**kwargs)\n try:\n from modelscope.pipelines import pipeline\n from modelscope.utils.constant import Tasks\n self.embed = pipeline(Tasks.sentence_embedding, model=self.model_id)\n except ImportError as e:\n raise ImportError(\n \"Could not import the modelscope python package. \"\n \"Please install it with `pip install modelscope`.\"\n ) from e\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Compute doc embeddings using a modelscope embedding model.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/modelscope_hub.html"} {"id": "2c23367b6e6c-1", "text": "Returns:\n List of embeddings, one for each text.\n \"\"\"\n texts = list(map(lambda x: x.replace(\"\\n\", \" \"), texts))\n inputs = {\"source_sentence\": texts}\n embeddings = self.embed(input=inputs)[\"text_embedding\"]\n return embeddings.tolist()\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Compute query embeddings using a modelscope embedding model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n text = text.replace(\"\\n\", \" \")\n inputs = {\"source_sentence\": [text]}\n embedding = self.embed(input=inputs)[\"text_embedding\"][0]\n return embedding.tolist()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/modelscope_hub.html"} {"id": "f7c5e1f2e96e-0", "text": "Source code for langchain.embeddings.jina\n\"\"\"Wrapper around Jina embedding models.\"\"\"\nimport os\nfrom typing import Any, Dict, List, Optional\nimport requests\nfrom pydantic import BaseModel, root_validator\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\n[docs]class JinaEmbeddings(BaseModel, Embeddings):\n client: Any #: :meta private:\n model_name: str = \"ViT-B-32::openai\"\n \"\"\"Model name to use.\"\"\"\n jina_auth_token: Optional[str] = None\n jina_api_url: str = \"https://api.clip.jina.ai/api/v1/models/\"\n request_headers: Optional[dict] = None\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that auth token exists in environment.\"\"\"\n # Set Auth\n jina_auth_token = get_from_dict_or_env(\n values, \"jina_auth_token\", \"JINA_AUTH_TOKEN\"\n )\n values[\"jina_auth_token\"] = jina_auth_token\n values[\"request_headers\"] = ((\"authorization\", jina_auth_token),)\n # Test that package is installed\n try:\n import jina\n except ImportError:\n raise ImportError(\n \"Could not import `jina` python package. 
\"\n \"Please install it with `pip install jina`.\"\n )\n # Setup client\n jina_api_url = os.environ.get(\"JINA_API_URL\", values[\"jina_api_url\"])\n model_name = values[\"model_name\"]\n try:\n resp = requests.get(\n jina_api_url + f\"?model_name={model_name}\",\n headers={\"Authorization\": jina_auth_token},\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/jina.html"} {"id": "f7c5e1f2e96e-1", "text": "headers={\"Authorization\": jina_auth_token},\n )\n if resp.status_code == 401:\n raise ValueError(\n \"The given Jina auth token is invalid. \"\n \"Please check your Jina auth token.\"\n )\n elif resp.status_code == 404:\n raise ValueError(\n f\"The given model name `{model_name}` is not valid. \"\n f\"Please go to https://cloud.jina.ai/user/inference \"\n f\"and create a model with the given model name.\"\n )\n resp.raise_for_status()\n endpoint = resp.json()[\"endpoints\"][\"grpc\"]\n values[\"client\"] = jina.Client(host=endpoint)\n except requests.exceptions.HTTPError as err:\n raise ValueError(f\"Error: {err!r}\")\n return values\n def _post(self, docs: List[Any], **kwargs: Any) -> Any:\n payload = dict(inputs=docs, metadata=self.request_headers, **kwargs)\n return self.client.post(on=\"/encode\", **payload)\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Call out to Jina's embedding endpoint.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n from docarray import Document, DocumentArray\n embeddings = self._post(\n docs=DocumentArray([Document(text=t) for t in texts])\n ).embeddings\n return [list(map(float, e)) for e in embeddings]\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Call out to Jina's embedding endpoint.\n Args:\n text: The text to embed.\n Returns:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/jina.html"} {"id": "f7c5e1f2e96e-2", "text": "Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n from docarray import Document, DocumentArray\n embedding = self._post(docs=DocumentArray([Document(text=text)])).embeddings[0]\n return list(map(float, embedding))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/jina.html"} {"id": "c999f81e5813-0", "text": "Source code for langchain.embeddings.huggingface\n\"\"\"Wrapper around HuggingFace embedding models.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, Field\nfrom langchain.embeddings.base import Embeddings\nDEFAULT_MODEL_NAME = \"sentence-transformers/all-mpnet-base-v2\"\nDEFAULT_INSTRUCT_MODEL = \"hkunlp/instructor-large\"\nDEFAULT_EMBED_INSTRUCTION = \"Represent the document for retrieval: \"\nDEFAULT_QUERY_INSTRUCTION = (\n \"Represent the question for retrieving supporting documents: \"\n)\n[docs]class HuggingFaceEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around sentence_transformers embedding models.\n To use, you should have the ``sentence_transformers`` python package installed.\n Example:\n .. 
code-block:: python\n from langchain.embeddings import HuggingFaceEmbeddings\n model_name = \"sentence-transformers/all-mpnet-base-v2\"\n model_kwargs = {'device': 'cpu'}\n encode_kwargs = {'normalize_embeddings': False}\n hf = HuggingFaceEmbeddings(\n model_name=model_name,\n model_kwargs=model_kwargs,\n encode_kwargs=encode_kwargs\n )\n \"\"\"\n client: Any #: :meta private:\n model_name: str = DEFAULT_MODEL_NAME\n \"\"\"Model name to use.\"\"\"\n cache_folder: Optional[str] = None\n \"\"\"Path to store models. \n Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable.\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Key word arguments to pass to the model.\"\"\"\n encode_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Key word arguments to pass when calling the `encode` method of the model.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/huggingface.html"} {"id": "c999f81e5813-1", "text": "\"\"\"Key word arguments to pass when calling the `encode` method of the model.\"\"\"\n def __init__(self, **kwargs: Any):\n \"\"\"Initialize the sentence_transformer.\"\"\"\n super().__init__(**kwargs)\n try:\n import sentence_transformers\n except ImportError as exc:\n raise ImportError(\n \"Could not import sentence_transformers python package. \"\n \"Please install it with `pip install sentence_transformers`.\"\n ) from exc\n self.client = sentence_transformers.SentenceTransformer(\n self.model_name, cache_folder=self.cache_folder, **self.model_kwargs\n )\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Compute doc embeddings using a HuggingFace transformer model.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n texts = list(map(lambda x: x.replace(\"\\n\", \" \"), texts))\n embeddings = self.client.encode(texts, **self.encode_kwargs)\n return embeddings.tolist()\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Compute query embeddings using a HuggingFace transformer model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n text = text.replace(\"\\n\", \" \")\n embedding = self.client.encode(text, **self.encode_kwargs)\n return embedding.tolist()\n[docs]class HuggingFaceInstructEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around sentence_transformers embedding models.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/huggingface.html"} {"id": "c999f81e5813-2", "text": "\"\"\"Wrapper around sentence_transformers embedding models.\n To use, you should have the ``sentence_transformers``\n and ``InstructorEmbedding`` python packages installed.\n Example:\n .. code-block:: python\n from langchain.embeddings import HuggingFaceInstructEmbeddings\n model_name = \"hkunlp/instructor-large\"\n model_kwargs = {'device': 'cpu'}\n encode_kwargs = {'normalize_embeddings': True}\n hf = HuggingFaceInstructEmbeddings(\n model_name=model_name,\n model_kwargs=model_kwargs,\n encode_kwargs=encode_kwargs\n )\n \"\"\"\n client: Any #: :meta private:\n model_name: str = DEFAULT_INSTRUCT_MODEL\n \"\"\"Model name to use.\"\"\"\n cache_folder: Optional[str] = None\n \"\"\"Path to store models. 
\n Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable.\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Key word arguments to pass to the model.\"\"\"\n encode_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Key word arguments to pass when calling the `encode` method of the model.\"\"\"\n embed_instruction: str = DEFAULT_EMBED_INSTRUCTION\n \"\"\"Instruction to use for embedding documents.\"\"\"\n query_instruction: str = DEFAULT_QUERY_INSTRUCTION\n \"\"\"Instruction to use for embedding query.\"\"\"\n def __init__(self, **kwargs: Any):\n \"\"\"Initialize the sentence_transformer.\"\"\"\n super().__init__(**kwargs)\n try:\n from InstructorEmbedding import INSTRUCTOR\n self.client = INSTRUCTOR(\n self.model_name, cache_folder=self.cache_folder, **self.model_kwargs\n )\n except ImportError as e:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/huggingface.html"} {"id": "c999f81e5813-3", "text": ")\n except ImportError as e:\n raise ValueError(\"Dependencies for InstructorEmbedding not found.\") from e\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Compute doc embeddings using a HuggingFace instruct model.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n instruction_pairs = [[self.embed_instruction, text] for text in texts]\n embeddings = self.client.encode(instruction_pairs, **self.encode_kwargs)\n return embeddings.tolist()\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Compute query embeddings using a HuggingFace instruct model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n instruction_pair = [self.query_instruction, text]\n embedding = self.client.encode([instruction_pair], **self.encode_kwargs)[0]\n return embedding.tolist()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/huggingface.html"} {"id": "e3dcd7edea32-0", "text": "Source code for langchain.embeddings.minimax\n\"\"\"Wrapper around MiniMax APIs.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, Callable, Dict, List, Optional\nimport requests\nfrom pydantic import BaseModel, Extra, root_validator\nfrom tenacity import (\n before_sleep_log,\n retry,\n stop_after_attempt,\n wait_exponential,\n)\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\ndef _create_retry_decorator() -> Callable[[Any], Any]:\n \"\"\"Returns a tenacity retry decorator.\"\"\"\n multiplier = 1\n min_seconds = 1\n max_seconds = 4\n max_retries = 6\n return retry(\n reraise=True,\n stop=stop_after_attempt(max_retries),\n wait=wait_exponential(multiplier=multiplier, min=min_seconds, max=max_seconds),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )\n[docs]def embed_with_retry(embeddings: MiniMaxEmbeddings, *args: Any, **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the completion call.\"\"\"\n retry_decorator = _create_retry_decorator()\n @retry_decorator\n def _embed_with_retry(*args: Any, **kwargs: Any) -> Any:\n return embeddings.embed(*args, **kwargs)\n return _embed_with_retry(*args, **kwargs)\n[docs]class MiniMaxEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around MiniMax's embedding inference service.\n To use, you should have the environment 
variable ``MINIMAX_GROUP_ID`` and\n ``MINIMAX_API_KEY`` set with your API token, or pass it as a named parameter to", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/minimax.html"} {"id": "e3dcd7edea32-1", "text": "the constructor.\n Example:\n .. code-block:: python\n from langchain.embeddings import MiniMaxEmbeddings\n embeddings = MiniMaxEmbeddings()\n query_text = \"This is a test query.\"\n query_result = embeddings.embed_query(query_text)\n document_text = \"This is a test document.\"\n document_result = embeddings.embed_documents([document_text])\n \"\"\"\n endpoint_url: str = \"https://api.minimax.chat/v1/embeddings\"\n \"\"\"Endpoint URL to use.\"\"\"\n model: str = \"embo-01\"\n \"\"\"Embeddings model name to use.\"\"\"\n embed_type_db: str = \"db\"\n \"\"\"For embed_documents\"\"\"\n embed_type_query: str = \"query\"\n \"\"\"For embed_query\"\"\"\n minimax_group_id: Optional[str] = None\n \"\"\"Group ID for MiniMax API.\"\"\"\n minimax_api_key: Optional[str] = None\n \"\"\"API Key for MiniMax API.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that group id and api key exists in environment.\"\"\"\n minimax_group_id = get_from_dict_or_env(\n values, \"minimax_group_id\", \"MINIMAX_GROUP_ID\"\n )\n minimax_api_key = get_from_dict_or_env(\n values, \"minimax_api_key\", \"MINIMAX_API_KEY\"\n )\n values[\"minimax_group_id\"] = minimax_group_id\n values[\"minimax_api_key\"] = minimax_api_key\n return values\n[docs] def embed(\n self,\n texts: List[str],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/minimax.html"} {"id": "e3dcd7edea32-2", "text": "[docs] def embed(\n self,\n texts: List[str],\n embed_type: str,\n ) -> List[List[float]]:\n payload = {\n \"model\": self.model,\n \"type\": embed_type,\n \"texts\": texts,\n }\n # HTTP headers for authorization\n headers = {\n \"Authorization\": f\"Bearer {self.minimax_api_key}\",\n \"Content-Type\": \"application/json\",\n }\n params = {\n \"GroupId\": self.minimax_group_id,\n }\n # send request\n response = requests.post(\n self.endpoint_url, params=params, headers=headers, json=payload\n )\n parsed_response = response.json()\n # check for errors\n if parsed_response[\"base_resp\"][\"status_code\"] != 0:\n raise ValueError(\n f\"MiniMax API returned an error: {parsed_response['base_resp']}\"\n )\n embeddings = parsed_response[\"vectors\"]\n return embeddings\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Embed documents using a MiniMax embedding endpoint.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n embeddings = embed_with_retry(self, texts=texts, embed_type=self.embed_type_db)\n return embeddings\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Embed a query using a MiniMax embedding endpoint.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n embeddings = embed_with_retry(\n self, texts=[text], embed_type=self.embed_type_query\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/minimax.html"} {"id": "e3dcd7edea32-3", "text": "self, texts=[text], embed_type=self.embed_type_query\n )\n return embeddings[0]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/minimax.html"} {"id": 
"ca090d5f0bef-0", "text": "Source code for langchain.embeddings.elasticsearch\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, List, Optional\nfrom langchain.utils import get_from_env\nif TYPE_CHECKING:\n from elasticsearch import Elasticsearch\n from elasticsearch.client import MlClient\nfrom langchain.embeddings.base import Embeddings\n[docs]class ElasticsearchEmbeddings(Embeddings):\n \"\"\"\n Wrapper around Elasticsearch embedding models.\n This class provides an interface to generate embeddings using a model deployed\n in an Elasticsearch cluster. It requires an Elasticsearch connection object\n and the model_id of the model deployed in the cluster.\n In Elasticsearch you need to have an embedding model loaded and deployed.\n - https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-trained-model.html\n - https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-deploy-models.html\n \"\"\" # noqa: E501\n def __init__(\n self,\n client: MlClient,\n model_id: str,\n *,\n input_field: str = \"text_field\",\n ):\n \"\"\"\n Initialize the ElasticsearchEmbeddings instance.\n Args:\n client (MlClient): An Elasticsearch ML client object.\n model_id (str): The model_id of the model deployed in the Elasticsearch\n cluster.\n input_field (str): The name of the key for the input text field in the\n document. Defaults to 'text_field'.\n \"\"\"\n self.client = client\n self.model_id = model_id\n self.input_field = input_field\n[docs] @classmethod\n def from_credentials(\n cls,\n model_id: str,\n *,\n es_cloud_id: Optional[str] = None,\n es_user: Optional[str] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/elasticsearch.html"} {"id": "ca090d5f0bef-1", "text": "es_user: Optional[str] = None,\n es_password: Optional[str] = None,\n input_field: str = \"text_field\",\n ) -> ElasticsearchEmbeddings:\n \"\"\"Instantiate embeddings from Elasticsearch credentials.\n Args:\n model_id (str): The model_id of the model deployed in the Elasticsearch\n cluster.\n input_field (str): The name of the key for the input text field in the\n document. Defaults to 'text_field'.\n es_cloud_id: (str, optional): The Elasticsearch cloud ID to connect to.\n es_user: (str, optional): Elasticsearch username.\n es_password: (str, optional): Elasticsearch password.\n Example:\n .. code-block:: python\n from langchain.embeddings import ElasticsearchEmbeddings\n # Define the model ID and input field name (if different from default)\n model_id = \"your_model_id\"\n # Optional, only if different from 'text_field'\n input_field = \"your_input_field\"\n # Credentials can be passed in two ways. 
Either set the env vars\n # ES_CLOUD_ID, ES_USER, ES_PASSWORD and they will be automatically\n # pulled in, or pass them in directly as kwargs.\n embeddings = ElasticsearchEmbeddings.from_credentials(\n model_id,\n input_field=input_field,\n # es_cloud_id=\"foo\",\n # es_user=\"bar\",\n # es_password=\"baz\",\n )\n documents = [\n \"This is an example document.\",\n \"Another example document to generate embeddings for.\",\n ]\n embeddings.embed_documents(documents)\n \"\"\"\n try:\n from elasticsearch import Elasticsearch\n from elasticsearch.client import MlClient\n except ImportError:\n raise ImportError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/elasticsearch.html"} {"id": "ca090d5f0bef-2", "text": "from elasticsearch.client import MlClient\n except ImportError:\n raise ImportError(\n \"elasticsearch package not found, please install with 'pip install \"\n \"elasticsearch'\"\n )\n es_cloud_id = es_cloud_id or get_from_env(\"es_cloud_id\", \"ES_CLOUD_ID\")\n es_user = es_user or get_from_env(\"es_user\", \"ES_USER\")\n es_password = es_password or get_from_env(\"es_password\", \"ES_PASSWORD\")\n # Connect to Elasticsearch\n es_connection = Elasticsearch(\n cloud_id=es_cloud_id, basic_auth=(es_user, es_password)\n )\n client = MlClient(es_connection)\n return cls(client, model_id, input_field=input_field)\n[docs] @classmethod\n def from_es_connection(\n cls,\n model_id: str,\n es_connection: Elasticsearch,\n input_field: str = \"text_field\",\n ) -> ElasticsearchEmbeddings:\n \"\"\"\n Instantiate embeddings from an existing Elasticsearch connection.\n This method provides a way to create an instance of the ElasticsearchEmbeddings\n class using an existing Elasticsearch connection. The connection object is used\n to create an MlClient, which is then used to initialize the\n ElasticsearchEmbeddings instance.\n Args:\n model_id (str): The model_id of the model deployed in the Elasticsearch\n cluster.\n es_connection (elasticsearch.Elasticsearch): An existing Elasticsearch\n connection object.\n input_field (str, optional): The name of the key for the\n input text field in the document. Defaults to 'text_field'.\n Returns:\n ElasticsearchEmbeddings: An instance of the ElasticsearchEmbeddings class.\n Example:\n .. code-block:: python\n from elasticsearch import Elasticsearch", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/elasticsearch.html"}
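Both constructors ultimately wrap an ``MlClient`` around a cluster that already has a trained text-embedding model deployed. A short end-to-end sketch of the credentials path; the model ID and credential values below are placeholders, not real deployments:

.. code-block:: python

    import os

    from langchain.embeddings import ElasticsearchEmbeddings

    # Placeholder credentials; in practice these would point at a real
    # Elastic Cloud deployment that has an embedding model deployed.
    os.environ["ES_CLOUD_ID"] = "my-deployment:abc123"
    os.environ["ES_USER"] = "elastic"
    os.environ["ES_PASSWORD"] = "changeme"

    embeddings = ElasticsearchEmbeddings.from_credentials(
        "sentence-transformers__all-minilm-l6-v2",  # hypothetical model_id
    )
    doc_vectors = embeddings.embed_documents(["first doc", "second doc"])
    query_vector = embeddings.embed_query("what is in the first doc?")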
{"id": "ca090d5f0bef-3", "text": "Example:\n .. code-block:: python\n from elasticsearch import Elasticsearch\n from langchain.embeddings import ElasticsearchEmbeddings\n # Define the model ID and input field name (if different from default)\n model_id = \"your_model_id\"\n # Optional, only if different from 'text_field'\n input_field = \"your_input_field\"\n # Create Elasticsearch connection\n es_connection = Elasticsearch(\n hosts=[\"localhost:9200\"], http_auth=(\"user\", \"password\")\n )\n # Instantiate ElasticsearchEmbeddings using the existing connection\n embeddings = ElasticsearchEmbeddings.from_es_connection(\n model_id,\n es_connection,\n input_field=input_field,\n )\n documents = [\n \"This is an example document.\",\n \"Another example document to generate embeddings for.\",\n ]\n embeddings.embed_documents(documents)\n \"\"\"\n # Importing MlClient from elasticsearch.client within the method to\n # avoid unnecessary import if the method is not used\n from elasticsearch.client import MlClient\n # Create an MlClient from the given Elasticsearch connection\n client = MlClient(es_connection)\n # Return a new instance of the ElasticsearchEmbeddings class with\n # the MlClient, model_id, and input_field\n return cls(client, model_id, input_field=input_field)\n def _embedding_func(self, texts: List[str]) -> List[List[float]]:\n \"\"\"\n Generate embeddings for the given texts using the Elasticsearch model.\n Args:\n texts (List[str]): A list of text strings to generate embeddings for.\n Returns:\n List[List[float]]: A list of embeddings, one for each text in the input\n list.\n \"\"\"\n response = self.client.infer_trained_model(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/elasticsearch.html"} {"id": "ca090d5f0bef-4", "text": "list.\n \"\"\"\n response = self.client.infer_trained_model(\n model_id=self.model_id, docs=[{self.input_field: text} for text in texts]\n )\n embeddings = [doc[\"predicted_value\"] for doc in response[\"inference_results\"]]\n return embeddings\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"\n Generate embeddings for a list of documents.\n Args:\n texts (List[str]): A list of document text strings to generate embeddings\n for.\n Returns:\n List[List[float]]: A list of embeddings, one for each document in the input\n list.\n \"\"\"\n return self._embedding_func(texts)\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"\n Generate an embedding for a single query text.\n Args:\n text (str): The query text to generate an embedding for.\n Returns:\n List[float]: The embedding for the input query text.\n \"\"\"\n return self._embedding_func([text])[0]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/elasticsearch.html"} {"id": "0a95e5913f07-0", "text": "Source code for langchain.memory.buffer_window\nfrom typing import Any, Dict, List\nfrom langchain.memory.chat_memory import BaseChatMemory\nfrom langchain.schema.messages import BaseMessage, get_buffer_string\n[docs]class ConversationBufferWindowMemory(BaseChatMemory):\n \"\"\"Buffer for storing conversation memory.\"\"\"\n human_prefix: str = \"Human\"\n ai_prefix: str = \"AI\"\n memory_key: str = \"history\" #: :meta private:\n k: int = 5\n @property\n def buffer(self) -> List[BaseMessage]:\n \"\"\"Message buffer of memory.\"\"\"\n return self.chat_memory.messages\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Will always return list of memory variables.\n :meta private:\n \"\"\"\n return 
[self.memory_key]\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:\n \"\"\"Return history buffer.\"\"\"\n buffer: Any = self.buffer[-self.k * 2 :] if self.k > 0 else []\n if not self.return_messages:\n buffer = get_buffer_string(\n buffer,\n human_prefix=self.human_prefix,\n ai_prefix=self.ai_prefix,\n )\n return {self.memory_key: buffer}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/buffer_window.html"} {"id": "df2884a95aab-0", "text": "Source code for langchain.memory.readonly\nfrom typing import Any, Dict, List\nfrom langchain.schema import BaseMemory\n[docs]class ReadOnlySharedMemory(BaseMemory):\n \"\"\"A memory wrapper that is read-only and cannot be changed.\"\"\"\n memory: BaseMemory\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Return memory variables.\"\"\"\n return self.memory.memory_variables\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:\n \"\"\"Load memory variables from memory.\"\"\"\n return self.memory.load_memory_variables(inputs)\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Nothing should be saved or changed\"\"\"\n pass\n[docs] def clear(self) -> None:\n \"\"\"Nothing to clear, got a memory like a vault.\"\"\"\n pass", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/readonly.html"} {"id": "1d33b49f89f9-0", "text": "Source code for langchain.memory.summary\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Type\nfrom pydantic import BaseModel, root_validator\nfrom langchain.chains.llm import LLMChain\nfrom langchain.memory.chat_memory import BaseChatMemory\nfrom langchain.memory.prompt import SUMMARY_PROMPT\nfrom langchain.schema import (\n BaseChatMessageHistory,\n BasePromptTemplate,\n)\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.schema.messages import BaseMessage, SystemMessage, get_buffer_string\n[docs]class SummarizerMixin(BaseModel):\n human_prefix: str = \"Human\"\n ai_prefix: str = \"AI\"\n llm: BaseLanguageModel\n prompt: BasePromptTemplate = SUMMARY_PROMPT\n summary_message_cls: Type[BaseMessage] = SystemMessage\n[docs] def predict_new_summary(\n self, messages: List[BaseMessage], existing_summary: str\n ) -> str:\n new_lines = get_buffer_string(\n messages,\n human_prefix=self.human_prefix,\n ai_prefix=self.ai_prefix,\n )\n chain = LLMChain(llm=self.llm, prompt=self.prompt)\n return chain.predict(summary=existing_summary, new_lines=new_lines)\n[docs]class ConversationSummaryMemory(BaseChatMemory, SummarizerMixin):\n \"\"\"Conversation summarizer to memory.\"\"\"\n buffer: str = \"\"\n memory_key: str = \"history\" #: :meta private:\n[docs] @classmethod\n def from_messages(\n cls,\n llm: BaseLanguageModel,\n chat_memory: BaseChatMessageHistory,\n *,\n summarize_step: int = 2,\n **kwargs: Any,\n ) -> ConversationSummaryMemory:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/summary.html"} {"id": "1d33b49f89f9-1", "text": "**kwargs: Any,\n ) -> ConversationSummaryMemory:\n obj = cls(llm=llm, chat_memory=chat_memory, **kwargs)\n for i in range(0, len(obj.chat_memory.messages), summarize_step):\n obj.buffer = obj.predict_new_summary(\n obj.chat_memory.messages[i : i + summarize_step], obj.buffer\n )\n return obj\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Will always return list of memory variables.\n :meta private:\n \"\"\"\n return 
[self.memory_key]\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Return history buffer.\"\"\"\n if self.return_messages:\n buffer: Any = [self.summary_message_cls(content=self.buffer)]\n else:\n buffer = self.buffer\n return {self.memory_key: buffer}\n[docs] @root_validator()\n def validate_prompt_input_variables(cls, values: Dict) -> Dict:\n \"\"\"Validate that prompt input variables are consistent.\"\"\"\n prompt_variables = values[\"prompt\"].input_variables\n expected_keys = {\"summary\", \"new_lines\"}\n if expected_keys != set(prompt_variables):\n raise ValueError(\n \"Got unexpected prompt input variables. The prompt expects \"\n f\"{prompt_variables}, but it should have {expected_keys}.\"\n )\n return values\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Save context from this conversation to buffer.\"\"\"\n super().save_context(inputs, outputs)\n self.buffer = self.predict_new_summary(\n self.chat_memory.messages[-2:], self.buffer\n )\n[docs] def clear(self) -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/summary.html"} {"id": "1d33b49f89f9-2", "text": ")\n[docs] def clear(self) -> None:\n \"\"\"Clear memory contents.\"\"\"\n super().clear()\n self.buffer = \"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/summary.html"} {"id": "0d197775b5c1-0", "text": "Source code for langchain.memory.entity\nimport logging\nfrom abc import ABC, abstractmethod\nfrom itertools import islice\nfrom typing import Any, Dict, Iterable, List, Optional\nfrom pydantic import BaseModel, Field\nfrom langchain.chains.llm import LLMChain\nfrom langchain.memory.chat_memory import BaseChatMemory\nfrom langchain.memory.prompt import (\n ENTITY_EXTRACTION_PROMPT,\n ENTITY_SUMMARIZATION_PROMPT,\n)\nfrom langchain.memory.utils import get_prompt_input_key\nfrom langchain.schema import BasePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.schema.messages import BaseMessage, get_buffer_string\nlogger = logging.getLogger(__name__)\n[docs]class BaseEntityStore(BaseModel, ABC):\n[docs] @abstractmethod\n def get(self, key: str, default: Optional[str] = None) -> Optional[str]:\n \"\"\"Get entity value from store.\"\"\"\n pass\n[docs] @abstractmethod\n def set(self, key: str, value: Optional[str]) -> None:\n \"\"\"Set entity value in store.\"\"\"\n pass\n[docs] @abstractmethod\n def delete(self, key: str) -> None:\n \"\"\"Delete entity value from store.\"\"\"\n pass\n[docs] @abstractmethod\n def exists(self, key: str) -> bool:\n \"\"\"Check if entity exists in store.\"\"\"\n pass\n[docs] @abstractmethod\n def clear(self) -> None:\n \"\"\"Delete all entities from store.\"\"\"\n pass\n[docs]class InMemoryEntityStore(BaseEntityStore):\n \"\"\"Basic in-memory entity store.\"\"\"\n store: Dict[str, Optional[str]] = {}\n[docs] def get(self, key: str, default: Optional[str] = None) -> Optional[str]:\n return self.store.get(key, default)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/entity.html"} {"id": "0d197775b5c1-1", "text": "return self.store.get(key, default)\n[docs] def set(self, key: str, value: Optional[str]) -> None:\n self.store[key] = value\n[docs] def delete(self, key: str) -> None:\n del self.store[key]\n[docs] def exists(self, key: str) -> bool:\n return key in self.store\n[docs] def clear(self) -> None:\n return self.store.clear()\n[docs]class 
RedisEntityStore(BaseEntityStore):\n \"\"\"Redis-backed Entity store. Entities get a TTL of 1 day by default, and\n that TTL is extended by 3 days every time the entity is read back.\n \"\"\"\n redis_client: Any\n session_id: str = \"default\"\n key_prefix: str = \"memory_store\"\n ttl: Optional[int] = 60 * 60 * 24\n recall_ttl: Optional[int] = 60 * 60 * 24 * 3\n def __init__(\n self,\n session_id: str = \"default\",\n url: str = \"redis://localhost:6379/0\",\n key_prefix: str = \"memory_store\",\n ttl: Optional[int] = 60 * 60 * 24,\n recall_ttl: Optional[int] = 60 * 60 * 24 * 3,\n *args: Any,\n **kwargs: Any,\n ):\n try:\n import redis\n except ImportError:\n raise ImportError(\n \"Could not import redis python package. \"\n \"Please install it with `pip install redis`.\"\n )\n super().__init__(*args, **kwargs)\n try:\n self.redis_client = redis.Redis.from_url(url=url, decode_responses=True)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/entity.html"} {"id": "0d197775b5c1-2", "text": "self.redis_client = redis.Redis.from_url(url=url, decode_responses=True)\n except redis.exceptions.ConnectionError as error:\n logger.error(error)\n self.session_id = session_id\n self.key_prefix = key_prefix\n self.ttl = ttl\n self.recall_ttl = recall_ttl or ttl\n @property\n def full_key_prefix(self) -> str:\n return f\"{self.key_prefix}:{self.session_id}\"\n[docs] def get(self, key: str, default: Optional[str] = None) -> Optional[str]:\n res = (\n self.redis_client.getex(f\"{self.full_key_prefix}:{key}\", ex=self.recall_ttl)\n or default\n or \"\"\n )\n logger.debug(f\"REDIS MEM get '{self.full_key_prefix}:{key}': '{res}'\")\n return res\n[docs] def set(self, key: str, value: Optional[str]) -> None:\n if not value:\n return self.delete(key)\n self.redis_client.set(f\"{self.full_key_prefix}:{key}\", value, ex=self.ttl)\n logger.debug(\n f\"REDIS MEM set '{self.full_key_prefix}:{key}': '{value}' EX {self.ttl}\"\n )\n[docs] def delete(self, key: str) -> None:\n self.redis_client.delete(f\"{self.full_key_prefix}:{key}\")\n[docs] def exists(self, key: str) -> bool:\n return self.redis_client.exists(f\"{self.full_key_prefix}:{key}\") == 1\n[docs] def clear(self) -> None:\n # iterate a list in batches of size batch_size\n def batched(iterable: Iterable[Any], batch_size: int) -> Iterable[Any]:\n iterator = iter(iterable)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/entity.html"} {"id": "0d197775b5c1-3", "text": "iterator = iter(iterable)\n while batch := list(islice(iterator, batch_size)):\n yield batch\n for keybatch in batched(\n self.redis_client.scan_iter(f\"{self.full_key_prefix}:*\"), 500\n ):\n self.redis_client.delete(*keybatch)\n[docs]class SQLiteEntityStore(BaseEntityStore):\n \"\"\"SQLite-backed Entity store\"\"\"\n session_id: str = \"default\"\n table_name: str = \"memory_store\"\n def __init__(\n self,\n session_id: str = \"default\",\n db_file: str = \"entities.db\",\n table_name: str = \"memory_store\",\n *args: Any,\n **kwargs: Any,\n ):\n try:\n import sqlite3\n except ImportError:\n raise ImportError(\n \"Could not import sqlite3 python package. 
\"\n \"Please install it with `pip install sqlite3`.\"\n )\n super().__init__(*args, **kwargs)\n self.conn = sqlite3.connect(db_file)\n self.session_id = session_id\n self.table_name = table_name\n self._create_table_if_not_exists()\n @property\n def full_table_name(self) -> str:\n return f\"{self.table_name}_{self.session_id}\"\n def _create_table_if_not_exists(self) -> None:\n create_table_query = f\"\"\"\n CREATE TABLE IF NOT EXISTS {self.full_table_name} (\n key TEXT PRIMARY KEY,\n value TEXT\n )\n \"\"\"\n with self.conn:\n self.conn.execute(create_table_query)\n[docs] def get(self, key: str, default: Optional[str] = None) -> Optional[str]:\n query = f\"\"\"\n SELECT value", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/entity.html"} {"id": "0d197775b5c1-4", "text": "query = f\"\"\"\n SELECT value\n FROM {self.full_table_name}\n WHERE key = ?\n \"\"\"\n cursor = self.conn.execute(query, (key,))\n result = cursor.fetchone()\n if result is not None:\n value = result[0]\n return value\n return default\n[docs] def set(self, key: str, value: Optional[str]) -> None:\n if not value:\n return self.delete(key)\n query = f\"\"\"\n INSERT OR REPLACE INTO {self.full_table_name} (key, value)\n VALUES (?, ?)\n \"\"\"\n with self.conn:\n self.conn.execute(query, (key, value))\n[docs] def delete(self, key: str) -> None:\n query = f\"\"\"\n DELETE FROM {self.full_table_name}\n WHERE key = ?\n \"\"\"\n with self.conn:\n self.conn.execute(query, (key,))\n[docs] def exists(self, key: str) -> bool:\n query = f\"\"\"\n SELECT 1\n FROM {self.full_table_name}\n WHERE key = ?\n LIMIT 1\n \"\"\"\n cursor = self.conn.execute(query, (key,))\n result = cursor.fetchone()\n return result is not None\n[docs] def clear(self) -> None:\n query = f\"\"\"\n DELETE FROM {self.full_table_name}\n \"\"\"\n with self.conn:\n self.conn.execute(query)\n[docs]class ConversationEntityMemory(BaseChatMemory):\n \"\"\"Entity extractor & summarizer memory.\n Extracts named entities from the recent chat history and generates summaries.\n With a swapable entity store, persisting entities across conversations.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/entity.html"} {"id": "0d197775b5c1-5", "text": "With a swapable entity store, persisting entities across conversations.\n Defaults to an in-memory entity store, and can be swapped out for a Redis,\n SQLite, or other entity store.\n \"\"\"\n human_prefix: str = \"Human\"\n ai_prefix: str = \"AI\"\n llm: BaseLanguageModel\n entity_extraction_prompt: BasePromptTemplate = ENTITY_EXTRACTION_PROMPT\n entity_summarization_prompt: BasePromptTemplate = ENTITY_SUMMARIZATION_PROMPT\n # Cache of recently detected entity names, if any\n # It is updated when load_memory_variables is called:\n entity_cache: List[str] = []\n # Number of recent message pairs to consider when updating entities:\n k: int = 3\n chat_history_key: str = \"history\"\n # Store to manage entity-related data:\n entity_store: BaseEntityStore = Field(default_factory=InMemoryEntityStore)\n @property\n def buffer(self) -> List[BaseMessage]:\n \"\"\"Access chat memory messages.\"\"\"\n return self.chat_memory.messages\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Will always return list of memory variables.\n :meta private:\n \"\"\"\n return [\"entities\", self.chat_history_key]\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"\n Returns chat history and all generated entities with summaries if 
available,\n and updates or clears the recent entity cache.\n New entity name can be found when calling this method, before the entity\n summaries are generated, so the entity cache values may be empty if no entity\n descriptions are generated yet.\n \"\"\"\n # Create an LLMChain for predicting entity names from the recent chat history:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/entity.html"} {"id": "0d197775b5c1-6", "text": "# Create an LLMChain for predicting entity names from the recent chat history:\n chain = LLMChain(llm=self.llm, prompt=self.entity_extraction_prompt)\n if self.input_key is None:\n prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)\n else:\n prompt_input_key = self.input_key\n # Extract an arbitrary window of the last message pairs from\n # the chat history, where the hyperparameter k is the\n # number of message pairs:\n buffer_string = get_buffer_string(\n self.buffer[-self.k * 2 :],\n human_prefix=self.human_prefix,\n ai_prefix=self.ai_prefix,\n )\n # Generates a comma-separated list of named entities,\n # e.g. \"Jane, White House, UFO\"\n # or \"NONE\" if no named entities are extracted:\n output = chain.predict(\n history=buffer_string,\n input=inputs[prompt_input_key],\n )\n # If no named entities are extracted, assigns an empty list.\n if output.strip() == \"NONE\":\n entities = []\n else:\n # Make a list of the extracted entities:\n entities = [w.strip() for w in output.split(\",\")]\n # Make a dictionary of entities with summary if exists:\n entity_summaries = {}\n for entity in entities:\n entity_summaries[entity] = self.entity_store.get(entity, \"\")\n # Replaces the entity name cache with the most recently discussed entities,\n # or if no entities were extracted, clears the cache:\n self.entity_cache = entities\n # Should we return as message objects or as a string?\n if self.return_messages:\n # Get last `k` pair of chat messages:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/entity.html"} {"id": "0d197775b5c1-7", "text": "if self.return_messages:\n # Get last `k` pair of chat messages:\n buffer: Any = self.buffer[-self.k * 2 :]\n else:\n # Reuse the string we made earlier:\n buffer = buffer_string\n return {\n self.chat_history_key: buffer,\n \"entities\": entity_summaries,\n }\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"\n Save context from this conversation history to the entity store.\n Generates a summary for each entity in the entity cache by prompting\n the model, and saves these summaries to the entity store.\n \"\"\"\n super().save_context(inputs, outputs)\n if self.input_key is None:\n prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)\n else:\n prompt_input_key = self.input_key\n # Extract an arbitrary window of the last message pairs from\n # the chat history, where the hyperparameter k is the\n # number of message pairs:\n buffer_string = get_buffer_string(\n self.buffer[-self.k * 2 :],\n human_prefix=self.human_prefix,\n ai_prefix=self.ai_prefix,\n )\n input_data = inputs[prompt_input_key]\n # Create an LLMChain for predicting entity summarization from the context\n chain = LLMChain(llm=self.llm, prompt=self.entity_summarization_prompt)\n # Generate new summaries for entities and save them in the entity store\n for entity in self.entity_cache:\n # Get existing summary if it exists\n existing_summary = self.entity_store.get(entity, \"\")\n output = chain.predict(\n 
summary=existing_summary,\n entity=entity,\n history=buffer_string,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/entity.html"} {"id": "0d197775b5c1-8", "text": "summary=existing_summary,\n entity=entity,\n history=buffer_string,\n input=input_data,\n )\n # Save the updated summary to the entity store\n self.entity_store.set(entity, output.strip())\n[docs] def clear(self) -> None:\n \"\"\"Clear memory contents.\"\"\"\n self.chat_memory.clear()\n self.entity_cache.clear()\n self.entity_store.clear()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/entity.html"} {"id": "1df6fcb43524-0", "text": "Source code for langchain.memory.buffer\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import root_validator\nfrom langchain.memory.chat_memory import BaseChatMemory, BaseMemory\nfrom langchain.memory.utils import get_prompt_input_key\nfrom langchain.schema.messages import get_buffer_string\n[docs]class ConversationBufferMemory(BaseChatMemory):\n \"\"\"Buffer for storing conversation memory.\"\"\"\n human_prefix: str = \"Human\"\n ai_prefix: str = \"AI\"\n memory_key: str = \"history\" #: :meta private:\n @property\n def buffer(self) -> Any:\n \"\"\"String buffer of memory.\"\"\"\n if self.return_messages:\n return self.chat_memory.messages\n else:\n return get_buffer_string(\n self.chat_memory.messages,\n human_prefix=self.human_prefix,\n ai_prefix=self.ai_prefix,\n )\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Will always return list of memory variables.\n :meta private:\n \"\"\"\n return [self.memory_key]\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Return history buffer.\"\"\"\n return {self.memory_key: self.buffer}\n[docs]class ConversationStringBufferMemory(BaseMemory):\n \"\"\"Buffer for storing conversation memory.\"\"\"\n human_prefix: str = \"Human\"\n ai_prefix: str = \"AI\"\n \"\"\"Prefix to use for AI generated responses.\"\"\"\n buffer: str = \"\"\n output_key: Optional[str] = None\n input_key: Optional[str] = None\n memory_key: str = \"history\" #: :meta private:\n[docs] @root_validator()\n def validate_chains(cls, values: Dict) -> Dict:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/buffer.html"} {"id": "1df6fcb43524-1", "text": "def validate_chains(cls, values: Dict) -> Dict:\n \"\"\"Validate that return messages is not True.\"\"\"\n if values.get(\"return_messages\", False):\n raise ValueError(\n \"return_messages must be False for ConversationStringBufferMemory\"\n )\n return values\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Will always return list of memory variables.\n :meta private:\n \"\"\"\n return [self.memory_key]\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:\n \"\"\"Return history buffer.\"\"\"\n return {self.memory_key: self.buffer}\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Save context from this conversation to buffer.\"\"\"\n if self.input_key is None:\n prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)\n else:\n prompt_input_key = self.input_key\n if self.output_key is None:\n if len(outputs) != 1:\n raise ValueError(f\"One output key expected, got {outputs.keys()}\")\n output_key = list(outputs.keys())[0]\n else:\n output_key = self.output_key\n human = f\"{self.human_prefix}: \" + inputs[prompt_input_key]\n ai = f\"{self.ai_prefix}: \" + 
outputs[output_key]\n self.buffer += \"\\n\" + \"\\n\".join([human, ai])\n[docs] def clear(self) -> None:\n \"\"\"Clear memory contents.\"\"\"\n self.buffer = \"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/buffer.html"} {"id": "809b8bdd9d6c-0", "text": "Source code for langchain.memory.chat_memory\nfrom abc import ABC\nfrom typing import Any, Dict, Optional, Tuple\nfrom pydantic import Field\nfrom langchain.memory.chat_message_histories.in_memory import ChatMessageHistory\nfrom langchain.memory.utils import get_prompt_input_key\nfrom langchain.schema import BaseChatMessageHistory, BaseMemory\n[docs]class BaseChatMemory(BaseMemory, ABC):\n chat_memory: BaseChatMessageHistory = Field(default_factory=ChatMessageHistory)\n output_key: Optional[str] = None\n input_key: Optional[str] = None\n return_messages: bool = False\n def _get_input_output(\n self, inputs: Dict[str, Any], outputs: Dict[str, str]\n ) -> Tuple[str, str]:\n if self.input_key is None:\n prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)\n else:\n prompt_input_key = self.input_key\n if self.output_key is None:\n if len(outputs) != 1:\n raise ValueError(f\"One output key expected, got {outputs.keys()}\")\n output_key = list(outputs.keys())[0]\n else:\n output_key = self.output_key\n return inputs[prompt_input_key], outputs[output_key]\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Save context from this conversation to buffer.\"\"\"\n input_str, output_str = self._get_input_output(inputs, outputs)\n self.chat_memory.add_user_message(input_str)\n self.chat_memory.add_ai_message(output_str)\n[docs] def clear(self) -> None:\n \"\"\"Clear memory contents.\"\"\"\n self.chat_memory.clear()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_memory.html"} {"id": "1cd6af2f0244-0", "text": "Source code for langchain.memory.vectorstore\n\"\"\"Class for a VectorStore-backed memory object.\"\"\"\nfrom typing import Any, Dict, List, Optional, Union\nfrom pydantic import Field\nfrom langchain.memory.chat_memory import BaseMemory\nfrom langchain.memory.utils import get_prompt_input_key\nfrom langchain.schema import Document\nfrom langchain.vectorstores.base import VectorStoreRetriever\n[docs]class VectorStoreRetrieverMemory(BaseMemory):\n \"\"\"Class for a VectorStore-backed memory object.\"\"\"\n retriever: VectorStoreRetriever = Field(exclude=True)\n \"\"\"VectorStoreRetriever object to connect to.\"\"\"\n memory_key: str = \"history\" #: :meta private:\n \"\"\"Key name to locate the memories in the result of load_memory_variables.\"\"\"\n input_key: Optional[str] = None\n \"\"\"Key name to index the inputs to load_memory_variables.\"\"\"\n return_docs: bool = False\n \"\"\"Whether or not to return the result of querying the database directly.\"\"\"\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"The list of keys emitted from the load_memory_variables method.\"\"\"\n return [self.memory_key]\n def _get_prompt_input_key(self, inputs: Dict[str, Any]) -> str:\n \"\"\"Get the input key for the prompt.\"\"\"\n if self.input_key is None:\n return get_prompt_input_key(inputs, self.memory_variables)\n return self.input_key\n[docs] def load_memory_variables(\n self, inputs: Dict[str, Any]\n ) -> Dict[str, Union[List[Document], str]]:\n \"\"\"Return history buffer.\"\"\"\n input_key = self._get_prompt_input_key(inputs)\n query = inputs[input_key]\n docs = 
self.retriever.get_relevant_documents(query)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/vectorstore.html"} {"id": "1cd6af2f0244-1", "text": "docs = self.retriever.get_relevant_documents(query)\n result: Union[List[Document], str]\n if not self.return_docs:\n result = \"\\n\".join([doc.page_content for doc in docs])\n else:\n result = docs\n return {self.memory_key: result}\n def _form_documents(\n self, inputs: Dict[str, Any], outputs: Dict[str, str]\n ) -> List[Document]:\n \"\"\"Format context from this conversation to buffer.\"\"\"\n # Each document should only include the current turn, not the chat history\n filtered_inputs = {k: v for k, v in inputs.items() if k != self.memory_key}\n texts = [\n f\"{k}: {v}\"\n for k, v in list(filtered_inputs.items()) + list(outputs.items())\n ]\n page_content = \"\\n\".join(texts)\n return [Document(page_content=page_content)]\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Save context from this conversation to buffer.\"\"\"\n documents = self._form_documents(inputs, outputs)\n self.retriever.add_documents(documents)\n[docs] def clear(self) -> None:\n \"\"\"Nothing to clear.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/vectorstore.html"} {"id": "b3745250c5dd-0", "text": "Source code for langchain.memory.kg\nfrom typing import Any, Dict, List, Type, Union\nfrom pydantic import Field\nfrom langchain.chains.llm import LLMChain\nfrom langchain.graphs import NetworkxEntityGraph\nfrom langchain.graphs.networkx_graph import KnowledgeTriple, get_entities, parse_triples\nfrom langchain.memory.chat_memory import BaseChatMemory\nfrom langchain.memory.prompt import (\n ENTITY_EXTRACTION_PROMPT,\n KNOWLEDGE_TRIPLE_EXTRACTION_PROMPT,\n)\nfrom langchain.memory.utils import get_prompt_input_key\nfrom langchain.schema import BasePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.schema.messages import BaseMessage, SystemMessage, get_buffer_string\n[docs]class ConversationKGMemory(BaseChatMemory):\n \"\"\"Knowledge graph memory for storing conversation memory.\n Integrates with external knowledge graph to store and retrieve\n information about knowledge triples in the conversation.\n \"\"\"\n k: int = 2\n human_prefix: str = \"Human\"\n ai_prefix: str = \"AI\"\n kg: NetworkxEntityGraph = Field(default_factory=NetworkxEntityGraph)\n knowledge_extraction_prompt: BasePromptTemplate = KNOWLEDGE_TRIPLE_EXTRACTION_PROMPT\n entity_extraction_prompt: BasePromptTemplate = ENTITY_EXTRACTION_PROMPT\n llm: BaseLanguageModel\n summary_message_cls: Type[BaseMessage] = SystemMessage\n \"\"\"Number of previous utterances to include in the context.\"\"\"\n memory_key: str = \"history\" #: :meta private:\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Return history buffer.\"\"\"\n entities = self._get_current_entities(inputs)\n summary_strings = []\n for entity in entities:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/kg.html"} {"id": "b3745250c5dd-1", "text": "summary_strings = []\n for entity in entities:\n knowledge = self.kg.get_entity_knowledge(entity)\n if knowledge:\n summary = f\"On {entity}: {'. 
'.join(knowledge)}.\"\n summary_strings.append(summary)\n context: Union[str, List]\n if not summary_strings:\n context = [] if self.return_messages else \"\"\n elif self.return_messages:\n context = [\n self.summary_message_cls(content=text) for text in summary_strings\n ]\n else:\n context = \"\\n\".join(summary_strings)\n return {self.memory_key: context}\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Will always return list of memory variables.\n :meta private:\n \"\"\"\n return [self.memory_key]\n def _get_prompt_input_key(self, inputs: Dict[str, Any]) -> str:\n \"\"\"Get the input key for the prompt.\"\"\"\n if self.input_key is None:\n return get_prompt_input_key(inputs, self.memory_variables)\n return self.input_key\n def _get_prompt_output_key(self, outputs: Dict[str, Any]) -> str:\n \"\"\"Get the output key for the prompt.\"\"\"\n if self.output_key is None:\n if len(outputs) != 1:\n raise ValueError(f\"One output key expected, got {outputs.keys()}\")\n return list(outputs.keys())[0]\n return self.output_key\n[docs] def get_current_entities(self, input_string: str) -> List[str]:\n chain = LLMChain(llm=self.llm, prompt=self.entity_extraction_prompt)\n buffer_string = get_buffer_string(\n self.chat_memory.messages[-self.k * 2 :],\n human_prefix=self.human_prefix,\n ai_prefix=self.ai_prefix,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/kg.html"} {"id": "b3745250c5dd-2", "text": "human_prefix=self.human_prefix,\n ai_prefix=self.ai_prefix,\n )\n output = chain.predict(\n history=buffer_string,\n input=input_string,\n )\n return get_entities(output)\n def _get_current_entities(self, inputs: Dict[str, Any]) -> List[str]:\n \"\"\"Get the current entities in the conversation.\"\"\"\n prompt_input_key = self._get_prompt_input_key(inputs)\n return self.get_current_entities(inputs[prompt_input_key])\n[docs] def get_knowledge_triplets(self, input_string: str) -> List[KnowledgeTriple]:\n chain = LLMChain(llm=self.llm, prompt=self.knowledge_extraction_prompt)\n buffer_string = get_buffer_string(\n self.chat_memory.messages[-self.k * 2 :],\n human_prefix=self.human_prefix,\n ai_prefix=self.ai_prefix,\n )\n output = chain.predict(\n history=buffer_string,\n input=input_string,\n verbose=True,\n )\n knowledge = parse_triples(output)\n return knowledge\n def _get_and_update_kg(self, inputs: Dict[str, Any]) -> None:\n \"\"\"Get and update knowledge graph from the conversation history.\"\"\"\n prompt_input_key = self._get_prompt_input_key(inputs)\n knowledge = self.get_knowledge_triplets(inputs[prompt_input_key])\n for triple in knowledge:\n self.kg.add_triple(triple)\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Save context from this conversation to buffer.\"\"\"\n super().save_context(inputs, outputs)\n self._get_and_update_kg(inputs)\n[docs] def clear(self) -> None:\n \"\"\"Clear memory contents.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/kg.html"} {"id": "b3745250c5dd-3", "text": "[docs] def clear(self) -> None:\n \"\"\"Clear memory contents.\"\"\"\n super().clear()\n self.kg.clear()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/kg.html"} {"id": "b72e44086b0e-0", "text": "Source code for langchain.memory.utils\nfrom typing import Any, Dict, List\nfrom langchain.schema.messages import get_buffer_string # noqa: 401\n[docs]def get_prompt_input_key(inputs: Dict[str, Any], memory_variables: List[str]) -> str:\n 
\"\"\"\n Get the prompt input key.\n Args:\n inputs: Dict[str, Any]\n memory_variables: List[str]\n Returns:\n A prompt input key.\n \"\"\"\n # \"stop\" is a special key that can be passed as input but is not used to\n # format the prompt.\n prompt_input_keys = list(set(inputs).difference(memory_variables + [\"stop\"]))\n if len(prompt_input_keys) != 1:\n raise ValueError(f\"One input key expected got {prompt_input_keys}\")\n return prompt_input_keys[0]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/utils.html"} {"id": "a0d4c6a63da7-0", "text": "Source code for langchain.memory.motorhead_memory\nfrom typing import Any, Dict, List, Optional\nimport requests\nfrom langchain.memory.chat_memory import BaseChatMemory\nfrom langchain.schema.messages import get_buffer_string\nMANAGED_URL = \"https://api.getmetal.io/v1/motorhead\"\n# LOCAL_URL = \"http://localhost:8080\"\n[docs]class MotorheadMemory(BaseChatMemory):\n url: str = MANAGED_URL\n timeout = 3000\n memory_key = \"history\"\n session_id: str\n context: Optional[str] = None\n # Managed Params\n api_key: Optional[str] = None\n client_id: Optional[str] = None\n def __get_headers(self) -> Dict[str, str]:\n is_managed = self.url == MANAGED_URL\n headers = {\n \"Content-Type\": \"application/json\",\n }\n if is_managed and not (self.api_key and self.client_id):\n raise ValueError(\n \"\"\"\n You must provide an API key or a client ID to use the managed\n version of Motorhead. Visit https://getmetal.io for more information.\n \"\"\"\n )\n if is_managed and self.api_key and self.client_id:\n headers[\"x-metal-api-key\"] = self.api_key\n headers[\"x-metal-client-id\"] = self.client_id\n return headers\n[docs] async def init(self) -> None:\n res = requests.get(\n f\"{self.url}/sessions/{self.session_id}/memory\",\n timeout=self.timeout,\n headers=self.__get_headers(),\n )\n res_data = res.json()\n res_data = res_data.get(\"data\", res_data) # Handle Managed Version\n messages = res_data.get(\"messages\", [])", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/motorhead_memory.html"} {"id": "a0d4c6a63da7-1", "text": "messages = res_data.get(\"messages\", [])\n context = res_data.get(\"context\", \"NONE\")\n for message in reversed(messages):\n if message[\"role\"] == \"AI\":\n self.chat_memory.add_ai_message(message[\"content\"])\n else:\n self.chat_memory.add_user_message(message[\"content\"])\n if context and context != \"NONE\":\n self.context = context\n[docs] def load_memory_variables(self, values: Dict[str, Any]) -> Dict[str, Any]:\n if self.return_messages:\n return {self.memory_key: self.chat_memory.messages}\n else:\n return {self.memory_key: get_buffer_string(self.chat_memory.messages)}\n @property\n def memory_variables(self) -> List[str]:\n return [self.memory_key]\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n input_str, output_str = self._get_input_output(inputs, outputs)\n requests.post(\n f\"{self.url}/sessions/{self.session_id}/memory\",\n timeout=self.timeout,\n json={\n \"messages\": [\n {\"role\": \"Human\", \"content\": f\"{input_str}\"},\n {\"role\": \"AI\", \"content\": f\"{output_str}\"},\n ]\n },\n headers=self.__get_headers(),\n )\n super().save_context(inputs, outputs)\n[docs] def delete_session(self) -> None:\n \"\"\"Delete a session\"\"\"\n requests.delete(f\"{self.url}/sessions/{self.session_id}/memory\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/motorhead_memory.html"} 
{"id": "6d472a7f0c97-0", "text": "Source code for langchain.memory.simple\nfrom typing import Any, Dict, List\nfrom langchain.schema import BaseMemory\n[docs]class SimpleMemory(BaseMemory):\n \"\"\"Simple memory for storing context or other bits of information that shouldn't\n ever change between prompts.\n \"\"\"\n memories: Dict[str, Any] = dict()\n @property\n def memory_variables(self) -> List[str]:\n return list(self.memories.keys())\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:\n return self.memories\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Nothing should be saved or changed, my memory is set in stone.\"\"\"\n pass\n[docs] def clear(self) -> None:\n \"\"\"Nothing to clear, got a memory like a vault.\"\"\"\n pass", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/simple.html"} {"id": "94eef1072d07-0", "text": "Source code for langchain.memory.token_buffer\nfrom typing import Any, Dict, List\nfrom langchain.memory.chat_memory import BaseChatMemory\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.schema.messages import BaseMessage, get_buffer_string\n[docs]class ConversationTokenBufferMemory(BaseChatMemory):\n \"\"\"Buffer for storing conversation memory.\"\"\"\n human_prefix: str = \"Human\"\n ai_prefix: str = \"AI\"\n llm: BaseLanguageModel\n memory_key: str = \"history\"\n max_token_limit: int = 2000\n @property\n def buffer(self) -> List[BaseMessage]:\n \"\"\"String buffer of memory.\"\"\"\n return self.chat_memory.messages\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Will always return list of memory variables.\n :meta private:\n \"\"\"\n return [self.memory_key]\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Return history buffer.\"\"\"\n buffer: Any = self.buffer\n if self.return_messages:\n final_buffer: Any = buffer\n else:\n final_buffer = get_buffer_string(\n buffer,\n human_prefix=self.human_prefix,\n ai_prefix=self.ai_prefix,\n )\n return {self.memory_key: final_buffer}\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Save context from this conversation to buffer. 
Pruned.\"\"\"\n super().save_context(inputs, outputs)\n # Prune buffer if it exceeds max token limit\n buffer = self.chat_memory.messages\n curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)\n if curr_buffer_length > self.max_token_limit:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/token_buffer.html"} {"id": "94eef1072d07-1", "text": "if curr_buffer_length > self.max_token_limit:\n pruned_memory = []\n while curr_buffer_length > self.max_token_limit:\n pruned_memory.append(buffer.pop(0))\n curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/token_buffer.html"} {"id": "7f7b943fd6ec-0", "text": "Source code for langchain.memory.combined\nimport warnings\nfrom typing import Any, Dict, List, Set\nfrom pydantic import validator\nfrom langchain.memory.chat_memory import BaseChatMemory\nfrom langchain.schema import BaseMemory\n[docs]class CombinedMemory(BaseMemory):\n \"\"\"Class for combining multiple memories' data together.\"\"\"\n memories: List[BaseMemory]\n \"\"\"For tracking all the memories that should be accessed.\"\"\"\n[docs] @validator(\"memories\")\n def check_repeated_memory_variable(\n cls, value: List[BaseMemory]\n ) -> List[BaseMemory]:\n all_variables: Set[str] = set()\n for val in value:\n overlap = all_variables.intersection(val.memory_variables)\n if overlap:\n raise ValueError(\n f\"The same variables {overlap} are found in multiple\"\n \"memory object, which is not allowed by CombinedMemory.\"\n )\n all_variables |= set(val.memory_variables)\n return value\n[docs] @validator(\"memories\")\n def check_input_key(cls, value: List[BaseMemory]) -> List[BaseMemory]:\n \"\"\"Check that if memories are of type BaseChatMemory that input keys exist.\"\"\"\n for val in value:\n if isinstance(val, BaseChatMemory):\n if val.input_key is None:\n warnings.warn(\n \"When using CombinedMemory, \"\n \"input keys should be so the input is known. 
\"\n f\" Was not set on {val}\"\n )\n return value\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"All the memory variables that this instance provides.\"\"\"\n \"\"\"Collected from the all the linked memories.\"\"\"\n memory_variables = []\n for memory in self.memories:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/combined.html"} {"id": "7f7b943fd6ec-1", "text": "memory_variables = []\n for memory in self.memories:\n memory_variables.extend(memory.memory_variables)\n return memory_variables\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:\n \"\"\"Load all vars from sub-memories.\"\"\"\n memory_data: Dict[str, Any] = {}\n # Collect vars from all sub-memories\n for memory in self.memories:\n data = memory.load_memory_variables(inputs)\n memory_data = {\n **memory_data,\n **data,\n }\n return memory_data\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Save context from this session for every memory.\"\"\"\n # Save context for all sub-memories\n for memory in self.memories:\n memory.save_context(inputs, outputs)\n[docs] def clear(self) -> None:\n \"\"\"Clear context from this session for every memory.\"\"\"\n for memory in self.memories:\n memory.clear()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/combined.html"} {"id": "3dbcf750ab43-0", "text": "Source code for langchain.memory.summary_buffer\nfrom typing import Any, Dict, List\nfrom pydantic import root_validator\nfrom langchain.memory.chat_memory import BaseChatMemory\nfrom langchain.memory.summary import SummarizerMixin\nfrom langchain.schema.messages import BaseMessage, get_buffer_string\n[docs]class ConversationSummaryBufferMemory(BaseChatMemory, SummarizerMixin):\n \"\"\"Buffer with summarizer for storing conversation memory.\"\"\"\n max_token_limit: int = 2000\n moving_summary_buffer: str = \"\"\n memory_key: str = \"history\"\n @property\n def buffer(self) -> List[BaseMessage]:\n return self.chat_memory.messages\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Will always return list of memory variables.\n :meta private:\n \"\"\"\n return [self.memory_key]\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Return history buffer.\"\"\"\n buffer = self.buffer\n if self.moving_summary_buffer != \"\":\n first_messages: List[BaseMessage] = [\n self.summary_message_cls(content=self.moving_summary_buffer)\n ]\n buffer = first_messages + buffer\n if self.return_messages:\n final_buffer: Any = buffer\n else:\n final_buffer = get_buffer_string(\n buffer, human_prefix=self.human_prefix, ai_prefix=self.ai_prefix\n )\n return {self.memory_key: final_buffer}\n[docs] @root_validator()\n def validate_prompt_input_variables(cls, values: Dict) -> Dict:\n \"\"\"Validate that prompt input variables are consistent.\"\"\"\n prompt_variables = values[\"prompt\"].input_variables\n expected_keys = {\"summary\", \"new_lines\"}\n if expected_keys != set(prompt_variables):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/summary_buffer.html"} {"id": "3dbcf750ab43-1", "text": "if expected_keys != set(prompt_variables):\n raise ValueError(\n \"Got unexpected prompt input variables. 
Source code for langchain.memory.combined

import warnings
from typing import Any, Dict, List, Set

from pydantic import validator

from langchain.memory.chat_memory import BaseChatMemory
from langchain.schema import BaseMemory


class CombinedMemory(BaseMemory):
    """Class for combining multiple memories' data together."""

    memories: List[BaseMemory]
    """For tracking all the memories that should be accessed."""

    @validator("memories")
    def check_repeated_memory_variable(
        cls, value: List[BaseMemory]
    ) -> List[BaseMemory]:
        all_variables: Set[str] = set()
        for val in value:
            overlap = all_variables.intersection(val.memory_variables)
            if overlap:
                raise ValueError(
                    f"The same variables {overlap} are found in multiple "
                    "memory objects, which is not allowed by CombinedMemory."
                )
            all_variables |= set(val.memory_variables)
        return value

    @validator("memories")
    def check_input_key(cls, value: List[BaseMemory]) -> List[BaseMemory]:
        """Check that if memories are of type BaseChatMemory that input keys exist."""
        for val in value:
            if isinstance(val, BaseChatMemory):
                if val.input_key is None:
                    warnings.warn(
                        "When using CombinedMemory, "
                        "input keys should be set so the input is known. "
                        f"Was not set on {val}"
                    )
        return value

    @property
    def memory_variables(self) -> List[str]:
        """All the memory variables that this instance provides.
        Collected from all the linked memories.
        """
        memory_variables = []
        for memory in self.memories:
            memory_variables.extend(memory.memory_variables)
        return memory_variables

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        """Load all vars from sub-memories."""
        memory_data: Dict[str, Any] = {}
        # Collect vars from all sub-memories
        for memory in self.memories:
            data = memory.load_memory_variables(inputs)
            memory_data = {
                **memory_data,
                **data,
            }
        return memory_data

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        """Save context from this session for every memory."""
        # Save context for all sub-memories
        for memory in self.memories:
            memory.save_context(inputs, outputs)

    def clear(self) -> None:
        """Clear context from this session for every memory."""
        for memory in self.memories:
            memory.clear()
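A sketch of combining two memories (assumes an OpenAI key; keys are illustrative)::

    from langchain.llms import OpenAI
    from langchain.memory import (
        CombinedMemory,
        ConversationBufferMemory,
        ConversationSummaryMemory,
    )

    # Distinct memory_key values and explicit input_key values satisfy
    # the two validators above.
    memory = CombinedMemory(
        memories=[
            ConversationBufferMemory(memory_key="chat_history", input_key="input"),
            ConversationSummaryMemory(
                llm=OpenAI(), memory_key="summary", input_key="input"
            ),
        ]
    )
    print(memory.memory_variables)  # ['chat_history', 'summary']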
\"\n \"Please install it with `pip install cassio`.\"\n )\n self.session_id = session_id\n self.ttl_seconds = ttl_seconds\n self.blob_history = StoredBlobHistory(session, keyspace, table_name)\n @property", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/cassandra.html"} {"id": "40a540b201d1-1", "text": "@property\n def messages(self) -> List[BaseMessage]: # type: ignore\n \"\"\"Retrieve all session messages from DB\"\"\"\n message_blobs = self.blob_history.retrieve(\n self.session_id,\n )\n items = [json.loads(message_blob) for message_blob in message_blobs]\n messages = messages_from_dict(items)\n return messages\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Write a message to the table\"\"\"\n self.blob_history.store(\n self.session_id, json.dumps(_message_to_dict(message)), self.ttl_seconds\n )\n[docs] def clear(self) -> None:\n \"\"\"Clear session memory from DB\"\"\"\n self.blob_history.clear_session_id(self.session_id)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/cassandra.html"} {"id": "016dc49635d8-0", "text": "Source code for langchain.memory.chat_message_histories.momento\nfrom __future__ import annotations\nimport json\nfrom datetime import timedelta\nfrom typing import TYPE_CHECKING, Any, Optional\nfrom langchain.schema import (\n BaseChatMessageHistory,\n)\nfrom langchain.schema.messages import BaseMessage, _message_to_dict, messages_from_dict\nfrom langchain.utils import get_from_env\nif TYPE_CHECKING:\n import momento\ndef _ensure_cache_exists(cache_client: momento.CacheClient, cache_name: str) -> None:\n \"\"\"Create cache if it doesn't exist.\n Raises:\n SdkException: Momento service or network error\n Exception: Unexpected response\n \"\"\"\n from momento.responses import CreateCache\n create_cache_response = cache_client.create_cache(cache_name)\n if isinstance(create_cache_response, CreateCache.Success) or isinstance(\n create_cache_response, CreateCache.CacheAlreadyExists\n ):\n return None\n elif isinstance(create_cache_response, CreateCache.Error):\n raise create_cache_response.inner_exception\n else:\n raise Exception(f\"Unexpected response cache creation: {create_cache_response}\")\n[docs]class MomentoChatMessageHistory(BaseChatMessageHistory):\n \"\"\"Chat message history cache that uses Momento as a backend.\n See https://gomomento.com/\"\"\"\n def __init__(\n self,\n session_id: str,\n cache_client: momento.CacheClient,\n cache_name: str,\n *,\n key_prefix: str = \"message_store:\",\n ttl: Optional[timedelta] = None,\n ensure_cache_exists: bool = True,\n ):\n \"\"\"Instantiate a chat message history cache that uses Momento as a backend.\n Note: to instantiate the cache client passed to MomentoChatMessageHistory,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/momento.html"} {"id": "016dc49635d8-1", "text": "Note: to instantiate the cache client passed to MomentoChatMessageHistory,\n you must have a Momento account at https://gomomento.com/.\n Args:\n session_id (str): The session ID to use for this chat session.\n cache_client (CacheClient): The Momento cache client.\n cache_name (str): The name of the cache to use to store the messages.\n key_prefix (str, optional): The prefix to apply to the cache key.\n Defaults to \"message_store:\".\n ttl (Optional[timedelta], optional): The TTL to use for the messages.\n Defaults to None, ie the default TTL of the cache will be 
Source code for langchain.memory.chat_message_histories.cassandra

"""Cassandra-based chat message history, based on cassIO."""
from __future__ import annotations

import json
import typing
from typing import List

if typing.TYPE_CHECKING:
    from cassandra.cluster import Session

from langchain.schema import (
    BaseChatMessageHistory,
)
from langchain.schema.messages import BaseMessage, _message_to_dict, messages_from_dict

DEFAULT_TABLE_NAME = "message_store"
DEFAULT_TTL_SECONDS = None


class CassandraChatMessageHistory(BaseChatMessageHistory):
    """Chat message history that stores history in Cassandra.

    Args:
        session_id: arbitrary key that is used to store the messages
            of a single chat session.
        session: a Cassandra `Session` object (an open DB connection)
        keyspace: name of the keyspace to use.
        table_name: name of the table to use.
        ttl_seconds: time-to-live (seconds) for automatic expiration
            of stored entries. None (default) for no expiration.
    """

    def __init__(
        self,
        session_id: str,
        session: Session,
        keyspace: str,
        table_name: str = DEFAULT_TABLE_NAME,
        ttl_seconds: int | None = DEFAULT_TTL_SECONDS,
    ) -> None:
        try:
            from cassio.history import StoredBlobHistory
        except (ImportError, ModuleNotFoundError):
            raise ValueError(
                "Could not import cassio python package. "
                "Please install it with `pip install cassio`."
            )
        self.session_id = session_id
        self.ttl_seconds = ttl_seconds
        self.blob_history = StoredBlobHistory(session, keyspace, table_name)

    @property
    def messages(self) -> List[BaseMessage]:  # type: ignore
        """Retrieve all session messages from DB"""
        message_blobs = self.blob_history.retrieve(
            self.session_id,
        )
        items = [json.loads(message_blob) for message_blob in message_blobs]
        messages = messages_from_dict(items)
        return messages

    def add_message(self, message: BaseMessage) -> None:
        """Write a message to the table"""
        self.blob_history.store(
            self.session_id, json.dumps(_message_to_dict(message)), self.ttl_seconds
        )

    def clear(self) -> None:
        """Clear session memory from DB"""
        self.blob_history.clear_session_id(self.session_id)
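A usage sketch (assumes a reachable Cassandra cluster, an existing keyspace, and the cassio package; contact point and names are illustrative)::

    from cassandra.cluster import Cluster

    from langchain.memory.chat_message_histories.cassandra import (
        CassandraChatMessageHistory,
    )

    session = Cluster(["127.0.0.1"]).connect()
    history = CassandraChatMessageHistory(
        session_id="conversation-42",
        session=session,
        keyspace="chat_keyspace",
        ttl_seconds=60 * 60 * 24,  # expire stored messages after one day
    )
    history.add_user_message("hi!")
    print(history.messages)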
Source code for langchain.memory.chat_message_histories.momento

from __future__ import annotations

import json
from datetime import timedelta
from typing import TYPE_CHECKING, Any, Optional

from langchain.schema import (
    BaseChatMessageHistory,
)
from langchain.schema.messages import BaseMessage, _message_to_dict, messages_from_dict
from langchain.utils import get_from_env

if TYPE_CHECKING:
    import momento


def _ensure_cache_exists(cache_client: momento.CacheClient, cache_name: str) -> None:
    """Create cache if it doesn't exist.

    Raises:
        SdkException: Momento service or network error
        Exception: Unexpected response
    """
    from momento.responses import CreateCache

    create_cache_response = cache_client.create_cache(cache_name)
    if isinstance(create_cache_response, CreateCache.Success) or isinstance(
        create_cache_response, CreateCache.CacheAlreadyExists
    ):
        return None
    elif isinstance(create_cache_response, CreateCache.Error):
        raise create_cache_response.inner_exception
    else:
        raise Exception(f"Unexpected response cache creation: {create_cache_response}")


class MomentoChatMessageHistory(BaseChatMessageHistory):
    """Chat message history cache that uses Momento as a backend.
    See https://gomomento.com/"""

    def __init__(
        self,
        session_id: str,
        cache_client: momento.CacheClient,
        cache_name: str,
        *,
        key_prefix: str = "message_store:",
        ttl: Optional[timedelta] = None,
        ensure_cache_exists: bool = True,
    ):
        """Instantiate a chat message history cache that uses Momento as a backend.

        Note: to instantiate the cache client passed to MomentoChatMessageHistory,
        you must have a Momento account at https://gomomento.com/.

        Args:
            session_id (str): The session ID to use for this chat session.
            cache_client (CacheClient): The Momento cache client.
            cache_name (str): The name of the cache to use to store the messages.
            key_prefix (str, optional): The prefix to apply to the cache key.
                Defaults to "message_store:".
            ttl (Optional[timedelta], optional): The TTL to use for the messages.
                Defaults to None, i.e. the default TTL of the cache will be used.
            ensure_cache_exists (bool, optional): Create the cache if it doesn't exist.
                Defaults to True.

        Raises:
            ImportError: Momento python package is not installed.
            TypeError: cache_client is not of type momento.CacheClientObject
        """
        try:
            from momento import CacheClient
            from momento.requests import CollectionTtl
        except ImportError:
            raise ImportError(
                "Could not import momento python package. "
                "Please install it with `pip install momento`."
            )
        if not isinstance(cache_client, CacheClient):
            raise TypeError("cache_client must be a momento.CacheClient object.")
        if ensure_cache_exists:
            _ensure_cache_exists(cache_client, cache_name)
        self.key = key_prefix + session_id
        self.cache_client = cache_client
        self.cache_name = cache_name
        if ttl is not None:
            self.ttl = CollectionTtl.of(ttl)
        else:
            self.ttl = CollectionTtl.from_cache_ttl()

    @classmethod
    def from_client_params(
        cls,
        session_id: str,
        cache_name: str,
        ttl: timedelta,
        *,
        configuration: Optional[momento.config.Configuration] = None,
        auth_token: Optional[str] = None,
        **kwargs: Any,
    ) -> MomentoChatMessageHistory:
        """Construct cache from CacheClient parameters."""
        try:
            from momento import CacheClient, Configurations, CredentialProvider
        except ImportError:
            raise ImportError(
                "Could not import momento python package. "
                "Please install it with `pip install momento`."
            )
        if configuration is None:
            configuration = Configurations.Laptop.v1()
        auth_token = auth_token or get_from_env("auth_token", "MOMENTO_AUTH_TOKEN")
        credentials = CredentialProvider.from_string(auth_token)
        cache_client = CacheClient(configuration, credentials, default_ttl=ttl)
        return cls(session_id, cache_client, cache_name, ttl=ttl, **kwargs)

    @property
    def messages(self) -> list[BaseMessage]:  # type: ignore[override]
        """Retrieve the messages from Momento.

        Raises:
            SdkException: Momento service or network error
            Exception: Unexpected response

        Returns:
            list[BaseMessage]: List of cached messages
        """
        from momento.responses import CacheListFetch

        fetch_response = self.cache_client.list_fetch(self.cache_name, self.key)
        if isinstance(fetch_response, CacheListFetch.Hit):
            items = [json.loads(m) for m in fetch_response.value_list_string]
            return messages_from_dict(items)
        elif isinstance(fetch_response, CacheListFetch.Miss):
            return []
        elif isinstance(fetch_response, CacheListFetch.Error):
            raise fetch_response.inner_exception
        else:
            raise Exception(f"Unexpected response: {fetch_response}")

    def add_message(self, message: BaseMessage) -> None:
        """Store a message in the cache.

        Args:
            message (BaseMessage): The message object to store.

        Raises:
            SdkException: Momento service or network error.
            Exception: Unexpected response.
        """
        from momento.responses import CacheListPushBack

        item = json.dumps(_message_to_dict(message))
        push_response = self.cache_client.list_push_back(
            self.cache_name, self.key, item, ttl=self.ttl
        )
        if isinstance(push_response, CacheListPushBack.Success):
            return None
        elif isinstance(push_response, CacheListPushBack.Error):
            raise push_response.inner_exception
        else:
            raise Exception(f"Unexpected response: {push_response}")

    def clear(self) -> None:
        """Remove the session's messages from the cache.

        Raises:
            SdkException: Momento service or network error.
            Exception: Unexpected response.
        """
        from momento.responses import CacheDelete

        delete_response = self.cache_client.delete(self.cache_name, self.key)
        if isinstance(delete_response, CacheDelete.Success):
            return None
        elif isinstance(delete_response, CacheDelete.Error):
            raise delete_response.inner_exception
        else:
            raise Exception(f"Unexpected response: {delete_response}")
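A usage sketch via the convenience constructor (reads MOMENTO_AUTH_TOKEN from the environment; names are illustrative)::

    from datetime import timedelta

    from langchain.memory.chat_message_histories.momento import (
        MomentoChatMessageHistory,
    )

    history = MomentoChatMessageHistory.from_client_params(
        "session-1", "langchain-cache", ttl=timedelta(days=1)
    )
    history.add_user_message("hi!")
    print(history.messages)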
Source code for langchain.memory.chat_message_histories.dynamodb

import logging
from typing import List, Optional

from langchain.schema import (
    BaseChatMessageHistory,
)
from langchain.schema.messages import (
    BaseMessage,
    _message_to_dict,
    messages_from_dict,
    messages_to_dict,
)

logger = logging.getLogger(__name__)


class DynamoDBChatMessageHistory(BaseChatMessageHistory):
    """Chat message history that stores history in AWS DynamoDB.
    This class expects that a DynamoDB table with name `table_name`
    and a partition Key of `SessionId` is present.

    Args:
        table_name: name of the DynamoDB table
        session_id: arbitrary key that is used to store the messages
            of a single chat session.
        endpoint_url: URL of the AWS endpoint to connect to. This argument
            is optional and useful for test purposes, like using Localstack.
            If you plan to use AWS cloud service, you normally don't have to
            worry about setting the endpoint_url.
    """

    def __init__(
        self, table_name: str, session_id: str, endpoint_url: Optional[str] = None
    ):
        import boto3

        if endpoint_url:
            client = boto3.resource("dynamodb", endpoint_url=endpoint_url)
        else:
            client = boto3.resource("dynamodb")
        self.table = client.Table(table_name)
        self.session_id = session_id

    @property
    def messages(self) -> List[BaseMessage]:  # type: ignore
        """Retrieve the messages from DynamoDB"""
        from botocore.exceptions import ClientError

        response = None
        try:
            response = self.table.get_item(Key={"SessionId": self.session_id})
        except ClientError as error:
            if error.response["Error"]["Code"] == "ResourceNotFoundException":
                logger.warning("No record found with session id: %s", self.session_id)
            else:
                logger.error(error)

        if response and "Item" in response:
            items = response["Item"]["History"]
        else:
            items = []

        messages = messages_from_dict(items)
        return messages

    def add_message(self, message: BaseMessage) -> None:
        """Append the message to the record in DynamoDB"""
        from botocore.exceptions import ClientError

        messages = messages_to_dict(self.messages)
        _message = _message_to_dict(message)
        messages.append(_message)

        try:
            self.table.put_item(
                Item={"SessionId": self.session_id, "History": messages}
            )
        except ClientError as err:
            logger.error(err)

    def clear(self) -> None:
        """Clear session memory from DynamoDB"""
        from botocore.exceptions import ClientError

        try:
            self.table.delete_item(Key={"SessionId": self.session_id})
        except ClientError as err:
            logger.error(err)
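A usage sketch (assumes a table named "SessionTable" with partition key "SessionId" already exists; the endpoint_url points at a local emulator and would be omitted on AWS)::

    from langchain.memory.chat_message_histories.dynamodb import (
        DynamoDBChatMessageHistory,
    )

    history = DynamoDBChatMessageHistory(
        table_name="SessionTable",
        session_id="session-1",
        endpoint_url="http://localhost:8000",
    )
    history.add_user_message("hi!")
    print(history.messages)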
Source code for langchain.memory.chat_message_histories.redis

import json
import logging
from typing import List, Optional

from langchain.schema import (
    BaseChatMessageHistory,
)
from langchain.schema.messages import BaseMessage, _message_to_dict, messages_from_dict

logger = logging.getLogger(__name__)


class RedisChatMessageHistory(BaseChatMessageHistory):
    """Chat message history stored in a Redis database."""

    def __init__(
        self,
        session_id: str,
        url: str = "redis://localhost:6379/0",
        key_prefix: str = "message_store:",
        ttl: Optional[int] = None,
    ):
        try:
            import redis
        except ImportError:
            raise ImportError(
                "Could not import redis python package. "
                "Please install it with `pip install redis`."
            )

        try:
            self.redis_client = redis.Redis.from_url(url=url)
        except redis.exceptions.ConnectionError as error:
            logger.error(error)

        self.session_id = session_id
        self.key_prefix = key_prefix
        self.ttl = ttl

    @property
    def key(self) -> str:
        """Construct the record key to use"""
        return self.key_prefix + self.session_id

    @property
    def messages(self) -> List[BaseMessage]:  # type: ignore
        """Retrieve the messages from Redis"""
        _items = self.redis_client.lrange(self.key, 0, -1)
        items = [json.loads(m.decode("utf-8")) for m in _items[::-1]]
        messages = messages_from_dict(items)
        return messages

    def add_message(self, message: BaseMessage) -> None:
        """Append the message to the record in Redis"""
        self.redis_client.lpush(self.key, json.dumps(_message_to_dict(message)))
        if self.ttl:
            self.redis_client.expire(self.key, self.ttl)

    def clear(self) -> None:
        """Clear session memory from Redis"""
        self.redis_client.delete(self.key)
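A usage sketch (assumes a local Redis server; the ttl is illustrative and applies to the whole session key)::

    from langchain.memory.chat_message_histories.redis import (
        RedisChatMessageHistory,
    )

    history = RedisChatMessageHistory(
        session_id="session-1",
        url="redis://localhost:6379/0",
        ttl=600,  # key expires 10 minutes after the last write
    )
    history.add_user_message("hi!")
    print(history.messages)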
Returning None\"\n )\n return None\n return zep_memory\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Append the message to the Zep memory history\"\"\"\n from zep_python import Memory, Message\n zep_message: Message\n if isinstance(message, HumanMessage):\n zep_message = Message(content=message.content, role=\"human\")\n else:\n zep_message = Message(content=message.content, role=\"ai\")\n zep_memory = Memory(messages=[zep_message])\n self.zep_client.add_memory(self.session_id, zep_memory)\n[docs] def search(\n self, query: str, metadata: Optional[Dict] = None, limit: Optional[int] = None\n ) -> List[MemorySearchResult]:\n \"\"\"Search Zep memory for messages matching the query\"\"\"\n from zep_python import MemorySearchPayload\n payload: MemorySearchPayload = MemorySearchPayload(\n text=query, metadata=metadata\n )\n return self.zep_client.search_memory(self.session_id, payload, limit=limit)\n[docs] def clear(self) -> None:\n \"\"\"Clear session memory from Zep. Note that Zep is long-term storage for memory", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/zep.html"} {"id": "09e38d1c9735-3", "text": "\"\"\"Clear session memory from Zep. Note that Zep is long-term storage for memory\n and this is not advised unless you have specific data retention requirements.\n \"\"\"\n try:\n self.zep_client.delete_memory(self.session_id)\n except NotFoundError:\n logger.warning(\n f\"Session {self.session_id} not found in Zep. Skipping delete.\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/zep.html"} {"id": "93a9fde0e6f2-0", "text": "Source code for langchain.memory.chat_message_histories.in_memory\nfrom typing import List\nfrom pydantic import BaseModel\nfrom langchain.schema import (\n BaseChatMessageHistory,\n)\nfrom langchain.schema.messages import BaseMessage\n[docs]class ChatMessageHistory(BaseChatMessageHistory, BaseModel):\n \"\"\"In memory implementation of chat message history.\n Stores messages in an in memory list.\n \"\"\"\n messages: List[BaseMessage] = []\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Add a self-created message to the store\"\"\"\n self.messages.append(message)\n[docs] def clear(self) -> None:\n self.messages = []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/in_memory.html"} {"id": "b23fd6f3fa19-0", "text": "Source code for langchain.memory.chat_message_histories.file\nimport json\nimport logging\nfrom pathlib import Path\nfrom typing import List\nfrom langchain.schema import (\n BaseChatMessageHistory,\n)\nfrom langchain.schema.messages import BaseMessage, messages_from_dict, messages_to_dict\nlogger = logging.getLogger(__name__)\n[docs]class FileChatMessageHistory(BaseChatMessageHistory):\n \"\"\"\n Chat message history that stores history in a local file.\n Args:\n file_path: path of the local file to store the messages.\n \"\"\"\n def __init__(self, file_path: str):\n self.file_path = Path(file_path)\n if not self.file_path.exists():\n self.file_path.touch()\n self.file_path.write_text(json.dumps([]))\n @property\n def messages(self) -> List[BaseMessage]: # type: ignore\n \"\"\"Retrieve the messages from the local file\"\"\"\n items = json.loads(self.file_path.read_text())\n messages = messages_from_dict(items)\n return messages\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Append the message to the record in the local 
file\"\"\"\n messages = messages_to_dict(self.messages)\n messages.append(messages_to_dict([message])[0])\n self.file_path.write_text(json.dumps(messages))\n[docs] def clear(self) -> None:\n \"\"\"Clear session memory from the local file\"\"\"\n self.file_path.write_text(json.dumps([]))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/file.html"} {"id": "360844d9cc53-0", "text": "Source code for langchain.memory.chat_message_histories.mongodb\nimport json\nimport logging\nfrom typing import List\nfrom langchain.schema import (\n BaseChatMessageHistory,\n)\nfrom langchain.schema.messages import BaseMessage, _message_to_dict, messages_from_dict\nlogger = logging.getLogger(__name__)\nDEFAULT_DBNAME = \"chat_history\"\nDEFAULT_COLLECTION_NAME = \"message_store\"\n[docs]class MongoDBChatMessageHistory(BaseChatMessageHistory):\n \"\"\"Chat message history that stores history in MongoDB.\n Args:\n connection_string: connection string to connect to MongoDB\n session_id: arbitrary key that is used to store the messages\n of a single chat session.\n database_name: name of the database to use\n collection_name: name of the collection to use\n \"\"\"\n def __init__(\n self,\n connection_string: str,\n session_id: str,\n database_name: str = DEFAULT_DBNAME,\n collection_name: str = DEFAULT_COLLECTION_NAME,\n ):\n from pymongo import MongoClient, errors\n self.connection_string = connection_string\n self.session_id = session_id\n self.database_name = database_name\n self.collection_name = collection_name\n try:\n self.client: MongoClient = MongoClient(connection_string)\n except errors.ConnectionFailure as error:\n logger.error(error)\n self.db = self.client[database_name]\n self.collection = self.db[collection_name]\n self.collection.create_index(\"SessionId\")\n @property\n def messages(self) -> List[BaseMessage]: # type: ignore\n \"\"\"Retrieve the messages from MongoDB\"\"\"\n from pymongo import errors\n try:\n cursor = self.collection.find({\"SessionId\": self.session_id})\n except errors.OperationFailure as error:\n logger.error(error)\n if cursor:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/mongodb.html"} {"id": "360844d9cc53-1", "text": "except errors.OperationFailure as error:\n logger.error(error)\n if cursor:\n items = [json.loads(document[\"History\"]) for document in cursor]\n else:\n items = []\n messages = messages_from_dict(items)\n return messages\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Append the message to the record in MongoDB\"\"\"\n from pymongo import errors\n try:\n self.collection.insert_one(\n {\n \"SessionId\": self.session_id,\n \"History\": json.dumps(_message_to_dict(message)),\n }\n )\n except errors.WriteError as err:\n logger.error(err)\n[docs] def clear(self) -> None:\n \"\"\"Clear session memory from MongoDB\"\"\"\n from pymongo import errors\n try:\n self.collection.delete_many({\"SessionId\": self.session_id})\n except errors.WriteError as err:\n logger.error(err)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/mongodb.html"} {"id": "a3c8807a81ef-0", "text": "Source code for langchain.memory.chat_message_histories.cosmos_db\n\"\"\"Azure CosmosDB Memory History.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom types import TracebackType\nfrom typing import TYPE_CHECKING, Any, List, Optional, Type\nfrom langchain.schema import (\n 
Source code for langchain.memory.chat_message_histories.file

import json
import logging
from pathlib import Path
from typing import List

from langchain.schema import (
    BaseChatMessageHistory,
)
from langchain.schema.messages import BaseMessage, messages_from_dict, messages_to_dict

logger = logging.getLogger(__name__)


class FileChatMessageHistory(BaseChatMessageHistory):
    """
    Chat message history that stores history in a local file.

    Args:
        file_path: path of the local file to store the messages.
    """

    def __init__(self, file_path: str):
        self.file_path = Path(file_path)
        if not self.file_path.exists():
            self.file_path.touch()
            self.file_path.write_text(json.dumps([]))

    @property
    def messages(self) -> List[BaseMessage]:  # type: ignore
        """Retrieve the messages from the local file"""
        items = json.loads(self.file_path.read_text())
        messages = messages_from_dict(items)
        return messages

    def add_message(self, message: BaseMessage) -> None:
        """Append the message to the record in the local file"""
        messages = messages_to_dict(self.messages)
        messages.append(messages_to_dict([message])[0])
        self.file_path.write_text(json.dumps(messages))

    def clear(self) -> None:
        """Clear session memory from the local file"""
        self.file_path.write_text(json.dumps([]))
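A quick sketch (the file name is illustrative; the file is created if missing)::

    from langchain.memory.chat_message_histories.file import FileChatMessageHistory

    history = FileChatMessageHistory("messages.json")
    history.add_user_message("hi!")
    # A second instance pointed at the same path sees the persisted messages.
    print(FileChatMessageHistory("messages.json").messages)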
Source code for langchain.memory.chat_message_histories.mongodb

import json
import logging
from typing import List

from langchain.schema import (
    BaseChatMessageHistory,
)
from langchain.schema.messages import BaseMessage, _message_to_dict, messages_from_dict

logger = logging.getLogger(__name__)

DEFAULT_DBNAME = "chat_history"
DEFAULT_COLLECTION_NAME = "message_store"


class MongoDBChatMessageHistory(BaseChatMessageHistory):
    """Chat message history that stores history in MongoDB.

    Args:
        connection_string: connection string to connect to MongoDB
        session_id: arbitrary key that is used to store the messages
            of a single chat session.
        database_name: name of the database to use
        collection_name: name of the collection to use
    """

    def __init__(
        self,
        connection_string: str,
        session_id: str,
        database_name: str = DEFAULT_DBNAME,
        collection_name: str = DEFAULT_COLLECTION_NAME,
    ):
        from pymongo import MongoClient, errors

        self.connection_string = connection_string
        self.session_id = session_id
        self.database_name = database_name
        self.collection_name = collection_name

        try:
            self.client: MongoClient = MongoClient(connection_string)
        except errors.ConnectionFailure as error:
            logger.error(error)

        self.db = self.client[database_name]
        self.collection = self.db[collection_name]
        self.collection.create_index("SessionId")

    @property
    def messages(self) -> List[BaseMessage]:  # type: ignore
        """Retrieve the messages from MongoDB"""
        from pymongo import errors

        try:
            cursor = self.collection.find({"SessionId": self.session_id})
        except errors.OperationFailure as error:
            logger.error(error)

        if cursor:
            items = [json.loads(document["History"]) for document in cursor]
        else:
            items = []

        messages = messages_from_dict(items)
        return messages

    def add_message(self, message: BaseMessage) -> None:
        """Append the message to the record in MongoDB"""
        from pymongo import errors

        try:
            self.collection.insert_one(
                {
                    "SessionId": self.session_id,
                    "History": json.dumps(_message_to_dict(message)),
                }
            )
        except errors.WriteError as err:
            logger.error(err)

    def clear(self) -> None:
        """Clear session memory from MongoDB"""
        from pymongo import errors

        try:
            self.collection.delete_many({"SessionId": self.session_id})
        except errors.WriteError as err:
            logger.error(err)
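A usage sketch (the connection string and credentials are illustrative)::

    from langchain.memory.chat_message_histories.mongodb import (
        MongoDBChatMessageHistory,
    )

    history = MongoDBChatMessageHistory(
        connection_string="mongodb://mongo_user:password@localhost:27017",
        session_id="session-1",
    )
    history.add_user_message("hi!")
    print(history.messages)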
Source code for langchain.memory.chat_message_histories.cosmos_db

"""Azure CosmosDB Memory History."""
from __future__ import annotations

import logging
from types import TracebackType
from typing import TYPE_CHECKING, Any, List, Optional, Type

from langchain.schema import (
    BaseChatMessageHistory,
)
from langchain.schema.messages import BaseMessage, messages_from_dict, messages_to_dict

logger = logging.getLogger(__name__)

if TYPE_CHECKING:
    from azure.cosmos import ContainerProxy


class CosmosDBChatMessageHistory(BaseChatMessageHistory):
    """Chat history backed by Azure CosmosDB."""

    def __init__(
        self,
        cosmos_endpoint: str,
        cosmos_database: str,
        cosmos_container: str,
        session_id: str,
        user_id: str,
        credential: Any = None,
        connection_string: Optional[str] = None,
        ttl: Optional[int] = None,
        cosmos_client_kwargs: Optional[dict] = None,
    ):
        """
        Initializes a new instance of the CosmosDBChatMessageHistory class.

        Make sure to call prepare_cosmos or use the context manager to make
        sure your database is ready.

        Either a credential or a connection string must be provided.

        :param cosmos_endpoint: The connection endpoint for the Azure Cosmos DB account.
        :param cosmos_database: The name of the database to use.
        :param cosmos_container: The name of the container to use.
        :param session_id: The session ID to use, can be overwritten while loading.
        :param user_id: The user ID to use, can be overwritten while loading.
        :param credential: The credential to use to authenticate to Azure Cosmos DB.
        :param connection_string: The connection string to use to authenticate.
        :param ttl: The time to live (in seconds) to use for documents in the container.
        :param cosmos_client_kwargs: Additional kwargs to pass to the CosmosClient.
        """
        self.cosmos_endpoint = cosmos_endpoint
        self.cosmos_database = cosmos_database
        self.cosmos_container = cosmos_container
        self.credential = credential
        self.conn_string = connection_string
        self.session_id = session_id
        self.user_id = user_id
        self.ttl = ttl

        self.messages: List[BaseMessage] = []
        try:
            from azure.cosmos import (  # pylint: disable=import-outside-toplevel # noqa: E501
                CosmosClient,
            )
        except ImportError as exc:
            raise ImportError(
                "You must install the azure-cosmos package to use the CosmosDBChatMessageHistory."  # noqa: E501
            ) from exc
        if self.credential:
            self._client = CosmosClient(
                url=self.cosmos_endpoint,
                credential=self.credential,
                **cosmos_client_kwargs or {},
            )
        elif self.conn_string:
            self._client = CosmosClient.from_connection_string(
                conn_str=self.conn_string,
                **cosmos_client_kwargs or {},
            )
        else:
            raise ValueError("Either a connection string or a credential must be set.")
        self._container: Optional[ContainerProxy] = None

    def prepare_cosmos(self) -> None:
        """Prepare the CosmosDB client.

        Use this function or the context manager to make sure your database is ready.
        """
        try:
            from azure.cosmos import (  # pylint: disable=import-outside-toplevel # noqa: E501
                PartitionKey,
            )
        except ImportError as exc:
            raise ImportError(
                "You must install the azure-cosmos package to use the CosmosDBChatMessageHistory."  # noqa: E501
            ) from exc
        database = self._client.create_database_if_not_exists(self.cosmos_database)
        self._container = database.create_container_if_not_exists(
            self.cosmos_container,
            partition_key=PartitionKey("/user_id"),
            default_ttl=self.ttl,
        )
        self.load_messages()

    def __enter__(self) -> "CosmosDBChatMessageHistory":
        """Context manager entry point."""
        self._client.__enter__()
        self.prepare_cosmos()
        return self

    def __exit__(
        self,
        exc_type: Optional[Type[BaseException]],
        exc_val: Optional[BaseException],
        traceback: Optional[TracebackType],
    ) -> None:
        """Context manager exit"""
        self.upsert_messages()
        self._client.__exit__(exc_type, exc_val, traceback)

    def load_messages(self) -> None:
        """Retrieve the messages from Cosmos"""
        if not self._container:
            raise ValueError("Container not initialized")
        try:
            from azure.cosmos.exceptions import (  # pylint: disable=import-outside-toplevel # noqa: E501
                CosmosHttpResponseError,
            )
        except ImportError as exc:
            raise ImportError(
                "You must install the azure-cosmos package to use the CosmosDBChatMessageHistory."  # noqa: E501
            ) from exc
        try:
            item = self._container.read_item(
                item=self.session_id, partition_key=self.user_id
            )
        except CosmosHttpResponseError:
            logger.info("no session found")
            return
        if "messages" in item and len(item["messages"]) > 0:
            self.messages = messages_from_dict(item["messages"])

    def add_message(self, message: BaseMessage) -> None:
        """Add a self-created message to the store"""
        self.messages.append(message)
        self.upsert_messages()

    def upsert_messages(self) -> None:
        """Update the cosmosdb item."""
        if not self._container:
            raise ValueError("Container not initialized")
        self._container.upsert_item(
            body={
                "id": self.session_id,
                "user_id": self.user_id,
                "messages": messages_to_dict(self.messages),
            }
        )

    def clear(self) -> None:
        """Clear session memory from this memory and cosmos."""
        self.messages = []
        if self._container:
            self._container.delete_item(
                item=self.session_id, partition_key=self.user_id
            )
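A usage sketch with the context manager (the endpoint placeholder and names are illustrative; DefaultAzureCredential from azure-identity is one assumed way to supply the credential)::

    from azure.identity import DefaultAzureCredential

    from langchain.memory.chat_message_histories.cosmos_db import (
        CosmosDBChatMessageHistory,
    )

    history = CosmosDBChatMessageHistory(
        cosmos_endpoint="https://<account>.documents.azure.com:443/",
        cosmos_database="chat-db",
        cosmos_container="messages",
        session_id="session-1",
        user_id="user-1",
        credential=DefaultAzureCredential(),
    )
    # __enter__ runs prepare_cosmos(); __exit__ runs upsert_messages().
    with history:
        history.add_user_message("hi!")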
Source code for langchain.memory.chat_message_histories.firestore

"""Firestore Chat Message History."""
from __future__ import annotations

import logging
from typing import TYPE_CHECKING, List, Optional

from langchain.schema import (
    BaseChatMessageHistory,
)
from langchain.schema.messages import BaseMessage, messages_from_dict, messages_to_dict

logger = logging.getLogger(__name__)

if TYPE_CHECKING:
    from google.cloud.firestore import DocumentReference


class FirestoreChatMessageHistory(BaseChatMessageHistory):
    """Chat history backed by Google Firestore."""

    def __init__(
        self,
        collection_name: str,
        session_id: str,
        user_id: str,
    ):
        """
        Initialize a new instance of the FirestoreChatMessageHistory class.

        :param collection_name: The name of the collection to use.
        :param session_id: The session ID for the chat.
        :param user_id: The user ID for the chat.
        """
        self.collection_name = collection_name
        self.session_id = session_id
        self.user_id = user_id

        self._document: Optional[DocumentReference] = None
        self.messages: List[BaseMessage] = []

        self.prepare_firestore()

    def prepare_firestore(self) -> None:
        """Prepare the Firestore client.

        Use this function to make sure your database is ready.
        """
        try:
            import firebase_admin
            from firebase_admin import firestore
        except ImportError:
            raise ImportError(
                "Could not import firebase-admin python package. "
                "Please install it with `pip install firebase-admin`."
            )

        # For multiple instances, only initialize the app once.
        try:
            firebase_admin.get_app()
        except ValueError as e:
            logger.debug("Initializing Firebase app: %s", e)
            firebase_admin.initialize_app()

        self.firestore_client = firestore.client()
        self._document = self.firestore_client.collection(
            self.collection_name
        ).document(self.session_id)
        self.load_messages()

    def load_messages(self) -> None:
        """Retrieve the messages from Firestore"""
        if not self._document:
            raise ValueError("Document not initialized")
        doc = self._document.get()
        if doc.exists:
            data = doc.to_dict()
            if "messages" in data and len(data["messages"]) > 0:
                self.messages = messages_from_dict(data["messages"])

    def add_message(self, message: BaseMessage) -> None:
        self.messages.append(message)
        self.upsert_messages()

    def upsert_messages(self, new_message: Optional[BaseMessage] = None) -> None:
        """Update the Firestore document."""
        if not self._document:
            raise ValueError("Document not initialized")
        self._document.set(
            {
                "id": self.session_id,
                "user_id": self.user_id,
                "messages": messages_to_dict(self.messages),
            }
        )

    def clear(self) -> None:
        """Clear session memory from this memory and Firestore."""
        self.messages = []
        if self._document:
            self._document.delete()
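A usage sketch (relies on firebase-admin default credentials, e.g. the GOOGLE_APPLICATION_CREDENTIALS environment variable; names are illustrative)::

    from langchain.memory.chat_message_histories.firestore import (
        FirestoreChatMessageHistory,
    )

    history = FirestoreChatMessageHistory(
        collection_name="chat_history",
        session_id="session-1",
        user_id="user-1",
    )
    history.add_user_message("hi!")
    print(history.messages)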
Source code for langchain.memory.chat_message_histories.postgres

import json
import logging
from typing import List

from langchain.schema import (
    BaseChatMessageHistory,
)
from langchain.schema.messages import BaseMessage, _message_to_dict, messages_from_dict

logger = logging.getLogger(__name__)

DEFAULT_CONNECTION_STRING = "postgresql://postgres:mypassword@localhost/chat_history"


class PostgresChatMessageHistory(BaseChatMessageHistory):
    """Chat message history stored in a Postgres database."""

    def __init__(
        self,
        session_id: str,
        connection_string: str = DEFAULT_CONNECTION_STRING,
        table_name: str = "message_store",
    ):
        import psycopg
        from psycopg.rows import dict_row

        try:
            self.connection = psycopg.connect(connection_string)
            self.cursor = self.connection.cursor(row_factory=dict_row)
        except psycopg.OperationalError as error:
            logger.error(error)

        self.session_id = session_id
        self.table_name = table_name

        self._create_table_if_not_exists()

    def _create_table_if_not_exists(self) -> None:
        create_table_query = f"""CREATE TABLE IF NOT EXISTS {self.table_name} (
            id SERIAL PRIMARY KEY,
            session_id TEXT NOT NULL,
            message JSONB NOT NULL
        );"""
        self.cursor.execute(create_table_query)
        self.connection.commit()

    @property
    def messages(self) -> List[BaseMessage]:  # type: ignore
        """Retrieve the messages from PostgreSQL"""
        query = (
            f"SELECT message FROM {self.table_name} WHERE session_id = %s ORDER BY id;"
        )
        self.cursor.execute(query, (self.session_id,))
        items = [record["message"] for record in self.cursor.fetchall()]
        messages = messages_from_dict(items)
        return messages

    def add_message(self, message: BaseMessage) -> None:
        """Append the message to the record in PostgreSQL"""
        from psycopg import sql

        query = sql.SQL("INSERT INTO {} (session_id, message) VALUES (%s, %s);").format(
            sql.Identifier(self.table_name)
        )
        self.cursor.execute(
            query, (self.session_id, json.dumps(_message_to_dict(message)))
        )
        self.connection.commit()

    def clear(self) -> None:
        """Clear session memory from PostgreSQL"""
        query = f"DELETE FROM {self.table_name} WHERE session_id = %s;"
        self.cursor.execute(query, (self.session_id,))
        self.connection.commit()

    def __del__(self) -> None:
        if self.cursor:
            self.cursor.close()
        if self.connection:
            self.connection.close()
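A usage sketch (assumes a reachable Postgres server and the psycopg package; the connection string mirrors the module default)::

    from langchain.memory.chat_message_histories.postgres import (
        PostgresChatMessageHistory,
    )

    history = PostgresChatMessageHistory(
        session_id="session-1",
        connection_string="postgresql://postgres:mypassword@localhost/chat_history",
    )
    history.add_user_message("hi!")
    print(history.messages)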
Source code for langchain.memory.chat_message_histories.sql

import json
import logging
from typing import List

from sqlalchemy import Column, Integer, Text, create_engine

try:
    from sqlalchemy.orm import declarative_base
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

from sqlalchemy.orm import sessionmaker

from langchain.schema import (
    BaseChatMessageHistory,
)
from langchain.schema.messages import BaseMessage, _message_to_dict, messages_from_dict

logger = logging.getLogger(__name__)


def create_message_model(table_name, DynamicBase):  # type: ignore
    """
    Create a message model for a given table name.

    Args:
        table_name: The name of the table to use.
        DynamicBase: The base class to use for the model.

    Returns:
        The model class.
    """

    # Model declared inside a function to have a dynamic table name
    class Message(DynamicBase):
        __tablename__ = table_name
        id = Column(Integer, primary_key=True)
        session_id = Column(Text)
        message = Column(Text)

    return Message


class SQLChatMessageHistory(BaseChatMessageHistory):
    """Chat message history stored in an SQL database."""

    def __init__(
        self,
        session_id: str,
        connection_string: str,
        table_name: str = "message_store",
    ):
        self.table_name = table_name
        self.connection_string = connection_string
        self.engine = create_engine(connection_string, echo=False)
        self._create_table_if_not_exists()

        self.session_id = session_id
        self.Session = sessionmaker(self.engine)

    def _create_table_if_not_exists(self) -> None:
        DynamicBase = declarative_base()
        self.Message = create_message_model(self.table_name, DynamicBase)
        # Create all does the check for us in case the table exists.
        DynamicBase.metadata.create_all(self.engine)

    @property
    def messages(self) -> List[BaseMessage]:  # type: ignore
        """Retrieve all messages from db"""
        with self.Session() as session:
            result = session.query(self.Message).where(
                self.Message.session_id == self.session_id
            )
            items = [json.loads(record.message) for record in result]
            messages = messages_from_dict(items)
            return messages

    def add_message(self, message: BaseMessage) -> None:
        """Append the message to the record in db"""
        with self.Session() as session:
            jsonstr = json.dumps(_message_to_dict(message))
            session.add(self.Message(session_id=self.session_id, message=jsonstr))
            session.commit()

    def clear(self) -> None:
        """Clear session memory from db"""
        with self.Session() as session:
            session.query(self.Message).filter(
                self.Message.session_id == self.session_id
            ).delete()
            session.commit()
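A usage sketch (any SQLAlchemy URL works; SQLite keeps the example self-contained)::

    from langchain.memory.chat_message_histories.sql import SQLChatMessageHistory

    history = SQLChatMessageHistory(
        session_id="session-1",
        connection_string="sqlite:///chat_history.db",
    )
    history.add_user_message("hi!")
    print(history.messages)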
Source code for langchain.tools.convert_to_openai

from typing import TypedDict

from langchain.tools import BaseTool, StructuredTool


class FunctionDescription(TypedDict):
    """Representation of a callable function to the OpenAI API."""

    name: str
    """The name of the function."""
    description: str
    """A description of the function."""
    parameters: dict
    """The parameters of the function."""


def format_tool_to_openai_function(tool: BaseTool) -> FunctionDescription:
    """Format tool into the OpenAI function API."""
    if isinstance(tool, StructuredTool):
        schema_ = tool.args_schema.schema()
        # Bug with required missing for structured tools.
        required = sorted(schema_["properties"])  # BUG WORKAROUND
        return {
            "name": tool.name,
            "description": tool.description,
            "parameters": {
                "type": "object",
                "properties": schema_["properties"],
                "required": required,
            },
        }
    else:
        if tool.args_schema:
            parameters = tool.args_schema.schema()
        else:
            parameters = {
                # This is a hack to get around the fact that some tools
                # do not expose an args_schema, and expect an argument
                # which is a string.
                # And Open AI does not support an array type for the
                # parameters.
                "properties": {
                    "__arg1": {"title": "__arg1", "type": "string"},
                },
                "required": ["__arg1"],
                "type": "object",
            }
        return {
            "name": tool.name,
            "description": tool.description,
            "parameters": parameters,
        }
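A quick sketch of converting a tool (the function is illustrative)::

    from langchain.agents import tool
    from langchain.tools.convert_to_openai import format_tool_to_openai_function


    @tool
    def get_word_length(word: str) -> int:
        """Return the length of a word."""
        return len(word)


    # Yields {"name": ..., "description": ..., "parameters": <JSON schema>},
    # ready for the `functions` argument of the OpenAI chat completions API.
    print(format_tool_to_openai_function(get_word_length))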
Please use callbacks instead.\",\n DeprecationWarning,\n )\n values[\"callbacks\"] = values.pop(\"callback_manager\", None)\n return values", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} {"id": "ef31ddcf0c43-5", "text": "values[\"callbacks\"] = values.pop(\"callback_manager\", None)\n return values\n @abstractmethod\n def _run(\n self,\n *args: Any,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Use the tool.\n Add run_manager: Optional[CallbackManagerForToolRun] = None\n to child implementations to enable tracing,\n \"\"\"\n @abstractmethod\n async def _arun(\n self,\n *args: Any,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Use the tool asynchronously.\n Add run_manager: Optional[AsyncCallbackManagerForToolRun] = None\n to child implementations to enable tracing,\n \"\"\"\n def _to_args_and_kwargs(self, tool_input: Union[str, Dict]) -> Tuple[Tuple, Dict]:\n # For backwards compatibility, if run_input is a string,\n # pass as a positional argument.\n if isinstance(tool_input, str):\n return (tool_input,), {}\n else:\n return (), tool_input\n[docs] def run(\n self,\n tool_input: Union[str, Dict],\n verbose: Optional[bool] = None,\n start_color: Optional[str] = \"green\",\n color: Optional[str] = \"green\",\n callbacks: Callbacks = None,\n *,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run the tool.\"\"\"\n parsed_input = self._parse_input(tool_input)\n if not self.verbose and verbose is not None:\n verbose_ = verbose\n else:\n verbose_ = self.verbose\n callback_manager = CallbackManager.configure(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} {"id": "ef31ddcf0c43-6", "text": "else:\n verbose_ = self.verbose\n callback_manager = CallbackManager.configure(\n callbacks,\n self.callbacks,\n verbose_,\n tags,\n self.tags,\n metadata,\n self.metadata,\n )\n # TODO: maybe also pass through run_manager is _run supports kwargs\n new_arg_supported = signature(self._run).parameters.get(\"run_manager\")\n run_manager = callback_manager.on_tool_start(\n {\"name\": self.name, \"description\": self.description},\n tool_input if isinstance(tool_input, str) else str(tool_input),\n color=start_color,\n **kwargs,\n )\n try:\n tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)\n observation = (\n self._run(*tool_args, run_manager=run_manager, **tool_kwargs)\n if new_arg_supported\n else self._run(*tool_args, **tool_kwargs)\n )\n except ToolException as e:\n if not self.handle_tool_error:\n run_manager.on_tool_error(e)\n raise e\n elif isinstance(self.handle_tool_error, bool):\n if e.args:\n observation = e.args[0]\n else:\n observation = \"Tool execution error\"\n elif isinstance(self.handle_tool_error, str):\n observation = self.handle_tool_error\n elif callable(self.handle_tool_error):\n observation = self.handle_tool_error(e)\n else:\n raise ValueError(\n f\"Got unexpected type of `handle_tool_error`. Expected bool, str \"\n f\"or callable. 
Received: {self.handle_tool_error}\"\n )\n run_manager.on_tool_end(\n str(observation), color=\"red\", name=self.name, **kwargs\n )\n return observation", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} {"id": "ef31ddcf0c43-7", "text": ")\n return observation\n except (Exception, KeyboardInterrupt) as e:\n run_manager.on_tool_error(e)\n raise e\n else:\n run_manager.on_tool_end(\n str(observation), color=color, name=self.name, **kwargs\n )\n return observation\n[docs] async def arun(\n self,\n tool_input: Union[str, Dict],\n verbose: Optional[bool] = None,\n start_color: Optional[str] = \"green\",\n color: Optional[str] = \"green\",\n callbacks: Callbacks = None,\n *,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run the tool asynchronously.\"\"\"\n parsed_input = self._parse_input(tool_input)\n if not self.verbose and verbose is not None:\n verbose_ = verbose\n else:\n verbose_ = self.verbose\n callback_manager = AsyncCallbackManager.configure(\n callbacks,\n self.callbacks,\n verbose_,\n tags,\n self.tags,\n metadata,\n self.metadata,\n )\n new_arg_supported = signature(self._arun).parameters.get(\"run_manager\")\n run_manager = await callback_manager.on_tool_start(\n {\"name\": self.name, \"description\": self.description},\n tool_input if isinstance(tool_input, str) else str(tool_input),\n color=start_color,\n **kwargs,\n )\n try:\n # We then call the tool on the tool input to get an observation\n tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)\n observation = (", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} {"id": "ef31ddcf0c43-8", "text": "observation = (\n await self._arun(*tool_args, run_manager=run_manager, **tool_kwargs)\n if new_arg_supported\n else await self._arun(*tool_args, **tool_kwargs)\n )\n except ToolException as e:\n if not self.handle_tool_error:\n await run_manager.on_tool_error(e)\n raise e\n elif isinstance(self.handle_tool_error, bool):\n if e.args:\n observation = e.args[0]\n else:\n observation = \"Tool execution error\"\n elif isinstance(self.handle_tool_error, str):\n observation = self.handle_tool_error\n elif callable(self.handle_tool_error):\n observation = self.handle_tool_error(e)\n else:\n raise ValueError(\n f\"Got unexpected type of `handle_tool_error`. Expected bool, str \"\n f\"or callable. 
Received: {self.handle_tool_error}\"\n )\n await run_manager.on_tool_end(\n str(observation), color=\"red\", name=self.name, **kwargs\n )\n return observation\n except (Exception, KeyboardInterrupt) as e:\n await run_manager.on_tool_error(e)\n raise e\n else:\n await run_manager.on_tool_end(\n str(observation), color=color, name=self.name, **kwargs\n )\n return observation\n[docs] def __call__(self, tool_input: str, callbacks: Callbacks = None) -> str:\n \"\"\"Make tool callable.\"\"\"\n return self.run(tool_input, callbacks=callbacks)\n[docs]class Tool(BaseTool):\n \"\"\"Tool that takes in function or coroutine directly.\"\"\"\n description: str = \"\"\n func: Callable[..., str]\n \"\"\"The function to run when the tool is called.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} {"id": "ef31ddcf0c43-9", "text": "\"\"\"The function to run when the tool is called.\"\"\"\n coroutine: Optional[Callable[..., Awaitable[str]]] = None\n \"\"\"The asynchronous version of the function.\"\"\"\n @property\n def args(self) -> dict:\n \"\"\"The tool's input arguments.\"\"\"\n if self.args_schema is not None:\n return self.args_schema.schema()[\"properties\"]\n # For backwards compatibility, if the function signature is ambiguous,\n # assume it takes a single string input.\n return {\"tool_input\": {\"type\": \"string\"}}\n def _to_args_and_kwargs(self, tool_input: Union[str, Dict]) -> Tuple[Tuple, Dict]:\n \"\"\"Convert tool input to pydantic model.\"\"\"\n args, kwargs = super()._to_args_and_kwargs(tool_input)\n # For backwards compatibility. The tool must be run with a single input\n all_args = list(args) + list(kwargs.values())\n if len(all_args) != 1:\n raise ToolException(\n f\"Too many arguments to single-input tool {self.name}.\"\n f\" Args: {all_args}\"\n )\n return tuple(all_args), {}\n def _run(\n self,\n *args: Any,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Use the tool.\"\"\"\n new_argument_supported = signature(self.func).parameters.get(\"callbacks\")\n return (\n self.func(\n *args,\n callbacks=run_manager.get_child() if run_manager else None,\n **kwargs,\n )\n if new_argument_supported\n else self.func(*args, **kwargs)\n )\n async def _arun(\n self,\n *args: Any,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} {"id": "ef31ddcf0c43-10", "text": "async def _arun(\n self,\n *args: Any,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Use the tool asynchronously.\"\"\"\n if self.coroutine:\n new_argument_supported = signature(self.coroutine).parameters.get(\n \"callbacks\"\n )\n return (\n await self.coroutine(\n *args,\n callbacks=run_manager.get_child() if run_manager else None,\n **kwargs,\n )\n if new_argument_supported\n else await self.coroutine(*args, **kwargs)\n )\n raise NotImplementedError(\"Tool does not support async\")\n # TODO: this is for backwards compatibility, remove in future\n def __init__(\n self, name: str, func: Callable, description: str, **kwargs: Any\n ) -> None:\n \"\"\"Initialize tool.\"\"\"\n super(Tool, self).__init__(\n name=name, func=func, description=description, **kwargs\n )\n[docs] @classmethod\n def from_function(\n cls,\n func: Callable,\n name: str, # We keep these required to support backwards compatibility\n description: str,\n return_direct: bool = False,\n args_schema: Optional[Type[BaseModel]] = None,\n **kwargs: Any,\n ) -> Tool:\n \"\"\"Initialize 
tool from a function.\"\"\"\n return cls(\n name=name,\n func=func,\n description=description,\n return_direct=return_direct,\n args_schema=args_schema,\n **kwargs,\n )\n[docs]class StructuredTool(BaseTool):\n \"\"\"Tool that can operate on any number of inputs.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} {"id": "ef31ddcf0c43-11", "text": "\"\"\"Tool that can operate on any number of inputs.\"\"\"\n description: str = \"\"\n args_schema: Type[BaseModel] = Field(..., description=\"The tool schema.\")\n \"\"\"The input arguments' schema.\"\"\"\n func: Callable[..., Any]\n \"\"\"The function to run when the tool is called.\"\"\"\n coroutine: Optional[Callable[..., Awaitable[Any]]] = None\n \"\"\"The asynchronous version of the function.\"\"\"\n @property\n def args(self) -> dict:\n \"\"\"The tool's input arguments.\"\"\"\n return self.args_schema.schema()[\"properties\"]\n def _run(\n self,\n *args: Any,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Use the tool.\"\"\"\n new_argument_supported = signature(self.func).parameters.get(\"callbacks\")\n return (\n self.func(\n *args,\n callbacks=run_manager.get_child() if run_manager else None,\n **kwargs,\n )\n if new_argument_supported\n else self.func(*args, **kwargs)\n )\n async def _arun(\n self,\n *args: Any,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n if self.coroutine:\n new_argument_supported = signature(self.coroutine).parameters.get(\n \"callbacks\"\n )\n return (\n await self.coroutine(\n *args,\n callbacks=run_manager.get_child() if run_manager else None,\n **kwargs,\n )\n if new_argument_supported\n else await self.coroutine(*args, **kwargs)\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} {"id": "ef31ddcf0c43-12", "text": "else await self.coroutine(*args, **kwargs)\n )\n raise NotImplementedError(\"Tool does not support async\")\n[docs] @classmethod\n def from_function(\n cls,\n func: Callable,\n name: Optional[str] = None,\n description: Optional[str] = None,\n return_direct: bool = False,\n args_schema: Optional[Type[BaseModel]] = None,\n infer_schema: bool = True,\n **kwargs: Any,\n ) -> StructuredTool:\n \"\"\"Create tool from a given function.\n A classmethod that helps to create a tool from a function.\n Args:\n func: The function from which to create a tool\n name: The name of the tool. Defaults to the function name\n description: The description of the tool. Defaults to the function docstring\n return_direct: Whether to return the result directly or as a callback\n args_schema: The schema of the tool's input arguments\n infer_schema: Whether to infer the schema from the function's signature\n **kwargs: Additional arguments to pass to the tool\n Returns:\n The tool\n Examples:\n ... 
code-block:: python\n def add(a: int, b: int) -> int:\n \\\"\\\"\\\"Add two numbers\\\"\\\"\\\"\n return a + b\n tool = StructuredTool.from_function(add)\n tool.run({\"a\": 1, \"b\": 2}) # 3\n \"\"\"\n name = name or func.__name__\n description = description or func.__doc__\n assert (\n description is not None\n ), \"Function must have a docstring if description not provided.\"\n # Description example:\n # search_api(query: str) - Searches the API for the query.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} {"id": "ef31ddcf0c43-13", "text": "# search_api(query: str) - Searches the API for the query.\n description = f\"{name}{signature(func)} - {description.strip()}\"\n _args_schema = args_schema\n if _args_schema is None and infer_schema:\n _args_schema = create_schema_from_function(f\"{name}Schema\", func)\n return cls(\n name=name,\n func=func,\n args_schema=_args_schema,\n description=description,\n return_direct=return_direct,\n **kwargs,\n )\n[docs]def tool(\n *args: Union[str, Callable],\n return_direct: bool = False,\n args_schema: Optional[Type[BaseModel]] = None,\n infer_schema: bool = True,\n) -> Callable:\n \"\"\"Make tools out of functions; can be used with or without arguments.\n Args:\n *args: The arguments to the tool.\n return_direct: Whether to return directly from the tool rather\n than continuing the agent loop.\n args_schema: Optional argument schema for the user to specify.\n infer_schema: Whether to infer the schema of the arguments from\n the function's signature. This also makes the resultant tool\n accept a dictionary input to its `run()` function.\n Requires:\n - Function must be of type (str) -> str\n - Function must have a docstring\n Examples:\n .. code-block:: python\n @tool\n def search_api(query: str) -> str:\n # Searches the API for the query.\n return\n @tool(\"search\", return_direct=True)\n def search_api(query: str) -> str:\n # Searches the API for the query.\n return\n \"\"\"\n def _make_with_name(tool_name: str) -> Callable:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} {"id": "ef31ddcf0c43-14", "text": "\"\"\"\n def _make_with_name(tool_name: str) -> Callable:\n def _make_tool(func: Callable) -> BaseTool:\n if infer_schema or args_schema is not None:\n return StructuredTool.from_function(\n func,\n name=tool_name,\n return_direct=return_direct,\n args_schema=args_schema,\n infer_schema=infer_schema,\n )\n # If someone doesn't want a schema applied, we must treat it as\n # a simple string->string function\n assert func.__doc__ is not None, \"Function must have a docstring\"\n return Tool(\n name=tool_name,\n func=func,\n description=f\"{tool_name} tool\",\n return_direct=return_direct,\n )\n return _make_tool\n if len(args) == 1 and isinstance(args[0], str):\n # if the argument is a string, then we use the string as the tool name\n # Example usage: @tool(\"search\", return_direct=True)\n return _make_with_name(args[0])\n elif len(args) == 1 and callable(args[0]):\n # if the argument is a function, then we use the function name as the tool name\n # Example usage: @tool\n return _make_with_name(args[0].__name__)(args[0])\n elif len(args) == 0:\n # if there are no arguments, then we use the function name as the tool name\n # Example usage: @tool(return_direct=True)\n def _partial(func: Callable[[str], str]) -> BaseTool:\n return _make_with_name(func.__name__)(func)\n return _partial\n else:\n raise ValueError(\"Too many arguments for tool decorator\")", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} {"id": "885753ed14b9-0", "text": "Source code for langchain.tools.ifttt\n\"\"\"From https://github.com/SidU/teams-langchain-js/wiki/Connecting-IFTTT-Services.\n# Creating a webhook\n- Go to https://ifttt.com/create\n# Configuring the \"If This\"\n- Click on the \"If This\" button in the IFTTT interface.\n- Search for \"Webhooks\" in the search bar.\n- Choose the first option for \"Receive a web request with a JSON payload.\"\n- Choose an Event Name that is specific to the service you plan to connect to.\nThis will make it easier for you to manage the webhook URL.\nFor example, if you're connecting to Spotify, you could use \"Spotify\" as your\nEvent Name.\n- Click the \"Create Trigger\" button to save your settings and create your webhook.\n# Configuring the \"Then That\"\n- Tap on the \"Then That\" button in the IFTTT interface.\n- Search for the service you want to connect, such as Spotify.\n- Choose an action from the service, such as \"Add track to a playlist\".\n- Configure the action by specifying the necessary details, such as the playlist name,\ne.g., \"Songs from AI\".\n- Reference the JSON Payload received by the Webhook in your action. For the Spotify\nscenario, choose \"{{JsonPayload}}\" as your search query.\n- Tap the \"Create Action\" button to save your action settings.\n- Once you have finished configuring your action, click the \"Finish\" button to\ncomplete the setup.\n- Congratulations! You have successfully connected the Webhook to the desired\nservice, and you're ready to start receiving data and triggering actions \ud83c\udf89\n# Finishing up\n- To get your webhook URL go to https://ifttt.com/maker_webhooks/settings", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/ifttt.html"} {"id": "885753ed14b9-1", "text": "- To get your webhook URL go to https://ifttt.com/maker_webhooks/settings\n- Copy the IFTTT key value from there. The URL is of the form\nhttps://maker.ifttt.com/use/YOUR_IFTTT_KEY. 
Grab the YOUR_IFTTT_KEY value.\n\"\"\"\nfrom typing import Optional\nimport requests\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\n[docs]class IFTTTWebhook(BaseTool):\n \"\"\"IFTTT Webhook.\n Args:\n name: name of the tool\n description: description of the tool\n url: url to hit with the json event.\n \"\"\"\n url: str\n def _run(\n self,\n tool_input: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n body = {\"this\": tool_input}\n response = requests.post(self.url, data=body)\n return response.text\n async def _arun(\n self,\n tool_input: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n raise NotImplementedError(\"Not implemented.\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/ifttt.html"} {"id": "152f8632ffa0-0", "text": "Source code for langchain.tools.plugin\nfrom __future__ import annotations\nimport json\nfrom typing import Optional, Type\nimport requests\nimport yaml\nfrom pydantic import BaseModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\n[docs]class ApiConfig(BaseModel):\n type: str\n url: str\n has_user_authentication: Optional[bool] = False\n[docs]class AIPlugin(BaseModel):\n \"\"\"AI Plugin Definition.\"\"\"\n schema_version: str\n name_for_model: str\n name_for_human: str\n description_for_model: str\n description_for_human: str\n auth: Optional[dict] = None\n api: ApiConfig\n logo_url: Optional[str]\n contact_email: Optional[str]\n legal_info_url: Optional[str]\n[docs] @classmethod\n def from_url(cls, url: str) -> AIPlugin:\n \"\"\"Instantiate AIPlugin from a URL.\"\"\"\n response = requests.get(url).json()\n return cls(**response)\n[docs]def marshal_spec(txt: str) -> dict:\n \"\"\"Convert the yaml or json serialized spec to a dict.\n Args:\n txt: The yaml or json serialized spec.\n Returns:\n dict: The spec as a dict.\n \"\"\"\n try:\n return json.loads(txt)\n except json.JSONDecodeError:\n return yaml.safe_load(txt)\n[docs]class AIPluginToolSchema(BaseModel):\n \"\"\"AIPLuginToolSchema.\"\"\"\n tool_input: Optional[str] = \"\"\n[docs]class AIPluginTool(BaseTool):\n plugin: AIPlugin\n api_spec: str", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/plugin.html"} {"id": "152f8632ffa0-1", "text": "plugin: AIPlugin\n api_spec: str\n args_schema: Type[AIPluginToolSchema] = AIPluginToolSchema\n[docs] @classmethod\n def from_plugin_url(cls, url: str) -> AIPluginTool:\n plugin = AIPlugin.from_url(url)\n description = (\n f\"Call this tool to get the OpenAPI spec (and usage guide) \"\n f\"for interacting with the {plugin.name_for_human} API. \"\n f\"You should only call this ONCE! What is the \"\n f\"{plugin.name_for_human} API useful for? 
\"\n ) + plugin.description_for_human\n open_api_spec_str = requests.get(plugin.api.url).text\n open_api_spec = marshal_spec(open_api_spec_str)\n api_spec = (\n f\"Usage Guide: {plugin.description_for_model}\\n\\n\"\n f\"OpenAPI Spec: {open_api_spec}\"\n )\n return cls(\n name=plugin.name_for_model,\n description=description,\n plugin=plugin,\n api_spec=api_spec,\n )\n def _run(\n self,\n tool_input: Optional[str] = \"\",\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return self.api_spec\n async def _arun(\n self,\n tool_input: Optional[str] = None,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n return self.api_spec", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/plugin.html"} {"id": "78ff1bad45d9-0", "text": "Source code for langchain.tools.wolfram_alpha.tool\n\"\"\"Tool for the Wolfram Alpha API.\"\"\"\nfrom typing import Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper\n[docs]class WolframAlphaQueryRun(BaseTool):\n \"\"\"Tool that adds the capability to query using the Wolfram Alpha SDK.\"\"\"\n name = \"wolfram_alpha\"\n description = (\n \"A wrapper around Wolfram Alpha. \"\n \"Useful for when you need to answer questions about Math, \"\n \"Science, Technology, Culture, Society and Everyday Life. \"\n \"Input should be a search query.\"\n )\n api_wrapper: WolframAlphaAPIWrapper\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the WolframAlpha tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the WolframAlpha tool asynchronously.\"\"\"\n raise NotImplementedError(\"WolframAlphaQueryRun does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/wolfram_alpha/tool.html"} {"id": "ddfc7a8c5e50-0", "text": "Source code for langchain.tools.zapier.tool\n\"\"\"## Zapier Natural Language Actions API\n\\\nFull docs here: https://nla.zapier.com/start/\n**Zapier Natural Language Actions** gives you access to the 5k+ apps, 20k+ actions\non Zapier's platform through a natural language API interface.\nNLA supports apps like Gmail, Salesforce, Trello, Slack, Asana, HubSpot, Google Sheets,\nMicrosoft Teams, and thousands more apps: https://zapier.com/apps\nZapier NLA handles ALL the underlying API auth and translation from\nnatural language --> underlying API call --> return simplified output for LLMs\nThe key idea is you, or your users, expose a set of actions via an oauth-like setup\nwindow, which you can then query and execute via a REST API.\nNLA offers both API Key and OAuth for signing NLA API requests.\n1. Server-side (API Key): for quickly getting started, testing, and production scenarios\n where LangChain will only use actions exposed in the developer's Zapier account\n (and will use the developer's connected accounts on Zapier.com)\n2. 
User-facing (Oauth): for production scenarios where you are deploying an end-user\n facing application and LangChain needs access to end-user's exposed actions and\n connected accounts on Zapier.com\nThis quick start will focus on the server-side use case for brevity.\nReview [full docs](https://nla.zapier.com/start/) for user-facing oauth developer\nsupport.\nTypically, you'd use SequentialChain, here's a basic example:\n 1. Use NLA to find an email in Gmail\n 2. Use LLMChain to generate a draft reply to (1)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/zapier/tool.html"} {"id": "ddfc7a8c5e50-1", "text": "2. Use LLMChain to generate a draft reply to (1)\n 3. Use NLA to send the draft reply (2) to someone in Slack via direct message\nIn code, below:\n```python\nimport os\n# get from https://platform.openai.com/\nos.environ[\"OPENAI_API_KEY\"] = os.environ.get(\"OPENAI_API_KEY\", \"\")\n# get from https://nla.zapier.com/docs/authentication/\nos.environ[\"ZAPIER_NLA_API_KEY\"] = os.environ.get(\"ZAPIER_NLA_API_KEY\", \"\")\nfrom langchain.llms import OpenAI\nfrom langchain.agents import initialize_agent\nfrom langchain.agents.agent_toolkits import ZapierToolkit\nfrom langchain.utilities.zapier import ZapierNLAWrapper\n## step 0. expose gmail 'find email' and slack 'send channel message' actions\n# first go here, log in, expose (enable) the two actions:\n# https://nla.zapier.com/demo/start\n# -- for this example, can leave all fields \"Have AI guess\"\n# in an oauth scenario, you'd get your own id (instead of 'demo')\n# which you route your users through first\nllm = OpenAI(temperature=0)\nzapier = ZapierNLAWrapper()\n## To leverage OAuth you may pass the value `nla_oauth_access_token` to\n## the ZapierNLAWrapper. If you do this there is no need to initialize\n## the ZAPIER_NLA_API_KEY env variable\n# zapier = ZapierNLAWrapper(zapier_nla_oauth_access_token=\"TOKEN_HERE\")\ntoolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)\nagent = initialize_agent(\n toolkit.get_tools(),\n llm,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/zapier/tool.html"} {"id": "ddfc7a8c5e50-2", "text": "agent = initialize_agent(\n toolkit.get_tools(),\n llm,\n agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n verbose=True\n)\nagent.run((\"Summarize the last email I received regarding Silicon Valley Bank. \"\n \"Send the summary to the #test-zapier channel in slack.\"))\n```\n\"\"\"\nfrom typing import Any, Dict, Optional\nfrom pydantic import Field, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.zapier.prompt import BASE_ZAPIER_TOOL_PROMPT\nfrom langchain.utilities.zapier import ZapierNLAWrapper\n[docs]class ZapierNLARunAction(BaseTool):\n \"\"\"\n Args:\n action_id: a specific action ID (from list actions) of the action to execute\n (the set api_key must be associated with the action owner)\n instructions: a natural language instruction string for using the action\n (eg. \"get the latest email from Mike Knoop\" for \"Gmail: find email\" action)\n params: a dict, optional. 
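To make the `params` override concrete before the field descriptions continue, here is a hypothetical sketch (the action ID, description, and schema values below are placeholders, and `ZAPIER_NLA_API_KEY` must be set for the wrapper to validate):

```python
from langchain.tools.zapier.tool import ZapierNLARunAction
from langchain.utilities.zapier import ZapierNLAWrapper

action = ZapierNLARunAction(
    action_id="ACTION_ID_FROM_LIST_ACTIONS",  # placeholder, not a real ID
    zapier_description="Slack: Send Channel Message",
    params_schema={"Channel": "str", "Message_Text": "str"},
    api_wrapper=ZapierNLAWrapper(),  # reads ZAPIER_NLA_API_KEY
    # Pinned params take precedence over whatever the NLA API would
    # otherwise guess from the natural-language instructions.
    params={"Channel": "#test-zapier"},
)
print(action.run("say hello to the team"))
```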
Any params provided will *override* AI guesses\n from `instructions` (see \"understanding the AI guessing flow\" here:\n https://nla.zapier.com/docs/using-the-api#ai-guessing)\n \"\"\"\n api_wrapper: ZapierNLAWrapper = Field(default_factory=ZapierNLAWrapper)\n action_id: str\n params: Optional[dict] = None\n base_prompt: str = BASE_ZAPIER_TOOL_PROMPT\n zapier_description: str\n params_schema: Dict[str, str] = Field(default_factory=dict)\n name = \"\"\n description = \"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/zapier/tool.html"} {"id": "ddfc7a8c5e50-3", "text": "name = \"\"\n description = \"\"\n[docs] @root_validator\n def set_name_description(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n zapier_description = values[\"zapier_description\"]\n params_schema = values[\"params_schema\"]\n if \"instructions\" in params_schema:\n del params_schema[\"instructions\"]\n # Ensure base prompt (if overrided) contains necessary input fields\n necessary_fields = {\"{zapier_description}\", \"{params}\"}\n if not all(field in values[\"base_prompt\"] for field in necessary_fields):\n raise ValueError(\n \"Your custom base Zapier prompt must contain input fields for \"\n \"{zapier_description} and {params}.\"\n )\n values[\"name\"] = zapier_description\n values[\"description\"] = values[\"base_prompt\"].format(\n zapier_description=zapier_description,\n params=str(list(params_schema.keys())),\n )\n return values\n def _run(\n self, instructions: str, run_manager: Optional[CallbackManagerForToolRun] = None\n ) -> str:\n \"\"\"Use the Zapier NLA tool to return a list of all exposed user actions.\"\"\"\n return self.api_wrapper.run_as_str(self.action_id, instructions, self.params)\n async def _arun(\n self,\n instructions: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Zapier NLA tool to return a list of all exposed user actions.\"\"\"\n return await self.api_wrapper.arun_as_str(\n self.action_id,\n instructions,\n self.params,\n )\nZapierNLARunAction.__doc__ = (", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/zapier/tool.html"} {"id": "ddfc7a8c5e50-4", "text": ")\nZapierNLARunAction.__doc__ = (\n ZapierNLAWrapper.run.__doc__ + ZapierNLARunAction.__doc__ # type: ignore\n)\n# other useful actions\n[docs]class ZapierNLAListActions(BaseTool):\n \"\"\"\n Args:\n None\n \"\"\"\n name = \"ZapierNLA_list_actions\"\n description = BASE_ZAPIER_TOOL_PROMPT + (\n \"This tool returns a list of the user's exposed actions.\"\n )\n api_wrapper: ZapierNLAWrapper = Field(default_factory=ZapierNLAWrapper)\n def _run(\n self,\n _: str = \"\",\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Zapier NLA tool to return a list of all exposed user actions.\"\"\"\n return self.api_wrapper.list_as_str()\n async def _arun(\n self,\n _: str = \"\",\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Zapier NLA tool to return a list of all exposed user actions.\"\"\"\n return await self.api_wrapper.alist_as_str()\nZapierNLAListActions.__doc__ = (\n ZapierNLAWrapper.list.__doc__ + ZapierNLAListActions.__doc__ # type: ignore\n)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/zapier/tool.html"} {"id": "45e808e20a75-0", "text": "Source code for langchain.tools.requests.tool\n# flake8: noqa\n\"\"\"Tools for making requests to an API endpoint.\"\"\"\nimport json\nfrom typing import Any, Dict, Optional\nfrom 
pydantic import BaseModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.requests import TextRequestsWrapper\nfrom langchain.tools.base import BaseTool\ndef _parse_input(text: str) -> Dict[str, Any]:\n \"\"\"Parse the json string into a dict.\"\"\"\n return json.loads(text)\ndef _clean_url(url: str) -> str:\n \"\"\"Strips quotes from the url.\"\"\"\n return url.strip(\"\\\"'\")\n[docs]class BaseRequestsTool(BaseModel):\n \"\"\"Base class for requests tools.\"\"\"\n requests_wrapper: TextRequestsWrapper\n[docs]class RequestsGetTool(BaseRequestsTool, BaseTool):\n \"\"\"Tool for making a GET request to an API endpoint.\"\"\"\n name = \"requests_get\"\n description = \"A portal to the internet. Use this when you need to get specific content from a website. Input should be a url (i.e. https://www.google.com). The output will be the text response of the GET request.\"\n def _run(\n self, url: str, run_manager: Optional[CallbackManagerForToolRun] = None\n ) -> str:\n \"\"\"Run the tool.\"\"\"\n return self.requests_wrapper.get(_clean_url(url))\n async def _arun(\n self,\n url: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Run the tool asynchronously.\"\"\"\n return await self.requests_wrapper.aget(_clean_url(url))\n[docs]class RequestsPostTool(BaseRequestsTool, BaseTool):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/requests/tool.html"} {"id": "45e808e20a75-1", "text": "[docs]class RequestsPostTool(BaseRequestsTool, BaseTool):\n \"\"\"Tool for making a POST request to an API endpoint.\"\"\"\n name = \"requests_post\"\n description = \"\"\"Use this when you want to POST to a website.\n Input should be a json string with two keys: \"url\" and \"data\".\n The value of \"url\" should be a string, and the value of \"data\" should be a dictionary of \n key-value pairs you want to POST to the url.\n Be careful to always use double quotes for strings in the json string\n The output will be the text response of the POST request.\n \"\"\"\n def _run(\n self, text: str, run_manager: Optional[CallbackManagerForToolRun] = None\n ) -> str:\n \"\"\"Run the tool.\"\"\"\n try:\n data = _parse_input(text)\n return self.requests_wrapper.post(_clean_url(data[\"url\"]), data[\"data\"])\n except Exception as e:\n return repr(e)\n async def _arun(\n self,\n text: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Run the tool asynchronously.\"\"\"\n try:\n data = _parse_input(text)\n return await self.requests_wrapper.apost(\n _clean_url(data[\"url\"]), data[\"data\"]\n )\n except Exception as e:\n return repr(e)\n[docs]class RequestsPatchTool(BaseRequestsTool, BaseTool):\n \"\"\"Tool for making a PATCH request to an API endpoint.\"\"\"\n name = \"requests_patch\"\n description = \"\"\"Use this when you want to PATCH to a website.\n Input should be a json string with two keys: \"url\" and \"data\".", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/requests/tool.html"} {"id": "45e808e20a75-2", "text": "Input should be a json string with two keys: \"url\" and \"data\".\n The value of \"url\" should be a string, and the value of \"data\" should be a dictionary of \n key-value pairs you want to PATCH to the url.\n Be careful to always use double quotes for strings in the json string\n The output will be the text response of the PATCH request.\n \"\"\"\n def _run(\n self, text: str, run_manager: 
Optional[CallbackManagerForToolRun] = None\n ) -> str:\n \"\"\"Run the tool.\"\"\"\n try:\n data = _parse_input(text)\n return self.requests_wrapper.patch(_clean_url(data[\"url\"]), data[\"data\"])\n except Exception as e:\n return repr(e)\n async def _arun(\n self,\n text: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Run the tool asynchronously.\"\"\"\n try:\n data = _parse_input(text)\n return await self.requests_wrapper.apatch(\n _clean_url(data[\"url\"]), data[\"data\"]\n )\n except Exception as e:\n return repr(e)\n[docs]class RequestsPutTool(BaseRequestsTool, BaseTool):\n \"\"\"Tool for making a PUT request to an API endpoint.\"\"\"\n name = \"requests_put\"\n description = \"\"\"Use this when you want to PUT to a website.\n Input should be a json string with two keys: \"url\" and \"data\".\n The value of \"url\" should be a string, and the value of \"data\" should be a dictionary of \n key-value pairs you want to PUT to the url.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/requests/tool.html"} {"id": "45e808e20a75-3", "text": "key-value pairs you want to PUT to the url.\n Be careful to always use double quotes for strings in the json string.\n The output will be the text response of the PUT request.\n \"\"\"\n def _run(\n self, text: str, run_manager: Optional[CallbackManagerForToolRun] = None\n ) -> str:\n \"\"\"Run the tool.\"\"\"\n try:\n data = _parse_input(text)\n return self.requests_wrapper.put(_clean_url(data[\"url\"]), data[\"data\"])\n except Exception as e:\n return repr(e)\n async def _arun(\n self,\n text: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Run the tool asynchronously.\"\"\"\n try:\n data = _parse_input(text)\n return await self.requests_wrapper.aput(\n _clean_url(data[\"url\"]), data[\"data\"]\n )\n except Exception as e:\n return repr(e)\n[docs]class RequestsDeleteTool(BaseRequestsTool, BaseTool):\n \"\"\"Tool for making a DELETE request to an API endpoint.\"\"\"\n name = \"requests_delete\"\n description = \"A portal to the internet. Use this when you need to make a DELETE request to a URL. 
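The write-style tools here (POST, PATCH, PUT) all share the JSON-string input convention, while GET and DELETE take a bare URL. A sketch of both conventions (httpbin.org is just a convenient echo service, not anything this module depends on):

```python
import json

from langchain.requests import TextRequestsWrapper
from langchain.tools.requests.tool import RequestsGetTool, RequestsPostTool

wrapper = TextRequestsWrapper()
get_tool = RequestsGetTool(requests_wrapper=wrapper)
post_tool = RequestsPostTool(requests_wrapper=wrapper)

# GET-style tools take a bare URL; _clean_url() strips stray quotes.
print(get_tool.run("https://httpbin.org/get")[:120])

# POST/PATCH/PUT-style tools take a JSON string with "url" and "data" keys,
# which _parse_input() loads into a dict before the request is made.
payload = json.dumps({"url": "https://httpbin.org/post", "data": {"q": "langchain"}})
print(post_tool.run(payload)[:120])
```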
Input should be a specific url, and the output will be the text response of the DELETE request.\"\n def _run(\n self,\n url: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Run the tool.\"\"\"\n return self.requests_wrapper.delete(_clean_url(url))\n async def _arun(\n self,\n url: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/requests/tool.html"} {"id": "45e808e20a75-4", "text": "async def _arun(\n self,\n url: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Run the tool asynchronously.\"\"\"\n return await self.requests_wrapper.adelete(_clean_url(url))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/requests/tool.html"} {"id": "af547b3491bd-0", "text": "Source code for langchain.tools.sql_database.tool\n# flake8: noqa\n\"\"\"Tools for interacting with a SQL database.\"\"\"\nfrom typing import Any, Dict, Optional\nfrom pydantic import BaseModel, Extra, Field, root_validator\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.chains.llm import LLMChain\nfrom langchain.prompts import PromptTemplate\nfrom langchain.sql_database import SQLDatabase\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.sql_database.prompt import QUERY_CHECKER\n[docs]class BaseSQLDatabaseTool(BaseModel):\n \"\"\"Base tool for interacting with a SQL database.\"\"\"\n db: SQLDatabase = Field(exclude=True)\n # Override BaseTool.Config to appease mypy\n # See https://github.com/pydantic/pydantic/issues/4173\n[docs] class Config(BaseTool.Config):\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n extra = Extra.forbid\n[docs]class QuerySQLDataBaseTool(BaseSQLDatabaseTool, BaseTool):\n \"\"\"Tool for querying a SQL database.\"\"\"\n name = \"sql_db_query\"\n description = \"\"\"\n Input to this tool is a detailed and correct SQL query, output is a result from the database.\n If the query is not correct, an error message will be returned.\n If an error is returned, rewrite the query, check the query, and try again.\n \"\"\"\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Execute the query, return the results or an error message.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/sql_database/tool.html"} {"id": "af547b3491bd-1", "text": "\"\"\"Execute the query, return the results or an error message.\"\"\"\n return self.db.run_no_throw(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n raise NotImplementedError(\"QuerySqlDbTool does not support async\")\n[docs]class InfoSQLDatabaseTool(BaseSQLDatabaseTool, BaseTool):\n \"\"\"Tool for getting metadata about a SQL database.\"\"\"\n name = \"sql_db_schema\"\n description = \"\"\"\n Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables. 
\n Example Input: \"table1, table2, table3\"\n \"\"\"\n def _run(\n self,\n table_names: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Get the schema for tables in a comma-separated list.\"\"\"\n return self.db.get_table_info_no_throw(table_names.split(\", \"))\n async def _arun(\n self,\n table_names: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n raise NotImplementedError(\"SchemaSqlDbTool does not support async\")\n[docs]class ListSQLDatabaseTool(BaseSQLDatabaseTool, BaseTool):\n \"\"\"Tool for getting table names.\"\"\"\n name = \"sql_db_list_tables\"\n description = \"Input is an empty string, output is a comma-separated list of tables in the database.\"\n def _run(\n self,\n tool_input: str = \"\",\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/sql_database/tool.html"} {"id": "af547b3491bd-2", "text": ") -> str:\n \"\"\"Return a comma-separated list of usable table names.\"\"\"\n return \", \".join(self.db.get_usable_table_names())\n async def _arun(\n self,\n tool_input: str = \"\",\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n raise NotImplementedError(\"ListTablesSqlDbTool does not support async\")\n[docs]class QuerySQLCheckerTool(BaseSQLDatabaseTool, BaseTool):\n \"\"\"Use an LLM to check if a query is correct.\n Adapted from https://www.patterns.app/blog/2023/01/18/crunchbot-sql-analyst-gpt/\"\"\"\n template: str = QUERY_CHECKER\n llm: BaseLanguageModel\n llm_chain: LLMChain = Field(init=False)\n name = \"sql_db_query_checker\"\n description = \"\"\"\n Use this tool to double check if your query is correct before executing it.\n Always use this tool before executing a query with query_sql_db!\n \"\"\"\n[docs] @root_validator(pre=True)\n def initialize_llm_chain(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n if \"llm_chain\" not in values:\n values[\"llm_chain\"] = LLMChain(\n llm=values.get(\"llm\"),\n prompt=PromptTemplate(\n template=QUERY_CHECKER, input_variables=[\"query\", \"dialect\"]\n ),\n )\n if values[\"llm_chain\"].prompt.input_variables != [\"query\", \"dialect\"]:\n raise ValueError(\n \"LLM chain for QueryCheckerTool must have input variables ['query', 'dialect']\"\n )\n return values\n def _run(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/sql_database/tool.html"} {"id": "af547b3491bd-3", "text": ")\n return values\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the LLM to check the query.\"\"\"\n return self.llm_chain.predict(query=query, dialect=self.db.dialect)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n return await self.llm_chain.apredict(query=query, dialect=self.db.dialect)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/sql_database/tool.html"} {"id": "468e1d739536-0", "text": "Source code for langchain.tools.jira.tool\n\"\"\"\nThis tool allows agents to interact with the atlassian-python-api library\nand operate on a Jira instance. 
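The SQL database tools above are typically used together against one `SQLDatabase`. A minimal sketch (assuming a local SQLite file `Chinook.db`; the file name and table names are illustrative, and any SQLAlchemy-compatible URI works):

```python
from langchain.sql_database import SQLDatabase
from langchain.tools.sql_database.tool import (
    InfoSQLDatabaseTool,
    ListSQLDatabaseTool,
    QuerySQLDataBaseTool,
)

# Assumed: a local SQLite file on disk.
db = SQLDatabase.from_uri("sqlite:///Chinook.db")

list_tool = ListSQLDatabaseTool(db=db)
info_tool = InfoSQLDatabaseTool(db=db)
query_tool = QuerySQLDataBaseTool(db=db)

tables = list_tool.run("")          # comma-separated table names
first = tables.split(", ")[0]
print(info_tool.run(first))         # schema + sample rows for that table
print(query_tool.run(f"SELECT COUNT(*) FROM {first}"))  # result or error text
```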
For more information on the\natlassian-python-api library, see https://atlassian-python-api.readthedocs.io/jira.html\nTo use this tool, you must first set the following environment variables:\n JIRA_API_TOKEN\n JIRA_USERNAME\n JIRA_INSTANCE_URL\nBelow is a sample script that uses the Jira tool:\n```python\nfrom langchain.agents import AgentType\nfrom langchain.agents import initialize_agent\nfrom langchain.agents.agent_toolkits.jira.toolkit import JiraToolkit\nfrom langchain.llms import OpenAI\nfrom langchain.utilities.jira import JiraAPIWrapper\nllm = OpenAI(temperature=0)\njira = JiraAPIWrapper()\ntoolkit = JiraToolkit.from_jira_api_wrapper(jira)\nagent = initialize_agent(\n toolkit.get_tools(),\n llm,\n agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n verbose=True\n)\n```\n\"\"\"\nfrom typing import Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.jira import JiraAPIWrapper\n[docs]class JiraAction(BaseTool):\n api_wrapper: JiraAPIWrapper = Field(default_factory=JiraAPIWrapper)\n mode: str\n name = \"\"\n description = \"\"\n def _run(\n self,\n instructions: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Atlassian Jira API to run an operation.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/jira/tool.html"} {"id": "468e1d739536-1", "text": "\"\"\"Use the Atlassian Jira API to run an operation.\"\"\"\n return self.api_wrapper.run(self.mode, instructions)\n async def _arun(\n self,\n _: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Atlassian Jira API to run an operation.\"\"\"\n raise NotImplementedError(\"JiraAction does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/jira/tool.html"} {"id": "d2693c525901-0", "text": "Source code for langchain.tools.sleep.tool\n\"\"\"Tool for the agent to sleep.\"\"\"\nfrom asyncio import sleep as asleep\nfrom time import sleep\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\n[docs]class SleepInput(BaseModel):\n \"\"\"Input for SleepTool.\"\"\"\n sleep_time: int = Field(..., description=\"Time to sleep in seconds\")\n[docs]class SleepTool(BaseTool):\n \"\"\"Tool that adds the capability to sleep.\"\"\"\n name = \"sleep\"\n args_schema: Type[BaseModel] = SleepInput\n description = \"Make the agent sleep for a specified number of seconds.\"\n def _run(\n self,\n sleep_time: int,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Sleep tool.\"\"\"\n sleep(sleep_time)\n return f\"Agent slept for {sleep_time} seconds.\"\n async def _arun(\n self,\n sleep_time: int,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the sleep tool asynchronously.\"\"\"\n await asleep(sleep_time)\n return f\"Agent slept for {sleep_time} seconds.\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/sleep/tool.html"} {"id": "87295774c631-0", "text": "Source code for langchain.tools.office365.create_draft_message\nfrom typing import List, Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n 
CallbackManagerForToolRun,\n)\nfrom langchain.tools.office365.base import O365BaseTool\n[docs]class CreateDraftMessageSchema(BaseModel):\n body: str = Field(\n ...,\n description=\"The message body to include in the draft.\",\n )\n to: List[str] = Field(\n ...,\n description=\"The list of recipients.\",\n )\n subject: str = Field(\n ...,\n description=\"The subject of the message.\",\n )\n cc: Optional[List[str]] = Field(\n None,\n description=\"The list of CC recipients.\",\n )\n bcc: Optional[List[str]] = Field(\n None,\n description=\"The list of BCC recipients.\",\n )\n[docs]class O365CreateDraftMessage(O365BaseTool):\n name: str = \"create_email_draft\"\n description: str = (\n \"Use this tool to create a draft email with the provided message fields.\"\n )\n args_schema: Type[CreateDraftMessageSchema] = CreateDraftMessageSchema\n def _run(\n self,\n body: str,\n to: List[str],\n subject: str,\n cc: Optional[List[str]] = None,\n bcc: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n # Get mailbox object\n mailbox = self.account.mailbox()\n message = mailbox.new_message()\n # Assign message values\n message.body = body\n message.subject = subject", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/office365/create_draft_message.html"} {"id": "87295774c631-1", "text": "# Assign message values\n message.body = body\n message.subject = subject\n message.to.add(to)\n if cc is not None:\n message.cc.add(cc)\n if bcc is not None:\n message.bcc.add(bcc)\n message.save_draft()\n output = \"Draft created: \" + str(message)\n return output\n async def _arun(\n self,\n message: str,\n to: List[str],\n subject: str,\n cc: Optional[List[str]] = None,\n bcc: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n raise NotImplementedError(f\"The tool {self.name} does not support async yet.\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/office365/create_draft_message.html"} {"id": "eb27da868b2c-0", "text": "Source code for langchain.tools.office365.messages_search\n\"\"\"Util that searches email messages in Office 365.\nFree, but setup is required. See link below.\nhttps://learn.microsoft.com/en-us/graph/auth/\n\"\"\"\nfrom typing import Any, Dict, List, Optional, Type\nfrom pydantic import BaseModel, Extra, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.office365.base import O365BaseTool\nfrom langchain.tools.office365.utils import clean_body\n[docs]class SearchEmailsInput(BaseModel):\n \"\"\"Input for SearchEmails Tool.\"\"\"\n \"\"\"From https://learn.microsoft.com/en-us/graph/search-query-parameter\"\"\"\n folder: str = Field(\n default=None,\n description=(\n \" If the user wants to search in only one folder, the name of the folder. \"\n 'Default folders are \"inbox\", \"drafts\", \"sent items\", \"deleted items\", but '\n \"users can search custom folders as well.\"\n ),\n )\n query: str = Field(\n description=(\n \"The Microsoft Graph v1.0 $search query. 
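A sketch of the draft tool defined above in use (assuming the CLIENT_ID / CLIENT_SECRET environment variables are set so the default account factory can authenticate; the address is a placeholder). The search tool's query grammar continues below:

```python
from langchain.tools.office365.create_draft_message import O365CreateDraftMessage

# Assumed: CLIENT_ID / CLIENT_SECRET are set, so authenticate() in
# langchain.tools.office365.utils can log in to Microsoft Graph.
tool = O365CreateDraftMessage()

# Structured tools take a dict that is validated against args_schema.
print(
    tool.run(
        {
            "body": "Hello from LangChain",
            "to": ["someone@example.com"],  # placeholder address
            "subject": "Draft demo",
        }
    )
)
```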
Example filters include \"\n \"from:sender, to:recipient, subject:subject, \"\n \"recipients:list_of_recipients, body:excitement, importance:high, \"\n \"received>2022-12-01, received<2021-12-01, sent>2022-12-01, \"\n \"sent<2021-12-01, hasAttachments:true attachment:api-catalog.md, \"\n \"cc:samanthab@contoso.com, bcc:samanthab@contoso.com, body:excitement date \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/office365/messages_search.html"} {"id": "eb27da868b2c-1", "text": "\"range example: received:2023-06-08..2023-06-09 matching example: \"\n \"from:amy OR from:david.\"\n )\n )\n max_results: int = Field(\n default=10,\n description=\"The maximum number of results to return.\",\n )\n truncate: bool = Field(\n default=True,\n description=(\n \"Whether the email body is truncated to meet token number limits. Set to \"\n \"False for searches that will retrieve very few results, otherwise, set to \"\n \"True.\"\n ),\n )\n[docs]class O365SearchEmails(O365BaseTool):\n \"\"\"Class for searching email messages in Office 365\n Free, but setup is required\n \"\"\"\n name: str = \"messages_search\"\n args_schema: Type[BaseModel] = SearchEmailsInput\n description: str = (\n \"Use this tool to search for email messages.\"\n \" The input must be a valid Microsoft Graph v1.0 $search query.\"\n \" The output is a JSON list of the requested resource.\"\n )\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n def _run(\n self,\n query: str,\n folder: str = \"\",\n max_results: int = 10,\n truncate: bool = True,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> List[Dict[str, Any]]:\n # Get mailbox object\n mailbox = self.account.mailbox()\n # Pull the folder if the user wants to search in a folder\n if folder != \"\":", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/office365/messages_search.html"} {"id": "eb27da868b2c-2", "text": "if folder != \"\":\n mailbox = mailbox.get_folder(folder_name=folder)\n # Retrieve messages based on query\n query = mailbox.q().search(query)\n messages = mailbox.get_messages(limit=max_results, query=query)\n # Generate output dict\n output_messages = []\n for message in messages:\n output_message = {}\n output_message[\"from\"] = message.sender\n if truncate:\n output_message[\"body\"] = message.body_preview\n else:\n output_message[\"body\"] = clean_body(message.body)\n output_message[\"subject\"] = message.subject\n output_message[\"date\"] = message.modified.strftime(\"%Y-%m-%dT%H:%M:%S%z\")\n output_message[\"to\"] = []\n for recipient in message.to._recipients:\n output_message[\"to\"].append(str(recipient))\n output_message[\"cc\"] = []\n for recipient in message.cc._recipients:\n output_message[\"cc\"].append(str(recipient))\n output_message[\"bcc\"] = []\n for recipient in message.bcc._recipients:\n output_message[\"bcc\"].append(str(recipient))\n output_messages.append(output_message)\n return output_messages\n async def _arun(\n self,\n query: str,\n max_results: int = 10,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> List[Dict[str, Any]]:\n \"\"\"Run the tool.\"\"\"\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/office365/messages_search.html"} {"id": "ab43493042ae-0", "text": "Source code for langchain.tools.office365.events_search\n\"\"\"Util that searches calendar events in Office 365.\nFree, but setup is required. 
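As a sketch of the email-search tool just defined (the query string is lifted from the filter examples above; the same CLIENT_ID / CLIENT_SECRET auth assumptions apply as elsewhere in this package):

```python
from langchain.tools.office365.messages_search import O365SearchEmails

tool = O365SearchEmails()  # default account comes from authenticate()

# Returns a list of dicts with from/body/subject/date/to/cc/bcc keys.
results = tool.run(
    {
        "query": "from:amy OR from:david",  # example query from the docstring
        "folder": "inbox",
        "max_results": 5,
        "truncate": True,  # body_preview instead of the full cleaned body
    }
)
for message in results:
    print(message["date"], message["subject"])
```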
See link below.\nhttps://learn.microsoft.com/en-us/graph/auth/\n\"\"\"\nfrom datetime import datetime as dt\nfrom typing import Any, Dict, List, Optional, Type\nfrom pydantic import BaseModel, Extra, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.office365.base import O365BaseTool\nfrom langchain.tools.office365.utils import clean_body\n[docs]class SearchEventsInput(BaseModel):\n \"\"\"Input for the SearchEvents tool.\"\"\"\n \"\"\"From https://learn.microsoft.com/en-us/graph/search-query-parameter\"\"\"\n start_datetime: str = Field(\n description=(\n \" The start datetime for the search query in the following format: \"\n ' YYYY-MM-DDTHH:MM:SS\u00b1hh:mm, where \"T\" separates the date and time '\n \" components, and the time zone offset is specified as \u00b1hh:mm. \"\n ' For example: \"2023-06-09T10:30:00+03:00\" represents June 9th, '\n \" 2023, at 10:30 AM in a time zone with a positive offset of 3 \"\n \" hours from Coordinated Universal Time (UTC).\"\n )\n )\n end_datetime: str = Field(\n description=(\n \" The end datetime for the search query in the following format: \"\n ' YYYY-MM-DDTHH:MM:SS\u00b1hh:mm, where \"T\" separates the date and time '\n \" components, and the time zone offset is specified as \u00b1hh:mm. \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/office365/events_search.html"} {"id": "ab43493042ae-1", "text": "\" components, and the time zone offset is specified as \u00b1hh:mm. \"\n ' For example: \"2023-06-09T10:30:00+03:00\" represents June 9th, '\n \" 2023, at 10:30 AM in a time zone with a positive offset of 3 \"\n \" hours from Coordinated Universal Time (UTC).\"\n )\n )\n max_results: int = Field(\n default=10,\n description=\"The maximum number of results to return.\",\n )\n truncate: bool = Field(\n default=True,\n description=(\n \"Whether the event's body is truncated to meet token number limits. Set to \"\n \"False for searches that will retrieve very few results, otherwise, set to \"\n \"True.\"\n ),\n )\n[docs]class O365SearchEvents(O365BaseTool):\n \"\"\"Class for searching calendar events in Office 365\n Free, but setup is required\n \"\"\"\n name: str = \"events_search\"\n args_schema: Type[BaseModel] = SearchEventsInput\n description: str = (\n \" Use this tool to search for the user's calendar events.\"\n \" The input must be the start and end datetimes for the search query.\"\n \" The output is a JSON list of all the events in the user's calendar\"\n \" between the start and end times. You can assume that the user \"\n \"cannot schedule any meeting over existing meetings, and that the user \"\n \"is busy during meetings. Any times without events are free for the user. 
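A sketch of calling this events tool with the offset-aware datetime format its schema mandates (same auth assumptions; the dates are illustrative):

```python
from langchain.tools.office365.events_search import O365SearchEvents

tool = O365SearchEvents()  # same CLIENT_ID / CLIENT_SECRET assumptions

# Both datetimes must match "%Y-%m-%dT%H:%M:%S%z" (offset required).
events = tool.run(
    {
        "start_datetime": "2023-06-09T00:00:00+03:00",
        "end_datetime": "2023-06-10T00:00:00+03:00",
        "max_results": 10,
        "truncate": True,
    }
)
for event in events:
    print(event["start_datetime"], "-", event["end_datetime"], event["subject"])
```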
\"\n )\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n def _run(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/office365/events_search.html"} {"id": "ab43493042ae-2", "text": "extra = Extra.forbid\n def _run(\n self,\n start_datetime: str,\n end_datetime: str,\n max_results: int = 10,\n truncate: bool = True,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> List[Dict[str, Any]]:\n TRUNCATE_LIMIT = 150\n # Get calendar object\n schedule = self.account.schedule()\n calendar = schedule.get_default_calendar()\n # Process the date range parameters\n start_datetime_query = dt.strptime(start_datetime, \"%Y-%m-%dT%H:%M:%S%z\")\n end_datetime_query = dt.strptime(end_datetime, \"%Y-%m-%dT%H:%M:%S%z\")\n # Run the query\n q = calendar.new_query(\"start\").greater_equal(start_datetime_query)\n q.chain(\"and\").on_attribute(\"end\").less_equal(end_datetime_query)\n events = calendar.get_events(query=q, include_recurring=True, limit=max_results)\n # Generate output dict\n output_events = []\n for event in events:\n output_event = {}\n output_event[\"organizer\"] = event.organizer\n output_event[\"subject\"] = event.subject\n if truncate:\n output_event[\"body\"] = clean_body(event.body)[:TRUNCATE_LIMIT]\n else:\n output_event[\"body\"] = clean_body(event.body)\n # Get the time zone from the search parameters\n time_zone = start_datetime_query.tzinfo\n # Assign the datetimes in the search time zone\n output_event[\"start_datetime\"] = event.start.astimezone(time_zone).strftime(\n \"%Y-%m-%dT%H:%M:%S%z\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/office365/events_search.html"} {"id": "ab43493042ae-3", "text": "\"%Y-%m-%dT%H:%M:%S%z\"\n )\n output_event[\"end_datetime\"] = event.end.astimezone(time_zone).strftime(\n \"%Y-%m-%dT%H:%M:%S%z\"\n )\n output_event[\"modified_date\"] = event.modified.astimezone(\n time_zone\n ).strftime(\"%Y-%m-%dT%H:%M:%S%z\")\n output_events.append(output_event)\n return output_events\n async def _arun(\n self,\n query: str,\n max_results: int = 10,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> List[Dict[str, Any]]:\n \"\"\"Run the tool.\"\"\"\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/office365/events_search.html"} {"id": "4ae64711c2d1-0", "text": "Source code for langchain.tools.office365.base\n\"\"\"Base class for Gmail tools.\"\"\"\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING\nfrom pydantic import Field\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.office365.utils import authenticate\nif TYPE_CHECKING:\n from O365 import Account\n[docs]class O365BaseTool(BaseTool):\n account: Account = Field(default_factory=authenticate)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/office365/base.html"} {"id": "164d904450a6-0", "text": "Source code for langchain.tools.office365.utils\n\"\"\"O365 tool utils.\"\"\"\nfrom __future__ import annotations\nimport logging\nimport os\nfrom typing import TYPE_CHECKING\nif TYPE_CHECKING:\n from O365 import Account\nlogger = logging.getLogger(__name__)\n[docs]def clean_body(body: str) -> str:\n \"\"\"Clean body of a message or event.\"\"\"\n try:\n from bs4 import BeautifulSoup\n try:\n # Remove HTML\n soup = BeautifulSoup(str(body), \"html.parser\")\n body = soup.get_text()\n # Remove return characters\n body = 
\"\".join(body.splitlines())\n # Remove extra spaces\n body = \" \".join(body.split())\n return str(body)\n except Exception:\n return str(body)\n except ImportError:\n return str(body)\n[docs]def authenticate() -> Account:\n \"\"\"Authenticate using the Microsoft Graph API.\"\"\"\n try:\n from O365 import Account\n except ImportError as e:\n raise ImportError(\n \"Cannot import O365. Please install the package with `pip install O365`.\"\n ) from e\n if \"CLIENT_ID\" in os.environ and \"CLIENT_SECRET\" in os.environ:\n client_id = os.environ[\"CLIENT_ID\"]\n client_secret = os.environ[\"CLIENT_SECRET\"]\n credentials = (client_id, client_secret)\n else:\n logger.error(\n \"Error: The CLIENT_ID and CLIENT_SECRET environment variables have not \"\n \"been set. Visit the following link on how to acquire these authorization \"\n \"tokens: https://learn.microsoft.com/en-us/graph/auth/\"\n )\n return None\n account = Account(credentials)\n if account.is_authenticated is False:\n if not account.authenticate(\n scopes=[", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/office365/utils.html"} {"id": "164d904450a6-1", "text": "if account.is_authenticated is False:\n if not account.authenticate(\n scopes=[\n \"https://graph.microsoft.com/Mail.ReadWrite\",\n \"https://graph.microsoft.com/Mail.Send\",\n \"https://graph.microsoft.com/Calendars.ReadWrite\",\n \"https://graph.microsoft.com/MailboxSettings.ReadWrite\",\n ]\n ):\n print(\"Error: Could not authenticate\")\n return None\n else:\n return account\n else:\n return account", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/office365/utils.html"} {"id": "dce37db5c9c8-0", "text": "Source code for langchain.tools.office365.send_event\n\"\"\"Util that sends calendar events in Office 365.\nFree, but setup is required. See link below.\nhttps://learn.microsoft.com/en-us/graph/auth/\n\"\"\"\nfrom datetime import datetime as dt\nfrom typing import List, Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.office365.base import O365BaseTool\n[docs]class SendEventSchema(BaseModel):\n \"\"\"Input for the SendEvent Tool.\"\"\"\n body: str = Field(\n ...,\n description=\"The message body to include in the event.\",\n )\n attendees: List[str] = Field(\n ...,\n description=\"The list of attendees for the event.\",\n )\n subject: str = Field(\n ...,\n description=\"The subject of the event.\",\n )\n start_datetime: str = Field(\n description=\" The start datetime for the event in the following format: \"\n ' YYYY-MM-DDTHH:MM:SS\u00b1hh:mm, where \"T\" separates the date and time '\n \" components, and the time zone offset is specified as \u00b1hh:mm. \"\n ' For example: \"2023-06-09T10:30:00+03:00\" represents June 9th, '\n \" 2023, at 10:30 AM in a time zone with a positive offset of 3 \"\n \" hours from Coordinated Universal Time (UTC).\",\n )\n end_datetime: str = Field(\n description=\" The end datetime for the event in the following format: \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/office365/send_event.html"} {"id": "dce37db5c9c8-1", "text": "description=\" The end datetime for the event in the following format: \"\n ' YYYY-MM-DDTHH:MM:SS\u00b1hh:mm, where \"T\" separates the date and time '\n \" components, and the time zone offset is specified as \u00b1hh:mm. 
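Because `O365BaseTool` fills its `account` field with `default_factory=authenticate`, the credentials above must be in place before any Office 365 tool is constructed. A minimal sketch, assuming you have already registered a Graph application (the values shown are placeholders, not real secrets):

```python
import os

from langchain.tools.office365.utils import authenticate

# authenticate() looks for exactly these two environment variables
os.environ["CLIENT_ID"] = "your-app-client-id"          # placeholder
os.environ["CLIENT_SECRET"] = "your-app-client-secret"  # placeholder

account = authenticate()  # O365 may start an interactive consent flow on first run
if account is None:
    raise RuntimeError("Could not authenticate; check the Graph app registration.")
```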
\"\n ' For example: \"2023-06-09T10:30:00+03:00\" represents June 9th, '\n \" 2023, at 10:30 AM in a time zone with a positive offset of 3 \"\n \" hours from Coordinated Universal Time (UTC).\",\n )\n[docs]class O365SendEvent(O365BaseTool):\n name: str = \"send_event\"\n description: str = (\n \"Use this tool to create and send an event with the provided event fields.\"\n )\n args_schema: Type[SendEventSchema] = SendEventSchema\n def _run(\n self,\n body: str,\n attendees: List[str],\n subject: str,\n start_datetime: str,\n end_datetime: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n # Get calendar object\n schedule = self.account.schedule()\n calendar = schedule.get_default_calendar()\n event = calendar.new_event()\n event.body = body\n event.subject = subject\n event.start = dt.strptime(start_datetime, \"%Y-%m-%dT%H:%M:%S%z\")\n event.end = dt.strptime(end_datetime, \"%Y-%m-%dT%H:%M:%S%z\")\n for attendee in attendees:\n event.attendees.add(attendee)\n # TO-DO: Look into PytzUsageWarning\n event.save()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/office365/send_event.html"} {"id": "dce37db5c9c8-2", "text": "# TO-DO: Look into PytzUsageWarning\n event.save()\n output = \"Event sent: \" + str(event)\n return output\n async def _arun(\n self,\n message: str,\n to: List[str],\n subject: str,\n cc: Optional[List[str]] = None,\n bcc: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n raise NotImplementedError(f\"The tool {self.name} does not support async yet.\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/office365/send_event.html"} {"id": "0ba2e122165e-0", "text": "Source code for langchain.tools.office365.send_message\nfrom typing import List, Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.office365.base import O365BaseTool\n[docs]class SendMessageSchema(BaseModel):\n body: str = Field(\n ...,\n description=\"The message body to be sent.\",\n )\n to: List[str] = Field(\n ...,\n description=\"The list of recipients.\",\n )\n subject: str = Field(\n ...,\n description=\"The subject of the message.\",\n )\n cc: Optional[List[str]] = Field(\n None,\n description=\"The list of CC recipients.\",\n )\n bcc: Optional[List[str]] = Field(\n None,\n description=\"The list of BCC recipients.\",\n )\n[docs]class O365SendMessage(O365BaseTool):\n name: str = \"send_email\"\n description: str = (\n \"Use this tool to send an email with the provided message fields.\"\n )\n args_schema: Type[SendMessageSchema] = SendMessageSchema\n def _run(\n self,\n body: str,\n to: List[str],\n subject: str,\n cc: Optional[List[str]] = None,\n bcc: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n # Get mailbox object\n mailbox = self.account.mailbox()\n message = mailbox.new_message()\n # Assign message values\n message.body = body\n message.subject = subject\n message.to.add(to)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/office365/send_message.html"} {"id": "0ba2e122165e-1", "text": "message.body = body\n message.subject = subject\n message.to.add(to)\n if cc is not None:\n message.cc.add(cc)\n if bcc is not None:\n message.bcc.add(bcc)\n message.send()\n output = \"Message sent: \" + str(message)\n return output\n async def _arun(\n self,\n 
message: str,\n to: List[str],\n subject: str,\n cc: Optional[List[str]] = None,\n bcc: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n raise NotImplementedError(f\"The tool {self.name} does not support async yet.\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/office365/send_message.html"} {"id": "8c9df3a4cb14-0", "text": "Source code for langchain.tools.steamship_image_generation.tool\n\"\"\"This tool allows agents to generate images using Steamship.\nSteamship offers access to different third party image generation APIs\nusing a single API key.\nToday the following models are supported:\n- Dall-E\n- Stable Diffusion\nTo use this tool, you must first set the following environment variable:\n STEAMSHIP_API_KEY\n\"\"\"\nfrom __future__ import annotations\nfrom enum import Enum\nfrom typing import TYPE_CHECKING, Dict, Optional\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools import BaseTool\nfrom langchain.tools.steamship_image_generation.utils import make_image_public\nfrom langchain.utils import get_from_dict_or_env\nif TYPE_CHECKING:\n from steamship import Steamship\n[docs]class ModelName(str, Enum):\n \"\"\"Supported Image Models for generation.\"\"\"\n DALL_E = \"dall-e\"\n STABLE_DIFFUSION = \"stable-diffusion\"\nSUPPORTED_IMAGE_SIZES = {\n ModelName.DALL_E: (\"256x256\", \"512x512\", \"1024x1024\"),\n ModelName.STABLE_DIFFUSION: (\"512x512\", \"768x768\"),\n}\n[docs]class SteamshipImageGenerationTool(BaseTool):\n \"\"\"Tool used to generate images from a text-prompt.\"\"\"\n model_name: ModelName\n size: Optional[str] = \"512x512\"\n steamship: Steamship\n return_urls: Optional[bool] = False\n name = \"GenerateImage\"\n description = (\n \"Useful for when you need to generate an image.\"\n \"Input: A detailed text-2-image prompt describing an image\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/steamship_image_generation/tool.html"} {"id": "8c9df3a4cb14-1", "text": "\"Input: A detailed text-2-image prompt describing an image\"\n \"Output: the UUID of a generated image\"\n )\n[docs] @root_validator(pre=True)\n def validate_size(cls, values: Dict) -> Dict:\n if \"size\" in values:\n size = values[\"size\"]\n model_name = values[\"model_name\"]\n if size not in SUPPORTED_IMAGE_SIZES[model_name]:\n raise RuntimeError(f\"size {size} is not supported by {model_name}\")\n return values\n[docs] @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the api key and python package exist in the environment.\"\"\"\n steamship_api_key = get_from_dict_or_env(\n values, \"steamship_api_key\", \"STEAMSHIP_API_KEY\"\n )\n try:\n from steamship import Steamship\n except ImportError:\n raise ImportError(\n \"steamship is not installed. 
\"\n \"Please install it with `pip install steamship`\"\n )\n steamship = Steamship(\n api_key=steamship_api_key,\n )\n values[\"steamship\"] = steamship\n if \"steamship_api_key\" in values:\n del values[\"steamship_api_key\"]\n return values\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n image_generator = self.steamship.use_plugin(\n plugin_handle=self.model_name.value, config={\"n\": 1, \"size\": self.size}\n )\n task = image_generator.generate(text=query, append_output_to_file=True)\n task.wait()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/steamship_image_generation/tool.html"} {"id": "8c9df3a4cb14-2", "text": "task.wait()\n blocks = task.output.blocks\n if len(blocks) > 0:\n if self.return_urls:\n return make_image_public(self.steamship, blocks[0])\n else:\n return blocks[0].id\n raise RuntimeError(f\"[{self.name}] Tool unable to generate image!\")\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"GenerateImageTool does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/steamship_image_generation/tool.html"} {"id": "e77d9e85ce66-0", "text": "Source code for langchain.tools.steamship_image_generation.utils\n\"\"\"Steamship Utils.\"\"\"\nfrom __future__ import annotations\nimport uuid\nfrom typing import TYPE_CHECKING\nif TYPE_CHECKING:\n from steamship import Block, Steamship\n[docs]def make_image_public(client: Steamship, block: Block) -> str:\n \"\"\"Upload a block to a signed URL and return the public URL.\"\"\"\n try:\n from steamship.data.workspace import SignedUrl\n from steamship.utils.signed_urls import upload_to_signed_url\n except ImportError:\n raise ValueError(\n \"The make_image_public function requires the steamship\"\n \" package to be installed. 
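Taken together, the two `pre=True` validators above mean the tool accepts either a `steamship_api_key` keyword or a `STEAMSHIP_API_KEY` environment variable, and that `size` is checked against `SUPPORTED_IMAGE_SIZES` before anything runs. A hypothetical instantiation (assumes the `steamship` package is installed and the key is valid):

```python
from langchain.tools.steamship_image_generation.tool import (
    ModelName,
    SteamshipImageGenerationTool,
)

# "1024x1024" would raise here: it is not in SUPPORTED_IMAGE_SIZES for this model
tool = SteamshipImageGenerationTool(
    model_name=ModelName.STABLE_DIFFUSION,
    size="768x768",
    return_urls=True,                     # return a public URL instead of a block UUID
    steamship_api_key="placeholder-key",  # or set STEAMSHIP_API_KEY in the environment
)
result = tool.run("A watercolor painting of a lighthouse at dusk")
```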
Please install steamship\"\n \" with `pip install --upgrade steamship`\"\n )\n filepath = str(uuid.uuid4())\n signed_url = (\n client.get_workspace()\n .create_signed_url(\n SignedUrl.Request(\n bucket=SignedUrl.Bucket.PLUGIN_DATA,\n filepath=filepath,\n operation=SignedUrl.Operation.WRITE,\n )\n )\n .signed_url\n )\n read_signed_url = (\n client.get_workspace()\n .create_signed_url(\n SignedUrl.Request(\n bucket=SignedUrl.Bucket.PLUGIN_DATA,\n filepath=filepath,\n operation=SignedUrl.Operation.READ,\n )\n )\n .signed_url\n )\n upload_to_signed_url(signed_url, block.raw())\n return read_signed_url", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/steamship_image_generation/utils.html"} {"id": "de462fd751a8-0", "text": "Source code for langchain.tools.dataforseo_api_search.tool\n\"\"\"Tool for the DataForSeo SERP API.\"\"\"\nfrom typing import Optional\nfrom pydantic.fields import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.dataforseo_api_search import DataForSeoAPIWrapper\n[docs]class DataForSeoAPISearchRun(BaseTool):\n \"\"\"Tool that adds the capability to query the DataForSeo Google search API.\"\"\"\n name = \"dataforseo_api_search\"\n description = (\n \"A robust Google Search API provided by DataForSeo.\"\n \"This tool is handy when you need information about trending topics \"\n \"or current events.\"\n )\n api_wrapper: DataForSeoAPIWrapper\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return str(self.api_wrapper.run(query))\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n return (await self.api_wrapper.arun(query)).__str__()\n[docs]class DataForSeoAPISearchResults(BaseTool):\n \"\"\"Tool that has capability to query the DataForSeo Google Search API\n and get back json.\"\"\"\n name = \"DataForSeo Results JSON\"\n description = (\n \"A comprehensive Google Search API provided by DataForSeo.\"\n \"This tool is useful for obtaining real-time data on current events \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/dataforseo_api_search/tool.html"} {"id": "de462fd751a8-1", "text": "\"This tool is useful for obtaining real-time data on current events \"\n \"or popular searches.\"\n \"The input should be a search query and the output is a JSON object \"\n \"of the query results.\"\n )\n api_wrapper: DataForSeoAPIWrapper = Field(default_factory=DataForSeoAPIWrapper)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return str(self.api_wrapper.results(query))\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n return (await self.api_wrapper.aresults(query)).__str__()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/dataforseo_api_search/tool.html"} {"id": "0c792c039cd2-0", "text": "Source code for langchain.tools.human.tool\n\"\"\"Tool for asking human input.\"\"\"\nfrom typing import Callable, Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base 
import BaseTool\ndef _print_func(text: str) -> None:\n print(\"\\n\")\n print(text)\n[docs]class HumanInputRun(BaseTool):\n \"\"\"Tool that adds the capability to ask user for input.\"\"\"\n name = \"human\"\n description = (\n \"You can ask a human for guidance when you think you \"\n \"got stuck or you are not sure what to do next. \"\n \"The input should be a question for the human.\"\n )\n prompt_func: Callable[[str], None] = Field(default_factory=lambda: _print_func)\n input_func: Callable = Field(default_factory=lambda: input)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Human input tool.\"\"\"\n self.prompt_func(query)\n return self.input_func()\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Human tool asynchronously.\"\"\"\n raise NotImplementedError(\"Human tool does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/human/tool.html"} {"id": "a8f4f374bac8-0", "text": "Source code for langchain.tools.scenexplain.tool\n\"\"\"Tool for the SceneXplain API.\"\"\"\nfrom typing import Optional\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.scenexplain import SceneXplainAPIWrapper\n[docs]class SceneXplainInput(BaseModel):\n \"\"\"Input for SceneXplain.\"\"\"\n query: str = Field(..., description=\"The link to the image to explain\")\n[docs]class SceneXplainTool(BaseTool):\n \"\"\"Tool that adds the capability to explain images.\"\"\"\n name = \"image_explainer\"\n description = (\n \"An Image Captioning Tool: Use this tool to generate a detailed caption \"\n \"for an image. The input can be an image file of any format, and \"\n \"the output will be a text description that covers every detail of the image.\"\n )\n api_wrapper: SceneXplainAPIWrapper = Field(default_factory=SceneXplainAPIWrapper)\n def _run(\n self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self, query: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"SceneXplainTool does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/scenexplain/tool.html"} {"id": "8bb124c04f38-0", "text": "Source code for langchain.tools.brave_search.tool\nfrom __future__ import annotations\nfrom typing import Any, Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.brave_search import BraveSearchWrapper\n[docs]class BraveSearch(BaseTool):\n name = \"brave_search\"\n description = (\n \"a search engine. 
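Since `prompt_func` and `input_func` are ordinary pydantic fields, `HumanInputRun` can be rewired for tests or non-console frontends. A small sketch with a canned answer:

```python
from langchain.tools.human.tool import HumanInputRun

# Replace the console I/O with a test double that always answers the same way
tool = HumanInputRun(
    prompt_func=lambda text: print(f"[agent asks] {text}"),
    input_func=lambda: "Try rephrasing the query.",
)
print(tool.run("I'm stuck. What should I do next?"))  # Try rephrasing the query.
```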
\"\n \"useful for when you need to answer questions about current events.\"\n \" input should be a search query.\"\n )\n search_wrapper: BraveSearchWrapper\n[docs] @classmethod\n def from_api_key(\n cls, api_key: str, search_kwargs: Optional[dict] = None, **kwargs: Any\n ) -> BraveSearch:\n wrapper = BraveSearchWrapper(api_key=api_key, search_kwargs=search_kwargs or {})\n return cls(search_wrapper=wrapper, **kwargs)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return self.search_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"BraveSearch does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/brave_search/tool.html"} {"id": "08b2c3cedba3-0", "text": "Source code for langchain.tools.youtube.search\n\"\"\"\nAdapted from https://github.com/venuv/langchain_yt_tools\nCustomYTSearchTool searches YouTube videos related to a person\nand returns a specified number of video URLs.\nInput to this tool should be a comma separated list,\n - the first part contains a person name\n - and the second(optional) a number that is the\n maximum number of video results to return\n \"\"\"\nimport json\nfrom typing import Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools import BaseTool\n[docs]class YouTubeSearchTool(BaseTool):\n name = \"youtube_search\"\n description = (\n \"search for youtube videos associated with a person. \"\n \"the input to this tool should be a comma separated list, \"\n \"the first part contains a person name and the second a \"\n \"number that is the maximum number of video results \"\n \"to return aka num_results. 
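`from_api_key` above is just a convenience around constructing the wrapper yourself. A sketch (the key is a placeholder, and `count` is a hypothetical pass-through parameter, forwarded verbatim in `search_kwargs`):

```python
from langchain.tools.brave_search.tool import BraveSearch

tool = BraveSearch.from_api_key(
    api_key="BSA-placeholder-key",
    search_kwargs={"count": 3},  # handed to BraveSearchWrapper as-is
)
print(tool.run("current playwright release"))
```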
the second part is optional\"\n )\n def _search(self, person: str, num_results: int) -> str:\n from youtube_search import YoutubeSearch\n results = YoutubeSearch(person, num_results).to_json()\n data = json.loads(results)\n url_suffix_list = [video[\"url_suffix\"] for video in data[\"videos\"]]\n return str(url_suffix_list)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n values = query.split(\",\")\n person = values[0]\n if len(values) > 1:\n num_results = int(values[1])\n else:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/youtube/search.html"} {"id": "08b2c3cedba3-1", "text": "num_results = int(values[1])\n else:\n num_results = 2\n return self._search(person, num_results)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"YouTubeSearchTool does not yet support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/youtube/search.html"} {"id": "028e16b53092-0", "text": "Source code for langchain.tools.playwright.navigate\nfrom __future__ import annotations\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.playwright.base import BaseBrowserTool\nfrom langchain.tools.playwright.utils import (\n aget_current_page,\n get_current_page,\n)\n[docs]class NavigateToolInput(BaseModel):\n \"\"\"Input for NavigateToolInput.\"\"\"\n url: str = Field(..., description=\"url to navigate to\")\n[docs]class NavigateTool(BaseBrowserTool):\n name: str = \"navigate_browser\"\n description: str = \"Navigate a browser to the specified URL\"\n args_schema: Type[BaseModel] = NavigateToolInput\n def _run(\n self,\n url: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.sync_browser is None:\n raise ValueError(f\"Synchronous browser not provided to {self.name}\")\n page = get_current_page(self.sync_browser)\n response = page.goto(url)\n status = response.status if response else \"unknown\"\n return f\"Navigating to {url} returned status code {status}\"\n async def _arun(\n self,\n url: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.async_browser is None:\n raise ValueError(f\"Asynchronous browser not provided to {self.name}\")\n page = await aget_current_page(self.async_browser)\n response = await page.goto(url)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/navigate.html"} {"id": "028e16b53092-1", "text": "response = await page.goto(url)\n status = response.status if response else \"unknown\"\n return f\"Navigating to {url} returned status code {status}\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/navigate.html"} {"id": "91e396051b5e-0", "text": "Source code for langchain.tools.playwright.extract_hyperlinks\nfrom __future__ import annotations\nimport json\nfrom typing import TYPE_CHECKING, Any, Optional, Type\nfrom pydantic import BaseModel, Field, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.playwright.base import BaseBrowserTool\nfrom 
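The comma-separated contract described above is parsed in `_run` by a plain `split(",")`, with the result count defaulting to 2 when omitted. A sketch, assuming the `youtube_search` package is installed:

```python
from langchain.tools.youtube.search import YouTubeSearchTool

tool = YouTubeSearchTool()

# Input is "person,num_results"; the second part is optional
print(tool.run("Grace Hopper,3"))  # str of a list of /watch?v=... URL suffixes
print(tool.run("Alan Turing"))     # same, limited to the default of 2 results
```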
langchain.tools.playwright.utils import aget_current_page, get_current_page\nif TYPE_CHECKING:\n pass\n[docs]class ExtractHyperlinksToolInput(BaseModel):\n \"\"\"Input for ExtractHyperlinksTool.\"\"\"\n absolute_urls: bool = Field(\n default=False,\n description=\"Return absolute URLs instead of relative URLs\",\n )\n[docs]class ExtractHyperlinksTool(BaseBrowserTool):\n \"\"\"Extract all hyperlinks on the page.\"\"\"\n name: str = \"extract_hyperlinks\"\n description: str = \"Extract all hyperlinks on the current webpage\"\n args_schema: Type[BaseModel] = ExtractHyperlinksToolInput\n[docs] @root_validator\n def check_bs_import(cls, values: dict) -> dict:\n \"\"\"Check that the arguments are valid.\"\"\"\n try:\n from bs4 import BeautifulSoup # noqa: F401\n except ImportError:\n raise ValueError(\n \"The 'beautifulsoup4' package is required to use this tool.\"\n \" Please install it with 'pip install beautifulsoup4'.\"\n )\n return values\n[docs] @staticmethod\n def scrape_page(page: Any, html_content: str, absolute_urls: bool) -> str:\n from urllib.parse import urljoin\n from bs4 import BeautifulSoup\n # Parse the HTML content with BeautifulSoup\n soup = BeautifulSoup(html_content, \"lxml\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/extract_hyperlinks.html"} {"id": "91e396051b5e-1", "text": "soup = BeautifulSoup(html_content, \"lxml\")\n # Find all the anchor elements and extract their href attributes\n anchors = soup.find_all(\"a\")\n if absolute_urls:\n base_url = page.url\n links = [urljoin(base_url, anchor.get(\"href\", \"\")) for anchor in anchors]\n else:\n links = [anchor.get(\"href\", \"\") for anchor in anchors]\n # Return the list of links as a JSON string\n return json.dumps(links)\n def _run(\n self,\n absolute_urls: bool = False,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.sync_browser is None:\n raise ValueError(f\"Synchronous browser not provided to {self.name}\")\n page = get_current_page(self.sync_browser)\n html_content = page.content()\n return self.scrape_page(page, html_content, absolute_urls)\n async def _arun(\n self,\n absolute_urls: bool = False,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n if self.async_browser is None:\n raise ValueError(f\"Asynchronous browser not provided to {self.name}\")\n page = await aget_current_page(self.async_browser)\n html_content = await page.content()\n return self.scrape_page(page, html_content, absolute_urls)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/extract_hyperlinks.html"} {"id": "375419d2ff08-0", "text": "Source code for langchain.tools.playwright.extract_text\nfrom __future__ import annotations\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.playwright.base import BaseBrowserTool\nfrom langchain.tools.playwright.utils import aget_current_page, get_current_page\n[docs]class ExtractTextTool(BaseBrowserTool):\n name: str = \"extract_text\"\n description: str = \"Extract all the text on the current webpage\"\n args_schema: Type[BaseModel] = BaseModel\n[docs] @root_validator\n def check_bs_import(cls, values: dict) -> dict:\n \"\"\"Check that the arguments are valid.\"\"\"\n try:\n from bs4 import BeautifulSoup # noqa: 
F401\n except ImportError:\n raise ValueError(\n \"The 'beautifulsoup4' package is required to use this tool.\"\n \" Please install it with 'pip install beautifulsoup4'.\"\n )\n return values\n def _run(self, run_manager: Optional[CallbackManagerForToolRun] = None) -> str:\n \"\"\"Use the tool.\"\"\"\n # Use Beautiful Soup since it's faster than looping through the elements\n from bs4 import BeautifulSoup\n if self.sync_browser is None:\n raise ValueError(f\"Synchronous browser not provided to {self.name}\")\n page = get_current_page(self.sync_browser)\n html_content = page.content()\n # Parse the HTML content with BeautifulSoup\n soup = BeautifulSoup(html_content, \"lxml\")\n return \" \".join(text for text in soup.stripped_strings)\n async def _arun(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/extract_text.html"} {"id": "375419d2ff08-1", "text": "async def _arun(\n self, run_manager: Optional[AsyncCallbackManagerForToolRun] = None\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.async_browser is None:\n raise ValueError(f\"Asynchronous browser not provided to {self.name}\")\n # Use Beautiful Soup since it's faster than looping through the elements\n from bs4 import BeautifulSoup\n page = await aget_current_page(self.async_browser)\n html_content = await page.content()\n # Parse the HTML content with BeautifulSoup\n soup = BeautifulSoup(html_content, \"lxml\")\n return \" \".join(text for text in soup.stripped_strings)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/extract_text.html"} {"id": "7ae7551bc188-0", "text": "Source code for langchain.tools.playwright.base\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, Optional, Tuple, Type\nfrom pydantic import root_validator\nfrom langchain.tools.base import BaseTool\nif TYPE_CHECKING:\n from playwright.async_api import Browser as AsyncBrowser\n from playwright.sync_api import Browser as SyncBrowser\nelse:\n try:\n # We do this so pydantic can resolve the types when instantiating\n from playwright.async_api import Browser as AsyncBrowser\n from playwright.sync_api import Browser as SyncBrowser\n except ImportError:\n pass\n[docs]def lazy_import_playwright_browsers() -> Tuple[Type[AsyncBrowser], Type[SyncBrowser]]:\n \"\"\"\n Lazy import playwright browsers.\n Returns:\n Tuple[Type[AsyncBrowser], Type[SyncBrowser]]:\n AsyncBrowser and SyncBrowser classes.\n \"\"\"\n try:\n from playwright.async_api import Browser as AsyncBrowser # noqa: F401\n from playwright.sync_api import Browser as SyncBrowser # noqa: F401\n except ImportError:\n raise ValueError(\n \"The 'playwright' package is required to use the playwright tools.\"\n \" Please install it with 'pip install playwright'.\"\n )\n return AsyncBrowser, SyncBrowser\n[docs]class BaseBrowserTool(BaseTool):\n \"\"\"Base class for browser tools.\"\"\"\n sync_browser: Optional[\"SyncBrowser\"] = None\n async_browser: Optional[\"AsyncBrowser\"] = None\n[docs] @root_validator\n def validate_browser_provided(cls, values: dict) -> dict:\n \"\"\"Check that the arguments are valid.\"\"\"\n lazy_import_playwright_browsers()\n if values.get(\"async_browser\") is None and values.get(\"sync_browser\") is None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/base.html"} {"id": "7ae7551bc188-1", "text": "raise ValueError(\"Either async_browser or sync_browser must be specified.\")\n return values\n[docs] @classmethod\n def from_browser(\n cls,\n 
sync_browser: Optional[SyncBrowser] = None,\n async_browser: Optional[AsyncBrowser] = None,\n ) -> BaseBrowserTool:\n \"\"\"Instantiate the tool.\"\"\"\n lazy_import_playwright_browsers()\n return cls(sync_browser=sync_browser, async_browser=async_browser)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/base.html"} {"id": "d76174037091-0", "text": "Source code for langchain.tools.playwright.utils\n\"\"\"Utilities for the Playwright browser tools.\"\"\"\nfrom __future__ import annotations\nimport asyncio\nfrom typing import TYPE_CHECKING, Any, Coroutine, TypeVar\nif TYPE_CHECKING:\n from playwright.async_api import Browser as AsyncBrowser\n from playwright.async_api import Page as AsyncPage\n from playwright.sync_api import Browser as SyncBrowser\n from playwright.sync_api import Page as SyncPage\nasync def aget_current_page(browser: AsyncBrowser) -> AsyncPage:\n \"\"\"\n Asynchronously get the current page of the browser.\n Args:\n browser: The browser (AsyncBrowser) to get the current page from.\n Returns:\n AsyncPage: The current page.\n \"\"\"\n if not browser.contexts:\n context = await browser.new_context()\n return await context.new_page()\n context = browser.contexts[0] # Assuming you're using the default browser context\n if not context.pages:\n return await context.new_page()\n # Assuming the last page in the list is the active one\n return context.pages[-1]\n[docs]def get_current_page(browser: SyncBrowser) -> SyncPage:\n \"\"\"\n Get the current page of the browser.\n Args:\n browser: The browser to get the current page from.\n Returns:\n SyncPage: The current page.\n \"\"\"\n if not browser.contexts:\n context = browser.new_context()\n return context.new_page()\n context = browser.contexts[0] # Assuming you're using the default browser context\n if not context.pages:\n return context.new_page()\n # Assuming the last page in the list is the active one\n return context.pages[-1]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/utils.html"} {"id": "d76174037091-1", "text": "return context.pages[-1]\n[docs]def create_async_playwright_browser(headless: bool = True) -> AsyncBrowser:\n \"\"\"\n Create an async playwright browser.\n Args:\n headless: Whether to run the browser in headless mode. Defaults to True.\n Returns:\n AsyncBrowser: The playwright browser.\n \"\"\"\n from playwright.async_api import async_playwright\n browser = run_async(async_playwright().start())\n return run_async(browser.chromium.launch(headless=headless))\n[docs]def create_sync_playwright_browser(headless: bool = True) -> SyncBrowser:\n \"\"\"\n Create a playwright browser.\n Args:\n headless: Whether to run the browser in headless mode. Defaults to True.\n Returns:\n SyncBrowser: The playwright browser.\n \"\"\"\n from playwright.sync_api import sync_playwright\n browser = sync_playwright().start()\n return browser.chromium.launch(headless=headless)\nT = TypeVar(\"T\")\n[docs]def run_async(coro: Coroutine[Any, Any, T]) -> T:\n \"\"\"Run an async coroutine.\n Args:\n coro: The coroutine to run. 
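The helpers above are the intended way to hand a browser to these tools. A minimal synchronous sketch wiring one into `NavigateTool` (assumes `playwright` is installed and its browsers are downloaded):

```python
from langchain.tools.playwright.navigate import NavigateTool
from langchain.tools.playwright.utils import create_sync_playwright_browser

browser = create_sync_playwright_browser(headless=True)
tool = NavigateTool.from_browser(sync_browser=browser)

print(tool.run({"url": "https://example.com"}))
# -> Navigating to https://example.com returned status code 200
```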
Coroutine[Any, Any, T]\n Returns:\n T: The result of the coroutine.\n \"\"\"\n event_loop = asyncio.get_event_loop()\n return event_loop.run_until_complete(coro)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/utils.html"} {"id": "49566e9684cd-0", "text": "Source code for langchain.tools.playwright.current_page\nfrom __future__ import annotations\nfrom typing import Optional, Type\nfrom pydantic import BaseModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.playwright.base import BaseBrowserTool\nfrom langchain.tools.playwright.utils import aget_current_page, get_current_page\n[docs]class CurrentWebPageTool(BaseBrowserTool):\n name: str = \"current_webpage\"\n description: str = \"Returns the URL of the current page\"\n args_schema: Type[BaseModel] = BaseModel\n def _run(\n self,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.sync_browser is None:\n raise ValueError(f\"Synchronous browser not provided to {self.name}\")\n page = get_current_page(self.sync_browser)\n return str(page.url)\n async def _arun(\n self,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.async_browser is None:\n raise ValueError(f\"Asynchronous browser not provided to {self.name}\")\n page = await aget_current_page(self.async_browser)\n return str(page.url)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/current_page.html"} {"id": "8bf504d7fb20-0", "text": "Source code for langchain.tools.playwright.get_elements\nfrom __future__ import annotations\nimport json\nfrom typing import TYPE_CHECKING, List, Optional, Sequence, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.playwright.base import BaseBrowserTool\nfrom langchain.tools.playwright.utils import aget_current_page, get_current_page\nif TYPE_CHECKING:\n from playwright.async_api import Page as AsyncPage\n from playwright.sync_api import Page as SyncPage\n[docs]class GetElementsToolInput(BaseModel):\n \"\"\"Input for GetElementsTool.\"\"\"\n selector: str = Field(\n ...,\n description=\"CSS selector, such as '*', 'div', 'p', 'a', #id, .classname\",\n )\n attributes: List[str] = Field(\n default_factory=lambda: [\"innerText\"],\n description=\"Set of attributes to retrieve for each element\",\n )\nasync def _aget_elements(\n page: AsyncPage, selector: str, attributes: Sequence[str]\n) -> List[dict]:\n \"\"\"Get elements matching the given CSS selector.\"\"\"\n elements = await page.query_selector_all(selector)\n results = []\n for element in elements:\n result = {}\n for attribute in attributes:\n if attribute == \"innerText\":\n val: Optional[str] = await element.inner_text()\n else:\n val = await element.get_attribute(attribute)\n if val is not None and val.strip() != \"\":\n result[attribute] = val\n if result:\n results.append(result)\n return results\ndef _get_elements(\n page: SyncPage, selector: str, attributes: Sequence[str]\n) -> List[dict]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/get_elements.html"} {"id": "8bf504d7fb20-1", "text": ") -> List[dict]:\n \"\"\"Get elements matching the given CSS selector.\"\"\"\n elements = page.query_selector_all(selector)\n results = []\n for element in 
elements:\n result = {}\n for attribute in attributes:\n if attribute == \"innerText\":\n val: Optional[str] = element.inner_text()\n else:\n val = element.get_attribute(attribute)\n if val is not None and val.strip() != \"\":\n result[attribute] = val\n if result:\n results.append(result)\n return results\n[docs]class GetElementsTool(BaseBrowserTool):\n name: str = \"get_elements\"\n description: str = (\n \"Retrieve elements in the current web page matching the given CSS selector\"\n )\n args_schema: Type[BaseModel] = GetElementsToolInput\n def _run(\n self,\n selector: str,\n attributes: Sequence[str] = [\"innerText\"],\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.sync_browser is None:\n raise ValueError(f\"Synchronous browser not provided to {self.name}\")\n page = get_current_page(self.sync_browser)\n # Navigate to the desired webpage before using this tool\n results = _get_elements(page, selector, attributes)\n return json.dumps(results, ensure_ascii=False)\n async def _arun(\n self,\n selector: str,\n attributes: Sequence[str] = [\"innerText\"],\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.async_browser is None:\n raise ValueError(f\"Asynchronous browser not provided to {self.name}\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/get_elements.html"} {"id": "8bf504d7fb20-2", "text": "raise ValueError(f\"Asynchronous browser not provided to {self.name}\")\n page = await aget_current_page(self.async_browser)\n # Navigate to the desired webpage before using this tool\n results = await _aget_elements(page, selector, attributes)\n return json.dumps(results, ensure_ascii=False)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/get_elements.html"} {"id": "04e80a56b7cd-0", "text": "Source code for langchain.tools.playwright.navigate_back\nfrom __future__ import annotations\nfrom typing import Optional, Type\nfrom pydantic import BaseModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.playwright.base import BaseBrowserTool\nfrom langchain.tools.playwright.utils import (\n aget_current_page,\n get_current_page,\n)\n[docs]class NavigateBackTool(BaseBrowserTool):\n \"\"\"Navigate back to the previous page in the browser history.\"\"\"\n name: str = \"previous_webpage\"\n description: str = \"Navigate back to the previous page in the browser history\"\n args_schema: Type[BaseModel] = BaseModel\n def _run(self, run_manager: Optional[CallbackManagerForToolRun] = None) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.sync_browser is None:\n raise ValueError(f\"Synchronous browser not provided to {self.name}\")\n page = get_current_page(self.sync_browser)\n response = page.go_back()\n if response:\n return (\n f\"Navigated back to the previous page with URL '{response.url}'.\"\n f\" Status code {response.status}\"\n )\n else:\n return \"Unable to navigate back; no previous page in the history\"\n async def _arun(\n self,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.async_browser is None:\n raise ValueError(f\"Asynchronous browser not provided to {self.name}\")\n page = await aget_current_page(self.async_browser)\n response = await page.go_back()\n if response:\n return (", "source": 
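`GetElementsTool` returns its matches as a JSON string, one object per element, keyed by the requested attributes (empty values are dropped). A sketch against a page already opened in the shared browser:

```python
from langchain.tools.playwright.get_elements import GetElementsTool
from langchain.tools.playwright.utils import create_sync_playwright_browser

browser = create_sync_playwright_browser()
tool = GetElementsTool.from_browser(sync_browser=browser)

# Assumes a page was already loaded, e.g. via NavigateTool on the same browser
print(tool.run({"selector": "a", "attributes": ["innerText", "href"]}))
# e.g. [{"innerText": "More information...", "href": "https://www.iana.org/..."}]
```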
"https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/navigate_back.html"} {"id": "04e80a56b7cd-1", "text": "response = await page.go_back()\n if response:\n return (\n f\"Navigated back to the previous page with URL '{response.url}'.\"\n f\" Status code {response.status}\"\n )\n else:\n return \"Unable to navigate back; no previous page in the history\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/navigate_back.html"} {"id": "87ef35a05564-0", "text": "Source code for langchain.tools.playwright.click\nfrom __future__ import annotations\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.playwright.base import BaseBrowserTool\nfrom langchain.tools.playwright.utils import (\n aget_current_page,\n get_current_page,\n)\n[docs]class ClickToolInput(BaseModel):\n \"\"\"Input for ClickTool.\"\"\"\n selector: str = Field(..., description=\"CSS selector for the element to click\")\n[docs]class ClickTool(BaseBrowserTool):\n name: str = \"click_element\"\n description: str = \"Click on an element with the given CSS selector\"\n args_schema: Type[BaseModel] = ClickToolInput\n visible_only: bool = True\n \"\"\"Whether to consider only visible elements.\"\"\"\n playwright_strict: bool = False\n \"\"\"Whether to employ Playwright's strict mode when clicking on elements.\"\"\"\n playwright_timeout: float = 1_000\n \"\"\"Timeout (in ms) for Playwright to wait for element to be ready.\"\"\"\n def _selector_effective(self, selector: str) -> str:\n if not self.visible_only:\n return selector\n return f\"{selector} >> visible=1\"\n def _run(\n self,\n selector: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.sync_browser is None:\n raise ValueError(f\"Synchronous browser not provided to {self.name}\")\n page = get_current_page(self.sync_browser)\n # Navigate to the desired webpage before using this tool", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/click.html"} {"id": "87ef35a05564-1", "text": "# Navigate to the desired webpage before using this tool\n selector_effective = self._selector_effective(selector=selector)\n from playwright.sync_api import TimeoutError as PlaywrightTimeoutError\n try:\n page.click(\n selector_effective,\n strict=self.playwright_strict,\n timeout=self.playwright_timeout,\n )\n except PlaywrightTimeoutError:\n return f\"Unable to click on element '{selector}'\"\n return f\"Clicked element '{selector}'\"\n async def _arun(\n self,\n selector: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.async_browser is None:\n raise ValueError(f\"Asynchronous browser not provided to {self.name}\")\n page = await aget_current_page(self.async_browser)\n # Navigate to the desired webpage before using this tool\n selector_effective = self._selector_effective(selector=selector)\n from playwright.async_api import TimeoutError as PlaywrightTimeoutError\n try:\n await page.click(\n selector_effective,\n strict=self.playwright_strict,\n timeout=self.playwright_timeout,\n )\n except PlaywrightTimeoutError:\n return f\"Unable to click on element '{selector}'\"\n return f\"Clicked element '{selector}'\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/click.html"} 
{"id": "32d26f01ec5a-0", "text": "Source code for langchain.tools.google_search.tool\n\"\"\"Tool for the Google search API.\"\"\"\nfrom typing import Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.google_search import GoogleSearchAPIWrapper\n[docs]class GoogleSearchRun(BaseTool):\n \"\"\"Tool that adds the capability to query the Google search API.\"\"\"\n name = \"google_search\"\n description = (\n \"A wrapper around Google Search. \"\n \"Useful for when you need to answer questions about current events. \"\n \"Input should be a search query.\"\n )\n api_wrapper: GoogleSearchAPIWrapper\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"GoogleSearchRun does not support async\")\n[docs]class GoogleSearchResults(BaseTool):\n \"\"\"Tool that queries the Google Search API and gets back JSON.\"\"\"\n name = \"Google Search Results JSON\"\n description = (\n \"A wrapper around Google Search. \"\n \"Useful for when you need to answer questions about current events. \"\n \"Input should be a search query. Output is a JSON array of the query results\"\n )\n num_results: int = 4\n api_wrapper: GoogleSearchAPIWrapper\n def _run(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/google_search/tool.html"} {"id": "32d26f01ec5a-1", "text": "api_wrapper: GoogleSearchAPIWrapper\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return str(self.api_wrapper.results(query, self.num_results))\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"GoogleSearchResults does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/google_search/tool.html"} {"id": "70631432a19b-0", "text": "Source code for langchain.tools.file_management.read\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.file_management.utils import (\n INVALID_PATH_TEMPLATE,\n BaseFileToolMixin,\n FileValidationError,\n)\n[docs]class ReadFileInput(BaseModel):\n \"\"\"Input for ReadFileTool.\"\"\"\n file_path: str = Field(..., description=\"name of file\")\n[docs]class ReadFileTool(BaseFileToolMixin, BaseTool):\n name: str = \"read_file\"\n args_schema: Type[BaseModel] = ReadFileInput\n description: str = \"Read file from disk\"\n def _run(\n self,\n file_path: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n try:\n read_path = self.get_relative_path(file_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(arg_name=\"file_path\", value=file_path)\n if not read_path.exists():\n return f\"Error: no such file or directory: {file_path}\"\n try:\n with read_path.open(\"r\", encoding=\"utf-8\") as f:\n content = f.read()\n return content\n except Exception 
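A sketch of the JSON-returning variant. I believe `GoogleSearchAPIWrapper` reads its credentials from the `GOOGLE_API_KEY` and `GOOGLE_CSE_ID` environment variables, so treat those names as an assumption to verify:

```python
from langchain.tools.google_search.tool import GoogleSearchResults
from langchain.utilities.google_search import GoogleSearchAPIWrapper

# Assumes GOOGLE_API_KEY and GOOGLE_CSE_ID are already set in the environment
tool = GoogleSearchResults(api_wrapper=GoogleSearchAPIWrapper(), num_results=2)
print(tool.run("langchain agents"))  # stringified list of result dicts
```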
as e:\n return \"Error: \" + str(e)\n async def _arun(\n self,\n file_path: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n # TODO: Add aiofiles method\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/file_management/read.html"} {"id": "f6b5bdbf89f1-0", "text": "Source code for langchain.tools.file_management.list_dir\nimport os\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.file_management.utils import (\n INVALID_PATH_TEMPLATE,\n BaseFileToolMixin,\n FileValidationError,\n)\n[docs]class DirectoryListingInput(BaseModel):\n \"\"\"Input for ListDirectoryTool.\"\"\"\n dir_path: str = Field(default=\".\", description=\"Subdirectory to list.\")\n[docs]class ListDirectoryTool(BaseFileToolMixin, BaseTool):\n name: str = \"list_directory\"\n args_schema: Type[BaseModel] = DirectoryListingInput\n description: str = \"List files and directories in a specified folder\"\n def _run(\n self,\n dir_path: str = \".\",\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n try:\n dir_path_ = self.get_relative_path(dir_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(arg_name=\"dir_path\", value=dir_path)\n try:\n entries = os.listdir(dir_path_)\n if entries:\n return \"\\n\".join(entries)\n else:\n return f\"No files found in directory {dir_path}\"\n except Exception as e:\n return \"Error: \" + str(e)\n async def _arun(\n self,\n dir_path: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n # TODO: Add aiofiles method\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/file_management/list_dir.html"} {"id": "5d1f7639f092-0", "text": "Source code for langchain.tools.file_management.delete\nimport os\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.file_management.utils import (\n INVALID_PATH_TEMPLATE,\n BaseFileToolMixin,\n FileValidationError,\n)\n[docs]class FileDeleteInput(BaseModel):\n \"\"\"Input for DeleteFileTool.\"\"\"\n file_path: str = Field(..., description=\"Path of the file to delete\")\n[docs]class DeleteFileTool(BaseFileToolMixin, BaseTool):\n name: str = \"file_delete\"\n args_schema: Type[BaseModel] = FileDeleteInput\n description: str = \"Delete a file\"\n def _run(\n self,\n file_path: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n try:\n file_path_ = self.get_relative_path(file_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(arg_name=\"file_path\", value=file_path)\n if not file_path_.exists():\n return f\"Error: no such file or directory: {file_path}\"\n try:\n os.remove(file_path_)\n return f\"File deleted successfully: {file_path}.\"\n except Exception as e:\n return \"Error: \" + str(e)\n async def _arun(\n self,\n file_path: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n # TODO: Add aiofiles method\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/file_management/delete.html"} {"id": "87d11fb99d83-0", "text": "Source 
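All of the file tools share the `root_dir` sandboxing provided by `BaseFileToolMixin` (defined in the utils module further below): when `root_dir` is set, every path is resolved inside it, and anything that escapes comes back as an access-denied message rather than an exception. A sketch with a hypothetical sandbox directory:

```python
from langchain.tools.file_management.list_dir import ListDirectoryTool
from langchain.tools.file_management.read import ReadFileTool

SANDBOX = "/tmp/agent-sandbox"  # hypothetical root; create it beforehand

read_tool = ReadFileTool(root_dir=SANDBOX)
list_tool = ListDirectoryTool(root_dir=SANDBOX)

print(list_tool.run({"dir_path": "."}))               # one entry per line
print(read_tool.run({"file_path": "notes.txt"}))      # file contents, if present
print(read_tool.run({"file_path": "../etc/passwd"}))  # Error: Access denied to file_path: ...
```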
code for langchain.tools.file_management.move\nimport shutil\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.file_management.utils import (\n INVALID_PATH_TEMPLATE,\n BaseFileToolMixin,\n FileValidationError,\n)\n[docs]class FileMoveInput(BaseModel):\n \"\"\"Input for MoveFileTool.\"\"\"\n source_path: str = Field(..., description=\"Path of the file to move\")\n destination_path: str = Field(..., description=\"New path for the moved file\")\n[docs]class MoveFileTool(BaseFileToolMixin, BaseTool):\n name: str = \"move_file\"\n args_schema: Type[BaseModel] = FileMoveInput\n description: str = \"Move or rename a file from one location to another\"\n def _run(\n self,\n source_path: str,\n destination_path: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n try:\n source_path_ = self.get_relative_path(source_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(\n arg_name=\"source_path\", value=source_path\n )\n try:\n destination_path_ = self.get_relative_path(destination_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(\n arg_name=\"destination_path\", value=destination_path\n )\n if not source_path_.exists():\n return f\"Error: no such file or directory: {source_path}\"\n try:\n # shutil.move expects str args in 3.8", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/file_management/move.html"} {"id": "87d11fb99d83-1", "text": "try:\n # shutil.move expects str args in 3.8\n shutil.move(str(source_path_), str(destination_path_))\n return f\"File moved successfully from {source_path} to {destination_path}.\"\n except Exception as e:\n return \"Error: \" + str(e)\n async def _arun(\n self,\n source_path: str,\n destination_path: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n # TODO: Add aiofiles method\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/file_management/move.html"} {"id": "6eae2f7139dc-0", "text": "Source code for langchain.tools.file_management.utils\nimport sys\nfrom pathlib import Path\nfrom typing import Optional\nfrom pydantic import BaseModel\n[docs]def is_relative_to(path: Path, root: Path) -> bool:\n \"\"\"Check if path is relative to root.\"\"\"\n if sys.version_info >= (3, 9):\n # Path.is_relative_to was added in Python 3.9; no try/except needed there.\n return path.is_relative_to(root)\n try:\n path.relative_to(root)\n return True\n except ValueError:\n return False\nINVALID_PATH_TEMPLATE = (\n \"Error: Access denied to {arg_name}: {value}.\"\n \" Permission granted exclusively to the current working directory\"\n)\n[docs]class FileValidationError(ValueError):\n \"\"\"Error for paths outside the root directory.\"\"\"\n[docs]class BaseFileToolMixin(BaseModel):\n \"\"\"Mixin for file system tools.\"\"\"\n root_dir: Optional[str] = None\n \"\"\"The final path will be chosen relative to root_dir if specified.\"\"\"\n[docs] def get_relative_path(self, file_path: str) -> Path:\n \"\"\"Get the relative path, returning an error if unsupported.\"\"\"\n if self.root_dir is None:\n return Path(file_path)\n return get_validated_relative_path(Path(self.root_dir), file_path)\n[docs]def get_validated_relative_path(root: Path, user_path: str) -> Path:\n \"\"\"Resolve a relative path, raising an error if not within the root 
directory.\"\"\"\n # Note, this still permits symlinks from outside that point within the root.\n # Further validation would be needed if those are to be disallowed.\n root = root.resolve()\n full_path = (root / user_path).resolve()\n if not is_relative_to(full_path, root):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/file_management/utils.html"} {"id": "6eae2f7139dc-1", "text": "if not is_relative_to(full_path, root):\n raise FileValidationError(\n f\"Path {user_path} is outside of the allowed directory {root}\"\n )\n return full_path", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/file_management/utils.html"} {"id": "746085bb5320-0", "text": "Source code for langchain.tools.file_management.file_search\nimport fnmatch\nimport os\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.file_management.utils import (\n INVALID_PATH_TEMPLATE,\n BaseFileToolMixin,\n FileValidationError,\n)\n[docs]class FileSearchInput(BaseModel):\n \"\"\"Input for FileSearchTool.\"\"\"\n dir_path: str = Field(\n default=\".\",\n description=\"Subdirectory to search in.\",\n )\n pattern: str = Field(\n ...,\n description=\"Unix shell glob pattern, where * matches everything.\",\n )\n[docs]class FileSearchTool(BaseFileToolMixin, BaseTool):\n name: str = \"file_search\"\n args_schema: Type[BaseModel] = FileSearchInput\n description: str = (\n \"Recursively search for files in a subdirectory that match the glob pattern\"\n )\n def _run(\n self,\n pattern: str,\n dir_path: str = \".\",\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n try:\n dir_path_ = self.get_relative_path(dir_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(arg_name=\"dir_path\", value=dir_path)\n matches = []\n try:\n for root, _, filenames in os.walk(dir_path_):\n for filename in fnmatch.filter(filenames, pattern):\n absolute_path = os.path.join(root, filename)\n relative_path = os.path.relpath(absolute_path, dir_path_)\n matches.append(relative_path)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/file_management/file_search.html"} {"id": "746085bb5320-1", "text": "matches.append(relative_path)\n if matches:\n return \"\\n\".join(matches)\n else:\n return f\"No files found for pattern {pattern} in directory {dir_path}\"\n except Exception as e:\n return \"Error: \" + str(e)\n async def _arun(\n self,\n dir_path: str,\n pattern: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n # TODO: Add aiofiles method\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/file_management/file_search.html"} {"id": "a097b7c66ceb-0", "text": "Source code for langchain.tools.file_management.write\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.file_management.utils import (\n INVALID_PATH_TEMPLATE,\n BaseFileToolMixin,\n FileValidationError,\n)\n[docs]class WriteFileInput(BaseModel):\n \"\"\"Input for WriteFileTool.\"\"\"\n file_path: str = Field(..., description=\"name of file\")\n text: str = Field(..., description=\"text to write to file\")\n 
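`get_validated_relative_path` is the whole containment check: resolve `root / user_path` and refuse anything that lands outside `root`. A sketch of both outcomes (the root directory is hypothetical):

```python
from pathlib import Path

from langchain.tools.file_management.utils import (
    FileValidationError,
    get_validated_relative_path,
)

root = Path("/tmp/agent-sandbox")  # hypothetical root directory

print(get_validated_relative_path(root, "logs/run.txt"))
# e.g. /tmp/agent-sandbox/logs/run.txt (resolved inside the root)

try:
    get_validated_relative_path(root, "../../etc/passwd")
except FileValidationError as err:
    print(err)  # Path ../../etc/passwd is outside of the allowed directory ...
```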
append: bool = Field(\n default=False, description=\"Whether to append to an existing file.\"\n )\n[docs]class WriteFileTool(BaseFileToolMixin, BaseTool):\n name: str = \"write_file\"\n args_schema: Type[BaseModel] = WriteFileInput\n description: str = \"Write file to disk\"\n def _run(\n self,\n file_path: str,\n text: str,\n append: bool = False,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n try:\n write_path = self.get_relative_path(file_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(arg_name=\"file_path\", value=file_path)\n try:\n write_path.parent.mkdir(exist_ok=True, parents=False)\n mode = \"a\" if append else \"w\"\n with write_path.open(mode, encoding=\"utf-8\") as f:\n f.write(text)\n return f\"File written successfully to {file_path}.\"\n except Exception as e:\n return \"Error: \" + str(e)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/file_management/write.html"} {"id": "a097b7c66ceb-1", "text": "except Exception as e:\n return \"Error: \" + str(e)\n async def _arun(\n self,\n file_path: str,\n text: str,\n append: bool = False,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n # TODO: Add aiofiles method\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/file_management/write.html"} {"id": "4580d05fb5cc-0", "text": "Source code for langchain.tools.file_management.copy\nimport shutil\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.file_management.utils import (\n INVALID_PATH_TEMPLATE,\n BaseFileToolMixin,\n FileValidationError,\n)\n[docs]class FileCopyInput(BaseModel):\n \"\"\"Input for CopyFileTool.\"\"\"\n source_path: str = Field(..., description=\"Path of the file to copy\")\n destination_path: str = Field(..., description=\"Path to save the copied file\")\n[docs]class CopyFileTool(BaseFileToolMixin, BaseTool):\n name: str = \"copy_file\"\n args_schema: Type[BaseModel] = FileCopyInput\n description: str = \"Create a copy of a file in a specified location\"\n def _run(\n self,\n source_path: str,\n destination_path: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n try:\n source_path_ = self.get_relative_path(source_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(\n arg_name=\"source_path\", value=source_path\n )\n try:\n destination_path_ = self.get_relative_path(destination_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(\n arg_name=\"destination_path\", value=destination_path\n )\n try:\n shutil.copy2(source_path_, destination_path_, follow_symlinks=False)\n return f\"File copied successfully from {source_path} to {destination_path}.\"\n except Exception as e:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/file_management/copy.html"} {"id": "4580d05fb5cc-1", "text": "except Exception as e:\n return \"Error: \" + str(e)\n async def _arun(\n self,\n source_path: str,\n destination_path: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n # TODO: Add aiofiles method\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/file_management/copy.html"} {"id": "2600cf63389f-0", "text": "Source code for 
langchain.tools.openweathermap.tool\n\"\"\"Tool for the OpenWeatherMap API.\"\"\"\nfrom typing import Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities import OpenWeatherMapAPIWrapper\n[docs]class OpenWeatherMapQueryRun(BaseTool):\n \"\"\"Tool that adds the capability to query using the OpenWeatherMap API.\"\"\"\n api_wrapper: OpenWeatherMapAPIWrapper = Field(\n default_factory=OpenWeatherMapAPIWrapper\n )\n name = \"OpenWeatherMap\"\n description = (\n \"A wrapper around OpenWeatherMap API. \"\n \"Useful for fetching current weather information for a specified location. \"\n \"Input should be a location string (e.g. London,GB).\"\n )\n def _run(\n self, location: str, run_manager: Optional[CallbackManagerForToolRun] = None\n ) -> str:\n \"\"\"Use the OpenWeatherMap tool.\"\"\"\n return self.api_wrapper.run(location)\n async def _arun(\n self,\n location: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the OpenWeatherMap tool asynchronously.\"\"\"\n raise NotImplementedError(\"OpenWeatherMapQueryRun does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/openweathermap/tool.html"} {"id": "13cf92b9250a-0", "text": "Source code for langchain.tools.google_serper.tool\n\"\"\"Tool for the Serper.dev Google Search API.\"\"\"\nfrom typing import Optional\nfrom pydantic.fields import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.google_serper import GoogleSerperAPIWrapper\n[docs]class GoogleSerperRun(BaseTool):\n \"\"\"Tool that adds the capability to query the Serper.dev Google search API.\"\"\"\n name = \"google_serper\"\n description = (\n \"A low-cost Google Search API. \"\n \"Useful for when you need to answer questions about current events. \"\n \"Input should be a search query.\"\n )\n api_wrapper: GoogleSerperAPIWrapper\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return str(self.api_wrapper.run(query))\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n return (await self.api_wrapper.arun(query)).__str__()\n[docs]class GoogleSerperResults(BaseTool):\n \"\"\"Tool that queries the Serper.dev Google Search API and gets back JSON.\"\"\"\n name = \"google_serper_results_json\"\n description = (\n \"A low-cost Google Search API. \"\n \"Useful for when you need to answer questions about current events. \"\n \"Input should be a search query. 
Output is a JSON object of the query results\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/google_serper/tool.html"} {"id": "13cf92b9250a-1", "text": ")\n api_wrapper: GoogleSerperAPIWrapper = Field(default_factory=GoogleSerperAPIWrapper)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return str(self.api_wrapper.results(query))\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n return (await self.api_wrapper.aresults(query)).__str__()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/google_serper/tool.html"} {"id": "1be48111b07d-0", "text": "Source code for langchain.tools.azure_cognitive_services.form_recognizer\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.azure_cognitive_services.utils import detect_file_src_type\nfrom langchain.tools.base import BaseTool\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class AzureCogsFormRecognizerTool(BaseTool):\n \"\"\"Tool that queries the Azure Cognitive Services Form Recognizer API.\n In order to set this up, follow instructions at:\n https://learn.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/quickstarts/get-started-sdks-rest-api?view=form-recog-3.0.0&pivots=programming-language-python\n \"\"\"\n azure_cogs_key: str = \"\" #: :meta private:\n azure_cogs_endpoint: str = \"\" #: :meta private:\n doc_analysis_client: Any #: :meta private:\n name = \"azure_cognitive_services_form_recognizer\"\n description = (\n \"A wrapper around Azure Cognitive Services Form Recognizer. \"\n \"Useful for when you need to \"\n \"extract text, tables, and key-value pairs from documents. \"\n \"Input should be a url to a document.\"\n )\n[docs] @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and endpoint exists in environment.\"\"\"\n azure_cogs_key = get_from_dict_or_env(\n values, \"azure_cogs_key\", \"AZURE_COGS_KEY\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/form_recognizer.html"} {"id": "1be48111b07d-1", "text": ")\n azure_cogs_endpoint = get_from_dict_or_env(\n values, \"azure_cogs_endpoint\", \"AZURE_COGS_ENDPOINT\"\n )\n try:\n from azure.ai.formrecognizer import DocumentAnalysisClient\n from azure.core.credentials import AzureKeyCredential\n values[\"doc_analysis_client\"] = DocumentAnalysisClient(\n endpoint=azure_cogs_endpoint,\n credential=AzureKeyCredential(azure_cogs_key),\n )\n except ImportError:\n raise ImportError(\n \"azure-ai-formrecognizer is not installed. 
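A short usage sketch for the two Serper tools above; the API key value is a placeholder and the queries are arbitrary. GoogleSerperRun returns a text answer, while GoogleSerperResults returns the raw result object as a string.

import os

from langchain.tools.google_serper.tool import GoogleSerperResults, GoogleSerperRun
from langchain.utilities.google_serper import GoogleSerperAPIWrapper

os.environ["SERPER_API_KEY"] = "<your-serper-api-key>"  # placeholder credential

wrapper = GoogleSerperAPIWrapper()
answer = GoogleSerperRun(api_wrapper=wrapper)
results = GoogleSerperResults(api_wrapper=wrapper)

print(answer.run("What is the tallest mountain in the world?"))
print(results.run("tallest mountain"))  # stringified JSON results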
\"\n \"Run `pip install azure-ai-formrecognizer` to install.\"\n )\n return values\n def _parse_tables(self, tables: List[Any]) -> List[Any]:\n result = []\n for table in tables:\n rc, cc = table.row_count, table.column_count\n _table = [[\"\" for _ in range(cc)] for _ in range(rc)]\n for cell in table.cells:\n _table[cell.row_index][cell.column_index] = cell.content\n result.append(_table)\n return result\n def _parse_kv_pairs(self, kv_pairs: List[Any]) -> List[Any]:\n result = []\n for kv_pair in kv_pairs:\n key = kv_pair.key.content if kv_pair.key else \"\"\n value = kv_pair.value.content if kv_pair.value else \"\"\n result.append((key, value))\n return result\n def _document_analysis(self, document_path: str) -> Dict:\n document_src_type = detect_file_src_type(document_path)\n if document_src_type == \"local\":\n with open(document_path, \"rb\") as document:\n poller = self.doc_analysis_client.begin_analyze_document(\n \"prebuilt-document\", document\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/form_recognizer.html"} {"id": "1be48111b07d-2", "text": "\"prebuilt-document\", document\n )\n elif document_src_type == \"remote\":\n poller = self.doc_analysis_client.begin_analyze_document_from_url(\n \"prebuilt-document\", document_path\n )\n else:\n raise ValueError(f\"Invalid document path: {document_path}\")\n result = poller.result()\n res_dict = {}\n if result.content is not None:\n res_dict[\"content\"] = result.content\n if result.tables is not None:\n res_dict[\"tables\"] = self._parse_tables(result.tables)\n if result.key_value_pairs is not None:\n res_dict[\"key_value_pairs\"] = self._parse_kv_pairs(result.key_value_pairs)\n return res_dict\n def _format_document_analysis_result(self, document_analysis_result: Dict) -> str:\n formatted_result = []\n if \"content\" in document_analysis_result:\n formatted_result.append(\n f\"Content: {document_analysis_result['content']}\".replace(\"\\n\", \" \")\n )\n if \"tables\" in document_analysis_result:\n for i, table in enumerate(document_analysis_result[\"tables\"]):\n formatted_result.append(f\"Table {i}: {table}\".replace(\"\\n\", \" \"))\n if \"key_value_pairs\" in document_analysis_result:\n for kv_pair in document_analysis_result[\"key_value_pairs\"]:\n formatted_result.append(\n f\"{kv_pair[0]}: {kv_pair[1]}\".replace(\"\\n\", \" \")\n )\n return \"\\n\".join(formatted_result)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/form_recognizer.html"} {"id": "1be48111b07d-3", "text": ") -> str:\n \"\"\"Use the tool.\"\"\"\n try:\n document_analysis_result = self._document_analysis(query)\n if not document_analysis_result:\n return \"No good document analysis result was found\"\n return self._format_document_analysis_result(document_analysis_result)\n except Exception as e:\n raise RuntimeError(f\"Error while running AzureCogsFormRecognizerTool: {e}\")\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"AzureCogsFormRecognizerTool does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/form_recognizer.html"} {"id": "928c20a1ad8b-0", "text": "Source code for 
langchain.tools.azure_cognitive_services.image_analysis\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, Dict, Optional\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.azure_cognitive_services.utils import detect_file_src_type\nfrom langchain.tools.base import BaseTool\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class AzureCogsImageAnalysisTool(BaseTool):\n \"\"\"Tool that queries the Azure Cognitive Services Image Analysis API.\n In order to set this up, follow instructions at:\n https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts-sdk/image-analysis-client-library-40\n \"\"\"\n azure_cogs_key: str = \"\" #: :meta private:\n azure_cogs_endpoint: str = \"\" #: :meta private:\n vision_service: Any #: :meta private:\n analysis_options: Any #: :meta private:\n name = \"azure_cognitive_services_image_analysis\"\n description = (\n \"A wrapper around Azure Cognitive Services Image Analysis. \"\n \"Useful for when you need to analyze images. \"\n \"Input should be a url to an image.\"\n )\n[docs] @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and endpoint exists in environment.\"\"\"\n azure_cogs_key = get_from_dict_or_env(\n values, \"azure_cogs_key\", \"AZURE_COGS_KEY\"\n )\n azure_cogs_endpoint = get_from_dict_or_env(\n values, \"azure_cogs_endpoint\", \"AZURE_COGS_ENDPOINT\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/image_analysis.html"} {"id": "928c20a1ad8b-1", "text": "values, \"azure_cogs_endpoint\", \"AZURE_COGS_ENDPOINT\"\n )\n try:\n import azure.ai.vision as sdk\n values[\"vision_service\"] = sdk.VisionServiceOptions(\n endpoint=azure_cogs_endpoint, key=azure_cogs_key\n )\n values[\"analysis_options\"] = sdk.ImageAnalysisOptions()\n values[\"analysis_options\"].features = (\n sdk.ImageAnalysisFeature.CAPTION\n | sdk.ImageAnalysisFeature.OBJECTS\n | sdk.ImageAnalysisFeature.TAGS\n | sdk.ImageAnalysisFeature.TEXT\n )\n except ImportError:\n raise ImportError(\n \"azure-ai-vision is not installed. 
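Each of these Azure Cognitive Services tools reads the same credentials in its validate_environment root validator, so one setup sketch may help; the key, endpoint, and document URL below are placeholders, and the azure-ai-formrecognizer package is assumed to be installed.

import os

from langchain.tools.azure_cognitive_services.form_recognizer import (
    AzureCogsFormRecognizerTool,
)

# Placeholders; validate_environment reads these (or explicit kwargs) at
# construction time and builds the DocumentAnalysisClient.
os.environ["AZURE_COGS_KEY"] = "<cognitive-services-key>"
os.environ["AZURE_COGS_ENDPOINT"] = "https://<resource>.cognitiveservices.azure.com/"

tool = AzureCogsFormRecognizerTool()
print(tool.run("https://example.com/sample-invoice.pdf"))  # hypothetical URL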
\"\n \"Run `pip install azure-ai-vision` to install.\"\n )\n return values\n def _image_analysis(self, image_path: str) -> Dict:\n try:\n import azure.ai.vision as sdk\n except ImportError:\n pass\n image_src_type = detect_file_src_type(image_path)\n if image_src_type == \"local\":\n vision_source = sdk.VisionSource(filename=image_path)\n elif image_src_type == \"remote\":\n vision_source = sdk.VisionSource(url=image_path)\n else:\n raise ValueError(f\"Invalid image path: {image_path}\")\n image_analyzer = sdk.ImageAnalyzer(\n self.vision_service, vision_source, self.analysis_options\n )\n result = image_analyzer.analyze()\n res_dict = {}\n if result.reason == sdk.ImageAnalysisResultReason.ANALYZED:\n if result.caption is not None:\n res_dict[\"caption\"] = result.caption.content\n if result.objects is not None:\n res_dict[\"objects\"] = [obj.name for obj in result.objects]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/image_analysis.html"} {"id": "928c20a1ad8b-2", "text": "res_dict[\"objects\"] = [obj.name for obj in result.objects]\n if result.tags is not None:\n res_dict[\"tags\"] = [tag.name for tag in result.tags]\n if result.text is not None:\n res_dict[\"text\"] = [line.content for line in result.text.lines]\n else:\n error_details = sdk.ImageAnalysisErrorDetails.from_result(result)\n raise RuntimeError(\n f\"Image analysis failed.\\n\"\n f\"Reason: {error_details.reason}\\n\"\n f\"Details: {error_details.message}\"\n )\n return res_dict\n def _format_image_analysis_result(self, image_analysis_result: Dict) -> str:\n formatted_result = []\n if \"caption\" in image_analysis_result:\n formatted_result.append(\"Caption: \" + image_analysis_result[\"caption\"])\n if (\n \"objects\" in image_analysis_result\n and len(image_analysis_result[\"objects\"]) > 0\n ):\n formatted_result.append(\n \"Objects: \" + \", \".join(image_analysis_result[\"objects\"])\n )\n if \"tags\" in image_analysis_result and len(image_analysis_result[\"tags\"]) > 0:\n formatted_result.append(\"Tags: \" + \", \".join(image_analysis_result[\"tags\"]))\n if \"text\" in image_analysis_result and len(image_analysis_result[\"text\"]) > 0:\n formatted_result.append(\"Text: \" + \", \".join(image_analysis_result[\"text\"]))\n return \"\\n\".join(formatted_result)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n try:\n image_analysis_result = self._image_analysis(query)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/image_analysis.html"} {"id": "928c20a1ad8b-3", "text": "try:\n image_analysis_result = self._image_analysis(query)\n if not image_analysis_result:\n return \"No good image analysis result was found\"\n return self._format_image_analysis_result(image_analysis_result)\n except Exception as e:\n raise RuntimeError(f\"Error while running AzureCogsImageAnalysisTool: {e}\")\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"AzureCogsImageAnalysisTool does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/image_analysis.html"} {"id": "b2a46d037b4f-0", "text": "Source code for langchain.tools.azure_cognitive_services.speech2text\nfrom __future__ import annotations\nimport logging\nimport time\nfrom typing 
import Any, Dict, Optional\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.azure_cognitive_services.utils import (\n detect_file_src_type,\n download_audio_from_url,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class AzureCogsSpeech2TextTool(BaseTool):\n \"\"\"Tool that queries the Azure Cognitive Services Speech2Text API.\n In order to set this up, follow instructions at:\n https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-speech-to-text?pivots=programming-language-python\n \"\"\"\n azure_cogs_key: str = \"\" #: :meta private:\n azure_cogs_region: str = \"\" #: :meta private:\n speech_language: str = \"en-US\" #: :meta private:\n speech_config: Any #: :meta private:\n name = \"azure_cognitive_services_speech2text\"\n description = (\n \"A wrapper around Azure Cognitive Services Speech2Text. \"\n \"Useful for when you need to transcribe audio to text. \"\n \"Input should be a url to an audio file.\"\n )\n[docs] @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and endpoint exists in environment.\"\"\"\n azure_cogs_key = get_from_dict_or_env(\n values, \"azure_cogs_key\", \"AZURE_COGS_KEY\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/speech2text.html"} {"id": "b2a46d037b4f-1", "text": "values, \"azure_cogs_key\", \"AZURE_COGS_KEY\"\n )\n azure_cogs_region = get_from_dict_or_env(\n values, \"azure_cogs_region\", \"AZURE_COGS_REGION\"\n )\n try:\n import azure.cognitiveservices.speech as speechsdk\n values[\"speech_config\"] = speechsdk.SpeechConfig(\n subscription=azure_cogs_key, region=azure_cogs_region\n )\n except ImportError:\n raise ImportError(\n \"azure-cognitiveservices-speech is not installed. 
\"\n \"Run `pip install azure-cognitiveservices-speech` to install.\"\n )\n return values\n def _continuous_recognize(self, speech_recognizer: Any) -> str:\n done = False\n text = \"\"\n def stop_cb(evt: Any) -> None:\n \"\"\"callback that stop continuous recognition\"\"\"\n speech_recognizer.stop_continuous_recognition_async()\n nonlocal done\n done = True\n def retrieve_cb(evt: Any) -> None:\n \"\"\"callback that retrieves the intermediate recognition results\"\"\"\n nonlocal text\n text += evt.result.text\n # retrieve text on recognized events\n speech_recognizer.recognized.connect(retrieve_cb)\n # stop continuous recognition on either session stopped or canceled events\n speech_recognizer.session_stopped.connect(stop_cb)\n speech_recognizer.canceled.connect(stop_cb)\n # Start continuous speech recognition\n speech_recognizer.start_continuous_recognition_async()\n while not done:\n time.sleep(0.5)\n return text\n def _speech2text(self, audio_path: str, speech_language: str) -> str:\n try:\n import azure.cognitiveservices.speech as speechsdk", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/speech2text.html"} {"id": "b2a46d037b4f-2", "text": "try:\n import azure.cognitiveservices.speech as speechsdk\n except ImportError:\n pass\n audio_src_type = detect_file_src_type(audio_path)\n if audio_src_type == \"local\":\n audio_config = speechsdk.AudioConfig(filename=audio_path)\n elif audio_src_type == \"remote\":\n tmp_audio_path = download_audio_from_url(audio_path)\n audio_config = speechsdk.AudioConfig(filename=tmp_audio_path)\n else:\n raise ValueError(f\"Invalid audio path: {audio_path}\")\n self.speech_config.speech_recognition_language = speech_language\n speech_recognizer = speechsdk.SpeechRecognizer(self.speech_config, audio_config)\n return self._continuous_recognize(speech_recognizer)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n try:\n text = self._speech2text(query, self.speech_language)\n return text\n except Exception as e:\n raise RuntimeError(f\"Error while running AzureCogsSpeech2TextTool: {e}\")\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"AzureCogsSpeech2TextTool does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/speech2text.html"} {"id": "f5cfde25c673-0", "text": "Source code for langchain.tools.azure_cognitive_services.utils\nimport os\nimport tempfile\nfrom urllib.parse import urlparse\nimport requests\n[docs]def detect_file_src_type(file_path: str) -> str:\n \"\"\"Detect if the file is local or remote.\"\"\"\n if os.path.isfile(file_path):\n return \"local\"\n parsed_url = urlparse(file_path)\n if parsed_url.scheme and parsed_url.netloc:\n return \"remote\"\n return \"invalid\"\n[docs]def download_audio_from_url(audio_url: str) -> str:\n \"\"\"Download audio from url to local.\"\"\"\n ext = audio_url.split(\".\")[-1]\n response = requests.get(audio_url, stream=True)\n response.raise_for_status()\n with tempfile.NamedTemporaryFile(mode=\"wb\", suffix=f\".{ext}\", delete=False) as f:\n for chunk in response.iter_content(chunk_size=8192):\n f.write(chunk)\n return f.name", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/utils.html"} {"id": 
"b82cdf02708c-0", "text": "Source code for langchain.tools.azure_cognitive_services.text2speech\nfrom __future__ import annotations\nimport logging\nimport tempfile\nfrom typing import Any, Dict, Optional\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class AzureCogsText2SpeechTool(BaseTool):\n \"\"\"Tool that queries the Azure Cognitive Services Text2Speech API.\n In order to set this up, follow instructions at:\n https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-text-to-speech?pivots=programming-language-python\n \"\"\"\n azure_cogs_key: str = \"\" #: :meta private:\n azure_cogs_region: str = \"\" #: :meta private:\n speech_language: str = \"en-US\" #: :meta private:\n speech_config: Any #: :meta private:\n name = \"azure_cognitive_services_text2speech\"\n description = (\n \"A wrapper around Azure Cognitive Services Text2Speech. \"\n \"Useful for when you need to convert text to speech. \"\n )\n[docs] @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and endpoint exists in environment.\"\"\"\n azure_cogs_key = get_from_dict_or_env(\n values, \"azure_cogs_key\", \"AZURE_COGS_KEY\"\n )\n azure_cogs_region = get_from_dict_or_env(\n values, \"azure_cogs_region\", \"AZURE_COGS_REGION\"\n )\n try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/text2speech.html"} {"id": "b82cdf02708c-1", "text": ")\n try:\n import azure.cognitiveservices.speech as speechsdk\n values[\"speech_config\"] = speechsdk.SpeechConfig(\n subscription=azure_cogs_key, region=azure_cogs_region\n )\n except ImportError:\n raise ImportError(\n \"azure-cognitiveservices-speech is not installed. 
\"\n \"Run `pip install azure-cognitiveservices-speech` to install.\"\n )\n return values\n def _text2speech(self, text: str, speech_language: str) -> str:\n try:\n import azure.cognitiveservices.speech as speechsdk\n except ImportError:\n pass\n self.speech_config.speech_synthesis_language = speech_language\n speech_synthesizer = speechsdk.SpeechSynthesizer(\n speech_config=self.speech_config, audio_config=None\n )\n result = speech_synthesizer.speak_text(text)\n if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:\n stream = speechsdk.AudioDataStream(result)\n with tempfile.NamedTemporaryFile(\n mode=\"wb\", suffix=\".wav\", delete=False\n ) as f:\n stream.save_to_wav_file(f.name)\n return f.name\n elif result.reason == speechsdk.ResultReason.Canceled:\n cancellation_details = result.cancellation_details\n logger.debug(f\"Speech synthesis canceled: {cancellation_details.reason}\")\n if cancellation_details.reason == speechsdk.CancellationReason.Error:\n raise RuntimeError(\n f\"Speech synthesis error: {cancellation_details.error_details}\"\n )\n return \"Speech synthesis canceled.\"\n else:\n return f\"Speech synthesis failed: {result.reason}\"\n def _run(\n self,\n query: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/text2speech.html"} {"id": "b82cdf02708c-2", "text": "def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n try:\n speech_file = self._text2speech(query, self.speech_language)\n return speech_file\n except Exception as e:\n raise RuntimeError(f\"Error while running AzureCogsText2SpeechTool: {e}\")\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"AzureCogsText2SpeechTool does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/text2speech.html"} {"id": "abbfdb66435b-0", "text": "Source code for langchain.tools.ddg_search.tool\n\"\"\"Tool for the DuckDuckGo search API.\"\"\"\nimport warnings\nfrom typing import Any, Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.duckduckgo_search import DuckDuckGoSearchAPIWrapper\n[docs]class DuckDuckGoSearchRun(BaseTool):\n \"\"\"Tool that adds the capability to query the DuckDuckGo search API.\"\"\"\n name = \"duckduckgo_search\"\n description = (\n \"A wrapper around DuckDuckGo Search. \"\n \"Useful for when you need to answer questions about current events. 
\"\n \"Input should be a search query.\"\n )\n api_wrapper: DuckDuckGoSearchAPIWrapper = Field(\n default_factory=DuckDuckGoSearchAPIWrapper\n )\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"DuckDuckGoSearch does not support async\")\n[docs]class DuckDuckGoSearchResults(BaseTool):\n \"\"\"Tool that queries the Duck Duck Go Search API and get back json.\"\"\"\n name = \"DuckDuckGo Results JSON\"\n description = (\n \"A wrapper around Duck Duck Go Search. \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/ddg_search/tool.html"} {"id": "abbfdb66435b-1", "text": "description = (\n \"A wrapper around Duck Duck Go Search. \"\n \"Useful for when you need to answer questions about current events. \"\n \"Input should be a search query. Output is a JSON array of the query results\"\n )\n num_results: int = 4\n api_wrapper: DuckDuckGoSearchAPIWrapper = Field(\n default_factory=DuckDuckGoSearchAPIWrapper\n )\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return str(self.api_wrapper.results(query, self.num_results))\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"DuckDuckGoSearchResults does not support async\")\n[docs]def DuckDuckGoSearchTool(*args: Any, **kwargs: Any) -> DuckDuckGoSearchRun:\n \"\"\"\n Deprecated. Use DuckDuckGoSearchRun instead.\n Args:\n *args:\n **kwargs:\n Returns:\n DuckDuckGoSearchRun\n \"\"\"\n warnings.warn(\n \"DuckDuckGoSearchTool will be deprecated in the future. 
\"\n \"Please use DuckDuckGoSearchRun instead.\",\n DeprecationWarning,\n )\n return DuckDuckGoSearchRun(*args, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/ddg_search/tool.html"} {"id": "568e902258f0-0", "text": "Source code for langchain.tools.json.tool\n# flake8: noqa\n\"\"\"Tools for working with JSON specs.\"\"\"\nfrom __future__ import annotations\nimport json\nimport re\nfrom pathlib import Path\nfrom typing import Dict, List, Optional, Union\nfrom pydantic import BaseModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\ndef _parse_input(text: str) -> List[Union[str, int]]:\n \"\"\"Parse input of the form data[\"key1\"][0][\"key2\"] into a list of keys.\"\"\"\n _res = re.findall(r\"\\[.*?]\", text)\n # strip the brackets and quotes, convert to int if possible\n res = [i[1:-1].replace('\"', \"\") for i in _res]\n res = [int(i) if i.isdigit() else i for i in res]\n return res\n[docs]class JsonSpec(BaseModel):\n \"\"\"Base class for JSON spec.\"\"\"\n dict_: Dict\n max_value_length: int = 200\n[docs] @classmethod\n def from_file(cls, path: Path) -> JsonSpec:\n \"\"\"Create a JsonSpec from a file.\"\"\"\n if not path.exists():\n raise FileNotFoundError(f\"File not found: {path}\")\n dict_ = json.loads(path.read_text())\n return cls(dict_=dict_)\n[docs] def keys(self, text: str) -> str:\n \"\"\"Return the keys of the dict at the given path.\n Args:\n text: Python representation of the path to the dict (e.g. data[\"key1\"][0][\"key2\"]).\n \"\"\"\n try:\n items = _parse_input(text)\n val = self.dict_", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/json/tool.html"} {"id": "568e902258f0-1", "text": "try:\n items = _parse_input(text)\n val = self.dict_\n for i in items:\n if i:\n val = val[i]\n if not isinstance(val, dict):\n raise ValueError(\n f\"Value at path `{text}` is not a dict, get the value directly.\"\n )\n return str(list(val.keys()))\n except Exception as e:\n return repr(e)\n[docs] def value(self, text: str) -> str:\n \"\"\"Return the value of the dict at the given path.\n Args:\n text: Python representation of the path to the dict (e.g. data[\"key1\"][0][\"key2\"]).\n \"\"\"\n try:\n items = _parse_input(text)\n val = self.dict_\n for i in items:\n val = val[i]\n if isinstance(val, dict) and len(str(val)) > self.max_value_length:\n return \"Value is a large dictionary, should explore its keys directly\"\n str_val = str(val)\n if len(str_val) > self.max_value_length:\n str_val = str_val[: self.max_value_length] + \"...\"\n return str_val\n except Exception as e:\n return repr(e)\n[docs]class JsonListKeysTool(BaseTool):\n \"\"\"Tool for listing keys in a JSON spec.\"\"\"\n name = \"json_spec_list_keys\"\n description = \"\"\"\n Can be used to list all keys at a given path. \n Before calling this you should be SURE that the path to this exists.\n The input is a text representation of the path to the dict in Python syntax (e.g. 
data[\"key1\"][0][\"key2\"]).\n \"\"\"\n spec: JsonSpec", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/json/tool.html"} {"id": "568e902258f0-2", "text": "\"\"\"\n spec: JsonSpec\n def _run(\n self,\n tool_input: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n return self.spec.keys(tool_input)\n async def _arun(\n self,\n tool_input: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n return self._run(tool_input)\n[docs]class JsonGetValueTool(BaseTool):\n \"\"\"Tool for getting a value in a JSON spec.\"\"\"\n name = \"json_spec_get_value\"\n description = \"\"\"\n Can be used to see value in string format at a given path.\n Before calling this you should be SURE that the path to this exists.\n The input is a text representation of the path to the dict in Python syntax (e.g. data[\"key1\"][0][\"key2\"]).\n \"\"\"\n spec: JsonSpec\n def _run(\n self,\n tool_input: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n return self.spec.value(tool_input)\n async def _arun(\n self,\n tool_input: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n return self._run(tool_input)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/json/tool.html"} {"id": "604158ed5a02-0", "text": "Source code for langchain.tools.pubmed.tool\n\"\"\"Tool for the Pubmed API.\"\"\"\nfrom typing import Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.pupmed import PubMedAPIWrapper\n[docs]class PubmedQueryRun(BaseTool):\n \"\"\"Tool that adds the capability to search using the PubMed API.\"\"\"\n name = \"PubMed\"\n description = (\n \"A wrapper around PubMed.org \"\n \"Useful for when you need to answer questions about Physics, Mathematics, \"\n \"Computer Science, Quantitative Biology, Quantitative Finance, Statistics, \"\n \"Electrical Engineering, and Economics \"\n \"from scientific articles on PubMed.org. 
\"\n \"Input should be a search query.\"\n )\n api_wrapper: PubMedAPIWrapper = Field(default_factory=PubMedAPIWrapper)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Arxiv tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the PubMed tool asynchronously.\"\"\"\n raise NotImplementedError(\"PubMedAPIWrapper does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/pubmed/tool.html"} {"id": "d5509e94cd46-0", "text": "Source code for langchain.tools.gmail.get_thread\nfrom typing import Dict, Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.gmail.base import GmailBaseTool\n[docs]class GetThreadSchema(BaseModel):\n # From https://support.google.com/mail/answer/7190?hl=en\n thread_id: str = Field(\n ...,\n description=\"The thread ID.\",\n )\n[docs]class GmailGetThread(GmailBaseTool):\n name: str = \"get_gmail_thread\"\n description: str = (\n \"Use this tool to search for email messages.\"\n \" The input must be a valid Gmail query.\"\n \" The output is a JSON list of messages.\"\n )\n args_schema: Type[GetThreadSchema] = GetThreadSchema\n def _run(\n self,\n thread_id: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> Dict:\n \"\"\"Run the tool.\"\"\"\n query = self.api_resource.users().threads().get(userId=\"me\", id=thread_id)\n thread_data = query.execute()\n if not isinstance(thread_data, dict):\n raise ValueError(\"The output of the query must be a list.\")\n messages = thread_data[\"messages\"]\n thread_data[\"messages\"] = []\n keys_to_keep = [\"id\", \"snippet\", \"snippet\"]\n # TODO: Parse body.\n for message in messages:\n thread_data[\"messages\"].append(\n {k: message[k] for k in keys_to_keep if k in message}\n )\n return thread_data\n async def _arun(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/get_thread.html"} {"id": "d5509e94cd46-1", "text": ")\n return thread_data\n async def _arun(\n self,\n thread_id: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> Dict:\n \"\"\"Run the tool.\"\"\"\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/get_thread.html"} {"id": "b9d163d91522-0", "text": "Source code for langchain.tools.gmail.base\n\"\"\"Base class for Gmail tools.\"\"\"\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING\nfrom pydantic import Field\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.gmail.utils import build_resource_service\nif TYPE_CHECKING:\n # This is for linting and IDE typehints\n from googleapiclient.discovery import Resource\nelse:\n try:\n # We do this so pydantic can resolve the types when instantiating\n from googleapiclient.discovery import Resource\n except ImportError:\n pass\n[docs]class GmailBaseTool(BaseTool):\n api_resource: Resource = Field(default_factory=build_resource_service)\n[docs] @classmethod\n def from_api_resource(cls, api_resource: Resource) -> \"GmailBaseTool\":\n return cls(service=api_resource)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/base.html"} {"id": "76e806cceaa4-0", "text": "Source code for 
langchain.tools.gmail.search\nimport base64\nimport email\nfrom enum import Enum\nfrom typing import Any, Dict, List, Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.gmail.base import GmailBaseTool\nfrom langchain.tools.gmail.utils import clean_email_body\n[docs]class Resource(str, Enum):\n \"\"\"Enumerator of Resources to search.\"\"\"\n THREADS = \"threads\"\n MESSAGES = \"messages\"\n[docs]class SearchArgsSchema(BaseModel):\n # From https://support.google.com/mail/answer/7190?hl=en\n query: str = Field(\n ...,\n description=\"The Gmail query. Example filters include from:sender,\"\n \" to:recipient, subject:subject, -filtered_term,\"\n \" in:folder, is:important|read|starred, after:year/mo/date, \"\n \"before:year/mo/date, label:label_name\"\n ' \"exact phrase\".'\n \" Search newer/older than using d (day), m (month), and y (year): \"\n \"newer_than:2d, older_than:1y.\"\n \" Attachments with extension example: filename:pdf. Multiple term\"\n \" matching example: from:amy OR from:david.\",\n )\n resource: Resource = Field(\n default=Resource.MESSAGES,\n description=\"Whether to search for threads or messages.\",\n )\n max_results: int = Field(\n default=10,\n description=\"The maximum number of results to return.\",\n )\n[docs]class GmailSearch(GmailBaseTool):\n name: str = \"search_gmail\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/search.html"} {"id": "76e806cceaa4-1", "text": "name: str = \"search_gmail\"\n description: str = (\n \"Use this tool to search for email messages or threads.\"\n \" The input must be a valid Gmail query.\"\n \" The output is a JSON list of the requested resource.\"\n )\n args_schema: Type[SearchArgsSchema] = SearchArgsSchema\n def _parse_threads(self, threads: List[Dict[str, Any]]) -> List[Dict[str, Any]]:\n # Add the thread message snippets to the thread results\n results = []\n for thread in threads:\n thread_id = thread[\"id\"]\n thread_data = (\n self.api_resource.users()\n .threads()\n .get(userId=\"me\", id=thread_id)\n .execute()\n )\n messages = thread_data[\"messages\"]\n thread[\"messages\"] = []\n for message in messages:\n snippet = message[\"snippet\"]\n thread[\"messages\"].append({\"snippet\": snippet, \"id\": message[\"id\"]})\n results.append(thread)\n return results\n def _parse_messages(self, messages: List[Dict[str, Any]]) -> List[Dict[str, Any]]:\n results = []\n for message in messages:\n message_id = message[\"id\"]\n message_data = (\n self.api_resource.users()\n .messages()\n .get(userId=\"me\", format=\"raw\", id=message_id)\n .execute()\n )\n raw_message = base64.urlsafe_b64decode(message_data[\"raw\"])\n email_msg = email.message_from_bytes(raw_message)\n subject = email_msg[\"Subject\"]\n sender = email_msg[\"From\"]\n message_body = email_msg.get_payload()\n body = clean_email_body(message_body)\n results.append(\n {", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/search.html"} {"id": "76e806cceaa4-2", "text": "body = clean_email_body(message_body)\n results.append(\n {\n \"id\": message[\"id\"],\n \"threadId\": message_data[\"threadId\"],\n \"snippet\": message_data[\"snippet\"],\n \"body\": body,\n \"subject\": subject,\n \"sender\": sender,\n }\n )\n return results\n def _run(\n self,\n query: str,\n resource: Resource = Resource.MESSAGES,\n max_results: int = 10,\n run_manager: 
Optional[CallbackManagerForToolRun] = None,\n ) -> List[Dict[str, Any]]:\n \"\"\"Run the tool.\"\"\"\n # Threads and messages are exposed by different endpoints, and each list\n # response is keyed by the resource name (\"threads\" or \"messages\").\n if resource == Resource.THREADS:\n collection = self.api_resource.users().threads()\n elif resource == Resource.MESSAGES:\n collection = self.api_resource.users().messages()\n else:\n raise NotImplementedError(f\"Resource of type {resource} not implemented.\")\n results = (\n collection.list(userId=\"me\", q=query, maxResults=max_results)\n .execute()\n .get(resource.value, [])\n )\n if resource == Resource.THREADS:\n return self._parse_threads(results)\n return self._parse_messages(results)\n async def _arun(\n self,\n query: str,\n resource: Resource = Resource.MESSAGES,\n max_results: int = 10,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> List[Dict[str, Any]]:\n \"\"\"Run the tool.\"\"\"\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/search.html"} {"id": "1a16ce6ce4de-0", "text": "Source code for langchain.tools.gmail.utils\n\"\"\"Gmail tool utils.\"\"\"\nfrom __future__ import annotations\nimport logging\nimport os\nfrom typing import TYPE_CHECKING, List, Optional, Tuple\nif TYPE_CHECKING:\n from google.auth.transport.requests import Request\n from google.oauth2.credentials import Credentials\n from google_auth_oauthlib.flow import InstalledAppFlow\n from googleapiclient.discovery import Resource\n from googleapiclient.discovery import build as build_resource\nlogger = logging.getLogger(__name__)\n[docs]def import_google() -> Tuple[Request, Credentials]:\n \"\"\"Import google libraries.\n Returns:\n Tuple[Request, Credentials]: Request and Credentials classes.\n \"\"\"\n # google-auth-httplib2\n try:\n from google.auth.transport.requests import Request # noqa: F401\n from google.oauth2.credentials import Credentials # noqa: F401\n except ImportError:\n raise ImportError(\n \"You need to install google-auth-httplib2 to use this toolkit. \"\n \"Try running pip install --upgrade google-auth-httplib2\"\n )\n return Request, Credentials\n[docs]def import_installed_app_flow() -> InstalledAppFlow:\n \"\"\"Import InstalledAppFlow class.\n Returns:\n InstalledAppFlow: InstalledAppFlow class.\n \"\"\"\n try:\n from google_auth_oauthlib.flow import InstalledAppFlow\n except ImportError:\n raise ImportError(\n \"You need to install google-auth-oauthlib to use this toolkit. \"\n \"Try running pip install --upgrade google-auth-oauthlib\"\n )\n return InstalledAppFlow\n[docs]def import_googleapiclient_resource_builder() -> build_resource:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/utils.html"} {"id": "1a16ce6ce4de-1", "text": "\"\"\"Import googleapiclient.discovery.build function.\n Returns:\n build_resource: googleapiclient.discovery.build function.\n \"\"\"\n try:\n from googleapiclient.discovery import build\n except ImportError:\n raise ImportError(\n \"You need to install googleapiclient to use this toolkit. 
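A usage sketch for the GmailSearch tool defined earlier; the api_resource comes from the credential helpers later in this utils module (build_resource_service), and the query is arbitrary Gmail query syntax.

from langchain.tools.gmail.search import GmailSearch
from langchain.tools.gmail.utils import build_resource_service

# build_resource_service() falls back to token.json / credentials.json and may
# start an interactive OAuth flow on first use.
search = GmailSearch(api_resource=build_resource_service())
threads = search.run(
    {"query": "subject:invoice newer_than:30d", "resource": "threads", "max_results": 5}
)
for t in threads:
    print(t["id"], t["messages"][0]["snippet"])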
\"\n \"Try running pip install --upgrade google-api-python-client\"\n )\n return build\nDEFAULT_SCOPES = [\"https://mail.google.com/\"]\nDEFAULT_CREDS_TOKEN_FILE = \"token.json\"\nDEFAULT_CLIENT_SECRETS_FILE = \"credentials.json\"\n[docs]def get_gmail_credentials(\n token_file: Optional[str] = None,\n client_secrets_file: Optional[str] = None,\n scopes: Optional[List[str]] = None,\n) -> Credentials:\n \"\"\"Get credentials.\"\"\"\n # From https://developers.google.com/gmail/api/quickstart/python\n Request, Credentials = import_google()\n InstalledAppFlow = import_installed_app_flow()\n creds = None\n scopes = scopes or DEFAULT_SCOPES\n token_file = token_file or DEFAULT_CREDS_TOKEN_FILE\n client_secrets_file = client_secrets_file or DEFAULT_CLIENT_SECRETS_FILE\n # The file token.json stores the user's access and refresh tokens, and is\n # created automatically when the authorization flow completes for the first\n # time.\n if os.path.exists(token_file):\n creds = Credentials.from_authorized_user_file(token_file, scopes)\n # If there are no (valid) credentials available, let the user log in.\n if not creds or not creds.valid:\n if creds and creds.expired and creds.refresh_token:\n creds.refresh(Request())\n else:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/utils.html"} {"id": "1a16ce6ce4de-2", "text": "creds.refresh(Request())\n else:\n # https://developers.google.com/gmail/api/quickstart/python#authorize_credentials_for_a_desktop_application # noqa\n flow = InstalledAppFlow.from_client_secrets_file(\n client_secrets_file, scopes\n )\n creds = flow.run_local_server(port=0)\n # Save the credentials for the next run\n with open(token_file, \"w\") as token:\n token.write(creds.to_json())\n return creds\n[docs]def build_resource_service(\n credentials: Optional[Credentials] = None,\n service_name: str = \"gmail\",\n service_version: str = \"v1\",\n) -> Resource:\n \"\"\"Build a Gmail service.\"\"\"\n credentials = credentials or get_gmail_credentials()\n builder = import_googleapiclient_resource_builder()\n return builder(service_name, service_version, credentials=credentials)\n[docs]def clean_email_body(body: str) -> str:\n \"\"\"Clean email body.\"\"\"\n try:\n from bs4 import BeautifulSoup\n try:\n soup = BeautifulSoup(str(body), \"html.parser\")\n body = soup.get_text()\n return str(body)\n except Exception as e:\n logger.error(e)\n return str(body)\n except ImportError:\n logger.warning(\"BeautifulSoup not installed. 
Skipping cleaning.\")\n return str(body)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/utils.html"} {"id": "b4cb8fc80047-0", "text": "Source code for langchain.tools.gmail.create_draft\nimport base64\nfrom email.message import EmailMessage\nfrom typing import List, Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.gmail.base import GmailBaseTool\n[docs]class CreateDraftSchema(BaseModel):\n message: str = Field(\n ...,\n description=\"The message to include in the draft.\",\n )\n to: List[str] = Field(\n ...,\n description=\"The list of recipients.\",\n )\n subject: str = Field(\n ...,\n description=\"The subject of the message.\",\n )\n cc: Optional[List[str]] = Field(\n None,\n description=\"The list of CC recipients.\",\n )\n bcc: Optional[List[str]] = Field(\n None,\n description=\"The list of BCC recipients.\",\n )\n[docs]class GmailCreateDraft(GmailBaseTool):\n name: str = \"create_gmail_draft\"\n description: str = (\n \"Use this tool to create a draft email with the provided message fields.\"\n )\n args_schema: Type[CreateDraftSchema] = CreateDraftSchema\n def _prepare_draft_message(\n self,\n message: str,\n to: List[str],\n subject: str,\n cc: Optional[List[str]] = None,\n bcc: Optional[List[str]] = None,\n ) -> dict:\n draft_message = EmailMessage()\n draft_message.set_content(message)\n draft_message[\"To\"] = \", \".join(to)\n draft_message[\"Subject\"] = subject\n if cc is not None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/create_draft.html"} {"id": "b4cb8fc80047-1", "text": "draft_message[\"Subject\"] = subject\n if cc is not None:\n draft_message[\"Cc\"] = \", \".join(cc)\n if bcc is not None:\n draft_message[\"Bcc\"] = \", \".join(bcc)\n encoded_message = base64.urlsafe_b64encode(draft_message.as_bytes()).decode()\n return {\"message\": {\"raw\": encoded_message}}\n def _run(\n self,\n message: str,\n to: List[str],\n subject: str,\n cc: Optional[List[str]] = None,\n bcc: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n try:\n create_message = self._prepare_draft_message(message, to, subject, cc, bcc)\n draft = (\n self.api_resource.users()\n .drafts()\n .create(userId=\"me\", body=create_message)\n .execute()\n )\n output = f'Draft created. 
Draft Id: {draft[\"id\"]}'\n return output\n except Exception as e:\n raise Exception(f\"An error occurred: {e}\")\n async def _arun(\n self,\n message: str,\n to: List[str],\n subject: str,\n cc: Optional[List[str]] = None,\n bcc: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n raise NotImplementedError(f\"The tool {self.name} does not support async yet.\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/create_draft.html"} {"id": "257cb184c9cd-0", "text": "Source code for langchain.tools.gmail.get_message\nimport base64\nimport email\nfrom typing import Dict, Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.gmail.base import GmailBaseTool\nfrom langchain.tools.gmail.utils import clean_email_body\n[docs]class SearchArgsSchema(BaseModel):\n message_id: str = Field(\n ...,\n description=\"The unique ID of the email message, retrieved from a search.\",\n )\n[docs]class GmailGetMessage(GmailBaseTool):\n name: str = \"get_gmail_message\"\n description: str = (\n \"Use this tool to fetch an email by message ID.\"\n \" Returns the thread ID, snipet, body, subject, and sender.\"\n )\n args_schema: Type[SearchArgsSchema] = SearchArgsSchema\n def _run(\n self,\n message_id: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> Dict:\n \"\"\"Run the tool.\"\"\"\n query = (\n self.api_resource.users()\n .messages()\n .get(userId=\"me\", format=\"raw\", id=message_id)\n )\n message_data = query.execute()\n raw_message = base64.urlsafe_b64decode(message_data[\"raw\"])\n email_msg = email.message_from_bytes(raw_message)\n subject = email_msg[\"Subject\"]\n sender = email_msg[\"From\"]\n message_body = email_msg.get_payload()\n body = clean_email_body(message_body)\n return {\n \"id\": message_id,\n \"threadId\": message_data[\"threadId\"],\n \"snippet\": message_data[\"snippet\"],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/get_message.html"} {"id": "257cb184c9cd-1", "text": "\"snippet\": message_data[\"snippet\"],\n \"body\": body,\n \"subject\": subject,\n \"sender\": sender,\n }\n async def _arun(\n self,\n message_id: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> Dict:\n \"\"\"Run the tool.\"\"\"\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/get_message.html"} {"id": "2ac7d873fe40-0", "text": "Source code for langchain.tools.gmail.send_message\n\"\"\"Send Gmail messages.\"\"\"\nimport base64\nfrom email.mime.multipart import MIMEMultipart\nfrom email.mime.text import MIMEText\nfrom typing import Any, Dict, List, Optional, Union\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.gmail.base import GmailBaseTool\n[docs]class SendMessageSchema(BaseModel):\n message: str = Field(\n ...,\n description=\"The message to send.\",\n )\n to: Union[str, List[str]] = Field(\n ...,\n description=\"The list of recipients.\",\n )\n subject: str = Field(\n ...,\n description=\"The subject of the message.\",\n )\n cc: Optional[Union[str, List[str]]] = Field(\n None,\n description=\"The list of CC recipients.\",\n )\n bcc: Optional[Union[str, List[str]]] = Field(\n None,\n description=\"The list of BCC 
recipients.\",\n )\n[docs]class GmailSendMessage(GmailBaseTool):\n name: str = \"send_gmail_message\"\n description: str = (\n \"Use this tool to send email messages.\" \" The input is the message, recipents\"\n )\n def _prepare_message(\n self,\n message: str,\n to: Union[str, List[str]],\n subject: str,\n cc: Optional[Union[str, List[str]]] = None,\n bcc: Optional[Union[str, List[str]]] = None,\n ) -> Dict[str, Any]:\n \"\"\"Create a message for an email.\"\"\"\n mime_message = MIMEMultipart()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/send_message.html"} {"id": "2ac7d873fe40-1", "text": "\"\"\"Create a message for an email.\"\"\"\n mime_message = MIMEMultipart()\n mime_message.attach(MIMEText(message, \"html\"))\n mime_message[\"To\"] = \", \".join(to if isinstance(to, list) else [to])\n mime_message[\"Subject\"] = subject\n if cc is not None:\n mime_message[\"Cc\"] = \", \".join(cc if isinstance(cc, list) else [cc])\n if bcc is not None:\n mime_message[\"Bcc\"] = \", \".join(bcc if isinstance(bcc, list) else [bcc])\n encoded_message = base64.urlsafe_b64encode(mime_message.as_bytes()).decode()\n return {\"raw\": encoded_message}\n def _run(\n self,\n message: str,\n to: Union[str, List[str]],\n subject: str,\n cc: Optional[Union[str, List[str]]] = None,\n bcc: Optional[Union[str, List[str]]] = None,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Run the tool.\"\"\"\n try:\n create_message = self._prepare_message(message, to, subject, cc=cc, bcc=bcc)\n send_message = (\n self.api_resource.users()\n .messages()\n .send(userId=\"me\", body=create_message)\n )\n sent_message = send_message.execute()\n return f'Message sent. Message Id: {sent_message[\"id\"]}'\n except Exception as error:\n raise Exception(f\"An error occurred: {error}\")\n async def _arun(\n self,\n message: str,\n to: Union[str, List[str]],\n subject: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/send_message.html"} {"id": "2ac7d873fe40-2", "text": "to: Union[str, List[str]],\n subject: str,\n cc: Optional[Union[str, List[str]]] = None,\n bcc: Optional[Union[str, List[str]]] = None,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Run the tool asynchronously.\"\"\"\n raise NotImplementedError(f\"The tool {self.name} does not support async yet.\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/send_message.html"} {"id": "9be686ea143a-0", "text": "Source code for langchain.tools.spark_sql.tool\n# flake8: noqa\n\"\"\"Tools for interacting with Spark SQL.\"\"\"\nfrom typing import Any, Dict, Optional\nfrom pydantic import BaseModel, Extra, Field, root_validator\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.chains.llm import LLMChain\nfrom langchain.prompts import PromptTemplate\nfrom langchain.utilities.spark_sql import SparkSQL\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.spark_sql.prompt import QUERY_CHECKER\n[docs]class BaseSparkSQLTool(BaseModel):\n \"\"\"Base tool for interacting with Spark SQL.\"\"\"\n db: SparkSQL = Field(exclude=True)\n # Override BaseTool.Config to appease mypy\n # See https://github.com/pydantic/pydantic/issues/4173\n[docs] class Config(BaseTool.Config):\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n extra 
= Extra.forbid\n[docs]class QuerySparkSQLTool(BaseSparkSQLTool, BaseTool):\n \"\"\"Tool for querying a Spark SQL database.\"\"\"\n name = \"query_sql_db\"\n description = \"\"\"\n Input to this tool is a detailed and correct SQL query, output is a result from Spark SQL.\n If the query is not correct, an error message will be returned.\n If an error is returned, rewrite the query, check the query, and try again.\n \"\"\"\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Execute the query, return the results or an error message.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/spark_sql/tool.html"} {"id": "9be686ea143a-1", "text": "\"\"\"Execute the query, return the results or an error message.\"\"\"\n return self.db.run_no_throw(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n raise NotImplementedError(\"QuerySqlDbTool does not support async\")\n[docs]class InfoSparkSQLTool(BaseSparkSQLTool, BaseTool):\n \"\"\"Tool for getting metadata about a Spark SQL database.\"\"\"\n name = \"schema_sql_db\"\n description = \"\"\"\n Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.\n Be sure that the tables actually exist by calling list_tables_sql_db first!\n Example Input: \"table1, table2, table3\"\n \"\"\"\n def _run(\n self,\n table_names: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Get the schema for tables in a comma-separated list.\"\"\"\n return self.db.get_table_info_no_throw(table_names.split(\", \"))\n async def _arun(\n self,\n table_names: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n raise NotImplementedError(\"SchemaSqlDbTool does not support async\")\n[docs]class ListSparkSQLTool(BaseSparkSQLTool, BaseTool):\n \"\"\"Tool for getting table names.\"\"\"\n name = \"list_tables_sql_db\"\n description = \"Input is an empty string, output is a comma separated list of tables in the Spark SQL database.\"\n def _run(\n self,\n tool_input: str = \"\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/spark_sql/tool.html"} {"id": "9be686ea143a-2", "text": "def _run(\n self,\n tool_input: str = \"\",\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Get a comma-separated list of usable table names.\"\"\"\n return \", \".join(self.db.get_usable_table_names())\n async def _arun(\n self,\n tool_input: str = \"\",\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n raise NotImplementedError(\"ListTablesSqlDbTool does not support async\")\n[docs]class QueryCheckerTool(BaseSparkSQLTool, BaseTool):\n \"\"\"Use an LLM to check if a query is correct.\n Adapted from https://www.patterns.app/blog/2023/01/18/crunchbot-sql-analyst-gpt/\"\"\"\n template: str = QUERY_CHECKER\n llm: BaseLanguageModel\n llm_chain: LLMChain = Field(init=False)\n name = \"query_checker_sql_db\"\n description = \"\"\"\n Use this tool to double check if your query is correct before executing it.\n Always use this tool before executing a query with query_sql_db!\n \"\"\"\n[docs] @root_validator(pre=True)\n def initialize_llm_chain(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n if \"llm_chain\" not in values:\n values[\"llm_chain\"] = LLMChain(\n llm=values.get(\"llm\"),\n prompt=PromptTemplate(\n template=QUERY_CHECKER, input_variables=[\"query\"]\n ),\n )\n if 
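Taken together, these classes give an agent a read-only Spark SQL workflow: list tables, inspect schemas, then query. A hedged sketch of direct use, assuming db is an already-configured langchain.utilities.spark_sql.SparkSQL instance and the table name "users" is hypothetical:

from langchain.tools.spark_sql.tool import (
    InfoSparkSQLTool,
    ListSparkSQLTool,
    QuerySparkSQLTool,
)

tables = ListSparkSQLTool(db=db).run("")        # e.g. "users, orders"
schema = InfoSparkSQLTool(db=db).run("users")   # schema plus sample rows
result = QuerySparkSQLTool(db=db).run("SELECT COUNT(*) FROM users")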
values[\"llm_chain\"].prompt.input_variables != [\"query\"]:\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/spark_sql/tool.html"} {"id": "9be686ea143a-3", "text": "raise ValueError(\n \"LLM chain for QueryCheckerTool need to use ['query'] as input_variables \"\n \"for the embedded prompt\"\n )\n return values\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the LLM to check the query.\"\"\"\n return self.llm_chain.predict(query=query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n return await self.llm_chain.apredict(query=query)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/spark_sql/tool.html"} {"id": "108446e3a062-0", "text": "Source code for langchain.tools.wikipedia.tool\n\"\"\"Tool for the Wikipedia API.\"\"\"\nfrom typing import Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.wikipedia import WikipediaAPIWrapper\n[docs]class WikipediaQueryRun(BaseTool):\n \"\"\"Tool that adds the capability to search using the Wikipedia API.\"\"\"\n name = \"Wikipedia\"\n description = (\n \"A wrapper around Wikipedia. \"\n \"Useful for when you need to answer general questions about \"\n \"people, places, companies, facts, historical events, or other subjects. \"\n \"Input should be a search query.\"\n )\n api_wrapper: WikipediaAPIWrapper\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Wikipedia tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Wikipedia tool asynchronously.\"\"\"\n raise NotImplementedError(\"WikipediaQueryRun does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/wikipedia/tool.html"} {"id": "acb31cfbf2bd-0", "text": "Source code for langchain.tools.google_places.tool\n\"\"\"Tool for the Google search API.\"\"\"\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.google_places_api import GooglePlacesAPIWrapper\n[docs]class GooglePlacesSchema(BaseModel):\n query: str = Field(..., description=\"Query for google maps\")\n[docs]class GooglePlacesTool(BaseTool):\n \"\"\"Tool that adds the capability to query the Google places API.\"\"\"\n name = \"google_places\"\n description = (\n \"A wrapper around Google Places. \"\n \"Useful for when you need to validate or \"\n \"discover addressed from ambiguous text. 
\"\n \"Input should be a search query.\"\n )\n api_wrapper: GooglePlacesAPIWrapper = Field(default_factory=GooglePlacesAPIWrapper)\n args_schema: Type[BaseModel] = GooglePlacesSchema\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"GooglePlacesRun does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/google_places/tool.html"} {"id": "52f8ad970942-0", "text": "Source code for langchain.tools.searx_search.tool\n\"\"\"Tool for the SearxNG search API.\"\"\"\nfrom typing import Optional\nfrom pydantic import Extra\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool, Field\nfrom langchain.utilities.searx_search import SearxSearchWrapper\n[docs]class SearxSearchRun(BaseTool):\n \"\"\"Tool that adds the capability to query a Searx instance.\"\"\"\n name = \"searx_search\"\n description = (\n \"A meta search engine.\"\n \"Useful for when you need to answer questions about current events.\"\n \"Input should be a search query.\"\n )\n wrapper: SearxSearchWrapper\n kwargs: dict = Field(default_factory=dict)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return self.wrapper.run(query, **self.kwargs)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n return await self.wrapper.arun(query, **self.kwargs)\n[docs]class SearxSearchResults(BaseTool):\n \"\"\"Tool that has the capability to query a Searx instance and get back json.\"\"\"\n name = \"Searx Search Results\"\n description = (\n \"A meta search engine.\"\n \"Useful for when you need to answer questions about current events.\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/searx_search/tool.html"} {"id": "52f8ad970942-1", "text": "\"Useful for when you need to answer questions about current events.\"\n \"Input should be a search query. 
Output is a JSON array of the query results\"\n )\n wrapper: SearxSearchWrapper\n num_results: int = 4\n kwargs: dict = Field(default_factory=dict)\n[docs] class Config:\n \"\"\"Pydantic config.\"\"\"\n extra = Extra.allow\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return str(self.wrapper.results(query, self.num_results, **self.kwargs))\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n return (\n await self.wrapper.aresults(query, self.num_results, **self.kwargs)\n ).__str__()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/searx_search/tool.html"} {"id": "62197be75de6-0", "text": "Source code for langchain.tools.arxiv.tool\n\"\"\"Tool for the Arxiv API.\"\"\"\nfrom typing import Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.arxiv import ArxivAPIWrapper\n[docs]class ArxivQueryRun(BaseTool):\n \"\"\"Tool that adds the capability to search using the Arxiv API.\"\"\"\n name = \"arxiv\"\n description = (\n \"A wrapper around Arxiv.org \"\n \"Useful for when you need to answer questions about Physics, Mathematics, \"\n \"Computer Science, Quantitative Biology, Quantitative Finance, Statistics, \"\n \"Electrical Engineering, and Economics \"\n \"from scientific articles on arxiv.org. \"\n \"Input should be a search query.\"\n )\n api_wrapper: ArxivAPIWrapper = Field(default_factory=ArxivAPIWrapper)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Arxiv tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Arxiv tool asynchronously.\"\"\"\n raise NotImplementedError(\"ArxivAPIWrapper does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/arxiv/tool.html"} {"id": "c5f7f3d8cf3e-0", "text": "Source code for langchain.tools.graphql.tool\nimport json\nfrom typing import Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.graphql import GraphQLAPIWrapper\n[docs]class BaseGraphQLTool(BaseTool):\n \"\"\"Base tool for querying a GraphQL API.\"\"\"\n graphql_wrapper: GraphQLAPIWrapper\n name = \"query_graphql\"\n description = \"\"\"\\\n Input to this tool is a detailed and correct GraphQL query, output is a result from the API.\n If the query is not correct, an error message will be returned.\n If an error is returned with 'Bad request' in it, rewrite the query and try again.\n If an error is returned with 'Unauthorized' in it, do not try again, but tell the user to change their authentication.\n Example Input: query {{ allUsers {{ id, name, email }} }}\\\n \"\"\" # noqa: E501\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n def _run(\n self,\n tool_input: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n result = self.graphql_wrapper.run(tool_input)\n return json.dumps(result, indent=2)\n async def _arun(\n self,\n tool_input: str,\n 
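Because api_wrapper has a default_factory, ArxivQueryRun can be constructed with no arguments. A hedged sketch (assumes the third-party arxiv package used by ArxivAPIWrapper is installed):

from langchain.tools.arxiv.tool import ArxivQueryRun

tool = ArxivQueryRun()
print(tool.run("quantum error correction"))  # titles/summaries of matching papers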
run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the GraphQL tool asynchronously.\"\"\"\n raise NotImplementedError(\"GraphQLAPIWrapper does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/graphql/tool.html"} {"id": "2cee1ef4f0fb-0", "text": "Source code for langchain.tools.powerbi.tool\n\"\"\"Tools for interacting with a Power BI dataset.\"\"\"\nimport logging\nfrom time import perf_counter\nfrom typing import Any, Dict, Optional, Tuple\nfrom pydantic import Field, validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chat_models.openai import _import_tiktoken\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.powerbi.prompt import (\n BAD_REQUEST_RESPONSE,\n DEFAULT_FEWSHOT_EXAMPLES,\n RETRY_RESPONSE,\n)\nfrom langchain.utilities.powerbi import PowerBIDataset, json_to_md\nlogger = logging.getLogger(__name__)\n[docs]class QueryPowerBITool(BaseTool):\n \"\"\"Tool for querying a Power BI Dataset.\"\"\"\n name = \"query_powerbi\"\n description = \"\"\"\n Input to this tool is a detailed question about the dataset, output is a result from the dataset. It will try to answer the question using the dataset, and if it cannot, it will ask for clarification.\n Example Input: \"How many rows are in table1?\"\n \"\"\" # noqa: E501\n llm_chain: LLMChain\n powerbi: PowerBIDataset = Field(exclude=True)\n examples: Optional[str] = DEFAULT_FEWSHOT_EXAMPLES\n session_cache: Dict[str, Any] = Field(default_factory=dict, exclude=True)\n max_iterations: int = 5\n output_token_limit: int = 4000\n tiktoken_model_name: Optional[str] = None # \"cl100k_base\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/powerbi/tool.html"} {"id": "2cee1ef4f0fb-1", "text": "\"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] @validator(\"llm_chain\")\n def validate_llm_chain_input_variables( # pylint: disable=E0213\n cls, llm_chain: LLMChain\n ) -> LLMChain:\n \"\"\"Make sure the LLM chain has the correct input variables.\"\"\"\n for var in llm_chain.prompt.input_variables:\n if var not in [\"tool_input\", \"tables\", \"schemas\", \"examples\"]:\n raise ValueError(\n \"LLM chain for QueryPowerBITool must have input variables \"\n \"['tool_input', 'tables', 'schemas', 'examples'], \"\n f\"found {llm_chain.prompt.input_variables}\" # noqa: E501\n )\n return llm_chain\n def _check_cache(self, tool_input: str) -> Optional[str]:\n \"\"\"Check if the input is present in the cache.\n If the value is a bad request, overwrite with the escalated version,\n if not present return None.\"\"\"\n if tool_input not in self.session_cache:\n return None\n return self.session_cache[tool_input]\n def _run(\n self,\n tool_input: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Execute the query, return the results or an error message.\"\"\"\n if cache := self._check_cache(tool_input):\n logger.debug(\"Found cached result for %s: %s\", tool_input, cache)\n return cache\n try:\n logger.info(\"Running PBI Query Tool with input: %s\", tool_input)\n query = self.llm_chain.predict(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/powerbi/tool.html"}
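A hedged sketch of BaseGraphQLTool in isolation; the endpoint URL is a placeholder, and GraphQLAPIWrapper additionally requires the gql package at runtime:

from langchain.tools.graphql.tool import BaseGraphQLTool
from langchain.utilities.graphql import GraphQLAPIWrapper

wrapper = GraphQLAPIWrapper(graphql_endpoint="https://example.com/graphql")  # placeholder
tool = BaseGraphQLTool(graphql_wrapper=wrapper)
# The result comes back json.dumps-formatted, per _run above:
print(tool.run("query { allUsers { id, name, email } }"))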
{"id": "2cee1ef4f0fb-2", "text": "query = self.llm_chain.predict(\n tool_input=tool_input,\n tables=self.powerbi.get_table_names(),\n schemas=self.powerbi.get_schemas(),\n examples=self.examples,\n )\n except Exception as exc: # pylint: disable=broad-except\n self.session_cache[tool_input] = f\"Error on call to LLM: {exc}\"\n return self.session_cache[tool_input]\n if query == \"I cannot answer this\":\n self.session_cache[tool_input] = query\n return self.session_cache[tool_input]\n logger.info(\"PBI Query:\\n%s\", query)\n start_time = perf_counter()\n pbi_result = self.powerbi.run(command=query)\n end_time = perf_counter()\n logger.debug(\"PBI Result: %s\", pbi_result)\n logger.debug(f\"PBI Query duration: {end_time - start_time:0.6f}\")\n result, error = self._parse_output(pbi_result)\n if error is not None and \"TokenExpired\" in error:\n self.session_cache[\n tool_input\n ] = \"Authentication token expired or invalid, please try reauthenticate.\"\n return self.session_cache[tool_input]\n iterations = kwargs.get(\"iterations\", 0)\n if error and iterations < self.max_iterations:\n return self._run(\n tool_input=RETRY_RESPONSE.format(\n tool_input=tool_input, query=query, error=error\n ),\n run_manager=run_manager,\n iterations=iterations + 1,\n )\n self.session_cache[tool_input] = (\n result if result else BAD_REQUEST_RESPONSE.format(error=error)\n )\n return self.session_cache[tool_input]\n async def _arun(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/powerbi/tool.html"} {"id": "2cee1ef4f0fb-3", "text": "async def _arun(\n self,\n tool_input: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Execute the query, return the results or an error message.\"\"\"\n if cache := self._check_cache(tool_input):\n logger.debug(\"Found cached result for %s: %s\", tool_input, cache)\n return f\"{cache}, from cache, you have already asked this question.\"\n try:\n logger.info(\"Running PBI Query Tool with input: %s\", tool_input)\n query = await self.llm_chain.apredict(\n tool_input=tool_input,\n tables=self.powerbi.get_table_names(),\n schemas=self.powerbi.get_schemas(),\n examples=self.examples,\n )\n except Exception as exc: # pylint: disable=broad-except\n self.session_cache[tool_input] = f\"Error on call to LLM: {exc}\"\n return self.session_cache[tool_input]\n if query == \"I cannot answer this\":\n self.session_cache[tool_input] = query\n return self.session_cache[tool_input]\n logger.info(\"PBI Query: %s\", query)\n start_time = perf_counter()\n pbi_result = await self.powerbi.arun(command=query)\n end_time = perf_counter()\n logger.debug(\"PBI Result: %s\", pbi_result)\n logger.debug(f\"PBI Query duration: {end_time - start_time:0.6f}\")\n result, error = self._parse_output(pbi_result)\n if error is not None and (\"TokenExpired\" in error or \"TokenError\" in error):\n self.session_cache[\n tool_input", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/powerbi/tool.html"} {"id": "2cee1ef4f0fb-4", "text": "self.session_cache[\n tool_input\n ] = \"Authentication token expired or invalid, please try to reauthenticate or check the scope of the credential.\" # noqa: E501\n return self.session_cache[tool_input]\n iterations = kwargs.get(\"iterations\", 0)\n if error and iterations < self.max_iterations:\n return await self._arun(\n tool_input=RETRY_RESPONSE.format(\n tool_input=tool_input, query=query, error=error\n ),\n run_manager=run_manager,\n iterations=iterations 
+ 1,\n )\n self.session_cache[tool_input] = (\n result if result else BAD_REQUEST_RESPONSE.format(error=error)\n )\n return self.session_cache[tool_input]\n def _parse_output(\n self, pbi_result: Dict[str, Any]\n ) -> Tuple[Optional[str], Optional[Any]]:\n \"\"\"Parse the output of the query to a markdown table.\"\"\"\n if \"results\" in pbi_result:\n rows = pbi_result[\"results\"][0][\"tables\"][0][\"rows\"]\n if len(rows) == 0:\n logger.info(\"0 records in result, query was valid.\")\n return (\n None,\n \"0 rows returned, this might be correct, but please validate that all filter values were correct.\", # noqa: E501\n )\n result = json_to_md(rows)\n too_long, length = self._result_too_large(result)\n if too_long:\n return (\n f\"Result too large, please try to be more specific or use the `TOPN` function. The result is {length} tokens long, the limit is {self.output_token_limit} tokens.\", # noqa: E501\n None,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/powerbi/tool.html"} {"id": "2cee1ef4f0fb-5", "text": "None,\n )\n return result, None\n if \"error\" in pbi_result:\n if (\n \"pbi.error\" in pbi_result[\"error\"]\n and \"details\" in pbi_result[\"error\"][\"pbi.error\"]\n ):\n return None, pbi_result[\"error\"][\"pbi.error\"][\"details\"][0][\"detail\"]\n return None, pbi_result[\"error\"]\n return None, pbi_result\n def _result_too_large(self, result: str) -> Tuple[bool, int]:\n \"\"\"Tokenize the output of the query.\"\"\"\n if self.tiktoken_model_name:\n tiktoken_ = _import_tiktoken()\n encoding = tiktoken_.encoding_for_model(self.tiktoken_model_name)\n length = len(encoding.encode(result))\n logger.info(\"Result length: %s\", length)\n return length > self.output_token_limit, length\n return False, 0\n[docs]class InfoPowerBITool(BaseTool):\n \"\"\"Tool for getting metadata about a PowerBI Dataset.\"\"\"\n name = \"schema_powerbi\"\n description = \"\"\"\n Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.\n Be sure that the tables actually exist by calling list_tables_powerbi first!\n Example Input: \"table1, table2, table3\"\n \"\"\" # noqa: E501\n powerbi: PowerBIDataset = Field(exclude=True)\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n def _run(\n self,\n tool_input: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/powerbi/tool.html"} {"id": "2cee1ef4f0fb-6", "text": ") -> str:\n \"\"\"Get the schema for tables in a comma-separated list.\"\"\"\n return self.powerbi.get_table_info(tool_input.split(\", \"))\n async def _arun(\n self,\n tool_input: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n return await self.powerbi.aget_table_info(tool_input.split(\", \"))\n[docs]class ListPowerBITool(BaseTool):\n \"\"\"Tool for getting table names.\"\"\"\n name = \"list_tables_powerbi\"\n description = \"Input is an empty string, output is a comma separated list of tables in the database.\" # noqa: E501 # pylint: disable=C0301\n powerbi: PowerBIDataset = Field(exclude=True)\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n def _run(\n self,\n tool_input: Optional[str] = None,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Get the names of the tables.\"\"\"\n return \", 
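The two lightweight Power BI tools can be exercised without the LLM-backed query tool. A hedged sketch, where dataset is assumed to be a configured langchain.utilities.powerbi.PowerBIDataset and the table name is hypothetical:

from langchain.tools.powerbi.tool import InfoPowerBITool, ListPowerBITool

tables = ListPowerBITool(powerbi=dataset).run("")        # "table1, table2, ..."
schema = InfoPowerBITool(powerbi=dataset).run("table1")  # schema plus sample rows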
\".join(self.powerbi.get_table_names())\n async def _arun(\n self,\n tool_input: Optional[str] = None,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Get the names of the tables.\"\"\"\n return \", \".join(self.powerbi.get_table_names())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/powerbi/tool.html"} {"id": "fcdd635f8485-0", "text": "Source code for langchain.tools.bing_search.tool\n\"\"\"Tool for the Bing search API.\"\"\"\nfrom typing import Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.bing_search import BingSearchAPIWrapper\n[docs]class BingSearchRun(BaseTool):\n \"\"\"Tool that adds the capability to query the Bing search API.\"\"\"\n name = \"bing_search\"\n description = (\n \"A wrapper around Bing Search. \"\n \"Useful for when you need to answer questions about current events. \"\n \"Input should be a search query.\"\n )\n api_wrapper: BingSearchAPIWrapper\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"BingSearchRun does not support async\")\n[docs]class BingSearchResults(BaseTool):\n \"\"\"Tool that has capability to query the Bing Search API and get back json.\"\"\"\n name = \"Bing Search Results JSON\"\n description = (\n \"A wrapper around Bing Search. \"\n \"Useful for when you need to answer questions about current events. \"\n \"Input should be a search query. Output is a JSON array of the query results\"\n )\n num_results: int = 4\n api_wrapper: BingSearchAPIWrapper\n def _run(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/bing_search/tool.html"} {"id": "fcdd635f8485-1", "text": "api_wrapper: BingSearchAPIWrapper\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return str(self.api_wrapper.results(query, self.num_results))\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"BingSearchResults does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/bing_search/tool.html"} {"id": "cd25f15e196d-0", "text": "Source code for langchain.tools.shell.tool\nimport asyncio\nimport platform\nimport warnings\nfrom typing import List, Optional, Type, Union\nfrom pydantic import BaseModel, Field, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.bash import BashProcess\n[docs]class ShellInput(BaseModel):\n \"\"\"Commands for the Bash Shell tool.\"\"\"\n commands: Union[str, List[str]] = Field(\n ...,\n description=\"List of shell commands to run. 
Deserialized using json.loads\",\n )\n \"\"\"List of shell commands to run.\"\"\"\n @root_validator\n def _validate_commands(cls, values: dict) -> dict:\n \"\"\"Validate commands.\"\"\"\n # TODO: Add real validators\n commands = values.get(\"commands\")\n if not isinstance(commands, list):\n values[\"commands\"] = [commands]\n # Warn that the bash tool is not safe\n warnings.warn(\n \"The shell tool has no safeguards by default. Use at your own risk.\"\n )\n return values\ndef _get_default_bash_processs() -> BashProcess:\n \"\"\"Build the default BashProcess for the shell tool.\"\"\"\n return BashProcess(return_err_output=True)\ndef _get_platform() -> str:\n \"\"\"Get platform.\"\"\"\n system = platform.system()\n if system == \"Darwin\":\n return \"MacOS\"\n return system\n[docs]class ShellTool(BaseTool):\n \"\"\"Tool to run shell commands.\"\"\"\n process: BashProcess = Field(default_factory=_get_default_bash_processs)\n \"\"\"Bash process to run commands.\"\"\"\n name: str = \"terminal\"\n \"\"\"Name of tool.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/shell/tool.html"} {"id": "cd25f15e196d-1", "text": "name: str = \"terminal\"\n \"\"\"Name of tool.\"\"\"\n description: str = f\"Run shell commands on this {_get_platform()} machine.\"\n \"\"\"Description of tool.\"\"\"\n args_schema: Type[BaseModel] = ShellInput\n \"\"\"Schema for input arguments.\"\"\"\n def _run(\n self,\n commands: Union[str, List[str]],\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Run commands and return final output.\"\"\"\n return self.process.run(commands)\n async def _arun(\n self,\n commands: Union[str, List[str]],\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Run commands asynchronously and return final output.\"\"\"\n return await asyncio.get_event_loop().run_in_executor(\n None, self.process.run, commands\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/shell/tool.html"} {"id": "41e31f11fc7e-0", "text": "Source code for langchain.tools.python.tool\n\"\"\"A tool for running python code in a REPL.\"\"\"\nimport ast\nimport asyncio\nimport re\nimport sys\nfrom contextlib import redirect_stdout\nfrom io import StringIO\nfrom typing import Any, Dict, Optional\nfrom pydantic import Field, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities import PythonREPL\ndef _get_default_python_repl() -> PythonREPL:\n return PythonREPL(_globals=globals(), _locals=None)\n[docs]def sanitize_input(query: str) -> str:\n \"\"\"Sanitize input to the python REPL.\n Remove leading/trailing whitespace, backticks, and a leading \"python\"\n (in case the LLM mistakes the Python console for a terminal).\n Args:\n query: The query to sanitize\n Returns:\n str: The sanitized query\n \"\"\"\n # Removes `, whitespace & python from start\n query = re.sub(r\"^(\\s|`)*(?i:python)?\\s*\", \"\", query)\n # Removes whitespace & ` from end\n query = re.sub(r\"(\\s|`)*$\", \"\", query)\n return query\n[docs]class PythonREPLTool(BaseTool):\n \"\"\"A tool for running python code in a REPL.\"\"\"\n name = \"Python_REPL\"\n description = (\n \"A Python shell. Use this to execute python commands. \"\n \"Input should be a valid python command. 
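ShellTool accepts either one command string or a list; ShellInput's validator coerces a bare string to a one-element list and warns that there are no safeguards. A minimal sketch:

from langchain.tools.shell.tool import ShellTool

shell = ShellTool()
# Passing a dict keyed by the ShellInput field name:
print(shell.run({"commands": ["echo 'Hello World!'", "uname -s"]}))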
\"\n \"If you want to see the output of a value, you should print it out \"\n \"with `print(...)`.\"\n )\n python_repl: PythonREPL = Field(default_factory=_get_default_python_repl)\n sanitize_input: bool = True", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/python/tool.html"} {"id": "41e31f11fc7e-1", "text": "sanitize_input: bool = True\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> Any:\n \"\"\"Use the tool.\"\"\"\n if self.sanitize_input:\n query = sanitize_input(query)\n return self.python_repl.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> Any:\n \"\"\"Use the tool asynchronously.\"\"\"\n if self.sanitize_input:\n query = sanitize_input(query)\n loop = asyncio.get_running_loop()\n result = await loop.run_in_executor(None, self.run, query)\n return result\n[docs]class PythonAstREPLTool(BaseTool):\n \"\"\"A tool for running python code in a REPL.\"\"\"\n name = \"python_repl_ast\"\n description = (\n \"A Python shell. Use this to execute python commands. \"\n \"Input should be a valid python command. \"\n \"When using this tool, sometimes output is abbreviated - \"\n \"make sure it does not look abbreviated before using it in your answer.\"\n )\n globals: Optional[Dict] = Field(default_factory=dict)\n locals: Optional[Dict] = Field(default_factory=dict)\n sanitize_input: bool = True\n[docs] @root_validator(pre=True)\n def validate_python_version(cls, values: Dict) -> Dict:\n \"\"\"Validate valid python version.\"\"\"\n if sys.version_info < (3, 9):\n raise ValueError(\n \"This tool relies on Python 3.9 or higher \"\n \"(as it uses new functionality in the `ast` module, \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/python/tool.html"} {"id": "41e31f11fc7e-2", "text": "\"(as it uses new functionality in the `ast` module, \"\n f\"you have Python version: {sys.version}\"\n )\n return values\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n try:\n if self.sanitize_input:\n query = sanitize_input(query)\n tree = ast.parse(query)\n module = ast.Module(tree.body[:-1], type_ignores=[])\n exec(ast.unparse(module), self.globals, self.locals) # type: ignore\n module_end = ast.Module(tree.body[-1:], type_ignores=[])\n module_end_str = ast.unparse(module_end) # type: ignore\n io_buffer = StringIO()\n try:\n with redirect_stdout(io_buffer):\n ret = eval(module_end_str, self.globals, self.locals)\n if ret is None:\n return io_buffer.getvalue()\n else:\n return ret\n except Exception:\n with redirect_stdout(io_buffer):\n exec(module_end_str, self.globals, self.locals)\n return io_buffer.getvalue()\n except Exception as e:\n return \"{}: {}\".format(type(e).__name__, str(e))\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"PythonReplTool does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/python/tool.html"} {"id": "89b7f9a9460f-0", "text": "Source code for langchain.tools.openapi.utils.api_models\n\"\"\"Pydantic models for parsing an OpenAPI spec.\"\"\"\nimport logging\nfrom enum import Enum\nfrom typing import Any, Dict, List, Optional, Sequence, Tuple, Type, Union\nfrom openapi_schema_pydantic import MediaType, Parameter, 
Reference, RequestBody, Schema\nfrom pydantic import BaseModel, Field\nfrom langchain.tools.openapi.utils.openapi_utils import HTTPVerb, OpenAPISpec\nlogger = logging.getLogger(__name__)\nPRIMITIVE_TYPES = {\n \"integer\": int,\n \"number\": float,\n \"string\": str,\n \"boolean\": bool,\n \"array\": List,\n \"object\": Dict,\n \"null\": None,\n}\n# See https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.1.0.md#parameterIn\n# for more info.\n[docs]class APIPropertyLocation(Enum):\n \"\"\"The location of the property.\"\"\"\n QUERY = \"query\"\n PATH = \"path\"\n HEADER = \"header\"\n COOKIE = \"cookie\" # Not yet supported\n[docs] @classmethod\n def from_str(cls, location: str) -> \"APIPropertyLocation\":\n \"\"\"Parse an APIPropertyLocation.\"\"\"\n try:\n return cls(location)\n except ValueError:\n raise ValueError(\n f\"Invalid APIPropertyLocation. Valid values are {cls.__members__}\"\n )\n_SUPPORTED_MEDIA_TYPES = (\"application/json\",)\nSUPPORTED_LOCATIONS = {\n APIPropertyLocation.QUERY,\n APIPropertyLocation.PATH,\n}\nINVALID_LOCATION_TEMPL = (\n 'Unsupported APIPropertyLocation \"{location}\"'\n \" for parameter {name}. \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"} {"id": "89b7f9a9460f-1", "text": "'Unsupported APIPropertyLocation \"{location}\"'\n \" for parameter {name}. \"\n + f\"Valid values are {[loc.value for loc in SUPPORTED_LOCATIONS]}\"\n)\nSCHEMA_TYPE = Union[str, Type, tuple, None, Enum]\n[docs]class APIPropertyBase(BaseModel):\n \"\"\"Base model for an API property.\"\"\"\n # The name of the parameter is required and is case-sensitive.\n # If \"in\" is \"path\", the \"name\" field must correspond to a template expression\n # within the path field in the Paths Object.\n # If \"in\" is \"header\" and the \"name\" field is \"Accept\", \"Content-Type\",\n # or \"Authorization\", the parameter definition is ignored.\n # For all other cases, the \"name\" corresponds to the parameter\n # name used by the \"in\" property.\n name: str = Field(alias=\"name\")\n \"\"\"The name of the property.\"\"\"\n required: bool = Field(alias=\"required\")\n \"\"\"Whether the property is required.\"\"\"\n type: SCHEMA_TYPE = Field(alias=\"type\")\n \"\"\"The type of the property.\n \n Either a primitive type, a component/parameter type,\n or an array or 'object' (dict) of the above.\"\"\"\n default: Optional[Any] = Field(alias=\"default\", default=None)\n \"\"\"The default value of the property.\"\"\"\n description: Optional[str] = Field(alias=\"description\", default=None)\n \"\"\"The description of the property.\"\"\"\n[docs]class APIProperty(APIPropertyBase):\n \"\"\"A model for a property in the query, path, header, or cookie params.\"\"\"\n location: APIPropertyLocation = Field(alias=\"location\")\n \"\"\"The path/how it's being passed to the endpoint.\"\"\"\n @staticmethod", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"} {"id": "89b7f9a9460f-2", "text": "\"\"\"The path/how it's being passed to the endpoint.\"\"\"\n @staticmethod\n def _cast_schema_list_type(schema: Schema) -> Optional[Union[str, Tuple[str, ...]]]:\n type_ = schema.type\n if not isinstance(type_, list):\n return type_\n else:\n return tuple(type_)\n @staticmethod\n def _get_schema_type_for_enum(parameter: Parameter, schema: Schema) -> Enum:\n \"\"\"Get the schema type when the parameter is an enum.\"\"\"\n param_name = f\"{parameter.name}Enum\"\n return Enum(param_name, 
{str(v): v for v in schema.enum})\n @staticmethod\n def _get_schema_type_for_array(\n schema: Schema,\n ) -> Optional[Union[str, Tuple[str, ...]]]:\n items = schema.items\n if isinstance(items, Schema):\n schema_type = APIProperty._cast_schema_list_type(items)\n elif isinstance(items, Reference):\n ref_name = items.ref.split(\"/\")[-1]\n schema_type = ref_name # TODO: Add ref definitions to make this valid\n else:\n raise ValueError(f\"Unsupported array items: {items}\")\n if isinstance(schema_type, str):\n # TODO: recurse\n schema_type = (schema_type,)\n return schema_type\n @staticmethod\n def _get_schema_type(parameter: Parameter, schema: Optional[Schema]) -> SCHEMA_TYPE:\n if schema is None:\n return None\n schema_type: SCHEMA_TYPE = APIProperty._cast_schema_list_type(schema)\n if schema_type == \"array\":\n schema_type = APIProperty._get_schema_type_for_array(schema)\n elif schema_type == \"object\":\n # TODO: Resolve array and object types to components.\n raise NotImplementedError(\"Objects not yet supported\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"} {"id": "89b7f9a9460f-3", "text": "raise NotImplementedError(\"Objects not yet supported\")\n elif schema_type in PRIMITIVE_TYPES:\n if schema.enum:\n schema_type = APIProperty._get_schema_type_for_enum(parameter, schema)\n else:\n # Directly use the primitive type\n pass\n else:\n raise NotImplementedError(f\"Unsupported type: {schema_type}\")\n return schema_type\n @staticmethod\n def _validate_location(location: APIPropertyLocation, name: str) -> None:\n if location not in SUPPORTED_LOCATIONS:\n raise NotImplementedError(\n INVALID_LOCATION_TEMPL.format(location=location, name=name)\n )\n @staticmethod\n def _validate_content(content: Optional[Dict[str, MediaType]]) -> None:\n if content:\n raise ValueError(\n \"API Properties with media content not supported. 
\"\n \"Media content only supported within APIRequestBodyProperty's\"\n )\n @staticmethod\n def _get_schema(parameter: Parameter, spec: OpenAPISpec) -> Optional[Schema]:\n schema = parameter.param_schema\n if isinstance(schema, Reference):\n schema = spec.get_referenced_schema(schema)\n elif schema is None:\n return None\n elif not isinstance(schema, Schema):\n raise ValueError(f\"Error dereferencing schema: {schema}\")\n return schema\n[docs] @staticmethod\n def is_supported_location(location: str) -> bool:\n \"\"\"Return whether the provided location is supported.\"\"\"\n try:\n return APIPropertyLocation.from_str(location) in SUPPORTED_LOCATIONS\n except ValueError:\n return False\n[docs] @classmethod\n def from_parameter(cls, parameter: Parameter, spec: OpenAPISpec) -> \"APIProperty\":\n \"\"\"Instantiate from an OpenAPI Parameter.\"\"\"\n location = APIPropertyLocation.from_str(parameter.param_in)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"} {"id": "89b7f9a9460f-4", "text": "location = APIPropertyLocation.from_str(parameter.param_in)\n cls._validate_location(\n location,\n parameter.name,\n )\n cls._validate_content(parameter.content)\n schema = cls._get_schema(parameter, spec)\n schema_type = cls._get_schema_type(parameter, schema)\n default_val = schema.default if schema is not None else None\n return cls(\n name=parameter.name,\n location=location,\n default=default_val,\n description=parameter.description,\n required=parameter.required,\n type=schema_type,\n )\n[docs]class APIRequestBodyProperty(APIPropertyBase):\n \"\"\"A model for a request body property.\"\"\"\n properties: List[\"APIRequestBodyProperty\"] = Field(alias=\"properties\")\n \"\"\"The sub-properties of the property.\"\"\"\n # This is useful for handling nested property cycles.\n # We can define separate types in that case.\n references_used: List[str] = Field(alias=\"references_used\")\n \"\"\"The references used by the property.\"\"\"\n @classmethod\n def _process_object_schema(\n cls, schema: Schema, spec: OpenAPISpec, references_used: List[str]\n ) -> Tuple[Union[str, List[str], None], List[\"APIRequestBodyProperty\"]]:\n properties = []\n required_props = schema.required or []\n if schema.properties is None:\n raise ValueError(\n f\"No properties found when processing object schema: {schema}\"\n )\n for prop_name, prop_schema in schema.properties.items():\n if isinstance(prop_schema, Reference):\n ref_name = prop_schema.ref.split(\"/\")[-1]\n if ref_name not in references_used:\n references_used.append(ref_name)\n prop_schema = spec.get_referenced_schema(prop_schema)\n else:\n continue", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"} {"id": "89b7f9a9460f-5", "text": "prop_schema = spec.get_referenced_schema(prop_schema)\n else:\n continue\n properties.append(\n cls.from_schema(\n schema=prop_schema,\n name=prop_name,\n required=prop_name in required_props,\n spec=spec,\n references_used=references_used,\n )\n )\n return schema.type, properties\n @classmethod\n def _process_array_schema(\n cls, schema: Schema, name: str, spec: OpenAPISpec, references_used: List[str]\n ) -> str:\n items = schema.items\n if items is not None:\n if isinstance(items, Reference):\n ref_name = items.ref.split(\"/\")[-1]\n if ref_name not in references_used:\n references_used.append(ref_name)\n items = spec.get_referenced_schema(items)\n else:\n pass\n return f\"Array<{ref_name}>\"\n else:\n pass\n if 
isinstance(items, Schema):\n array_type = cls.from_schema(\n schema=items,\n name=f\"{name}Item\",\n required=True, # TODO: Add required\n spec=spec,\n references_used=references_used,\n )\n return f\"Array<{array_type.type}>\"\n return \"array\"\n[docs] @classmethod\n def from_schema(\n cls,\n schema: Schema,\n name: str,\n required: bool,\n spec: OpenAPISpec,\n references_used: Optional[List[str]] = None,\n ) -> \"APIRequestBodyProperty\":\n \"\"\"Recursively populate from an OpenAPI Schema.\"\"\"\n if references_used is None:\n references_used = []\n schema_type = schema.type", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"} {"id": "89b7f9a9460f-6", "text": "references_used = []\n schema_type = schema.type\n properties: List[APIRequestBodyProperty] = []\n if schema_type == \"object\" and schema.properties:\n schema_type, properties = cls._process_object_schema(\n schema, spec, references_used\n )\n elif schema_type == \"array\":\n schema_type = cls._process_array_schema(schema, name, spec, references_used)\n elif schema_type in PRIMITIVE_TYPES:\n # Use the primitive type directly\n pass\n elif schema_type is None:\n # No typing specified/parsed. Will map to 'any'\n pass\n else:\n raise ValueError(f\"Unsupported type: {schema_type}\")\n return cls(\n name=name,\n required=required,\n type=schema_type,\n default=schema.default,\n description=schema.description,\n properties=properties,\n references_used=references_used,\n )\n[docs]class APIRequestBody(BaseModel):\n \"\"\"A model for a request body.\"\"\"\n description: Optional[str] = Field(alias=\"description\")\n \"\"\"The description of the request body.\"\"\"\n properties: List[APIRequestBodyProperty] = Field(alias=\"properties\")\n # E.g., application/json - we only support JSON at the moment.\n media_type: str = Field(alias=\"media_type\")\n \"\"\"The media type of the request body.\"\"\"\n @classmethod\n def _process_supported_media_type(\n cls,\n media_type_obj: MediaType,\n spec: OpenAPISpec,\n ) -> List[APIRequestBodyProperty]:\n \"\"\"Process the media type of the request body.\"\"\"\n references_used = []\n schema = media_type_obj.media_type_schema\n if isinstance(schema, Reference):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"} {"id": "89b7f9a9460f-7", "text": "schema = media_type_obj.media_type_schema\n if isinstance(schema, Reference):\n references_used.append(schema.ref.split(\"/\")[-1])\n schema = spec.get_referenced_schema(schema)\n if schema is None:\n raise ValueError(\n f\"Could not resolve schema for media type: {media_type_obj}\"\n )\n api_request_body_properties = []\n required_properties = schema.required or []\n if schema.type == \"object\" and schema.properties:\n for prop_name, prop_schema in schema.properties.items():\n if isinstance(prop_schema, Reference):\n prop_schema = spec.get_referenced_schema(prop_schema)\n api_request_body_properties.append(\n APIRequestBodyProperty.from_schema(\n schema=prop_schema,\n name=prop_name,\n required=prop_name in required_properties,\n spec=spec,\n )\n )\n else:\n api_request_body_properties.append(\n APIRequestBodyProperty(\n name=\"body\",\n required=True,\n type=schema.type,\n default=schema.default,\n description=schema.description,\n properties=[],\n references_used=references_used,\n )\n )\n return api_request_body_properties\n[docs] @classmethod\n def from_request_body(\n cls, request_body: RequestBody, spec: OpenAPISpec\n ) -> 
\"APIRequestBody\":\n \"\"\"Instantiate from an OpenAPI RequestBody.\"\"\"\n properties = []\n for media_type, media_type_obj in request_body.content.items():\n if media_type not in _SUPPORTED_MEDIA_TYPES:\n continue\n api_request_body_properties = cls._process_supported_media_type(\n media_type_obj,\n spec,\n )\n properties.extend(api_request_body_properties)\n return cls(\n description=request_body.description,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"} {"id": "89b7f9a9460f-8", "text": "return cls(\n description=request_body.description,\n properties=properties,\n media_type=media_type,\n )\n[docs]class APIOperation(BaseModel):\n \"\"\"A model for a single API operation.\"\"\"\n operation_id: str = Field(alias=\"operation_id\")\n \"\"\"The unique identifier of the operation.\"\"\"\n description: Optional[str] = Field(alias=\"description\")\n \"\"\"The description of the operation.\"\"\"\n base_url: str = Field(alias=\"base_url\")\n \"\"\"The base URL of the operation.\"\"\"\n path: str = Field(alias=\"path\")\n \"\"\"The path of the operation.\"\"\"\n method: HTTPVerb = Field(alias=\"method\")\n \"\"\"The HTTP method of the operation.\"\"\"\n properties: Sequence[APIProperty] = Field(alias=\"properties\")\n # TODO: Add parse in used components to be able to specify what type of\n # referenced object it is.\n # \"\"\"The properties of the operation.\"\"\"\n # components: Dict[str, BaseModel] = Field(alias=\"components\")\n request_body: Optional[APIRequestBody] = Field(alias=\"request_body\")\n \"\"\"The request body of the operation.\"\"\"\n @staticmethod\n def _get_properties_from_parameters(\n parameters: List[Parameter], spec: OpenAPISpec\n ) -> List[APIProperty]:\n \"\"\"Get the properties of the operation.\"\"\"\n properties = []\n for param in parameters:\n if APIProperty.is_supported_location(param.param_in):\n properties.append(APIProperty.from_parameter(param, spec))\n elif param.required:\n raise ValueError(\n INVALID_LOCATION_TEMPL.format(\n location=param.param_in, name=param.name\n )\n )\n else:\n logger.warning(\n INVALID_LOCATION_TEMPL.format(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"} {"id": "89b7f9a9460f-9", "text": ")\n else:\n logger.warning(\n INVALID_LOCATION_TEMPL.format(\n location=param.param_in, name=param.name\n )\n + \" Ignoring optional parameter\"\n )\n pass\n return properties\n[docs] @classmethod\n def from_openapi_url(\n cls,\n spec_url: str,\n path: str,\n method: str,\n ) -> \"APIOperation\":\n \"\"\"Create an APIOperation from an OpenAPI URL.\"\"\"\n spec = OpenAPISpec.from_url(spec_url)\n return cls.from_openapi_spec(spec, path, method)\n[docs] @classmethod\n def from_openapi_spec(\n cls,\n spec: OpenAPISpec,\n path: str,\n method: str,\n ) -> \"APIOperation\":\n \"\"\"Create an APIOperation from an OpenAPI spec.\"\"\"\n operation = spec.get_operation(path, method)\n parameters = spec.get_parameters_for_operation(operation)\n properties = cls._get_properties_from_parameters(parameters, spec)\n operation_id = OpenAPISpec.get_cleaned_operation_id(operation, path, method)\n request_body = spec.get_request_body_for_operation(operation)\n api_request_body = (\n APIRequestBody.from_request_body(request_body, spec)\n if request_body is not None\n else None\n )\n description = operation.description or operation.summary\n if not description and spec.paths is not None:\n description = spec.paths[path].description or 
spec.paths[path].summary\n return cls(\n operation_id=operation_id,\n description=description,\n base_url=spec.base_url,\n path=path,\n method=method,\n properties=properties,\n request_body=api_request_body,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"} {"id": "89b7f9a9460f-10", "text": "properties=properties,\n request_body=api_request_body,\n )\n[docs] @staticmethod\n def ts_type_from_python(type_: SCHEMA_TYPE) -> str:\n if type_ is None:\n # TODO: Handle Nones better. These often result when\n # parsing specs that are < v3\n return \"any\"\n elif isinstance(type_, str):\n return {\n \"str\": \"string\",\n \"integer\": \"number\",\n \"float\": \"number\",\n \"date-time\": \"string\",\n }.get(type_, type_)\n elif isinstance(type_, tuple):\n return f\"Array<{APIOperation.ts_type_from_python(type_[0])}>\"\n elif isinstance(type_, type) and issubclass(type_, Enum):\n return \" | \".join([f\"'{e.value}'\" for e in type_])\n else:\n return str(type_)\n def _format_nested_properties(\n self, properties: List[APIRequestBodyProperty], indent: int = 2\n ) -> str:\n \"\"\"Format nested properties.\"\"\"\n formatted_props = []\n for prop in properties:\n prop_name = prop.name\n prop_type = self.ts_type_from_python(prop.type)\n prop_required = \"\" if prop.required else \"?\"\n prop_desc = f\"/* {prop.description} */\" if prop.description else \"\"\n if prop.properties:\n nested_props = self._format_nested_properties(\n prop.properties, indent + 2\n )\n prop_type = f\"{{\\n{nested_props}\\n{' ' * indent}}}\"\n formatted_props.append(\n f\"{prop_desc}\\n{' ' * indent}{prop_name}{prop_required}: {prop_type},\"\n )\n return \"\\n\".join(formatted_props)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"} {"id": "89b7f9a9460f-11", "text": ")\n return \"\\n\".join(formatted_props)\n[docs] def to_typescript(self) -> str:\n \"\"\"Get typescript string representation of the operation.\"\"\"\n operation_name = self.operation_id\n params = []\n if self.request_body:\n formatted_request_body_props = self._format_nested_properties(\n self.request_body.properties\n )\n params.append(formatted_request_body_props)\n for prop in self.properties:\n prop_name = prop.name\n prop_type = self.ts_type_from_python(prop.type)\n prop_required = \"\" if prop.required else \"?\"\n prop_desc = f\"/* {prop.description} */\" if prop.description else \"\"\n params.append(f\"{prop_desc}\\n\\t\\t{prop_name}{prop_required}: {prop_type},\")\n formatted_params = \"\\n\".join(params).strip()\n description_str = f\"/* {self.description} */\" if self.description else \"\"\n typescript_definition = f\"\"\"\n{description_str}\ntype {operation_name} = (_: {{\n{formatted_params}\n}}) => any;\n\"\"\"\n return typescript_definition.strip()\n @property\n def query_params(self) -> List[str]:\n return [\n property.name\n for property in self.properties\n if property.location == APIPropertyLocation.QUERY\n ]\n @property\n def path_params(self) -> List[str]:\n return [\n property.name\n for property in self.properties\n if property.location == APIPropertyLocation.PATH\n ]\n @property\n def body_params(self) -> List[str]:\n if self.request_body is None:\n return []\n return [prop.name for prop in self.request_body.properties]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"} {"id": "e9c1515f1b11-0", "text": "Source code for 
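APIOperation ties the parsing machinery above together and can render an operation as a TypeScript-style signature for prompting. A hedged sketch; the spec URL, path, and method are illustrative, and any reachable OpenAPI 3.x spec should behave the same way:

from langchain.tools.openapi.utils.api_models import APIOperation

op = APIOperation.from_openapi_url(
    "https://petstore3.swagger.io/api/v3/openapi.json",  # example spec
    "/pet/{petId}",
    "get",
)
print(op.to_typescript())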
langchain.tools.metaphor_search.tool\n\"\"\"Tool for the Metaphor search API.\"\"\"\nfrom typing import Dict, List, Optional, Union\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.metaphor_search import MetaphorSearchAPIWrapper\n[docs]class MetaphorSearchResults(BaseTool):\n \"\"\"Tool that has the capability to query the Metaphor Search API and get back JSON.\"\"\"\n name = \"metaphor_search_results_json\"\n description = (\n \"A wrapper around Metaphor Search. \"\n \"Input should be a Metaphor-optimized query. \"\n \"Output is a JSON array of the query results\"\n )\n api_wrapper: MetaphorSearchAPIWrapper\n def _run(\n self,\n query: str,\n num_results: int,\n include_domains: Optional[List[str]] = None,\n exclude_domains: Optional[List[str]] = None,\n start_crawl_date: Optional[str] = None,\n end_crawl_date: Optional[str] = None,\n start_published_date: Optional[str] = None,\n end_published_date: Optional[str] = None,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> Union[List[Dict], str]:\n \"\"\"Use the tool.\"\"\"\n try:\n return self.api_wrapper.results(\n query,\n num_results,\n include_domains,\n exclude_domains,\n start_crawl_date,\n end_crawl_date,\n start_published_date,\n end_published_date,\n )\n except Exception as e:\n return repr(e)\n async def _arun(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/metaphor_search/tool.html"} {"id": "e9c1515f1b11-1", "text": "return repr(e)\n async def _arun(\n self,\n query: str,\n num_results: int,\n include_domains: Optional[List[str]] = None,\n exclude_domains: Optional[List[str]] = None,\n start_crawl_date: Optional[str] = None,\n end_crawl_date: Optional[str] = None,\n start_published_date: Optional[str] = None,\n end_published_date: Optional[str] = None,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> Union[List[Dict], str]:\n \"\"\"Use the tool asynchronously.\"\"\"\n try:\n return await self.api_wrapper.results_async(\n query,\n num_results,\n include_domains,\n exclude_domains,\n start_crawl_date,\n end_crawl_date,\n start_published_date,\n end_published_date,\n )\n except Exception as e:\n return repr(e)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/metaphor_search/tool.html"} {"id": "36f176bba8a8-0", "text": "Source code for langchain.tools.interaction.tool\n\"\"\"Tools for interacting with the user.\"\"\"\nimport warnings\nfrom typing import Any\nfrom langchain.tools.human.tool import HumanInputRun\n[docs]def StdInInquireTool(*args: Any, **kwargs: Any) -> HumanInputRun:\n \"\"\"Tool for asking the user for input.\"\"\"\n warnings.warn(\n \"StdInInquireTool will be deprecated in the future. 
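A hedged sketch of MetaphorSearchResults; METAPHOR_API_KEY is assumed to be set in the environment for the wrapper, and the query text is made up. Since _run takes several arguments, the input is passed as a dict:

from langchain.tools.metaphor_search.tool import MetaphorSearchResults
from langchain.utilities.metaphor_search import MetaphorSearchAPIWrapper

tool = MetaphorSearchResults(api_wrapper=MetaphorSearchAPIWrapper())
results = tool.run({"query": "emerging agent frameworks", "num_results": 5})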
\"\n \"Please use HumanInputRun instead.\",\n DeprecationWarning,\n )\n return HumanInputRun(*args, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/interaction/tool.html"} {"id": "5a444f612c46-0", "text": "Source code for langchain.tools.vectorstore.tool\n\"\"\"Tools for interacting with vectorstores.\"\"\"\nimport json\nfrom typing import Any, Dict, Optional\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.chains import RetrievalQA, RetrievalQAWithSourcesChain\nfrom langchain.llms.openai import OpenAI\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.tools.base import BaseTool\nfrom langchain.vectorstores.base import VectorStore\n[docs]class BaseVectorStoreTool(BaseModel):\n \"\"\"Base class for tools that use a VectorStore.\"\"\"\n vectorstore: VectorStore = Field(exclude=True)\n llm: BaseLanguageModel = Field(default_factory=lambda: OpenAI(temperature=0))\n[docs] class Config(BaseTool.Config):\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\ndef _create_description_from_template(values: Dict[str, Any]) -> Dict[str, Any]:\n values[\"description\"] = values[\"template\"].format(name=values[\"name\"])\n return values\n[docs]class VectorStoreQATool(BaseVectorStoreTool, BaseTool):\n \"\"\"Tool for the VectorDBQA chain. To be initialized with name and chain.\"\"\"\n[docs] @staticmethod\n def get_description(name: str, description: str) -> str:\n template: str = (\n \"Useful for when you need to answer questions about {name}. \"\n \"Whenever you need information about {description} \"\n \"you should ALWAYS use this. \"\n \"Input should be a fully formed question.\"\n )\n return template.format(name=name, description=description)\n def _run(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/vectorstore/tool.html"} {"id": "5a444f612c46-1", "text": "def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n chain = RetrievalQA.from_chain_type(\n self.llm, retriever=self.vectorstore.as_retriever()\n )\n return chain.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"VectorStoreQATool does not support async\")\n[docs]class VectorStoreQAWithSourcesTool(BaseVectorStoreTool, BaseTool):\n \"\"\"Tool for the VectorDBQAWithSources chain.\"\"\"\n[docs] @staticmethod\n def get_description(name: str, description: str) -> str:\n template: str = (\n \"Useful for when you need to answer questions about {name} and the sources \"\n \"used to construct the answer. \"\n \"Whenever you need information about {description} \"\n \"you should ALWAYS use this. \"\n \" Input should be a fully formed question. \"\n \"Output is a json serialized dictionary with keys `answer` and `sources`. 
\"\n \"Only use this tool if the user explicitly asks for sources.\"\n )\n return template.format(name=name, description=description)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n chain = RetrievalQAWithSourcesChain.from_chain_type(\n self.llm, retriever=self.vectorstore.as_retriever()\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/vectorstore/tool.html"} {"id": "5a444f612c46-2", "text": "self.llm, retriever=self.vectorstore.as_retriever()\n )\n return json.dumps(chain({chain.question_key: query}, return_only_outputs=True))\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"VectorStoreQAWithSourcesTool does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/vectorstore/tool.html"}
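A short usage sketch for the tools above (illustrative only; it assumes an existing `vectorstore` object and an OpenAI key for the default LLM):

# Minimal sketch: wrap an existing VectorStore in a QA tool.
from langchain.tools.vectorstore.tool import VectorStoreQATool

tool = VectorStoreQATool(
    name="company_docs",
    description=VectorStoreQATool.get_description("company_docs", "the company handbook"),
    vectorstore=vectorstore,  # assumed to exist; any langchain VectorStore
)
answer = tool.run("What is the vacation policy?")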
{"id": "dddeafadbe55-0", "text": "Source code for langchain.retrievers.azure_cognitive_search\n\"\"\"Retriever wrapper for Azure Cognitive Search.\"\"\"\nfrom __future__ import annotations\nimport json\nfrom typing import Dict, List, Optional\nimport aiohttp\nimport requests\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.utils import get_from_dict_or_env\n[docs]class AzureCognitiveSearchRetriever(BaseRetriever):\n \"\"\"Wrapper around Azure Cognitive Search.\"\"\"\n service_name: str = \"\"\n \"\"\"Name of Azure Cognitive Search service\"\"\"\n index_name: str = \"\"\n \"\"\"Name of Index inside Azure Cognitive Search service\"\"\"\n api_key: str = \"\"\n \"\"\"API Key. Both Admin and Query keys work, but for reading data it's\n recommended to use a Query key.\"\"\"\n api_version: str = \"2020-06-30\"\n \"\"\"API version\"\"\"\n aiosession: Optional[aiohttp.ClientSession] = None\n \"\"\"ClientSession, in case we want to reuse connection for better performance.\"\"\"\n content_key: str = \"content\"\n \"\"\"Key in a retrieved result to set as the Document page_content.\"\"\"\n[docs] class Config:\n extra = Extra.forbid\n arbitrary_types_allowed = True\n[docs] @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that service name, index name and api key exist in the environment.\"\"\"\n values[\"service_name\"] = get_from_dict_or_env(\n values, \"service_name\", \"AZURE_COGNITIVE_SEARCH_SERVICE_NAME\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/azure_cognitive_search.html"} {"id": "dddeafadbe55-1", "text": ")\n values[\"index_name\"] = get_from_dict_or_env(\n values, \"index_name\", \"AZURE_COGNITIVE_SEARCH_INDEX_NAME\"\n )\n values[\"api_key\"] = get_from_dict_or_env(\n values, \"api_key\", \"AZURE_COGNITIVE_SEARCH_API_KEY\"\n )\n return values\n def _build_search_url(self, query: str) -> str:\n base_url = f\"https://{self.service_name}.search.windows.net/\"\n endpoint_path = f\"indexes/{self.index_name}/docs?api-version={self.api_version}\"\n return base_url + endpoint_path + f\"&search={query}\"\n @property\n def _headers(self) -> Dict[str, str]:\n return {\n \"Content-Type\": \"application/json\",\n \"api-key\": self.api_key,\n }\n def _search(self, query: str) -> List[dict]:\n search_url = self._build_search_url(query)\n response = requests.get(search_url, headers=self._headers)\n if response.status_code != 200:\n raise Exception(f\"Error in search request: {response}\")\n return json.loads(response.text)[\"value\"]\n async def _asearch(self, query: str) -> List[dict]:\n search_url = self._build_search_url(query)\n if not self.aiosession:\n async with aiohttp.ClientSession() as session:\n async with session.get(search_url, headers=self._headers) as response:\n response_json = await response.json()\n else:\n async with self.aiosession.get(\n search_url, headers=self._headers\n ) as response:\n response_json = await response.json()\n return response_json[\"value\"]\n def _get_relevant_documents(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/azure_cognitive_search.html"} {"id": "dddeafadbe55-2", "text": "return response_json[\"value\"]\n def _get_relevant_documents(\n self, query: str, *, run_manager: CallbackManagerForRetrieverRun\n ) -> List[Document]:\n search_results = self._search(query)\n return [\n Document(page_content=result.pop(self.content_key), metadata=result)\n for result in search_results\n ]\n async def _aget_relevant_documents(\n self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun\n ) -> List[Document]:\n search_results = await self._asearch(query)\n return [\n Document(page_content=result.pop(self.content_key), metadata=result)\n for result in search_results\n ]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/azure_cognitive_search.html"}
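A brief construction sketch (illustrative; the values are hypothetical, and per validate_environment above they may instead come from the AZURE_COGNITIVE_SEARCH_* environment variables):

# Minimal sketch: point the retriever at an existing search service and index.
from langchain.retrievers.azure_cognitive_search import AzureCognitiveSearchRetriever

retriever = AzureCognitiveSearchRetriever(
    service_name="my-search-service",  # hypothetical service name
    index_name="my-index",             # hypothetical index name
    api_key="<query-key>",             # a Query key is recommended for reads
)
docs = retriever.get_relevant_documents("azure cognitive search langchain")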
{"id": "3c4ae0572dd2-0", "text": "Source code for langchain.retrievers.zep\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, Any, Dict, List, Optional\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.schema import BaseRetriever, Document\nif TYPE_CHECKING:\n from zep_python import MemorySearchResult\n[docs]class ZepRetriever(BaseRetriever):\n \"\"\"A Retriever implementation for the Zep long-term memory store. Search your\n user's long-term chat history with Zep.\n Note: You will need to provide the user's `session_id` to use this retriever.\n More on Zep:\n Zep provides long-term conversation storage for LLM apps. The server stores,\n summarizes, embeds, indexes, and enriches conversational AI chat\n histories, and exposes them via simple, low-latency APIs.\n For server installation instructions, see:\n https://docs.getzep.com/deployment/quickstart/\n \"\"\"\n zep_client: Any\n session_id: str\n top_k: Optional[int]\n[docs] @root_validator(pre=True)\n def create_client(cls, values: dict) -> dict:\n try:\n from zep_python import ZepClient\n except ImportError:\n raise ValueError(\n \"Could not import zep-python package. \"\n \"Please install it with `pip install zep-python`.\"\n )\n values[\"zep_client\"] = values.get(\n \"zep_client\",\n ZepClient(base_url=values[\"url\"], api_key=values.get(\"api_key\")),\n )\n return values", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/zep.html"} {"id": "3c4ae0572dd2-1", "text": ")\n return values\n def _search_result_to_doc(\n self, results: List[MemorySearchResult]\n ) -> List[Document]:\n return [\n Document(\n page_content=r.message.pop(\"content\"),\n metadata={\"score\": r.dist, **r.message},\n )\n for r in results\n if r.message\n ]\n def _get_relevant_documents(\n self,\n query: str,\n *,\n run_manager: CallbackManagerForRetrieverRun,\n metadata: Optional[Dict] = None,\n ) -> List[Document]:\n from zep_python import MemorySearchPayload\n payload: MemorySearchPayload = MemorySearchPayload(\n text=query, metadata=metadata\n )\n results: List[MemorySearchResult] = self.zep_client.search_memory(\n self.session_id, payload, limit=self.top_k\n )\n return self._search_result_to_doc(results)\n async def _aget_relevant_documents(\n self,\n query: str,\n *,\n run_manager: AsyncCallbackManagerForRetrieverRun,\n metadata: Optional[Dict] = None,\n ) -> List[Document]:\n from zep_python import MemorySearchPayload\n payload: MemorySearchPayload = MemorySearchPayload(\n text=query, metadata=metadata\n )\n results: List[MemorySearchResult] = await self.zep_client.asearch_memory(\n self.session_id, payload, limit=self.top_k\n )\n return self._search_result_to_doc(results)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/zep.html"} {"id": "e0db0c7b6aaa-0", "text": "Source code for langchain.retrievers.time_weighted_retriever\n\"\"\"Retriever that combines embedding similarity with recency in retrieving values.\"\"\"\nimport datetime\nfrom copy import deepcopy\nfrom typing import Any, Dict, List, Optional, Tuple\nfrom pydantic import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.vectorstores.base import VectorStore\ndef _get_hours_passed(time: datetime.datetime, ref_time: datetime.datetime) -> float:\n \"\"\"Get the hours passed between two datetime objects.\"\"\"\n return (time - ref_time).total_seconds() / 3600\n[docs]class TimeWeightedVectorStoreRetriever(BaseRetriever):\n \"\"\"Retriever combining embedding similarity with recency.\"\"\"\n vectorstore: VectorStore\n \"\"\"The 
vectorstore to store documents and determine salience.\"\"\"\n search_kwargs: dict = Field(default_factory=lambda: dict(k=100))\n \"\"\"Keyword arguments to pass to the vectorstore similarity search.\"\"\"\n # TODO: abstract as a queue\n memory_stream: List[Document] = Field(default_factory=list)\n \"\"\"The memory_stream of documents to search through.\"\"\"\n decay_rate: float = Field(default=0.01)\n \"\"\"The exponential decay factor used as (1.0-decay_rate)**(hrs_passed).\"\"\"\n k: int = 4\n \"\"\"The maximum number of documents to retrieve in a given call.\"\"\"\n other_score_keys: List[str] = []\n \"\"\"Other keys in the metadata to factor into the score, e.g. 'importance'.\"\"\"\n default_salience: Optional[float] = None\n \"\"\"The salience to assign memories not retrieved from the vector store.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/time_weighted_retriever.html"} {"id": "e0db0c7b6aaa-1", "text": "\"\"\"The salience to assign memories not retrieved from the vector store.\n None assigns no salience to documents not fetched from the vector store.\n \"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n def _get_combined_score(\n self,\n document: Document,\n vector_relevance: Optional[float],\n current_time: datetime.datetime,\n ) -> float:\n \"\"\"Return the combined score for a document.\"\"\"\n hours_passed = _get_hours_passed(\n current_time,\n document.metadata[\"last_accessed_at\"],\n )\n score = (1.0 - self.decay_rate) ** hours_passed\n for key in self.other_score_keys:\n if key in document.metadata:\n score += document.metadata[key]\n if vector_relevance is not None:\n score += vector_relevance\n return score\n[docs] def get_salient_docs(self, query: str) -> Dict[int, Tuple[Document, float]]:\n \"\"\"Return documents that are salient to the query.\"\"\"\n docs_and_scores: List[Tuple[Document, float]]\n docs_and_scores = self.vectorstore.similarity_search_with_relevance_scores(\n query, **self.search_kwargs\n )\n results = {}\n for fetched_doc, relevance in docs_and_scores:\n if \"buffer_idx\" in fetched_doc.metadata:\n buffer_idx = fetched_doc.metadata[\"buffer_idx\"]\n doc = self.memory_stream[buffer_idx]\n results[buffer_idx] = (doc, relevance)\n return results\n def _get_relevant_documents(\n self, query: str, *, run_manager: CallbackManagerForRetrieverRun\n ) -> List[Document]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/time_weighted_retriever.html"} {"id": "e0db0c7b6aaa-2", "text": ") -> List[Document]:\n \"\"\"Return documents that are relevant to the query.\"\"\"\n current_time = datetime.datetime.now()\n docs_and_scores = {\n doc.metadata[\"buffer_idx\"]: (doc, self.default_salience)\n for doc in self.memory_stream[-self.k :]\n }\n # If a doc is considered salient, update the salience score\n docs_and_scores.update(self.get_salient_docs(query))\n rescored_docs = [\n (doc, self._get_combined_score(doc, relevance, current_time))\n for doc, relevance in docs_and_scores.values()\n ]\n rescored_docs.sort(key=lambda x: x[1], reverse=True)\n result = []\n # Ensure frequently accessed memories aren't forgotten\n for doc, _ in rescored_docs[: self.k]:\n # TODO: Update vector store doc once `update` method is exposed.\n buffered_doc = self.memory_stream[doc.metadata[\"buffer_idx\"]]\n buffered_doc.metadata[\"last_accessed_at\"] = current_time\n result.append(buffered_doc)\n return result\n async def _aget_relevant_documents(\n 
self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun\n ) -> List[Document]:\n \"\"\"Return documents that are relevant to the query.\"\"\"\n raise NotImplementedError\n[docs] def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:\n \"\"\"Add documents to vectorstore.\"\"\"\n current_time = kwargs.get(\"current_time\")\n if current_time is None:\n current_time = datetime.datetime.now()\n # Avoid mutating input documents\n dup_docs = [deepcopy(d) for d in documents]\n for i, doc in enumerate(dup_docs):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/time_weighted_retriever.html"} {"id": "e0db0c7b6aaa-3", "text": "for i, doc in enumerate(dup_docs):\n if \"last_accessed_at\" not in doc.metadata:\n doc.metadata[\"last_accessed_at\"] = current_time\n if \"created_at\" not in doc.metadata:\n doc.metadata[\"created_at\"] = current_time\n doc.metadata[\"buffer_idx\"] = len(self.memory_stream) + i\n self.memory_stream.extend(dup_docs)\n return self.vectorstore.add_documents(dup_docs, **kwargs)\n[docs] async def aadd_documents(\n self, documents: List[Document], **kwargs: Any\n ) -> List[str]:\n \"\"\"Add documents to vectorstore.\"\"\"\n current_time = kwargs.get(\"current_time\")\n if current_time is None:\n current_time = datetime.datetime.now()\n # Avoid mutating input documents\n dup_docs = [deepcopy(d) for d in documents]\n for i, doc in enumerate(dup_docs):\n if \"last_accessed_at\" not in doc.metadata:\n doc.metadata[\"last_accessed_at\"] = current_time\n if \"created_at\" not in doc.metadata:\n doc.metadata[\"created_at\"] = current_time\n doc.metadata[\"buffer_idx\"] = len(self.memory_stream) + i\n self.memory_stream.extend(dup_docs)\n return await self.vectorstore.aadd_documents(dup_docs, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/time_weighted_retriever.html"}
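A worked example of the scoring used above (illustrative numbers only): the recency term decays as (1.0 - decay_rate) ** hours_passed, and vector relevance plus any `other_score_keys` are added on top.

# Sketch: how _get_combined_score behaves for a memory last accessed 24 hours ago.
decay_rate = 0.01
hours_passed = 24.0
recency = (1.0 - decay_rate) ** hours_passed   # ~0.786
vector_relevance = 0.82                        # hypothetical similarity score
combined = recency + vector_relevance          # ~1.606; higher scores rank earlier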
{"id": "1687011ddfda-0", "text": "Source code for langchain.retrievers.milvus\n\"\"\"Milvus Retriever\"\"\"\nimport warnings\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.vectorstores.milvus import Milvus\n# TODO: Update to MilvusClient + Hybrid Search when available\n[docs]class MilvusRetriever(BaseRetriever):\n \"\"\"Retriever that uses the Milvus API.\"\"\"\n embedding_function: Embeddings\n collection_name: str = \"LangChainCollection\"\n connection_args: Optional[Dict[str, Any]] = None\n consistency_level: str = \"Session\"\n search_params: Optional[dict] = None\n store: Milvus\n retriever: BaseRetriever\n[docs] @root_validator(pre=True)\n def create_retriever(cls, values: Dict) -> Dict:\n \"\"\"Create the Milvus store and retriever.\"\"\"\n values[\"store\"] = Milvus(\n values[\"embedding_function\"],\n values[\"collection_name\"],\n values[\"connection_args\"],\n values[\"consistency_level\"],\n )\n values[\"retriever\"] = values[\"store\"].as_retriever(\n search_kwargs={\"param\": values[\"search_params\"]}\n )\n return values\n[docs] def add_texts(\n self, texts: List[str], metadatas: Optional[List[dict]] = None\n ) -> None:\n \"\"\"Add text to the Milvus store\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/milvus.html"} {"id": "1687011ddfda-1", "text": "\"\"\"Add text to the Milvus store\n Args:\n texts (List[str]): The text\n metadatas (List[dict]): Metadata dicts, must line up with existing store\n \"\"\"\n self.store.add_texts(texts, metadatas)\n def _get_relevant_documents(\n self,\n query: str,\n *,\n run_manager: CallbackManagerForRetrieverRun,\n **kwargs: Any,\n ) -> List[Document]:\n return self.retriever.get_relevant_documents(\n query, run_manager=run_manager.get_child(), **kwargs\n )\n async def _aget_relevant_documents(\n self,\n query: str,\n *,\n run_manager: AsyncCallbackManagerForRetrieverRun,\n **kwargs: Any,\n ) -> List[Document]:\n raise NotImplementedError\n[docs]def MilvusRetreiver(*args: Any, **kwargs: Any) -> MilvusRetriever:\n \"\"\"Deprecated MilvusRetreiver. Please use MilvusRetriever ('i' before 'e') instead.\n Args:\n *args:\n **kwargs:\n Returns:\n MilvusRetriever\n \"\"\"\n warnings.warn(\n \"MilvusRetreiver will be deprecated in the future. \"\n \"Please use MilvusRetriever ('i' before 'e') instead.\",\n DeprecationWarning,\n )\n return MilvusRetriever(*args, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/milvus.html"} {"id": "8221c1b6ecd0-0", "text": "Source code for langchain.retrievers.merger_retriever\nfrom typing import List\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.schema import BaseRetriever, Document\n[docs]class MergerRetriever(BaseRetriever):\n \"\"\"\n This class merges the results of multiple retrievers.\n Args:\n retrievers: A list of retrievers to merge.\n \"\"\"\n retrievers: List[BaseRetriever]\n def _get_relevant_documents(\n self,\n query: str,\n *,\n run_manager: CallbackManagerForRetrieverRun,\n ) -> List[Document]:\n \"\"\"\n Get the relevant documents for a given query.\n Args:\n query: The query to search for.\n Returns:\n A list of relevant documents.\n \"\"\"\n # Merge the results of the retrievers.\n merged_documents = self.merge_documents(query, run_manager)\n return merged_documents\n async def _aget_relevant_documents(\n self,\n query: str,\n *,\n run_manager: AsyncCallbackManagerForRetrieverRun,\n ) -> List[Document]:\n \"\"\"\n Asynchronously get the relevant documents for a given query.\n Args:\n query: The query to search for.\n Returns:\n A list of relevant documents.\n \"\"\"\n # Merge the results of the retrievers.\n merged_documents = await self.amerge_documents(query, run_manager)\n return merged_documents\n[docs] def merge_documents(\n self, query: str, run_manager: CallbackManagerForRetrieverRun\n ) -> List[Document]:\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/merger_retriever.html"} {"id": "8221c1b6ecd0-1", "text": ") -> List[Document]:\n \"\"\"\n Merge the results of the retrievers.\n Args:\n query: The query to search for.\n Returns:\n A list of merged documents.\n \"\"\"\n # Get the results of all retrievers.\n retriever_docs = [\n retriever.get_relevant_documents(\n query, callbacks=run_manager.get_child(\"retriever_{}\".format(i + 1))\n )\n for i, retriever in enumerate(self.retrievers)\n ]\n # Merge the results of the retrievers.\n merged_documents = []\n max_docs = max(len(docs) for docs in retriever_docs)\n for i in range(max_docs):\n for retriever, doc in zip(self.retrievers, retriever_docs):\n if i < len(doc):\n merged_documents.append(doc[i])\n return merged_documents\n[docs] async def amerge_documents(\n self, query: str, run_manager: AsyncCallbackManagerForRetrieverRun\n ) -> 
List[Document]:\n \"\"\"\n Asynchronously merge the results of the retrievers.\n Args:\n query: The query to search for.\n Returns:\n A list of merged documents.\n \"\"\"\n # Get the results of all retrievers.\n retriever_docs = [\n await retriever.aget_relevant_documents(\n query, callbacks=run_manager.get_child(\"retriever_{}\".format(i + 1))\n )\n for i, retriever in enumerate(self.retrievers)\n ]\n # Merge the results of the retrievers.\n merged_documents = []\n max_docs = max(len(docs) for docs in retriever_docs)\n for i in range(max_docs):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/merger_retriever.html"} {"id": "8221c1b6ecd0-2", "text": "for i in range(max_docs):\n for retriever, doc in zip(self.retrievers, retriever_docs):\n if i < len(doc):\n merged_documents.append(doc[i])\n return merged_documents", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/merger_retriever.html"} {"id": "4bd77435e210-0", "text": "Source code for langchain.retrievers.pubmed\n\"\"\"A retriever that uses PubMed API to retrieve documents.\"\"\"\nfrom typing import List\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.utilities.pupmed import PubMedAPIWrapper\n[docs]class PubMedRetriever(BaseRetriever, PubMedAPIWrapper):\n \"\"\"\n It is effectively a wrapper for PubMedAPIWrapper.\n It wraps load() to get_relevant_documents().\n It uses all PubMedAPIWrapper arguments without any change.\n \"\"\"\n def _get_relevant_documents(\n self, query: str, *, run_manager: CallbackManagerForRetrieverRun\n ) -> List[Document]:\n return self.load_docs(query=query)\n async def _aget_relevant_documents(\n self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun\n ) -> List[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/pubmed.html"} {"id": "b4e00e97db4b-0", "text": "Source code for langchain.retrievers.tfidf\n\"\"\"TF-IDF Retriever.\nLargely based on\nhttps://github.com/asvskartheek/Text-Retrieval/blob/master/TF-IDF%20Search%20Engine%20(SKLEARN).ipynb\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, Iterable, List, Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.schema import BaseRetriever, Document\n[docs]class TFIDFRetriever(BaseRetriever):\n vectorizer: Any\n docs: List[Document]\n tfidf_array: Any\n k: int = 4\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] @classmethod\n def from_texts(\n cls,\n texts: Iterable[str],\n metadatas: Optional[Iterable[dict]] = None,\n tfidf_params: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> TFIDFRetriever:\n try:\n from sklearn.feature_extraction.text import TfidfVectorizer\n except ImportError:\n raise ImportError(\n \"Could not import scikit-learn, please install with `pip install \"\n \"scikit-learn`.\"\n )\n tfidf_params = tfidf_params or {}\n vectorizer = TfidfVectorizer(**tfidf_params)\n tfidf_array = vectorizer.fit_transform(texts)\n metadatas = metadatas or ({} for _ in texts)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/tfidf.html"} {"id": "b4e00e97db4b-1", "text": "metadatas = metadatas or ({} for _ in texts)\n docs = 
[Document(page_content=t, metadata=m) for t, m in zip(texts, metadatas)]\n return cls(vectorizer=vectorizer, docs=docs, tfidf_array=tfidf_array, **kwargs)\n[docs] @classmethod\n def from_documents(\n cls,\n documents: Iterable[Document],\n *,\n tfidf_params: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> TFIDFRetriever:\n texts, metadatas = zip(*((d.page_content, d.metadata) for d in documents))\n return cls.from_texts(\n texts=texts, tfidf_params=tfidf_params, metadatas=metadatas, **kwargs\n )\n def _get_relevant_documents(\n self, query: str, *, run_manager: CallbackManagerForRetrieverRun\n ) -> List[Document]:\n from sklearn.metrics.pairwise import cosine_similarity\n query_vec = self.vectorizer.transform(\n [query]\n ) # Ip -- (n_docs,x), Op -- (n_docs,n_Feats)\n results = cosine_similarity(self.tfidf_array, query_vec).reshape(\n (-1,)\n ) # Op -- (n_docs,1) -- Cosine Sim with each doc\n return_docs = [self.docs[i] for i in results.argsort()[-self.k :][::-1]]\n return return_docs\n async def _aget_relevant_documents(\n self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun\n ) -> List[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/tfidf.html"} {"id": "93c4246a2c2b-0", "text": "Source code for langchain.retrievers.docarray\nfrom enum import Enum\nfrom typing import Any, Dict, List, Optional, Union\nimport numpy as np\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\n[docs]class SearchType(str, Enum):\n \"\"\"Enumerator of the types of search to perform.\"\"\"\n similarity = \"similarity\"\n mmr = \"mmr\"\n[docs]class DocArrayRetriever(BaseRetriever):\n \"\"\"\n Retriever class for DocArray Document Indices.\n Currently, supports 5 backends:\n InMemoryExactNNIndex, HnswDocumentIndex, QdrantDocumentIndex,\n ElasticDocIndex, and WeaviateDocumentIndex.\n Args:\n index: One of the above-mentioned index instances\n embeddings: Embedding model to represent text as vectors\n search_field: Field to consider for searching in the documents.\n Should be an embedding/vector/tensor.\n content_field: Field that represents the main content in your document schema.\n Will be used as a `page_content`. 
Everything else will go into `metadata`.\n search_type: Type of search to perform (similarity / mmr)\n filters: Filters applied for document retrieval.\n top_k: Number of documents to return\n \"\"\"\n index: Any\n embeddings: Embeddings\n search_field: str\n content_field: str\n search_type: SearchType = SearchType.similarity\n top_k: int = 1\n filters: Optional[Any] = None\n[docs] class Config:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/docarray.html"} {"id": "93c4246a2c2b-1", "text": "filters: Optional[Any] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n def _get_relevant_documents(\n self,\n query: str,\n *,\n run_manager: CallbackManagerForRetrieverRun,\n ) -> List[Document]:\n \"\"\"Get documents relevant for a query.\n Args:\n query: string to find relevant documents for\n Returns:\n List of relevant documents\n \"\"\"\n query_emb = np.array(self.embeddings.embed_query(query))\n if self.search_type == SearchType.similarity:\n results = self._similarity_search(query_emb)\n elif self.search_type == SearchType.mmr:\n results = self._mmr_search(query_emb)\n else:\n raise ValueError(\n f\"Search type {self.search_type} does not exist. \"\n f\"Choose either 'similarity' or 'mmr'.\"\n )\n return results\n def _search(\n self, query_emb: np.ndarray, top_k: int\n ) -> List[Union[Dict[str, Any], Any]]:\n \"\"\"\n Perform a search using the query embedding and return top_k documents.\n Args:\n query_emb: Query represented as an embedding\n top_k: Number of documents to return\n Returns:\n A list of top_k documents matching the query\n \"\"\"\n from docarray.index import ElasticDocIndex, WeaviateDocumentIndex\n filter_args = {}\n search_field = self.search_field\n if isinstance(self.index, WeaviateDocumentIndex):\n filter_args[\"where_filter\"] = self.filters\n search_field = \"\"\n elif isinstance(self.index, ElasticDocIndex):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/docarray.html"} {"id": "93c4246a2c2b-2", "text": "search_field = \"\"\n elif isinstance(self.index, ElasticDocIndex):\n filter_args[\"query\"] = self.filters\n else:\n filter_args[\"filter_query\"] = self.filters\n if self.filters:\n query = (\n self.index.build_query() # get empty query object\n .find(\n query=query_emb, search_field=search_field\n ) # add vector similarity search\n .filter(**filter_args) # add filter search\n .build(limit=top_k) # build the query\n )\n # execute the combined query and return the results\n docs = self.index.execute_query(query)\n if hasattr(docs, \"documents\"):\n docs = docs.documents\n docs = docs[:top_k]\n else:\n docs = self.index.find(\n query=query_emb, search_field=search_field, limit=top_k\n ).documents\n return docs\n def _similarity_search(self, query_emb: np.ndarray) -> List[Document]:\n \"\"\"\n Perform a similarity search.\n Args:\n query_emb: Query represented as an embedding\n Returns:\n A list of documents most similar to the query\n \"\"\"\n docs = self._search(query_emb=query_emb, top_k=self.top_k)\n results = [self._docarray_to_langchain_doc(doc) for doc in docs]\n return results\n def _mmr_search(self, query_emb: np.ndarray) -> List[Document]:\n \"\"\"\n Perform a maximal marginal relevance (mmr) search.\n Args:\n query_emb: Query represented as an embedding\n Returns:\n A list of diverse documents related to the query\n \"\"\"\n docs = self._search(query_emb=query_emb, top_k=20)\n mmr_selected = 
maximal_marginal_relevance(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/docarray.html"} {"id": "93c4246a2c2b-3", "text": "mmr_selected = maximal_marginal_relevance(\n query_emb,\n [\n doc[self.search_field]\n if isinstance(doc, dict)\n else getattr(doc, self.search_field)\n for doc in docs\n ],\n k=self.top_k,\n )\n results = [self._docarray_to_langchain_doc(docs[idx]) for idx in mmr_selected]\n return results\n def _docarray_to_langchain_doc(self, doc: Union[Dict[str, Any], Any]) -> Document:\n \"\"\"\n Convert a DocArray document (which also might be a dict)\n to a langchain document format.\n DocArray document can contain arbitrary fields, so the mapping is done\n in the following way:\n page_content <-> content_field\n metadata <-> all other fields excluding\n tensors and embeddings (so float, int, string)\n Args:\n doc: DocArray document\n Returns:\n Document in langchain format\n Raises:\n ValueError: If the document doesn't contain the content field\n \"\"\"\n fields = doc.keys() if isinstance(doc, dict) else doc.__fields__\n if self.content_field not in fields:\n raise ValueError(\n f\"Document does not contain the content field - {self.content_field}.\"\n )\n lc_doc = Document(\n page_content=doc[self.content_field]\n if isinstance(doc, dict)\n else getattr(doc, self.content_field)\n )\n for name in fields:\n value = doc[name] if isinstance(doc, dict) else getattr(doc, name)\n if (\n isinstance(value, (str, int, float, bool))\n and name != self.content_field\n ):\n lc_doc.metadata[name] = value\n return lc_doc", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/docarray.html"} {"id": "93c4246a2c2b-4", "text": "):\n lc_doc.metadata[name] = value\n return lc_doc\n async def _aget_relevant_documents(\n self,\n query: str,\n *,\n run_manager: AsyncCallbackManagerForRetrieverRun,\n ) -> List[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/docarray.html"} {"id": "7bf914d72e6d-0", "text": "Source code for langchain.retrievers.vespa_retriever\n\"\"\"Wrapper for retrieving documents from Vespa.\"\"\"\nfrom __future__ import annotations\nimport json\nfrom typing import TYPE_CHECKING, Any, Dict, List, Literal, Optional, Sequence, Union\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.schema import BaseRetriever, Document\nif TYPE_CHECKING:\n from vespa.application import Vespa\n[docs]class VespaRetriever(BaseRetriever):\n \"\"\"Retriever that uses the Vespa.\"\"\"\n app: Vespa\n body: Dict\n content_field: str\n metadata_fields: Sequence[str]\n def _query(self, body: Dict) -> List[Document]:\n response = self.app.query(body)\n if not str(response.status_code).startswith(\"2\"):\n raise RuntimeError(\n \"Could not retrieve data from Vespa. 
Error code: {}\".format(\n response.status_code\n )\n )\n root = response.json[\"root\"]\n if \"errors\" in root:\n raise RuntimeError(json.dumps(root[\"errors\"]))\n docs = []\n for child in response.hits:\n page_content = child[\"fields\"].pop(self.content_field, \"\")\n if self.metadata_fields == \"*\":\n metadata = child[\"fields\"]\n else:\n metadata = {mf: child[\"fields\"].get(mf) for mf in self.metadata_fields}\n metadata[\"id\"] = child[\"id\"]\n docs.append(Document(page_content=page_content, metadata=metadata))\n return docs\n def _get_relevant_documents(\n self, query: str, *, run_manager: CallbackManagerForRetrieverRun\n ) -> List[Document]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/vespa_retriever.html"} {"id": "7bf914d72e6d-1", "text": ") -> List[Document]:\n body = self.body.copy()\n body[\"query\"] = query\n return self._query(body)\n async def _aget_relevant_documents(\n self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun\n ) -> List[Document]:\n raise NotImplementedError\n[docs] def get_relevant_documents_with_filter(\n self, query: str, *, _filter: Optional[str] = None\n ) -> List[Document]:\n body = self.body.copy()\n _filter = f\" and {_filter}\" if _filter else \"\"\n body[\"yql\"] = body[\"yql\"] + _filter\n body[\"query\"] = query\n return self._query(body)\n[docs] @classmethod\n def from_params(\n cls,\n url: str,\n content_field: str,\n *,\n k: Optional[int] = None,\n metadata_fields: Union[Sequence[str], Literal[\"*\"]] = (),\n sources: Union[Sequence[str], Literal[\"*\"], None] = None,\n _filter: Optional[str] = None,\n yql: Optional[str] = None,\n **kwargs: Any,\n ) -> VespaRetriever:\n \"\"\"Instantiate retriever from params.\n Args:\n url (str): Vespa app URL.\n content_field (str): Field in results to return as Document page_content.\n k (Optional[int]): Number of Documents to return. Defaults to None.\n metadata_fields(Sequence[str] or \"*\"): Fields in results to include in\n document metadata. Defaults to empty tuple ().\n sources (Sequence[str] or \"*\" or None): Sources to retrieve\n from. Defaults to None.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/vespa_retriever.html"} {"id": "7bf914d72e6d-2", "text": "from. Defaults to None.\n _filter (Optional[str]): Document filter condition expressed in YQL.\n Defaults to None.\n yql (Optional[str]): Full YQL query to be used. Should not be specified\n if _filter or sources are specified. 
Defaults to None.\n kwargs (Any): Keyword arguments added to query body.\n \"\"\"\n try:\n from vespa.application import Vespa\n except ImportError:\n raise ImportError(\n \"pyvespa is not installed, please install with `pip install pyvespa`\"\n )\n app = Vespa(url)\n body = kwargs.copy()\n if yql and (sources or _filter):\n raise ValueError(\n \"yql should only be specified if both sources and _filter are not \"\n \"specified.\"\n )\n else:\n if metadata_fields == \"*\":\n _fields = \"*\"\n body[\"summary\"] = \"short\"\n else:\n _fields = \", \".join([content_field] + list(metadata_fields or []))\n _sources = \", \".join(sources) if isinstance(sources, Sequence) else \"*\"\n _filter = f\" and {_filter}\" if _filter else \"\"\n yql = f\"select {_fields} from sources {_sources} where userQuery(){_filter}\"\n body[\"yql\"] = yql\n if k:\n body[\"hits\"] = k\n return cls(\n app=app,\n body=body,\n content_field=content_field,\n metadata_fields=metadata_fields,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/vespa_retriever.html"} {"id": "8c35e9a909c2-0", "text": "Source code for langchain.retrievers.contextual_compression\n\"\"\"Retriever that wraps a base retriever and filters the results.\"\"\"\nfrom typing import Any, List\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.retrievers.document_compressors.base import (\n BaseDocumentCompressor,\n)\nfrom langchain.schema import BaseRetriever, Document\n[docs]class ContextualCompressionRetriever(BaseRetriever):\n \"\"\"Retriever that wraps a base retriever and compresses the results.\"\"\"\n base_compressor: BaseDocumentCompressor\n \"\"\"Compressor for compressing retrieved documents.\"\"\"\n base_retriever: BaseRetriever\n \"\"\"Base Retriever to use for getting relevant documents.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n def _get_relevant_documents(\n self,\n query: str,\n *,\n run_manager: CallbackManagerForRetrieverRun,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Get documents relevant for a query.\n Args:\n query: string to find relevant documents for\n Returns:\n Sequence of relevant documents\n \"\"\"\n docs = self.base_retriever.get_relevant_documents(\n query, callbacks=run_manager.get_child(), **kwargs\n )\n if docs:\n compressed_docs = self.base_compressor.compress_documents(\n docs, query, callbacks=run_manager.get_child()\n )\n return list(compressed_docs)\n else:\n return []\n async def _aget_relevant_documents(\n self,\n query: str,\n *,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/contextual_compression.html"} {"id": "8c35e9a909c2-1", "text": "self,\n query: str,\n *,\n run_manager: AsyncCallbackManagerForRetrieverRun,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Get documents relevant for a query.\n Args:\n query: string to find relevant documents for\n Returns:\n List of relevant documents\n \"\"\"\n docs = await self.base_retriever.aget_relevant_documents(\n query, callbacks=run_manager.get_child(), **kwargs\n )\n if docs:\n compressed_docs = await self.base_compressor.acompress_documents(\n docs, query, callbacks=run_manager.get_child()\n )\n return list(compressed_docs)\n else:\n return []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/contextual_compression.html"}
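A short usage sketch (illustrative; it assumes an existing `base_retriever` and an OpenAI key, and uses LLMChainExtractor as one available BaseDocumentCompressor implementation):

# Minimal sketch: compress the base retriever's results with an LLM extractor.
from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor

compressor = LLMChainExtractor.from_llm(OpenAI(temperature=0))
retriever = ContextualCompressionRetriever(
    base_compressor=compressor,
    base_retriever=base_retriever,  # assumed to exist; any BaseRetriever
)
docs = retriever.get_relevant_documents("What did the author say about X?")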
{"id": "6cf065e3e09e-0", "text": "Source code for langchain.retrievers.metal\nfrom typing import Any, List, Optional\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.schema import BaseRetriever, Document\n[docs]class MetalRetriever(BaseRetriever):\n \"\"\"Retriever that uses the Metal API.\"\"\"\n client: Any\n params: Optional[dict] = None\n[docs] @root_validator(pre=True)\n def validate_client(cls, values: dict) -> dict:\n \"\"\"Validate that the client is of the correct type.\"\"\"\n from metal_sdk.metal import Metal\n if \"client\" in values:\n client = values[\"client\"]\n if not isinstance(client, Metal):\n raise ValueError(\n \"Got unexpected client, should be of type metal_sdk.metal.Metal. \"\n f\"Instead, got {type(client)}\"\n )\n values[\"params\"] = values.get(\"params\", {})\n return values\n def _get_relevant_documents(\n self, query: str, *, run_manager: CallbackManagerForRetrieverRun\n ) -> List[Document]:\n results = self.client.search({\"text\": query}, **self.params)\n final_results = []\n for r in results[\"data\"]:\n metadata = {k: v for k, v in r.items() if k != \"text\"}\n final_results.append(Document(page_content=r[\"text\"], metadata=metadata))\n return final_results\n async def _aget_relevant_documents(\n self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun\n ) -> List[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/metal.html"} {"id": "ed03c4d02edf-0", "text": "Source code for langchain.retrievers.knn\n\"\"\"KNN Retriever.\nLargely based on\nhttps://github.com/karpathy/randomfun/blob/master/knn_vs_svm.ipynb\"\"\"\nfrom __future__ import annotations\nimport concurrent.futures\nfrom typing import Any, List, Optional\nimport numpy as np\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import BaseRetriever, Document\n[docs]def create_index(contexts: List[str], embeddings: Embeddings) -> np.ndarray:\n \"\"\"\n Create an index of embeddings for a list of contexts.\n Args:\n contexts: List of contexts to embed.\n embeddings: Embeddings model to use.\n Returns:\n Index of embeddings.\n \"\"\"\n with concurrent.futures.ThreadPoolExecutor() as executor:\n return np.array(list(executor.map(embeddings.embed_query, contexts)))\n[docs]class KNNRetriever(BaseRetriever):\n \"\"\"KNN Retriever.\"\"\"\n embeddings: Embeddings\n index: Any\n texts: List[str]\n k: int = 4\n relevancy_threshold: Optional[float] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] @classmethod\n def from_texts(\n cls, texts: List[str], embeddings: Embeddings, **kwargs: Any\n ) -> KNNRetriever:\n index = create_index(texts, embeddings)\n return cls(embeddings=embeddings, index=index, texts=texts, **kwargs)\n def _get_relevant_documents(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/knn.html"} {"id": "ed03c4d02edf-1", "text": "def _get_relevant_documents(\n self, query: str, *, run_manager: CallbackManagerForRetrieverRun\n ) -> List[Document]:\n query_embeds = np.array(self.embeddings.embed_query(query))\n # calc L2 norm\n index_embeds = self.index / np.sqrt((self.index**2).sum(1, keepdims=True))\n query_embeds = query_embeds / np.sqrt((query_embeds**2).sum())\n similarities = 
index_embeds.dot(query_embeds)\n sorted_ix = np.argsort(-similarities)\n denominator = np.max(similarities) - np.min(similarities) + 1e-6\n normalized_similarities = (similarities - np.min(similarities)) / denominator\n top_k_results = [\n Document(page_content=self.texts[row])\n for row in sorted_ix[0 : self.k]\n if (\n self.relevancy_threshold is None\n or normalized_similarities[row] >= self.relevancy_threshold\n )\n ]\n return top_k_results\n async def _aget_relevant_documents(\n self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun\n ) -> List[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/knn.html"} {"id": "953469f1733a-0", "text": "Source code for langchain.retrievers.chaindesk\nfrom typing import Any, List, Optional\nimport aiohttp\nimport requests\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.schema import BaseRetriever, Document\n[docs]class ChaindeskRetriever(BaseRetriever):\n \"\"\"Retriever that uses the Chaindesk API.\"\"\"\n datastore_url: str\n top_k: Optional[int]\n api_key: Optional[str]\n def __init__(\n self,\n datastore_url: str,\n top_k: Optional[int] = None,\n api_key: Optional[str] = None,\n ):\n self.datastore_url = datastore_url\n self.api_key = api_key\n self.top_k = top_k\n def _get_relevant_documents(\n self,\n query: str,\n *,\n run_manager: CallbackManagerForRetrieverRun,\n **kwargs: Any,\n ) -> List[Document]:\n response = requests.post(\n self.datastore_url,\n json={\n \"query\": query,\n **({\"topK\": self.top_k} if self.top_k is not None else {}),\n },\n headers={\n \"Content-Type\": \"application/json\",\n **(\n {\"Authorization\": f\"Bearer {self.api_key}\"}\n if self.api_key is not None\n else {}\n ),\n },\n )\n data = response.json()\n return [\n Document(\n page_content=r[\"text\"],\n metadata={\"source\": r[\"source\"], \"score\": r[\"score\"]},\n )\n for r in data[\"results\"]\n ]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/chaindesk.html"} {"id": "953469f1733a-1", "text": ")\n for r in data[\"results\"]\n ]\n async def _aget_relevant_documents(\n self,\n query: str,\n *,\n run_manager: AsyncCallbackManagerForRetrieverRun,\n **kwargs: Any,\n ) -> List[Document]:\n async with aiohttp.ClientSession() as session:\n async with session.request(\n \"POST\",\n self.datastore_url,\n json={\n \"query\": query,\n **({\"topK\": self.top_k} if self.top_k is not None else {}),\n },\n headers={\n \"Content-Type\": \"application/json\",\n **(\n {\"Authorization\": f\"Bearer {self.api_key}\"}\n if self.api_key is not None\n else {}\n ),\n },\n ) as response:\n data = await response.json()\n return [\n Document(\n page_content=r[\"text\"],\n metadata={\"source\": r[\"source\"], \"score\": r[\"score\"]},\n )\n for r in data[\"results\"]\n ]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/chaindesk.html"} {"id": "844a42e7238a-0", "text": "Source code for langchain.retrievers.svm\n\"\"\"SVM Retriever.\nLargely based on\nhttps://github.com/karpathy/randomfun/blob/master/knn_vs_svm.ipynb\"\"\"\nfrom __future__ import annotations\nimport concurrent.futures\nfrom typing import Any, List, Optional\nimport numpy as np\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import 
BaseRetriever, Document\n[docs]def create_index(contexts: List[str], embeddings: Embeddings) -> np.ndarray:\n \"\"\"\n Create an index of embeddings for a list of contexts.\n Args:\n contexts: List of contexts to embed.\n embeddings: Embeddings model to use.\n Returns:\n Index of embeddings.\n \"\"\"\n with concurrent.futures.ThreadPoolExecutor() as executor:\n return np.array(list(executor.map(embeddings.embed_query, contexts)))\n[docs]class SVMRetriever(BaseRetriever):\n \"\"\"SVM Retriever.\"\"\"\n embeddings: Embeddings\n index: Any\n texts: List[str]\n k: int = 4\n relevancy_threshold: Optional[float] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] @classmethod\n def from_texts(\n cls, texts: List[str], embeddings: Embeddings, **kwargs: Any\n ) -> SVMRetriever:\n index = create_index(texts, embeddings)\n return cls(embeddings=embeddings, index=index, texts=texts, **kwargs)\n def _get_relevant_documents(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/svm.html"} {"id": "844a42e7238a-1", "text": "def _get_relevant_documents(\n self, query: str, *, run_manager: CallbackManagerForRetrieverRun\n ) -> List[Document]:\n from sklearn import svm\n query_embeds = np.array(self.embeddings.embed_query(query))\n x = np.concatenate([query_embeds[None, ...], self.index])\n y = np.zeros(x.shape[0])\n y[0] = 1\n clf = svm.LinearSVC(\n class_weight=\"balanced\", verbose=False, max_iter=10000, tol=1e-6, C=0.1\n )\n clf.fit(x, y)\n similarities = clf.decision_function(x)\n sorted_ix = np.argsort(-similarities)\n # svm.LinearSVC in scikit-learn is non-deterministic.\n # if a text is the same as a query, there is no guarantee\n # the query will be in the first index.\n # this performs a simple swap, this works because anything\n # left of the 0 should be equivalent.\n zero_index = np.where(sorted_ix == 0)[0][0]\n if zero_index != 0:\n sorted_ix[0], sorted_ix[zero_index] = sorted_ix[zero_index], sorted_ix[0]\n denominator = np.max(similarities) - np.min(similarities) + 1e-6\n normalized_similarities = (similarities - np.min(similarities)) / denominator\n top_k_results = []\n for row in sorted_ix[1 : self.k + 1]:\n if (\n self.relevancy_threshold is None\n or normalized_similarities[row] >= self.relevancy_threshold\n ):\n top_k_results.append(Document(page_content=self.texts[row - 1]))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/svm.html"} {"id": "844a42e7238a-2", "text": "):\n top_k_results.append(Document(page_content=self.texts[row - 1]))\n return top_k_results\n async def _aget_relevant_documents(\n self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun\n ) -> List[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/svm.html"} {"id": "40476133d6ec-0", "text": "Source code for langchain.retrievers.zilliz\n\"\"\"Zilliz Retriever\"\"\"\nimport warnings\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.vectorstores.zilliz import Zilliz\n# TODO: Update to ZillizClient + Hybrid Search when available\n[docs]class ZillizRetriever(BaseRetriever):\n \"\"\"Retriever that uses the Zilliz API.\"\"\"\n 
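# Usage sketch (illustrative annotation, not part of the module above): with a
# running Zilliz/Milvus endpoint and an Embeddings object, construction looks like:
#   retriever = ZillizRetriever(
#       embedding_function=embeddings,  # assumed to exist
#       collection_name="LangChainCollection",
#       connection_args={"uri": "<endpoint>", "token": "<api-key>"},  # hypothetical values
#   )
#   docs = retriever.get_relevant_documents("query text")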
embedding_function: Embeddings\n collection_name: str = \"LangChainCollection\"\n connection_args: Optional[Dict[str, Any]] = None\n consistency_level: str = \"Session\"\n search_params: Optional[dict] = None\n store: Zilliz\n retriever: BaseRetriever\n[docs] @root_validator(pre=True)\n def create_client(cls, values: dict) -> dict:\n values[\"store\"] = Zilliz(\n values[\"embedding_function\"],\n values[\"collection_name\"],\n values[\"connection_args\"],\n values[\"consistency_level\"],\n )\n values[\"retriever\"] = values[\"store\"].as_retriever(\n search_kwargs={\"param\": values[\"search_params\"]}\n )\n return values\n[docs] def add_texts(\n self, texts: List[str], metadatas: Optional[List[dict]] = None\n ) -> None:\n \"\"\"Add text to the Zilliz store\n Args:\n texts (List[str]): The text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/zilliz.html"} {"id": "40476133d6ec-1", "text": "Args:\n texts (List[str]): The text\n metadatas (List[dict]): Metadata dicts, must line up with existing store\n \"\"\"\n self.store.add_texts(texts, metadatas)\n def _get_relevant_documents(\n self,\n query: str,\n *,\n run_manager: CallbackManagerForRetrieverRun,\n **kwargs: Any,\n ) -> List[Document]:\n return self.retriever.get_relevant_documents(\n query, run_manager=run_manager.get_child(), **kwargs\n )\n async def _aget_relevant_documents(\n self,\n query: str,\n *,\n run_manager: AsyncCallbackManagerForRetrieverRun,\n **kwargs: Any,\n ) -> List[Document]:\n raise NotImplementedError\n[docs]def ZillizRetreiver(*args: Any, **kwargs: Any) -> ZillizRetriever:\n \"\"\"\n Deprecated ZillizRetreiver. Please use ZillizRetriever ('i' before 'e') instead.\n Args:\n *args:\n **kwargs:\n Returns:\n ZillizRetriever\n \"\"\"\n warnings.warn(\n \"ZillizRetreiver will be deprecated in the future. 
\"\n \"Please use ZillizRetriever ('i' before 'e') instead.\",\n DeprecationWarning,\n )\n return ZillizRetriever(*args, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/zilliz.html"} {"id": "2d660d4de54d-0", "text": "Source code for langchain.retrievers.pinecone_hybrid_search\n\"\"\"Taken from: https://docs.pinecone.io/docs/hybrid-search\"\"\"\nimport hashlib\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import BaseRetriever, Document\n[docs]def hash_text(text: str) -> str:\n \"\"\"Hash a text using SHA256.\n Args:\n text: Text to hash.\n Returns:\n Hashed text.\n \"\"\"\n return str(hashlib.sha256(text.encode(\"utf-8\")).hexdigest())\n[docs]def create_index(\n contexts: List[str],\n index: Any,\n embeddings: Embeddings,\n sparse_encoder: Any,\n ids: Optional[List[str]] = None,\n metadatas: Optional[List[dict]] = None,\n) -> None:\n \"\"\"\n Create a Pinecone index from a list of contexts.\n Modifies the index argument in-place.\n Args:\n contexts: List of contexts to embed.\n index: Pinecone index to use.\n embeddings: Embeddings model to use.\n sparse_encoder: Sparse encoder to use.\n ids: List of ids to use for the documents.\n metadatas: List of metadata to use for the documents.\n \"\"\"\n batch_size = 32\n _iterator = range(0, len(contexts), batch_size)\n try:\n from tqdm.auto import tqdm\n _iterator = tqdm(_iterator)\n except ImportError:\n pass\n if ids is None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/pinecone_hybrid_search.html"} {"id": "2d660d4de54d-1", "text": "except ImportError:\n pass\n if ids is None:\n # create unique ids using hash of the text\n ids = [hash_text(context) for context in contexts]\n for i in _iterator:\n # find end of batch\n i_end = min(i + batch_size, len(contexts))\n # extract batch\n context_batch = contexts[i:i_end]\n batch_ids = ids[i:i_end]\n metadata_batch = (\n metadatas[i:i_end] if metadatas else [{} for _ in context_batch]\n )\n # add context passages as metadata\n meta = [\n {\"context\": context, **metadata}\n for context, metadata in zip(context_batch, metadata_batch)\n ]\n # create dense vectors\n dense_embeds = embeddings.embed_documents(context_batch)\n # create sparse vectors\n sparse_embeds = sparse_encoder.encode_documents(context_batch)\n for s in sparse_embeds:\n s[\"values\"] = [float(s1) for s1 in s[\"values\"]]\n vectors = []\n # loop through the data and create dictionaries for upserts\n for doc_id, sparse, dense, metadata in zip(\n batch_ids, sparse_embeds, dense_embeds, meta\n ):\n vectors.append(\n {\n \"id\": doc_id,\n \"sparse_values\": sparse,\n \"values\": dense,\n \"metadata\": metadata,\n }\n )\n # upload the documents to the new hybrid index\n index.upsert(vectors)\n[docs]class PineconeHybridSearchRetriever(BaseRetriever):\n embeddings: Embeddings\n \"\"\"description\"\"\"\n sparse_encoder: Any\n index: Any\n top_k: int = 4", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/pinecone_hybrid_search.html"} {"id": "2d660d4de54d-2", "text": "sparse_encoder: Any\n index: Any\n top_k: int = 4\n alpha: float = 0.5\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n[docs] def add_texts(\n 
self,\n texts: List[str],\n ids: Optional[List[str]] = None,\n metadatas: Optional[List[dict]] = None,\n ) -> None:\n create_index(\n texts,\n self.index,\n self.embeddings,\n self.sparse_encoder,\n ids=ids,\n metadatas=metadatas,\n )\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n try:\n from pinecone_text.hybrid import hybrid_convex_scale # noqa:F401\n from pinecone_text.sparse.base_sparse_encoder import (\n BaseSparseEncoder, # noqa:F401\n )\n except ImportError:\n raise ValueError(\n \"Could not import pinecone_text python package. \"\n \"Please install it with `pip install pinecone_text`.\"\n )\n return values\n def _get_relevant_documents(\n self, query: str, *, run_manager: CallbackManagerForRetrieverRun\n ) -> List[Document]:\n from pinecone_text.hybrid import hybrid_convex_scale\n sparse_vec = self.sparse_encoder.encode_queries(query)\n # convert the question into a dense vector\n dense_vec = self.embeddings.embed_query(query)\n # scale alpha with hybrid_scale", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/pinecone_hybrid_search.html"} {"id": "2d660d4de54d-3", "text": "dense_vec = self.embeddings.embed_query(query)\n # scale alpha with hybrid_scale\n dense_vec, sparse_vec = hybrid_convex_scale(dense_vec, sparse_vec, self.alpha)\n sparse_vec[\"values\"] = [float(s1) for s1 in sparse_vec[\"values\"]]\n # query pinecone with the query parameters\n result = self.index.query(\n vector=dense_vec,\n sparse_vector=sparse_vec,\n top_k=self.top_k,\n include_metadata=True,\n )\n final_result = []\n for res in result[\"matches\"]:\n context = res[\"metadata\"].pop(\"context\")\n final_result.append(\n Document(page_content=context, metadata=res[\"metadata\"])\n )\n # return search results as json\n return final_result\n async def _aget_relevant_documents(\n self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun\n ) -> List[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/pinecone_hybrid_search.html"} {"id": "fd43c84a1d9e-0", "text": "Source code for langchain.retrievers.weaviate_hybrid_search\n\"\"\"Wrapper around weaviate vector database.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional, cast\nfrom uuid import uuid4\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.docstore.document import Document\nfrom langchain.schema import BaseRetriever\n[docs]class WeaviateHybridSearchRetriever(BaseRetriever):\n \"\"\"Retriever that uses Weaviate's hybrid search to retrieve documents.\"\"\"\n client: Any\n \"\"\"keyword arguments to pass to the Weaviate client.\"\"\"\n index_name: str\n \"\"\"The name of the index to use.\"\"\"\n text_key: str\n \"\"\"The name of the text key to use.\"\"\"\n alpha: float = 0.5\n \"\"\"The weight of the text key in the hybrid search.\"\"\"\n k: int = 4\n \"\"\"The number of results to return.\"\"\"\n attributes: List[str]\n \"\"\"The attributes to return in the results.\"\"\"\n create_schema_if_missing: bool = True\n \"\"\"Whether to create the schema if it doesn't exist.\"\"\"\n[docs] @root_validator(pre=True)\n def validate_client(\n cls,\n values: Dict[str, Any],\n ) -> Dict[str, Any]:\n try:\n import weaviate\n except ImportError:\n raise ImportError(\n \"Could not 
import weaviate python package. \"\n \"Please install it with `pip install weaviate-client`.\"\n )\n if not isinstance(values[\"client\"], weaviate.Client):\n client = values[\"client\"]\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/weaviate_hybrid_search.html"} {"id": "fd43c84a1d9e-1", "text": "client = values[\"client\"]\n raise ValueError(\n f\"client should be an instance of weaviate.Client, got {type(client)}\"\n )\n if values[\"attributes\"] is None:\n values[\"attributes\"] = []\n cast(List, values[\"attributes\"]).append(values[\"text_key\"])\n if values[\"create_schema_if_missing\"]:\n class_obj = {\n \"class\": values[\"index_name\"],\n \"properties\": [{\"name\": values[\"text_key\"], \"dataType\": [\"text\"]}],\n \"vectorizer\": \"text2vec-openai\",\n }\n if not values[\"client\"].schema.exists(values[\"index_name\"]):\n values[\"client\"].schema.create_class(class_obj)\n return values\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n # added text_key\n[docs] def add_documents(self, docs: List[Document], **kwargs: Any) -> List[str]:\n \"\"\"Upload documents to Weaviate.\"\"\"\n from weaviate.util import get_valid_uuid\n with self.client.batch as batch:\n ids = []\n for i, doc in enumerate(docs):\n metadata = doc.metadata or {}\n data_properties = {self.text_key: doc.page_content, **metadata}\n # If the UUID of one of the objects already exists\n # then the existing object will be replaced by the new object.\n if \"uuids\" in kwargs:\n _id = kwargs[\"uuids\"][i]\n else:\n _id = get_valid_uuid(uuid4())\n batch.add_data_object(data_properties, self.index_name, _id)\n ids.append(_id)\n return ids\n def _get_relevant_documents(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/weaviate_hybrid_search.html"} {"id": "fd43c84a1d9e-2", "text": "return ids\n def _get_relevant_documents(\n self,\n query: str,\n *,\n run_manager: CallbackManagerForRetrieverRun,\n where_filter: Optional[Dict[str, object]] = None,\n ) -> List[Document]:\n \"\"\"Look up similar documents in Weaviate.\"\"\"\n query_obj = self.client.query.get(self.index_name, self.attributes)\n if where_filter:\n query_obj = query_obj.with_where(where_filter)\n result = query_obj.with_hybrid(query, alpha=self.alpha).with_limit(self.k).do()\n if \"errors\" in result:\n raise ValueError(f\"Error during query: {result['errors']}\")\n docs = []\n for res in result[\"data\"][\"Get\"][self.index_name]:\n text = res.pop(self.text_key)\n docs.append(Document(page_content=text, metadata=res))\n return docs\n async def _aget_relevant_documents(\n self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun\n ) -> List[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/weaviate_hybrid_search.html"} {"id": "ee91f77d728d-0", "text": "Source code for langchain.retrievers.kendra\nimport re\nfrom typing import Any, Dict, List, Literal, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.docstore.document import Document\nfrom langchain.schema import BaseRetriever\n[docs]def clean_excerpt(excerpt: str) -> str:\n \"\"\"Cleans an excerpt from Kendra.\n Args:\n excerpt: The excerpt to clean.\n Returns:\n The cleaned excerpt.\n \"\"\"\n if not excerpt:\n return 
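Putting the Weaviate retriever above to work: a hedged sketch that assumes a local Weaviate instance and a hypothetical "Docs" class. Because create_schema_if_missing defaults to True, validate_client creates the class on first use:

import weaviate

from langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever
from langchain.schema import Document

client = weaviate.Client("http://localhost:8080")  # assumed local instance
retriever = WeaviateHybridSearchRetriever(
    client=client,
    index_name="Docs",  # hypothetical class name
    text_key="text",
    attributes=[],      # text_key is appended by the validator
    alpha=0.5,          # 0 = pure keyword (BM25), 1 = pure vector
)
retriever.add_documents([Document(page_content="hello hybrid search")])
docs = retriever.get_relevant_documents("hello")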
excerpt\n res = re.sub(\"\\s+\", \" \", excerpt).replace(\"...\", \"\")\n return res\n[docs]def combined_text(title: str, excerpt: str) -> str:\n \"\"\"Combines a title and an excerpt into a single string.\n Args:\n title: The title of the document.\n excerpt: The excerpt of the document.\n Returns:\n The combined text.\n \"\"\"\n if not title or not excerpt:\n return \"\"\n return f\"Document Title: {title} \\nDocument Excerpt: \\n{excerpt}\\n\"\n[docs]class Highlight(BaseModel, extra=Extra.allow):\n BeginOffset: int\n EndOffset: int\n TopAnswer: Optional[bool]\n Type: Optional[str]\n[docs]class TextWithHighLights(BaseModel, extra=Extra.allow):\n Text: str\n Highlights: Optional[Any]\n[docs]class AdditionalResultAttributeValue(BaseModel, extra=Extra.allow):\n TextWithHighlightsValue: TextWithHighLights\n[docs]class AdditionalResultAttribute(BaseModel, extra=Extra.allow):\n Key: str", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/kendra.html"} {"id": "ee91f77d728d-1", "text": "Key: str\n ValueType: Literal[\"TEXT_WITH_HIGHLIGHTS_VALUE\"]\n Value: AdditionalResultAttributeValue\n[docs] def get_value_text(self) -> str:\n return self.Value.TextWithHighlightsValue.Text\n[docs]class QueryResultItem(BaseModel, extra=Extra.allow):\n DocumentId: str\n DocumentTitle: TextWithHighLights\n DocumentURI: Optional[str]\n FeedbackToken: Optional[str]\n Format: Optional[str]\n Id: Optional[str]\n Type: Optional[str]\n AdditionalAttributes: Optional[List[AdditionalResultAttribute]] = []\n DocumentExcerpt: Optional[TextWithHighLights]\n[docs] def get_attribute_value(self) -> str:\n if not self.AdditionalAttributes:\n return \"\"\n if not self.AdditionalAttributes[0]:\n return \"\"\n else:\n return self.AdditionalAttributes[0].get_value_text()\n[docs] def get_excerpt(self) -> str:\n if (\n self.AdditionalAttributes\n and self.AdditionalAttributes[0].Key == \"AnswerText\"\n ):\n excerpt = self.get_attribute_value()\n elif self.DocumentExcerpt:\n excerpt = self.DocumentExcerpt.Text\n else:\n excerpt = \"\"\n return clean_excerpt(excerpt)\n[docs] def to_doc(self) -> Document:\n title = self.DocumentTitle.Text\n source = self.DocumentURI\n excerpt = self.get_excerpt()\n type = self.Type\n page_content = combined_text(title, excerpt)\n metadata = {\"source\": source, \"title\": title, \"excerpt\": excerpt, \"type\": type}\n return Document(page_content=page_content, metadata=metadata)\n[docs]class QueryResult(BaseModel, extra=Extra.allow):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/kendra.html"} {"id": "ee91f77d728d-2", "text": "[docs]class QueryResult(BaseModel, extra=Extra.allow):\n ResultItems: List[QueryResultItem]\n[docs] def get_top_k_docs(self, top_n: int) -> List[Document]:\n items_len = len(self.ResultItems)\n count = items_len if items_len < top_n else top_n\n docs = [self.ResultItems[i].to_doc() for i in range(0, count)]\n return docs\n[docs]class DocumentAttributeValue(BaseModel, extra=Extra.allow):\n DateValue: Optional[str]\n LongValue: Optional[int]\n StringListValue: Optional[List[str]]\n StringValue: Optional[str]\n[docs]class DocumentAttribute(BaseModel, extra=Extra.allow):\n Key: str\n Value: DocumentAttributeValue\n[docs]class RetrieveResultItem(BaseModel, extra=Extra.allow):\n Content: Optional[str]\n DocumentAttributes: Optional[List[DocumentAttribute]] = []\n DocumentId: Optional[str]\n DocumentTitle: Optional[str]\n DocumentURI: Optional[str]\n Id: Optional[str]\n[docs] def get_excerpt(self) -> str:\n if not 
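clean_excerpt and combined_text above are pure string munging, so their behavior pins down with a quick standalone check:

import re

def clean_excerpt(excerpt: str) -> str:
    # As above: collapse whitespace runs, then strip Kendra's "..." ellipses.
    if not excerpt:
        return excerpt
    return re.sub(r"\s+", " ", excerpt).replace("...", "")

def combined_text(title: str, excerpt: str) -> str:
    if not title or not excerpt:
        return ""
    return f"Document Title: {title} \nDocument Excerpt: \n{excerpt}\n"

assert clean_excerpt("a  b\n\tc...") == "a b c"
print(combined_text("FAQ", clean_excerpt("Kendra   is a managed\nsearch service.")))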
self.Content:\n return \"\"\n return clean_excerpt(self.Content)\n[docs] def to_doc(self) -> Document:\n title = self.DocumentTitle if self.DocumentTitle else \"\"\n source = self.DocumentURI\n excerpt = self.get_excerpt()\n page_content = combined_text(title, excerpt)\n metadata = {\"source\": source, \"title\": title, \"excerpt\": excerpt}\n return Document(page_content=page_content, metadata=metadata)\n[docs]class RetrieveResult(BaseModel, extra=Extra.allow):\n QueryId: str\n ResultItems: List[RetrieveResultItem]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/kendra.html"} {"id": "ee91f77d728d-3", "text": "QueryId: str\n ResultItems: List[RetrieveResultItem]\n[docs] def get_top_k_docs(self, top_n: int) -> List[Document]:\n items_len = len(self.ResultItems)\n count = items_len if items_len < top_n else top_n\n docs = [self.ResultItems[i].to_doc() for i in range(0, count)]\n return docs\n[docs]class AmazonKendraRetriever(BaseRetriever):\n \"\"\"Retriever class to query documents from Amazon Kendra Index.\n Args:\n index_id: Kendra index id\n region_name: The aws region e.g., `us-west-2`.\n Falls back to AWS_DEFAULT_REGION env variable\n or region specified in ~/.aws/config.\n credentials_profile_name: The name of the profile in the ~/.aws/credentials\n or ~/.aws/config files, which has either access keys or role information\n specified. If not specified, the default credential profile or, if on an\n EC2 instance, credentials from IMDS will be used.\n top_k: Number of results to return\n attribute_filter: Additional filtering of results based on metadata\n See: https://docs.aws.amazon.com/kendra/latest/APIReference\n client: boto3 client for Kendra\n Example:\n .. code-block:: python\n retriever = AmazonKendraRetriever(\n index_id=\"c0806df7-e76b-4bce-9b5c-d5582f6b1a03\"\n )\n \"\"\"\n index_id: str\n region_name: Optional[str] = None\n credentials_profile_name: Optional[str] = None\n top_k: int = 3\n attribute_filter: Optional[Dict] = None\n client: Any", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/kendra.html"} {"id": "ee91f77d728d-4", "text": "attribute_filter: Optional[Dict] = None\n client: Any\n[docs] @root_validator(pre=True)\n def create_client(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n if values[\"client\"] is not None:\n return values\n try:\n import boto3\n if values[\"credentials_profile_name\"] is not None:\n session = boto3.Session(profile_name=values[\"credentials_profile_name\"])\n else:\n # use default credentials\n session = boto3.Session()\n client_params = {}\n if values[\"region_name\"] is not None:\n client_params[\"region_name\"] = values[\"region_name\"]\n values[\"client\"] = session.client(\"kendra\", **client_params)\n return values\n except ImportError:\n raise ModuleNotFoundError(\n \"Could not import boto3 python package. \"\n \"Please install it with `pip install boto3`.\"\n )\n except Exception as e:\n raise ValueError(\n \"Could not load credentials to authenticate with AWS client. 
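Because the response models above all use extra=Extra.allow, they parse straight from a raw Kendra API dict. A hedged sketch with a hand-built Retrieve response (the field values are invented):

from langchain.retrievers.kendra import RetrieveResult

response = {
    "QueryId": "q-123",
    "ResultItems": [
        {
            "DocumentTitle": "Getting started",
            "DocumentURI": "https://example.com/doc",
            "Content": "Kendra   is a managed\nsearch service.",
        }
    ],
}
result = RetrieveResult.parse_obj(response)  # pydantic v1 parsing
doc = result.get_top_k_docs(3)[0]            # caps at the single available item
print(doc.page_content)                      # title plus cleaned excerpt
print(doc.metadata["source"])                # https://example.com/doc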
\"\n \"Please check that credentials in the specified \"\n \"profile name are valid.\"\n ) from e\n def _kendra_query(\n self,\n query: str,\n top_k: int,\n attribute_filter: Optional[Dict] = None,\n ) -> List[Document]:\n if attribute_filter is not None:\n response = self.client.retrieve(\n IndexId=self.index_id,\n QueryText=query.strip(),\n PageSize=top_k,\n AttributeFilter=attribute_filter,\n )\n else:\n response = self.client.retrieve(\n IndexId=self.index_id, QueryText=query.strip(), PageSize=top_k\n )\n r_result = RetrieveResult.parse_obj(response)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/kendra.html"} {"id": "ee91f77d728d-5", "text": ")\n r_result = RetrieveResult.parse_obj(response)\n result_len = len(r_result.ResultItems)\n if result_len == 0:\n # retrieve API returned 0 results, call query API\n if attribute_filter is not None:\n response = self.client.query(\n IndexId=self.index_id,\n QueryText=query.strip(),\n PageSize=top_k,\n AttributeFilter=attribute_filter,\n )\n else:\n response = self.client.query(\n IndexId=self.index_id, QueryText=query.strip(), PageSize=top_k\n )\n q_result = QueryResult.parse_obj(response)\n docs = q_result.get_top_k_docs(top_k)\n else:\n docs = r_result.get_top_k_docs(top_k)\n return docs\n def _get_relevant_documents(\n self,\n query: str,\n *,\n run_manager: CallbackManagerForRetrieverRun,\n ) -> List[Document]:\n \"\"\"Run search on Kendra index and get top k documents\n Example:\n .. code-block:: python\n docs = retriever.get_relevant_documents('This is my query')\n \"\"\"\n docs = self._kendra_query(query, self.top_k, self.attribute_filter)\n return docs\n async def _aget_relevant_documents(\n self,\n query: str,\n *,\n run_manager: AsyncCallbackManagerForRetrieverRun,\n ) -> List[Document]:\n raise NotImplementedError(\"Async version is not implemented for Kendra yet.\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/kendra.html"} {"id": "13fdeae73772-0", "text": "Source code for langchain.retrievers.wikipedia\nfrom typing import List\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.utilities.wikipedia import WikipediaAPIWrapper\n[docs]class WikipediaRetriever(BaseRetriever, WikipediaAPIWrapper):\n \"\"\"\n It is effectively a wrapper for WikipediaAPIWrapper.\n It wraps load() to get_relevant_documents().\n It uses all WikipediaAPIWrapper arguments without any change.\n \"\"\"\n def _get_relevant_documents(\n self, query: str, *, run_manager: CallbackManagerForRetrieverRun\n ) -> List[Document]:\n return self.load(query=query)\n async def _aget_relevant_documents(\n self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun\n ) -> List[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/wikipedia.html"} {"id": "723872b8268b-0", "text": "Source code for langchain.retrievers.databerry\nfrom typing import List, Optional\nimport aiohttp\nimport requests\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.schema import BaseRetriever, Document\n[docs]class DataberryRetriever(BaseRetriever):\n \"\"\"Retriever that uses the Databerry API.\"\"\"\n datastore_url: str\n top_k: Optional[int]\n api_key: Optional[str]\n def _get_relevant_documents(\n self, query: str, *, 
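WikipediaRetriever above is the thinnest kind of retriever: all configuration lives on WikipediaAPIWrapper, and retrieval is just load(). A short hedged sketch (top_k_results is a wrapper argument; the wikipedia package must be installed):

from langchain.retrievers import WikipediaRetriever

retriever = WikipediaRetriever(top_k_results=2)
docs = retriever.get_relevant_documents("LangChain")
for doc in docs:
    print(doc.metadata.get("title"), "->", doc.page_content[:80])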
run_manager: CallbackManagerForRetrieverRun\n ) -> List[Document]:\n response = requests.post(\n self.datastore_url,\n json={\n \"query\": query,\n **({\"topK\": self.top_k} if self.top_k is not None else {}),\n },\n headers={\n \"Content-Type\": \"application/json\",\n **(\n {\"Authorization\": f\"Bearer {self.api_key}\"}\n if self.api_key is not None\n else {}\n ),\n },\n )\n data = response.json()\n return [\n Document(\n page_content=r[\"text\"],\n metadata={\"source\": r[\"source\"], \"score\": r[\"score\"]},\n )\n for r in data[\"results\"]\n ]\n async def _aget_relevant_documents(\n self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun\n ) -> List[Document]:\n async with aiohttp.ClientSession() as session:\n async with session.request(\n \"POST\",\n self.datastore_url,\n json={\n \"query\": query,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/databerry.html"} {"id": "723872b8268b-1", "text": "self.datastore_url,\n json={\n \"query\": query,\n **({\"topK\": self.top_k} if self.top_k is not None else {}),\n },\n headers={\n \"Content-Type\": \"application/json\",\n **(\n {\"Authorization\": f\"Bearer {self.api_key}\"}\n if self.api_key is not None\n else {}\n ),\n },\n ) as response:\n data = await response.json()\n return [\n Document(\n page_content=r[\"text\"],\n metadata={\"source\": r[\"source\"], \"score\": r[\"score\"]},\n )\n for r in data[\"results\"]\n ]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/databerry.html"} {"id": "530d5f3005b0-0", "text": "Source code for langchain.retrievers.elastic_search_bm25\n\"\"\"Wrapper around Elasticsearch vector database.\"\"\"\nfrom __future__ import annotations\nimport uuid\nfrom typing import Any, Iterable, List\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.docstore.document import Document\nfrom langchain.schema import BaseRetriever\n[docs]class ElasticSearchBM25Retriever(BaseRetriever):\n \"\"\"Wrapper around Elasticsearch using BM25 as a retrieval method.\n To connect to an Elasticsearch instance that requires login credentials,\n including Elastic Cloud, use the Elasticsearch URL format\n https://username:password@es_host:9243. For example, to connect to Elastic\n Cloud, create the Elasticsearch URL with the required authentication details and\n pass it to the ElasticVectorSearch constructor as the named parameter\n elasticsearch_url.\n You can obtain your Elastic Cloud URL and login credentials by logging in to the\n Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and\n navigating to the \"Deployments\" page.\n To obtain your Elastic Cloud password for the default \"elastic\" user:\n 1. Log in to the Elastic Cloud console at https://cloud.elastic.co\n 2. Go to \"Security\" > \"Users\"\n 3. Locate the \"elastic\" user and click \"Edit\"\n 4. Click \"Reset password\"\n 5. 
Follow the prompts to reset the password\n The format for Elastic Cloud URLs is\n https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.\n \"\"\"\n client: Any\n index_name: str\n[docs] @classmethod\n def create(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/elastic_search_bm25.html"} {"id": "530d5f3005b0-1", "text": "index_name: str\n[docs] @classmethod\n def create(\n cls, elasticsearch_url: str, index_name: str, k1: float = 2.0, b: float = 0.75\n ) -> ElasticSearchBM25Retriever:\n from elasticsearch import Elasticsearch\n # Create an Elasticsearch client instance\n es = Elasticsearch(elasticsearch_url)\n # Define the index settings and mappings\n settings = {\n \"analysis\": {\"analyzer\": {\"default\": {\"type\": \"standard\"}}},\n \"similarity\": {\n \"custom_bm25\": {\n \"type\": \"BM25\",\n \"k1\": k1,\n \"b\": b,\n }\n },\n }\n mappings = {\n \"properties\": {\n \"content\": {\n \"type\": \"text\",\n \"similarity\": \"custom_bm25\", # Use the custom BM25 similarity\n }\n }\n }\n # Create the index with the specified settings and mappings\n es.indices.create(index=index_name, mappings=mappings, settings=settings)\n return cls(client=es, index_name=index_name)\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n refresh_indices: bool = True,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the retriever.\n Args:\n texts: Iterable of strings to add to the retriever.\n refresh_indices: bool to refresh ElasticSearch indices\n Returns:\n List of ids from adding the texts into the retriever.\n \"\"\"\n try:\n from elasticsearch.helpers import bulk\n except ImportError:\n raise ValueError(\n \"Could not import elasticsearch python package. \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/elastic_search_bm25.html"} {"id": "530d5f3005b0-2", "text": "raise ValueError(\n \"Could not import elasticsearch python package. 
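create() above is the intended entry point for ElasticSearchBM25Retriever, since it provisions the index with the custom BM25 similarity before handing back a retriever. A hedged sketch assuming a local unsecured node and a hypothetical index name:

from langchain.retrievers import ElasticSearchBM25Retriever

retriever = ElasticSearchBM25Retriever.create(
    elasticsearch_url="http://localhost:9200",  # assumed local node
    index_name="langchain-bm25-demo",           # hypothetical index
    k1=2.0,   # BM25 term-frequency saturation
    b=0.75,   # BM25 document-length normalization
)
retriever.add_texts(["hello world", "hello elasticsearch"])
docs = retriever.get_relevant_documents("hello")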
\"\n \"Please install it with `pip install elasticsearch`.\"\n )\n requests = []\n ids = []\n for i, text in enumerate(texts):\n _id = str(uuid.uuid4())\n request = {\n \"_op_type\": \"index\",\n \"_index\": self.index_name,\n \"content\": text,\n \"_id\": _id,\n }\n ids.append(_id)\n requests.append(request)\n bulk(self.client, requests)\n if refresh_indices:\n self.client.indices.refresh(index=self.index_name)\n return ids\n def _get_relevant_documents(\n self, query: str, *, run_manager: CallbackManagerForRetrieverRun\n ) -> List[Document]:\n query_dict = {\"query\": {\"match\": {\"content\": query}}}\n res = self.client.search(index=self.index_name, body=query_dict)\n docs = []\n for r in res[\"hits\"][\"hits\"]:\n docs.append(Document(page_content=r[\"_source\"][\"content\"]))\n return docs\n async def _aget_relevant_documents(\n self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun\n ) -> List[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/elastic_search_bm25.html"} {"id": "58a2ee084c30-0", "text": "Source code for langchain.retrievers.multi_query\nimport logging\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.chains.llm import LLMChain\nfrom langchain.llms.base import BaseLLM\nfrom langchain.output_parsers.pydantic import PydanticOutputParser\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.schema import BaseRetriever, Document\nlogger = logging.getLogger(__name__)\n[docs]class LineList(BaseModel):\n lines: List[str] = Field(description=\"Lines of text\")\n[docs]class LineListOutputParser(PydanticOutputParser):\n def __init__(self) -> None:\n super().__init__(pydantic_object=LineList)\n[docs] def parse(self, text: str) -> LineList:\n lines = text.strip().split(\"\\n\")\n return LineList(lines=lines)\n# Default prompt\nDEFAULT_QUERY_PROMPT = PromptTemplate(\n input_variables=[\"question\"],\n template=\"\"\"You are an AI language model assistant. Your task is \n to generate 3 different versions of the given user \n question to retrieve relevant documents from a vector database. \n By generating multiple perspectives on the user question, \n your goal is to help the user overcome some of the limitations \n of distance-based similarity search. Provide these alternative \n questions separated by newlines. Original question: {question}\"\"\",\n)\n[docs]class MultiQueryRetriever(BaseRetriever):\n \"\"\"Given a user query, use an LLM to write a set of queries.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/multi_query.html"} {"id": "58a2ee084c30-1", "text": "\"\"\"Given a user query, use an LLM to write a set of queries.\n Retrieve docs for each query. 
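A quick standalone check of LineListOutputParser above, which turns the LLM's newline-separated completion into a LineList:

from langchain.retrievers.multi_query import LineListOutputParser

parser = LineListOutputParser()
result = parser.parse("What is X?\nHow does X work?\nWhy use X?")
print(result.lines)  # ['What is X?', 'How does X work?', 'Why use X?']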
Take the unique union of all retrieved docs.\"\"\"\n retriever: BaseRetriever\n llm_chain: LLMChain\n verbose: bool = True\n parser_key: str = \"lines\"\n[docs] @classmethod\n def from_llm(\n cls,\n retriever: BaseRetriever,\n llm: BaseLLM,\n prompt: PromptTemplate = DEFAULT_QUERY_PROMPT,\n parser_key: str = \"lines\",\n ) -> \"MultiQueryRetriever\":\n \"\"\"Initialize from llm using default template.\n Args:\n retriever: retriever to query documents from\n llm: llm for query generation using DEFAULT_QUERY_PROMPT\n Returns:\n MultiQueryRetriever\n \"\"\"\n output_parser = LineListOutputParser()\n llm_chain = LLMChain(llm=llm, prompt=prompt, output_parser=output_parser)\n return cls(\n retriever=retriever,\n llm_chain=llm_chain,\n parser_key=parser_key,\n )\n def _get_relevant_documents(\n self,\n query: str,\n *,\n run_manager: CallbackManagerForRetrieverRun,\n ) -> List[Document]:\n \"\"\"Get relevant documents given a user query.\n Args:\n question: user query\n Returns:\n Unique union of relevant documents from all generated queries\n \"\"\"\n queries = self.generate_queries(query, run_manager)\n documents = self.retrieve_documents(queries, run_manager)\n unique_documents = self.unique_union(documents)\n return unique_documents\n async def _aget_relevant_documents(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/multi_query.html"} {"id": "58a2ee084c30-2", "text": "return unique_documents\n async def _aget_relevant_documents(\n self,\n query: str,\n *,\n run_manager: AsyncCallbackManagerForRetrieverRun,\n ) -> List[Document]:\n raise NotImplementedError\n[docs] def generate_queries(\n self, question: str, run_manager: CallbackManagerForRetrieverRun\n ) -> List[str]:\n \"\"\"Generate queries based upon user input.\n Args:\n question: user query\n Returns:\n List of LLM generated queries that are similar to the user input\n \"\"\"\n response = self.llm_chain(\n {\"question\": question}, callbacks=run_manager.get_child()\n )\n lines = getattr(response[\"text\"], self.parser_key, [])\n if self.verbose:\n logger.info(f\"Generated queries: {lines}\")\n return lines\n[docs] def retrieve_documents(\n self, queries: List[str], run_manager: CallbackManagerForRetrieverRun\n ) -> List[Document]:\n \"\"\"Run all LLM generated queries.\n Args:\n queries: query list\n Returns:\n List of retrieved Documents\n \"\"\"\n documents = []\n for query in queries:\n docs = self.retriever.get_relevant_documents(\n query, callbacks=run_manager.get_child()\n )\n documents.extend(docs)\n return documents\n[docs] def unique_union(self, documents: List[Document]) -> List[Document]:\n \"\"\"Get unique Documents.\n Args:\n documents: List of retrieved Documents\n Returns:\n List of unique retrieved Documents\n \"\"\"\n # Create a dictionary with page_content as keys to remove duplicates\n # TODO: Add Document ID property (e.g., UUID)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/multi_query.html"} {"id": "58a2ee084c30-3", "text": "# TODO: Add Document ID property (e.g., UUID)\n unique_documents_dict = {\n (doc.page_content, tuple(sorted(doc.metadata.items()))): doc\n for doc in documents\n }\n unique_documents = list(unique_documents_dict.values())\n return unique_documents", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/multi_query.html"} {"id": "de211cdfe315-0", "text": "Source code for langchain.retrievers.llama_index\nfrom typing import Any, Dict, List, cast\nfrom pydantic import Field\nfrom 
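A hedged end-to-end sketch of the class above via from_llm; `vectorstore` is assumed to be an existing vector store (e.g. Chroma or FAISS), and an OpenAI key is assumed to be configured:

from langchain.llms import OpenAI
from langchain.retrievers.multi_query import MultiQueryRetriever

retriever = MultiQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(),  # assumed pre-built vector store
    llm=OpenAI(temperature=0),             # deterministic query rewrites
)
# DEFAULT_QUERY_PROMPT yields three query variants; each runs against the
# underlying retriever and the unique union of all hits is returned.
docs = retriever.get_relevant_documents("How do I cache LLM calls?")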
langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.schema import BaseRetriever, Document\n[docs]class LlamaIndexRetriever(BaseRetriever):\n \"\"\"Question-answering with sources over an LlamaIndex data structure.\"\"\"\n index: Any\n query_kwargs: Dict = Field(default_factory=dict)\n def _get_relevant_documents(\n self, query: str, *, run_manager: CallbackManagerForRetrieverRun\n ) -> List[Document]:\n \"\"\"Get documents relevant for a query.\"\"\"\n try:\n from llama_index.indices.base import BaseGPTIndex\n from llama_index.response.schema import Response\n except ImportError:\n raise ImportError(\n \"You need to install `pip install llama-index` to use this retriever.\"\n )\n index = cast(BaseGPTIndex, self.index)\n response = index.query(query, response_mode=\"no_text\", **self.query_kwargs)\n response = cast(Response, response)\n # parse source nodes\n docs = []\n for source_node in response.source_nodes:\n metadata = source_node.extra_info or {}\n docs.append(\n Document(page_content=source_node.source_text, metadata=metadata)\n )\n return docs\n async def _aget_relevant_documents(\n self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun\n ) -> List[Document]:\n raise NotImplementedError(\"LlamaIndexRetriever does not support async\")\n[docs]class LlamaIndexGraphRetriever(BaseRetriever):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/llama_index.html"} {"id": "de211cdfe315-1", "text": "[docs]class LlamaIndexGraphRetriever(BaseRetriever):\n \"\"\"Question-answering with sources over an LlamaIndex graph data structure.\"\"\"\n graph: Any\n query_configs: List[Dict] = Field(default_factory=list)\n def _get_relevant_documents(\n self, query: str, *, run_manager: CallbackManagerForRetrieverRun\n ) -> List[Document]:\n \"\"\"Get documents relevant for a query.\"\"\"\n try:\n from llama_index.composability.graph import (\n QUERY_CONFIG_TYPE,\n ComposableGraph,\n )\n from llama_index.response.schema import Response\n except ImportError:\n raise ImportError(\n \"You need to install `pip install llama-index` to use this retriever.\"\n )\n graph = cast(ComposableGraph, self.graph)\n # for now, inject response_mode=\"no_text\" into query configs\n for query_config in self.query_configs:\n query_config[\"response_mode\"] = \"no_text\"\n query_configs = cast(List[QUERY_CONFIG_TYPE], self.query_configs)\n response = graph.query(query, query_configs=query_configs)\n response = cast(Response, response)\n # parse source nodes\n docs = []\n for source_node in response.source_nodes:\n metadata = source_node.extra_info or {}\n docs.append(\n Document(page_content=source_node.source_text, metadata=metadata)\n )\n return docs\n async def _aget_relevant_documents(\n self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun\n ) -> List[Document]:\n raise NotImplementedError(\"LlamaIndexGraphRetriever does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/llama_index.html"} {"id": "73b5dda820e0-0", "text": "Source code for langchain.retrievers.chatgpt_plugin_retriever\nfrom __future__ import annotations\nfrom typing import List, Optional\nimport aiohttp\nimport requests\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.schema import BaseRetriever, Document\n[docs]class ChatGPTPluginRetriever(BaseRetriever):\n url: str\n 
bearer_token: str\n top_k: int = 3\n filter: Optional[dict] = None\n aiosession: Optional[aiohttp.ClientSession] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n def _get_relevant_documents(\n self, query: str, *, run_manager: CallbackManagerForRetrieverRun\n ) -> List[Document]:\n url, json, headers = self._create_request(query)\n response = requests.post(url, json=json, headers=headers)\n results = response.json()[\"results\"][0][\"results\"]\n docs = []\n for d in results:\n content = d.pop(\"text\")\n metadata = d.pop(\"metadata\", d)\n if metadata.get(\"source_id\"):\n metadata[\"source\"] = metadata.pop(\"source_id\")\n docs.append(Document(page_content=content, metadata=metadata))\n return docs\n async def _aget_relevant_documents(\n self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun\n ) -> List[Document]:\n url, json, headers = self._create_request(query)\n if not self.aiosession:\n async with aiohttp.ClientSession() as session:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/chatgpt_plugin_retriever.html"} {"id": "73b5dda820e0-1", "text": "async with aiohttp.ClientSession() as session:\n async with session.post(url, headers=headers, json=json) as response:\n res = await response.json()\n else:\n async with self.aiosession.post(\n url, headers=headers, json=json\n ) as response:\n res = await response.json()\n results = res[\"results\"][0][\"results\"]\n docs = []\n for d in results:\n content = d.pop(\"text\")\n metadata = d.pop(\"metadata\", d)\n if metadata.get(\"source_id\"):\n metadata[\"source\"] = metadata.pop(\"source_id\")\n docs.append(Document(page_content=content, metadata=metadata))\n return docs\n def _create_request(self, query: str) -> tuple[str, dict, dict]:\n url = f\"{self.url}/query\"\n json = {\n \"queries\": [\n {\n \"query\": query,\n \"filter\": self.filter,\n \"top_k\": self.top_k,\n }\n ]\n }\n headers = {\n \"Content-Type\": \"application/json\",\n \"Authorization\": f\"Bearer {self.bearer_token}\",\n }\n return url, json, headers", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/chatgpt_plugin_retriever.html"} {"id": "3029fda47652-0", "text": "Source code for langchain.retrievers.arxiv\nfrom typing import List\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.utilities.arxiv import ArxivAPIWrapper\n[docs]class ArxivRetriever(BaseRetriever, ArxivAPIWrapper):\n \"\"\"\n It is effectively a wrapper for ArxivAPIWrapper.\n It wraps load() to get_relevant_documents().\n It uses all ArxivAPIWrapper arguments without any change.\n \"\"\"\n def _get_relevant_documents(\n self, query: str, *, run_manager: CallbackManagerForRetrieverRun\n ) -> List[Document]:\n return self.load(query=query)\n async def _aget_relevant_documents(\n self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun\n ) -> List[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/arxiv.html"} {"id": "0af43f44778e-0", "text": "Source code for langchain.retrievers.remote_retriever\nfrom typing import List, Optional\nimport aiohttp\nimport requests\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.schema import BaseRetriever, 
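Like the Wikipedia variant earlier, ArxivRetriever above just re-exposes ArxivAPIWrapper.load() as get_relevant_documents(). A short hedged sketch (load_max_docs is a wrapper argument; the arxiv package must be installed, and the metadata key is an assumption):

from langchain.retrievers import ArxivRetriever

retriever = ArxivRetriever(load_max_docs=2)
docs = retriever.get_relevant_documents("attention is all you need")
for doc in docs:
    print(doc.metadata.get("Title"))  # metadata key assumed from the wrapper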
Document\n[docs]class RemoteLangChainRetriever(BaseRetriever):\n url: str\n headers: Optional[dict] = None\n input_key: str = \"message\"\n response_key: str = \"response\"\n page_content_key: str = \"page_content\"\n metadata_key: str = \"metadata\"\n def _get_relevant_documents(\n self, query: str, *, run_manager: CallbackManagerForRetrieverRun\n ) -> List[Document]:\n response = requests.post(\n self.url, json={self.input_key: query}, headers=self.headers\n )\n result = response.json()\n return [\n Document(\n page_content=r[self.page_content_key], metadata=r[self.metadata_key]\n )\n for r in result[self.response_key]\n ]\n async def _aget_relevant_documents(\n self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun\n ) -> List[Document]:\n async with aiohttp.ClientSession() as session:\n async with session.request(\n \"POST\", self.url, headers=self.headers, json={self.input_key: query}\n ) as response:\n result = await response.json()\n return [\n Document(\n page_content=r[self.page_content_key], metadata=r[self.metadata_key]\n )\n for r in result[self.response_key]\n ]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/remote_retriever.html"} {"id": "62c6345e2df5-0", "text": "Source code for langchain.retrievers.self_query.qdrant\n\"\"\"Logic for converting internal query language to a valid Qdrant query.\"\"\"\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, Tuple\nfrom langchain.chains.query_constructor.ir import (\n Comparator,\n Comparison,\n Operation,\n Operator,\n StructuredQuery,\n Visitor,\n)\nif TYPE_CHECKING:\n from qdrant_client.http import models as rest\n[docs]class QdrantTranslator(Visitor):\n \"\"\"Logic for converting internal query language elements to valid filters.\"\"\"\n def __init__(self, metadata_key: str):\n self.metadata_key = metadata_key\n[docs] def visit_operation(self, operation: Operation) -> rest.Filter:\n from qdrant_client.http import models as rest\n args = [arg.accept(self) for arg in operation.arguments]\n operator = {\n Operator.AND: \"must\",\n Operator.OR: \"should\",\n Operator.NOT: \"must_not\",\n }[operation.operator]\n return rest.Filter(**{operator: args})\n[docs] def visit_comparison(self, comparison: Comparison) -> rest.FieldCondition:\n from qdrant_client.http import models as rest\n self._validate_func(comparison.comparator)\n attribute = self.metadata_key + \".\" + comparison.attribute\n if comparison.comparator == Comparator.EQ:\n return rest.FieldCondition(\n key=attribute, match=rest.MatchValue(value=comparison.value)\n )\n kwargs = {comparison.comparator.value: comparison.value}\n return rest.FieldCondition(key=attribute, range=rest.Range(**kwargs))\n[docs] def visit_structured_query(\n self, structured_query: StructuredQuery\n ) -> Tuple[str, dict]:\n try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/self_query/qdrant.html"} {"id": "62c6345e2df5-1", "text": ") -> Tuple[str, dict]:\n try:\n from qdrant_client.http import models as rest\n except ImportError as e:\n raise ImportError(\n \"Cannot import qdrant_client. 
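RemoteLangChainRetriever above defines a small JSON contract with the remote endpoint; the default keys are shown in the sketch below (the URL is hypothetical):

from langchain.retrievers import RemoteLangChainRetriever

retriever = RemoteLangChainRetriever(url="http://localhost:8000/retrieve")
# Request sent:       POST {"message": "<query>"}
# Response expected:  {"response": [{"page_content": "...", "metadata": {...}}, ...]}
docs = retriever.get_relevant_documents("status of order 42")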
Please install with `pip install \"\n \"qdrant-client`.\"\n ) from e\n if structured_query.filter is None:\n kwargs = {}\n else:\n filter = structured_query.filter.accept(self)\n if isinstance(filter, rest.FieldCondition):\n filter = rest.Filter(must=[filter])\n kwargs = {\"filter\": filter}\n return structured_query.query, kwargs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/self_query/qdrant.html"} {"id": "52fac2cbf392-0", "text": "Source code for langchain.retrievers.self_query.base\n\"\"\"Retriever that generates and executes structured queries over its own data source.\"\"\"\nfrom typing import Any, Dict, List, Optional, Type, cast\nfrom pydantic import BaseModel, Field, root_validator\nfrom langchain import LLMChain\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.chains.query_constructor.base import load_query_constructor_chain\nfrom langchain.chains.query_constructor.ir import StructuredQuery, Visitor\nfrom langchain.chains.query_constructor.schema import AttributeInfo\nfrom langchain.retrievers.self_query.chroma import ChromaTranslator\nfrom langchain.retrievers.self_query.myscale import MyScaleTranslator\nfrom langchain.retrievers.self_query.pinecone import PineconeTranslator\nfrom langchain.retrievers.self_query.qdrant import QdrantTranslator\nfrom langchain.retrievers.self_query.weaviate import WeaviateTranslator\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.vectorstores import (\n Chroma,\n MyScale,\n Pinecone,\n Qdrant,\n VectorStore,\n Weaviate,\n)\ndef _get_builtin_translator(vectorstore: VectorStore) -> Visitor:\n \"\"\"Get the translator class corresponding to the vector store class.\"\"\"\n vectorstore_cls = vectorstore.__class__\n BUILTIN_TRANSLATORS: Dict[Type[VectorStore], Type[Visitor]] = {\n Pinecone: PineconeTranslator,\n Chroma: ChromaTranslator,\n Weaviate: WeaviateTranslator,\n Qdrant: QdrantTranslator,\n MyScale: MyScaleTranslator,\n }", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/self_query/base.html"} {"id": "52fac2cbf392-1", "text": "MyScale: MyScaleTranslator,\n }\n if vectorstore_cls not in BUILTIN_TRANSLATORS:\n raise ValueError(\n f\"Self query retriever with Vector Store type {vectorstore_cls}\"\n f\" not supported.\"\n )\n if isinstance(vectorstore, Qdrant):\n return QdrantTranslator(metadata_key=vectorstore.metadata_payload_key)\n elif isinstance(vectorstore, MyScale):\n return MyScaleTranslator(metadata_key=vectorstore.metadata_column)\n return BUILTIN_TRANSLATORS[vectorstore_cls]()\n[docs]class SelfQueryRetriever(BaseRetriever, BaseModel):\n \"\"\"Retriever that wraps around a vector store and uses an LLM to generate\n the vector store queries.\"\"\"\n vectorstore: VectorStore\n \"\"\"The underlying vector store from which documents will be retrieved.\"\"\"\n llm_chain: LLMChain\n \"\"\"The LLMChain for generating the vector store queries.\"\"\"\n search_type: str = \"similarity\"\n \"\"\"The search type to perform on the vector store.\"\"\"\n search_kwargs: dict = Field(default_factory=dict)\n \"\"\"Keyword arguments to pass in to the vector store search.\"\"\"\n structured_query_translator: Visitor\n \"\"\"Translator for turning internal query language into vectorstore search params.\"\"\"\n verbose: bool = False\n \"\"\"Use original query instead of the revised new query from LLM\"\"\"\n 
use_original_query: bool = False\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] @root_validator(pre=True)\n def validate_translator(cls, values: Dict) -> Dict:\n \"\"\"Validate translator.\"\"\"\n if \"structured_query_translator\" not in values:\n values[\"structured_query_translator\"] = _get_builtin_translator(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/self_query/base.html"} {"id": "52fac2cbf392-2", "text": "values[\"structured_query_translator\"] = _get_builtin_translator(\n values[\"vectorstore\"]\n )\n return values\n def _get_relevant_documents(\n self, query: str, *, run_manager: CallbackManagerForRetrieverRun\n ) -> List[Document]:\n \"\"\"Get documents relevant for a query.\n Args:\n query: string to find relevant documents for\n Returns:\n List of relevant documents\n \"\"\"\n inputs = self.llm_chain.prep_inputs({\"query\": query})\n structured_query = cast(\n StructuredQuery,\n self.llm_chain.predict_and_parse(\n callbacks=run_manager.get_child(), **inputs\n ),\n )\n if self.verbose:\n print(structured_query)\n new_query, new_kwargs = self.structured_query_translator.visit_structured_query(\n structured_query\n )\n if structured_query.limit is not None:\n new_kwargs[\"k\"] = structured_query.limit\n if self.use_original_query:\n new_query = query\n search_kwargs = {**self.search_kwargs, **new_kwargs}\n docs = self.vectorstore.search(new_query, self.search_type, **search_kwargs)\n return docs\n async def _aget_relevant_documents(\n self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun\n ) -> List[Document]:\n raise NotImplementedError\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n vectorstore: VectorStore,\n document_contents: str,\n metadata_field_info: List[AttributeInfo],\n structured_query_translator: Optional[Visitor] = None,\n chain_kwargs: Optional[Dict] = None,\n enable_limit: bool = False,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/self_query/base.html"} {"id": "52fac2cbf392-3", "text": "enable_limit: bool = False,\n use_original_query: bool = False,\n **kwargs: Any,\n ) -> \"SelfQueryRetriever\":\n if structured_query_translator is None:\n structured_query_translator = _get_builtin_translator(vectorstore)\n chain_kwargs = chain_kwargs or {}\n if \"allowed_comparators\" not in chain_kwargs:\n chain_kwargs[\n \"allowed_comparators\"\n ] = structured_query_translator.allowed_comparators\n if \"allowed_operators\" not in chain_kwargs:\n chain_kwargs[\n \"allowed_operators\"\n ] = structured_query_translator.allowed_operators\n llm_chain = load_query_constructor_chain(\n llm,\n document_contents,\n metadata_field_info,\n enable_limit=enable_limit,\n **chain_kwargs,\n )\n return cls(\n llm_chain=llm_chain,\n vectorstore=vectorstore,\n use_original_query=use_original_query,\n structured_query_translator=structured_query_translator,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/self_query/base.html"} {"id": "b12f7a66cd37-0", "text": "Source code for langchain.retrievers.self_query.weaviate\n\"\"\"Logic for converting internal query language to a valid Weaviate query.\"\"\"\nfrom typing import Dict, Tuple, Union\nfrom langchain.chains.query_constructor.ir import (\n Comparator,\n Comparison,\n Operation,\n Operator,\n StructuredQuery,\n Visitor,\n)\n[docs]class WeaviateTranslator(Visitor):\n \"\"\"Logic for converting internal 
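A hedged sketch of from_llm above, assuming `docs` is an existing list of Documents with "genre" and "year" metadata and that an OpenAI key is configured:

from langchain.chains.query_constructor.schema import AttributeInfo
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.vectorstores import Chroma

vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings())  # docs assumed
metadata_field_info = [
    AttributeInfo(name="genre", type="string", description="The film genre"),
    AttributeInfo(name="year", type="integer", description="Release year"),
]
retriever = SelfQueryRetriever.from_llm(
    llm=OpenAI(temperature=0),
    vectorstore=vectorstore,
    document_contents="Brief summaries of movies",
    metadata_field_info=metadata_field_info,
    enable_limit=True,  # lets "two movies about X" set k via structured_query.limit
)
results = retriever.get_relevant_documents("sci-fi movies after 2000")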
query language elements to valid filters.\"\"\"\n allowed_operators = [Operator.AND, Operator.OR]\n \"\"\"Subset of allowed logical operators.\"\"\"\n allowed_comparators = [Comparator.EQ]\n def _format_func(self, func: Union[Operator, Comparator]) -> str:\n self._validate_func(func)\n # https://weaviate.io/developers/weaviate/api/graphql/filters\n map_dict = {Operator.AND: \"And\", Operator.OR: \"Or\", Comparator.EQ: \"Equal\"}\n return map_dict[func]\n[docs] def visit_operation(self, operation: Operation) -> Dict:\n args = [arg.accept(self) for arg in operation.arguments]\n return {\"operator\": self._format_func(operation.operator), \"operands\": args}\n[docs] def visit_comparison(self, comparison: Comparison) -> Dict:\n return {\n \"path\": [comparison.attribute],\n \"operator\": self._format_func(comparison.comparator),\n \"valueText\": comparison.value,\n }\n[docs] def visit_structured_query(\n self, structured_query: StructuredQuery\n ) -> Tuple[str, dict]:\n if structured_query.filter is None:\n kwargs = {}\n else:\n kwargs = {\"where_filter\": structured_query.filter.accept(self)}\n return structured_query.query, kwargs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/self_query/weaviate.html"} {"id": "5c9e2f121bfb-0", "text": "Source code for langchain.retrievers.self_query.chroma\n\"\"\"Logic for converting internal query language to a valid Chroma query.\"\"\"\nfrom typing import Dict, Tuple, Union\nfrom langchain.chains.query_constructor.ir import (\n Comparator,\n Comparison,\n Operation,\n Operator,\n StructuredQuery,\n Visitor,\n)\n[docs]class ChromaTranslator(Visitor):\n \"\"\"Logic for converting internal query language elements to valid filters.\"\"\"\n allowed_operators = [Operator.AND, Operator.OR]\n \"\"\"Subset of allowed logical operators.\"\"\"\n allowed_comparators = [\n Comparator.EQ,\n Comparator.GT,\n Comparator.GTE,\n Comparator.LT,\n Comparator.LTE,\n ]\n \"\"\"Subset of allowed logical comparators.\"\"\"\n def _format_func(self, func: Union[Operator, Comparator]) -> str:\n self._validate_func(func)\n return f\"${func.value}\"\n[docs] def visit_operation(self, operation: Operation) -> Dict:\n args = [arg.accept(self) for arg in operation.arguments]\n return {self._format_func(operation.operator): args}\n[docs] def visit_comparison(self, comparison: Comparison) -> Dict:\n return {\n comparison.attribute: {\n self._format_func(comparison.comparator): comparison.value\n }\n }\n[docs] def visit_structured_query(\n self, structured_query: StructuredQuery\n ) -> Tuple[str, dict]:\n if structured_query.filter is None:\n kwargs = {}\n else:\n kwargs = {\"filter\": structured_query.filter.accept(self)}\n return structured_query.query, kwargs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/self_query/chroma.html"} {"id": "0b546dc72761-0", "text": "Source code for langchain.retrievers.self_query.myscale\nimport datetime\nimport re\nfrom typing import Any, Callable, Dict, Tuple\nfrom langchain.chains.query_constructor.ir import (\n Comparator,\n Comparison,\n Operation,\n Operator,\n StructuredQuery,\n Visitor,\n)\n[docs]def DEFAULT_COMPOSER(op_name: str) -> Callable:\n \"\"\"\n Default composer for logical operators.\n Args:\n op_name: Name of the operator.\n Returns:\n Callable that takes a list of arguments and returns a string.\n \"\"\"\n def f(*args: Any) -> str:\n args_: map[str] = map(str, args)\n return f\" {op_name} \".join(args_)\n return f\n[docs]def FUNCTION_COMPOSER(op_name: str) -> 
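The translators are easy to exercise directly, since the IR objects accept() a visitor. For the ChromaTranslator above, an AND of two comparisons becomes Chroma's $-prefixed filter dict:

from langchain.chains.query_constructor.ir import (
    Comparator,
    Comparison,
    Operation,
    Operator,
)
from langchain.retrievers.self_query.chroma import ChromaTranslator

condition = Operation(
    operator=Operator.AND,
    arguments=[
        Comparison(comparator=Comparator.GT, attribute="year", value=2000),
        Comparison(comparator=Comparator.EQ, attribute="genre", value="sci-fi"),
    ],
)
print(condition.accept(ChromaTranslator()))
# {'$and': [{'year': {'$gt': 2000}}, {'genre': {'$eq': 'sci-fi'}}]}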
Callable:\n \"\"\"\n Composer for functions.\n Args:\n op_name: Name of the function.\n Returns:\n Callable that takes a list of arguments and returns a string.\n \"\"\"\n def f(*args: Any) -> str:\n args_: map[str] = map(str, args)\n return f\"{op_name}({','.join(args_)})\"\n return f\n[docs]class MyScaleTranslator(Visitor):\n \"\"\"Logic for converting internal query language elements to valid filters.\"\"\"\n allowed_operators = [Operator.AND, Operator.OR, Operator.NOT]\n \"\"\"Subset of allowed logical operators.\"\"\"\n allowed_comparators = [\n Comparator.EQ,\n Comparator.GT,\n Comparator.GTE,\n Comparator.LT,\n Comparator.LTE,\n Comparator.CONTAIN,\n Comparator.LIKE,\n ]\n map_dict = {", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/self_query/myscale.html"} {"id": "0b546dc72761-1", "text": "Comparator.LIKE,\n ]\n map_dict = {\n Operator.AND: DEFAULT_COMPOSER(\"AND\"),\n Operator.OR: DEFAULT_COMPOSER(\"OR\"),\n Operator.NOT: DEFAULT_COMPOSER(\"NOT\"),\n Comparator.EQ: DEFAULT_COMPOSER(\"=\"),\n Comparator.GT: DEFAULT_COMPOSER(\">\"),\n Comparator.GTE: DEFAULT_COMPOSER(\">=\"),\n Comparator.LT: DEFAULT_COMPOSER(\"<\"),\n Comparator.LTE: DEFAULT_COMPOSER(\"<=\"),\n Comparator.CONTAIN: FUNCTION_COMPOSER(\"has\"),\n Comparator.LIKE: DEFAULT_COMPOSER(\"ILIKE\"),\n }\n def __init__(self, metadata_key: str = \"metadata\") -> None:\n super().__init__()\n self.metadata_key = metadata_key\n[docs] def visit_operation(self, operation: Operation) -> Dict:\n args = [arg.accept(self) for arg in operation.arguments]\n func = operation.operator\n self._validate_func(func)\n return self.map_dict[func](*args)\n[docs] def visit_comparison(self, comparison: Comparison) -> Dict:\n regex = \"\\((.*?)\\)\"\n matched = re.search(\"\\(\\w+\\)\", comparison.attribute)\n # If arbitrary function is applied to an attribute\n if matched:\n attr = re.sub(\n regex,\n f\"({self.metadata_key}.{matched.group(0)[1:-1]})\",\n comparison.attribute,\n )\n else:\n attr = f\"{self.metadata_key}.{comparison.attribute}\"\n value = comparison.value\n comp = comparison.comparator\n value = f\"'{value}'\" if type(value) is str else value\n # convert timestamp for datetime objects\n if type(value) is datetime.date:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/self_query/myscale.html"} {"id": "0b546dc72761-2", "text": "# convert timestamp for datetime objects\n if type(value) is datetime.date:\n attr = f\"parseDateTime32BestEffort({attr})\"\n value = f\"parseDateTime32BestEffort('{value.strftime('%Y-%m-%d')}')\"\n # string pattern match\n if comp is Comparator.LIKE:\n value = f\"'%{value[1:-1]}%'\"\n return self.map_dict[comp](attr, value)\n[docs] def visit_structured_query(\n self, structured_query: StructuredQuery\n ) -> Tuple[str, dict]:\n print(structured_query)\n if structured_query.filter is None:\n kwargs = {}\n else:\n kwargs = {\"where_str\": structured_query.filter.accept(self)}\n return structured_query.query, kwargs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/self_query/myscale.html"} {"id": "6b84ea38df42-0", "text": "Source code for langchain.retrievers.self_query.pinecone\n\"\"\"Logic for converting internal query language to a valid Pinecone query.\"\"\"\nfrom typing import Dict, Tuple, Union\nfrom langchain.chains.query_constructor.ir import (\n Comparator,\n Comparison,\n Operation,\n Operator,\n StructuredQuery,\n Visitor,\n)\n[docs]class PineconeTranslator(Visitor):\n \"\"\"Logic for 
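MyScaleTranslator above composes SQL-ish WHERE fragments rather than dicts, prefixing each attribute with the metadata column. A quick check:

from langchain.chains.query_constructor.ir import Comparator, Comparison
from langchain.retrievers.self_query.myscale import MyScaleTranslator

translator = MyScaleTranslator()  # metadata_key defaults to "metadata"
clause = translator.visit_comparison(
    Comparison(comparator=Comparator.GTE, attribute="year", value=2000)
)
print(clause)  # metadata.year >= 2000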
converting internal query language elements to valid filters.\"\"\"\n allowed_operators = [Operator.AND, Operator.OR]\n \"\"\"Subset of allowed logical operators.\"\"\"\n def _format_func(self, func: Union[Operator, Comparator]) -> str:\n self._validate_func(func)\n return f\"${func.value}\"\n[docs] def visit_operation(self, operation: Operation) -> Dict:\n args = [arg.accept(self) for arg in operation.arguments]\n return {self._format_func(operation.operator): args}\n[docs] def visit_comparison(self, comparison: Comparison) -> Dict:\n return {\n comparison.attribute: {\n self._format_func(comparison.comparator): comparison.value\n }\n }\n[docs] def visit_structured_query(\n self, structured_query: StructuredQuery\n ) -> Tuple[str, dict]:\n if structured_query.filter is None:\n kwargs = {}\n else:\n kwargs = {\"filter\": structured_query.filter.accept(self)}\n return structured_query.query, kwargs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/self_query/pinecone.html"} {"id": "19a535b5df25-0", "text": "Source code for langchain.retrievers.document_compressors.chain_filter\n\"\"\"Filter that uses an LLM to drop documents that aren't relevant to the query.\"\"\"\nfrom typing import Any, Callable, Dict, Optional, Sequence\nfrom langchain import LLMChain, PromptTemplate\nfrom langchain.callbacks.manager import Callbacks\nfrom langchain.output_parsers.boolean import BooleanOutputParser\nfrom langchain.retrievers.document_compressors.base import BaseDocumentCompressor\nfrom langchain.retrievers.document_compressors.chain_filter_prompt import (\n prompt_template,\n)\nfrom langchain.schema import BasePromptTemplate, Document\nfrom langchain.schema.language_model import BaseLanguageModel\ndef _get_default_chain_prompt() -> PromptTemplate:\n return PromptTemplate(\n template=prompt_template,\n input_variables=[\"question\", \"context\"],\n output_parser=BooleanOutputParser(),\n )\n[docs]def default_get_input(query: str, doc: Document) -> Dict[str, Any]:\n \"\"\"Return the compression chain input.\"\"\"\n return {\"question\": query, \"context\": doc.page_content}\n[docs]class LLMChainFilter(BaseDocumentCompressor):\n \"\"\"Filter that drops documents that aren't relevant to the query.\"\"\"\n llm_chain: LLMChain\n \"\"\"LLM wrapper to use for filtering documents. 
\n The chain prompt is expected to have a BooleanOutputParser.\"\"\"\n get_input: Callable[[str, Document], dict] = default_get_input\n \"\"\"Callable for constructing the chain input from the query and a Document.\"\"\"\n[docs] def compress_documents(\n self,\n documents: Sequence[Document],\n query: str,\n callbacks: Optional[Callbacks] = None,\n ) -> Sequence[Document]:\n \"\"\"Filter down documents based on their relevance to the query.\"\"\"\n filtered_docs = []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/chain_filter.html"} {"id": "19a535b5df25-1", "text": "\"\"\"Filter down documents based on their relevance to the query.\"\"\"\n filtered_docs = []\n for doc in documents:\n _input = self.get_input(query, doc)\n include_doc = self.llm_chain.predict_and_parse(\n **_input, callbacks=callbacks\n )\n if include_doc:\n filtered_docs.append(doc)\n return filtered_docs\n[docs] async def acompress_documents(\n self,\n documents: Sequence[Document],\n query: str,\n callbacks: Optional[Callbacks] = None,\n ) -> Sequence[Document]:\n \"\"\"Filter down documents.\"\"\"\n raise NotImplementedError\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n prompt: Optional[BasePromptTemplate] = None,\n **kwargs: Any\n ) -> \"LLMChainFilter\":\n _prompt = prompt if prompt is not None else _get_default_chain_prompt()\n llm_chain = LLMChain(llm=llm, prompt=_prompt)\n return cls(llm_chain=llm_chain, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/chain_filter.html"} {"id": "96c5da5263e3-0", "text": "Source code for langchain.retrievers.document_compressors.cohere_rerank\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, Dict, Optional, Sequence\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import Callbacks\nfrom langchain.retrievers.document_compressors.base import BaseDocumentCompressor\nfrom langchain.schema import Document\nfrom langchain.utils import get_from_dict_or_env\nif TYPE_CHECKING:\n from cohere import Client\nelse:\n # We do to avoid pydantic annotation issues when actually instantiating\n # while keeping this import optional\n try:\n from cohere import Client\n except ImportError:\n pass\n[docs]class CohereRerank(BaseDocumentCompressor):\n client: Client\n top_n: int = 3\n model: str = \"rerank-english-v2.0\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n[docs] @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n cohere_api_key = get_from_dict_or_env(\n values, \"cohere_api_key\", \"COHERE_API_KEY\"\n )\n try:\n import cohere\n values[\"client\"] = cohere.Client(cohere_api_key)\n except ImportError:\n raise ImportError(\n \"Could not import cohere python package. 
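A hedged usage sketch of LLMChainFilter above (an OpenAI key is assumed). The default prompt's BooleanOutputParser turns the model's YES/NO completion into the keep/drop decision:

from langchain.llms import OpenAI
from langchain.retrievers.document_compressors import LLMChainFilter
from langchain.schema import Document

doc_filter = LLMChainFilter.from_llm(OpenAI(temperature=0))
docs = [
    Document(page_content="Python is a programming language."),
    Document(page_content="Bananas are yellow."),
]
kept = doc_filter.compress_documents(docs, query="What is Python?")
# Only documents the LLM judges relevant to the query survive.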
\"\n \"Please install it with `pip install cohere`.\"\n )\n return values\n[docs] def compress_documents(\n self,\n documents: Sequence[Document],\n query: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/cohere_rerank.html"} {"id": "96c5da5263e3-1", "text": "self,\n documents: Sequence[Document],\n query: str,\n callbacks: Optional[Callbacks] = None,\n ) -> Sequence[Document]:\n if len(documents) == 0: # to avoid empty api call\n return []\n doc_list = list(documents)\n _docs = [d.page_content for d in doc_list]\n results = self.client.rerank(\n model=self.model, query=query, documents=_docs, top_n=self.top_n\n )\n final_results = []\n for r in results:\n doc = doc_list[r.index]\n doc.metadata[\"relevance_score\"] = r.relevance_score\n final_results.append(doc)\n return final_results\n[docs] async def acompress_documents(\n self,\n documents: Sequence[Document],\n query: str,\n callbacks: Optional[Callbacks] = None,\n ) -> Sequence[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/cohere_rerank.html"} {"id": "553bd8b4dc1d-0", "text": "Source code for langchain.retrievers.document_compressors.base\n\"\"\"Interface for retrieved document compressors.\"\"\"\nfrom abc import ABC, abstractmethod\nfrom inspect import signature\nfrom typing import List, Optional, Sequence, Union\nfrom pydantic import BaseModel\nfrom langchain.callbacks.manager import Callbacks\nfrom langchain.schema import BaseDocumentTransformer, Document\n[docs]class BaseDocumentCompressor(BaseModel, ABC):\n \"\"\"Base abstraction interface for document compression.\"\"\"\n[docs] @abstractmethod\n def compress_documents(\n self,\n documents: Sequence[Document],\n query: str,\n callbacks: Optional[Callbacks] = None,\n ) -> Sequence[Document]:\n \"\"\"Compress retrieved documents given the query context.\"\"\"\n[docs] @abstractmethod\n async def acompress_documents(\n self,\n documents: Sequence[Document],\n query: str,\n callbacks: Optional[Callbacks] = None,\n ) -> Sequence[Document]:\n \"\"\"Compress retrieved documents given the query context.\"\"\"\n[docs]class DocumentCompressorPipeline(BaseDocumentCompressor):\n \"\"\"Document compressor that uses a pipeline of transformers.\"\"\"\n transformers: List[Union[BaseDocumentTransformer, BaseDocumentCompressor]]\n \"\"\"List of document filters that are chained together and run in sequence.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] def compress_documents(\n self,\n documents: Sequence[Document],\n query: str,\n callbacks: Optional[Callbacks] = None,\n ) -> Sequence[Document]:\n \"\"\"Transform a list of documents.\"\"\"\n for _transformer in self.transformers:\n if isinstance(_transformer, BaseDocumentCompressor):\n accepts_callbacks = (", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/base.html"} {"id": "553bd8b4dc1d-1", "text": "if isinstance(_transformer, BaseDocumentCompressor):\n accepts_callbacks = (\n signature(_transformer.compress_documents).parameters.get(\n \"callbacks\"\n )\n is not None\n )\n if accepts_callbacks:\n documents = _transformer.compress_documents(\n documents, query, callbacks=callbacks\n )\n else:\n documents = _transformer.compress_documents(documents, query)\n elif isinstance(_transformer, BaseDocumentTransformer):\n documents = 
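A hedged sketch of CohereRerank above; COHERE_API_KEY is assumed to be set in the environment, which the pre-validator uses to build the client:

from langchain.retrievers.document_compressors import CohereRerank
from langchain.schema import Document

reranker = CohereRerank(top_n=2)  # client built from COHERE_API_KEY
docs = [Document(page_content=t) for t in ("about cats", "about dogs", "about cars")]
reranked = reranker.compress_documents(docs, query="pets that bark")
for doc in reranked:
    # rerank scores are attached to each surviving document's metadata
    print(doc.metadata["relevance_score"], doc.page_content)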
Source code for langchain.retrievers.document_compressors.base

"""Interface for retrieved document compressors."""
from abc import ABC, abstractmethod
from inspect import signature
from typing import List, Optional, Sequence, Union

from pydantic import BaseModel

from langchain.callbacks.manager import Callbacks
from langchain.schema import BaseDocumentTransformer, Document


class BaseDocumentCompressor(BaseModel, ABC):
    """Base abstraction interface for document compression."""

    @abstractmethod
    def compress_documents(
        self,
        documents: Sequence[Document],
        query: str,
        callbacks: Optional[Callbacks] = None,
    ) -> Sequence[Document]:
        """Compress retrieved documents given the query context."""

    @abstractmethod
    async def acompress_documents(
        self,
        documents: Sequence[Document],
        query: str,
        callbacks: Optional[Callbacks] = None,
    ) -> Sequence[Document]:
        """Compress retrieved documents given the query context."""


class DocumentCompressorPipeline(BaseDocumentCompressor):
    """Document compressor that uses a pipeline of transformers."""

    transformers: List[Union[BaseDocumentTransformer, BaseDocumentCompressor]]
    """List of document filters that are chained together and run in sequence."""

    class Config:
        """Configuration for this pydantic object."""

        arbitrary_types_allowed = True

    def compress_documents(
        self,
        documents: Sequence[Document],
        query: str,
        callbacks: Optional[Callbacks] = None,
    ) -> Sequence[Document]:
        """Transform a list of documents."""
        for _transformer in self.transformers:
            if isinstance(_transformer, BaseDocumentCompressor):
                accepts_callbacks = (
                    signature(_transformer.compress_documents).parameters.get(
                        "callbacks"
                    )
                    is not None
                )
                if accepts_callbacks:
                    documents = _transformer.compress_documents(
                        documents, query, callbacks=callbacks
                    )
                else:
                    documents = _transformer.compress_documents(documents, query)
            elif isinstance(_transformer, BaseDocumentTransformer):
                documents = _transformer.transform_documents(documents)
            else:
                raise ValueError(f"Got unexpected transformer type: {_transformer}")
        return documents

    async def acompress_documents(
        self,
        documents: Sequence[Document],
        query: str,
        callbacks: Optional[Callbacks] = None,
    ) -> Sequence[Document]:
        """Compress retrieved documents given the query context."""
        for _transformer in self.transformers:
            if isinstance(_transformer, BaseDocumentCompressor):
                accepts_callbacks = (
                    signature(_transformer.acompress_documents).parameters.get(
                        "callbacks"
                    )
                    is not None
                )
                if accepts_callbacks:
                    documents = await _transformer.acompress_documents(
                        documents, query, callbacks=callbacks
                    )
                else:
                    documents = await _transformer.acompress_documents(documents, query)
            elif isinstance(_transformer, BaseDocumentTransformer):
                documents = await _transformer.atransform_documents(documents)
            else:
                raise ValueError(f"Got unexpected transformer type: {_transformer}")
        return documents
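The pipeline accepts both document transformers and compressors. A sketch of a typical composition (illustrative; the splitter settings and threshold are assumptions):

    from langchain.document_transformers import EmbeddingsRedundantFilter
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.retrievers.document_compressors import (
        DocumentCompressorPipeline,
        EmbeddingsFilter,
    )
    from langchain.text_splitter import CharacterTextSplitter

    embeddings = OpenAIEmbeddings()
    # Split documents, drop near-duplicate chunks, then keep only chunks
    # relevant to the query; each stage runs in sequence.
    pipeline = DocumentCompressorPipeline(
        transformers=[
            CharacterTextSplitter(chunk_size=300, chunk_overlap=0),
            EmbeddingsRedundantFilter(embeddings=embeddings),
            EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76),
        ]
    )
    compressed = pipeline.compress_documents(docs, query="my question")  # docs: Sequence[Document]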
Source code for langchain.retrievers.document_compressors.embeddings_filter

"""Document compressor that uses embeddings to drop documents unrelated to the query."""
from typing import Callable, Dict, Optional, Sequence

import numpy as np
from pydantic import root_validator

from langchain.callbacks.manager import Callbacks
from langchain.document_transformers import (
    _get_embeddings_from_stateful_docs,
    get_stateful_documents,
)
from langchain.embeddings.base import Embeddings
from langchain.math_utils import cosine_similarity
from langchain.retrievers.document_compressors.base import (
    BaseDocumentCompressor,
)
from langchain.schema import Document


class EmbeddingsFilter(BaseDocumentCompressor):
    embeddings: Embeddings
    """Embeddings to use for embedding document contents and queries."""

    similarity_fn: Callable = cosine_similarity
    """Similarity function for comparing documents. Function expected to take as input
    two matrices (List[List[float]]) and return a matrix of scores where higher values
    indicate greater similarity."""

    k: Optional[int] = 20
    """The number of relevant documents to return. Can be set to None, in which case
    `similarity_threshold` must be specified. Defaults to 20."""

    similarity_threshold: Optional[float]
    """Threshold for determining when a document is similar enough to the query to be
    returned. Defaults to None; must be specified if `k` is set to None."""

    class Config:
        """Configuration for this pydantic object."""

        arbitrary_types_allowed = True

    @root_validator()
    def validate_params(cls, values: Dict) -> Dict:
        """Validate similarity parameters."""
        if values["k"] is None and values["similarity_threshold"] is None:
            raise ValueError("Must specify one of `k` or `similarity_threshold`.")
        return values

    def compress_documents(
        self,
        documents: Sequence[Document],
        query: str,
        callbacks: Optional[Callbacks] = None,
    ) -> Sequence[Document]:
        """Filter documents based on similarity of their embeddings to the query."""
        stateful_documents = get_stateful_documents(documents)
        embedded_documents = _get_embeddings_from_stateful_docs(
            self.embeddings, stateful_documents
        )
        embedded_query = self.embeddings.embed_query(query)
        similarity = self.similarity_fn([embedded_query], embedded_documents)[0]
        included_idxs = np.arange(len(embedded_documents))
        if self.k is not None:
            included_idxs = np.argsort(similarity)[::-1][: self.k]
        if self.similarity_threshold is not None:
            similar_enough = np.where(
                similarity[included_idxs] > self.similarity_threshold
            )
            included_idxs = included_idxs[similar_enough]
        return [stateful_documents[i] for i in included_idxs]

    async def acompress_documents(
        self,
        documents: Sequence[Document],
        query: str,
        callbacks: Optional[Callbacks] = None,
    ) -> Sequence[Document]:
        """Filter down documents."""
        raise NotImplementedError
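A short sketch combining both cut-offs (illustrative values; OpenAIEmbeddings is an assumption, any Embeddings implementation works):

    from langchain.embeddings import OpenAIEmbeddings
    from langchain.retrievers.document_compressors import EmbeddingsFilter
    from langchain.schema import Document

    # Keep the 4 most similar documents, but only those whose similarity to
    # the query also clears the threshold.
    embeddings_filter = EmbeddingsFilter(
        embeddings=OpenAIEmbeddings(), k=4, similarity_threshold=0.75
    )
    docs = [
        Document(page_content="Paris is in France."),
        Document(page_content="2 + 2 = 4"),
    ]
    relevant = embeddings_filter.compress_documents(docs, query="Where is Paris?")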
\"\"\"LLM wrapper to use for compressing documents.\"\"\"\n get_input: Callable[[str, Document], dict] = default_get_input", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/chain_extract.html"} {"id": "95598805dd43-1", "text": "get_input: Callable[[str, Document], dict] = default_get_input\n \"\"\"Callable for constructing the chain input from the query and a Document.\"\"\"\n[docs] def compress_documents(\n self,\n documents: Sequence[Document],\n query: str,\n callbacks: Optional[Callbacks] = None,\n ) -> Sequence[Document]:\n \"\"\"Compress page content of raw documents.\"\"\"\n compressed_docs = []\n for doc in documents:\n _input = self.get_input(query, doc)\n output = self.llm_chain.predict_and_parse(**_input, callbacks=callbacks)\n if len(output) == 0:\n continue\n compressed_docs.append(Document(page_content=output, metadata=doc.metadata))\n return compressed_docs\n[docs] async def acompress_documents(\n self,\n documents: Sequence[Document],\n query: str,\n callbacks: Optional[Callbacks] = None,\n ) -> Sequence[Document]:\n \"\"\"Compress page content of raw documents asynchronously.\"\"\"\n outputs = await asyncio.gather(\n *[\n self.llm_chain.apredict_and_parse(\n **self.get_input(query, doc), callbacks=callbacks\n )\n for doc in documents\n ]\n )\n compressed_docs = []\n for i, doc in enumerate(documents):\n if len(outputs[i]) == 0:\n continue\n compressed_docs.append(\n Document(page_content=outputs[i], metadata=doc.metadata)\n )\n return compressed_docs\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n prompt: Optional[PromptTemplate] = None,\n get_input: Optional[Callable[[str, Document], str]] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/chain_extract.html"} {"id": "95598805dd43-2", "text": "get_input: Optional[Callable[[str, Document], str]] = None,\n llm_chain_kwargs: Optional[dict] = None,\n ) -> LLMChainExtractor:\n \"\"\"Initialize from LLM.\"\"\"\n _prompt = prompt if prompt is not None else _get_default_chain_prompt()\n _get_input = get_input if get_input is not None else default_get_input\n llm_chain = LLMChain(llm=llm, prompt=_prompt, **(llm_chain_kwargs or {}))\n return cls(llm_chain=llm_chain, get_input=_get_input)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/chain_extract.html"} {"id": "bd4cb7e25143-0", "text": "Source code for langchain.schema.prompt\nfrom __future__ import annotations\nfrom abc import ABC, abstractmethod\nfrom typing import List\nfrom langchain.load.serializable import Serializable\nfrom langchain.schema.messages import BaseMessage\n[docs]class PromptValue(Serializable, ABC):\n \"\"\"Base abstract class for inputs to any language model.\n PromptValues can be converted to both LLM (pure text-generation) inputs and\n ChatModel inputs.\n \"\"\"\n[docs] @abstractmethod\n def to_string(self) -> str:\n \"\"\"Return prompt value as string.\"\"\"\n[docs] @abstractmethod\n def to_messages(self) -> List[BaseMessage]:\n \"\"\"Return prompt as a list of Messages.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/schema/prompt.html"} {"id": "05cbb93fe637-0", "text": "Source code for langchain.schema.prompt_template\nfrom __future__ import annotations\nimport json\nfrom abc import ABC, abstractmethod\nfrom pathlib import Path\nfrom typing import Any, Callable, Dict, List, Mapping, Optional, Union\nimport yaml\nfrom 
Source code for langchain.schema.prompt

from __future__ import annotations

from abc import ABC, abstractmethod
from typing import List

from langchain.load.serializable import Serializable
from langchain.schema.messages import BaseMessage


class PromptValue(Serializable, ABC):
    """Base abstract class for inputs to any language model.

    PromptValues can be converted to both LLM (pure text-generation) inputs and
    ChatModel inputs.
    """

    @abstractmethod
    def to_string(self) -> str:
        """Return prompt value as string."""

    @abstractmethod
    def to_messages(self) -> List[BaseMessage]:
        """Return prompt as a list of Messages."""
Source code for langchain.schema.prompt_template

from __future__ import annotations

import json
from abc import ABC, abstractmethod
from pathlib import Path
from typing import Any, Callable, Dict, List, Mapping, Optional, Union

import yaml
from pydantic import Field, root_validator

from langchain.load.serializable import Serializable
from langchain.schema.document import Document
from langchain.schema.output_parser import BaseOutputParser
from langchain.schema.prompt import PromptValue


class BasePromptTemplate(Serializable, ABC):
    """Base class for all prompt templates, returning a prompt."""

    input_variables: List[str]
    """A list of the names of the variables the prompt template expects."""
    output_parser: Optional[BaseOutputParser] = None
    """How to parse the output of calling an LLM on this formatted prompt."""
    partial_variables: Mapping[str, Union[str, Callable[[], str]]] = Field(
        default_factory=dict
    )

    @property
    def lc_serializable(self) -> bool:
        return True

    class Config:
        """Configuration for this pydantic object."""

        arbitrary_types_allowed = True

    @abstractmethod
    def format_prompt(self, **kwargs: Any) -> PromptValue:
        """Create Chat Messages."""

    @root_validator()
    def validate_variable_names(cls, values: Dict) -> Dict:
        """Validate variable names do not include restricted names."""
        if "stop" in values["input_variables"]:
            raise ValueError(
                "Cannot have an input variable named 'stop', as it is used internally,"
                " please rename."
            )
        if "stop" in values["partial_variables"]:
            raise ValueError(
                "Cannot have a partial variable named 'stop', as it is used "
                "internally, please rename."
            )
        overall = set(values["input_variables"]).intersection(
            values["partial_variables"]
        )
        if overall:
            raise ValueError(
                f"Found overlapping input and partial variables: {overall}"
            )
        return values

    def partial(self, **kwargs: Union[str, Callable[[], str]]) -> BasePromptTemplate:
        """Return a partial of the prompt template."""
        prompt_dict = self.__dict__.copy()
        prompt_dict["input_variables"] = list(
            set(self.input_variables).difference(kwargs)
        )
        prompt_dict["partial_variables"] = {**self.partial_variables, **kwargs}
        return type(self)(**prompt_dict)

    def _merge_partial_and_user_variables(self, **kwargs: Any) -> Dict[str, Any]:
        # Get partial params:
        partial_kwargs = {
            k: v if isinstance(v, str) else v()
            for k, v in self.partial_variables.items()
        }
        return {**partial_kwargs, **kwargs}

    @abstractmethod
    def format(self, **kwargs: Any) -> str:
        """Format the prompt with the inputs.

        Args:
            kwargs: Any arguments to be passed to the prompt template.

        Returns:
            A formatted string.

        Example:
            .. code-block:: python

                prompt.format(variable1="foo")
        """

    @property
    def _prompt_type(self) -> str:
        """Return the prompt type key."""
        raise NotImplementedError

    def dict(self, **kwargs: Any) -> Dict:
        """Return dictionary representation of prompt."""
        prompt_dict = super().dict(**kwargs)
        prompt_dict["_type"] = self._prompt_type
        return prompt_dict

    def save(self, file_path: Union[Path, str]) -> None:
        """Save the prompt.

        Args:
            file_path: Path to directory to save prompt to.

        Example:
            .. code-block:: python

                prompt.save(file_path="path/prompt.yaml")
        """
        if self.partial_variables:
            raise ValueError("Cannot save prompt with partial variables.")
        # Convert file to Path object.
        if isinstance(file_path, str):
            save_path = Path(file_path)
        else:
            save_path = file_path
        directory_path = save_path.parent
        directory_path.mkdir(parents=True, exist_ok=True)
        # Fetch dictionary to save
        prompt_dict = self.dict()
        if save_path.suffix == ".json":
            with open(file_path, "w") as f:
                json.dump(prompt_dict, f, indent=4)
        elif save_path.suffix == ".yaml":
            with open(file_path, "w") as f:
                yaml.dump(prompt_dict, f, default_flow_style=False)
        else:
            raise ValueError(f"{save_path} must be json or yaml")


def format_document(doc: Document, prompt: BasePromptTemplate) -> str:
    """Format a document into a string based on a prompt template.

    First, this pulls information from the document from two sources:

    1. `page_content`:
        This takes the information from the `document.page_content`
        and assigns it to a variable named `page_content`.
    2. metadata:
        This takes information from `document.metadata` and assigns
        it to variables of the same name.

    Those variables are then passed into the `prompt` to produce a formatted string.

    Args:
        doc: Document, the page_content and metadata will be used to create
            the final string.
        prompt: BasePromptTemplate, will be used to format the page_content
            and metadata into the final string.

    Returns:
        string of the document formatted.

    Example:
        .. code-block:: python

            from langchain.schema import Document
            from langchain.prompts import PromptTemplate

            doc = Document(page_content="This is a joke", metadata={"page": "1"})
            prompt = PromptTemplate.from_template("Page {page}: {page_content}")
            format_document(doc, prompt)
            >>> "Page 1: This is a joke"
    """
    base_info = {"page_content": doc.page_content, **doc.metadata}
    missing_metadata = set(prompt.input_variables).difference(base_info)
    if len(missing_metadata) > 0:
        required_metadata = [
            iv for iv in prompt.input_variables if iv != "page_content"
        ]
        raise ValueError(
            f"Document prompt requires documents to have metadata variables: "
            f"{required_metadata}. Received document with missing metadata: "
            f"{list(missing_metadata)}."
        )
    document_info = {k: base_info[k] for k in prompt.input_variables}
    return prompt.format(**document_info)
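To see partial and save together on a concrete subclass (a sketch; the template text and file name are arbitrary):

    from langchain.prompts import PromptTemplate

    prompt = PromptTemplate.from_template("{greeting}, {name}!")
    # Pin one variable now; the partial template only expects `name`.
    partial_prompt = prompt.partial(greeting="Hello")
    assert partial_prompt.format(name="world") == "Hello, world!"
    # Templates without partial variables can be persisted to json or yaml.
    prompt.save("greeting_prompt.yaml")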
Source code for langchain.schema.output

from __future__ import annotations

from copy import deepcopy
from typing import Any, Dict, List, Optional
from uuid import UUID

from pydantic import BaseModel, root_validator

from langchain.load.serializable import Serializable
from langchain.schema.messages import BaseMessage


class Generation(Serializable):
    """A single text generation output."""

    text: str
    """Generated text output."""

    generation_info: Optional[Dict[str, Any]] = None
    """Raw response from the provider. May include things like the
    reason for finishing or token log probabilities.
    """
    # TODO: add log probs as separate attribute

    @property
    def lc_serializable(self) -> bool:
        """Whether this class is LangChain serializable."""
        return True


class ChatGeneration(Generation):
    """A single chat generation output."""

    text: str = ""
    """*SHOULD NOT BE SET DIRECTLY* The text contents of the output message."""
    message: BaseMessage
    """The message output by the chat model."""

    @root_validator
    def set_text(cls, values: Dict[str, Any]) -> Dict[str, Any]:
        """Set the text attribute to be the contents of the message."""
        values["text"] = values["message"].content
        return values


class RunInfo(BaseModel):
    """Class that contains metadata for a single execution of a Chain or model."""

    run_id: UUID
    """A unique identifier for the model or chain run."""


class ChatResult(BaseModel):
    """Class that contains all results for a single chat model call."""

    generations: List[ChatGeneration]
    """List of the chat generations. This is a List because an input can have multiple
    candidate generations.
    """
    llm_output: Optional[dict] = None
    """For arbitrary LLM provider specific output."""
class LLMResult(BaseModel):
    """Class that contains all results for a batched LLM call."""

    generations: List[List[Generation]]
    """List of generated outputs. This is a List[List[]] because
    each input could have multiple candidate generations."""
    llm_output: Optional[dict] = None
    """Arbitrary LLM provider-specific output."""
    run: Optional[List[RunInfo]] = None
    """List of metadata info for model call for each input."""

    def flatten(self) -> List[LLMResult]:
        """Flatten generations into a single list.

        Unpack List[List[Generation]] -> List[LLMResult] where each returned LLMResult
        contains only a single Generation. If token usage information is available,
        it is kept only for the LLMResult corresponding to the top-choice
        Generation, to avoid over-counting of token usage downstream.

        Returns:
            List of LLMResults where each returned LLMResult contains a single
                Generation.
        """
        llm_results = []
        for i, gen_list in enumerate(self.generations):
            # Avoid double counting tokens in OpenAICallback
            if i == 0:
                llm_results.append(
                    LLMResult(
                        generations=[gen_list],
                        llm_output=self.llm_output,
                    )
                )
            else:
                if self.llm_output is not None:
                    llm_output = deepcopy(self.llm_output)
                    llm_output["token_usage"] = dict()
                else:
                    llm_output = None
                llm_results.append(
                    LLMResult(
                        generations=[gen_list],
                        llm_output=llm_output,
                    )
                )
        return llm_results

    def __eq__(self, other: object) -> bool:
        """Check for LLMResult equality by ignoring any metadata related to runs."""
        if not isinstance(other, LLMResult):
            return NotImplemented
        return (
            self.generations == other.generations
            and self.llm_output == other.llm_output
        )
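A worked example of flatten based on the code above (a sketch): it yields one LLMResult per input, and token usage is kept only on the first result to avoid double counting downstream:

    from langchain.schema import Generation, LLMResult

    result = LLMResult(
        generations=[[Generation(text="a"), Generation(text="b")], [Generation(text="c")]],
        llm_output={"token_usage": {"total_tokens": 42}},
    )
    flat = result.flatten()
    assert len(flat) == 2  # one LLMResult per input prompt
    assert flat[0].llm_output == {"token_usage": {"total_tokens": 42}}
    assert flat[1].llm_output == {"token_usage": {}}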
Source code for langchain.schema.memory

from __future__ import annotations

from abc import ABC, abstractmethod
from typing import Any, Dict, List

from langchain.load.serializable import Serializable
from langchain.schema.messages import AIMessage, BaseMessage, HumanMessage


class BaseMemory(Serializable, ABC):
    """Base abstract class for memory in Chains.

    Memory refers to state in Chains. Memory can be used to store information about
    past executions of a Chain and inject that information into the inputs of
    future executions of the Chain. For example, for conversational Chains Memory
    can be used to store conversations and automatically add them to future model
    prompts so that the model has the necessary context to respond coherently to
    the latest input.

    Example:
        .. code-block:: python

            class SimpleMemory(BaseMemory):
                memories: Dict[str, Any] = dict()

                @property
                def memory_variables(self) -> List[str]:
                    return list(self.memories.keys())

                def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
                    return self.memories

                def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
                    pass

                def clear(self) -> None:
                    pass
    """  # noqa: E501

    class Config:
        """Configuration for this pydantic object."""

        arbitrary_types_allowed = True

    @property
    @abstractmethod
    def memory_variables(self) -> List[str]:
        """The string keys this memory class will add to chain inputs."""

    @abstractmethod
    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        """Return key-value pairs given the text input to the chain."""

    @abstractmethod
    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        """Save the context of this chain run to memory."""

    @abstractmethod
    def clear(self) -> None:
        """Clear memory contents."""


class BaseChatMessageHistory(ABC):
    """Abstract base class for storing chat message history.

    See `ChatMessageHistory` for default implementation.

    Example:
        .. code-block:: python

            class FileChatMessageHistory(BaseChatMessageHistory):
                storage_path: str
                session_id: str

                @property
                def messages(self):
                    with open(os.path.join(self.storage_path, self.session_id)) as f:
                        messages = json.loads(f.read())
                    return messages_from_dict(messages)

                def add_message(self, message: BaseMessage) -> None:
                    all_messages = messages_to_dict(self.messages)
                    all_messages.append(_message_to_dict(message))
                    with open(os.path.join(self.storage_path, self.session_id), "w") as f:
                        json.dump(all_messages, f)

                def clear(self):
                    with open(os.path.join(self.storage_path, self.session_id), "w") as f:
                        f.write("[]")
    """

    messages: List[BaseMessage]
    """A list of Messages stored in-memory."""

    def add_user_message(self, message: str) -> None:
        """Convenience method for adding a human message string to the store.

        Args:
            message: The string contents of a human message.
        """
        self.add_message(HumanMessage(content=message))

    def add_ai_message(self, message: str) -> None:
        """Convenience method for adding an AI message string to the store.

        Args:
            message: The string contents of an AI message.
        """
        self.add_message(AIMessage(content=message))

    # TODO: Make this an abstractmethod.
    def add_message(self, message: BaseMessage) -> None:
        """Add a Message object to the store.

        Args:
            message: A BaseMessage object to store.
        """
        raise NotImplementedError

    @abstractmethod
    def clear(self) -> None:
        """Remove all messages from the store"""
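The convenience methods are easiest to see on the default implementation (a sketch; ChatMessageHistory lives in langchain.memory):

    from langchain.memory import ChatMessageHistory

    history = ChatMessageHistory()
    history.add_user_message("Hi!")
    history.add_ai_message("Hello! How can I help?")
    # history.messages is now [HumanMessage(...), AIMessage(...)]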
Source code for langchain.schema.document

from __future__ import annotations

from abc import ABC, abstractmethod
from typing import Any, Sequence

from pydantic import Field

from langchain.load.serializable import Serializable


class Document(Serializable):
    """Class for storing a piece of text and associated metadata."""

    page_content: str
    """String text."""
    metadata: dict = Field(default_factory=dict)
    """Arbitrary metadata about the page content (e.g., source, relationships to other
    documents, etc.).
    """


class BaseDocumentTransformer(ABC):
    """Abstract base class for document transformation systems.

    A document transformation system takes a sequence of Documents and returns a
    sequence of transformed Documents.

    Example:
        .. code-block:: python

            class EmbeddingsRedundantFilter(BaseDocumentTransformer, BaseModel):
                embeddings: Embeddings
                similarity_fn: Callable = cosine_similarity
                similarity_threshold: float = 0.95

                class Config:
                    arbitrary_types_allowed = True

                def transform_documents(
                    self, documents: Sequence[Document], **kwargs: Any
                ) -> Sequence[Document]:
                    stateful_documents = get_stateful_documents(documents)
                    embedded_documents = _get_embeddings_from_stateful_docs(
                        self.embeddings, stateful_documents
                    )
                    included_idxs = _filter_similar_embeddings(
                        embedded_documents, self.similarity_fn, self.similarity_threshold
                    )
                    return [stateful_documents[i] for i in sorted(included_idxs)]

                async def atransform_documents(
                    self, documents: Sequence[Document], **kwargs: Any
                ) -> Sequence[Document]:
                    raise NotImplementedError
    """  # noqa: E501

    @abstractmethod
    def transform_documents(
        self, documents: Sequence[Document], **kwargs: Any
    ) -> Sequence[Document]:
        """Transform a list of documents.

        Args:
            documents: A sequence of Documents to be transformed.

        Returns:
            A list of transformed Documents.
        """

    @abstractmethod
    async def atransform_documents(
        self, documents: Sequence[Document], **kwargs: Any
    ) -> Sequence[Document]:
        """Asynchronously transform a list of documents.

        Args:
            documents: A sequence of Documents to be transformed.

        Returns:
            A list of transformed Documents.
        """


Source code for langchain.schema.agent

from __future__ import annotations

from dataclasses import dataclass
from typing import NamedTuple, Union


@dataclass
class AgentAction:
    """A full description of an action for an ActionAgent to execute."""

    tool: str
    """The name of the Tool to execute."""
    tool_input: Union[str, dict]
    """The input to pass in to the Tool."""
    log: str
    """Additional information to log about the action."""


class AgentFinish(NamedTuple):
    """The final return value of an ActionAgent."""

    return_values: dict
    """Dictionary of return values."""
    log: str
    """Additional information to log about the return value"""
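Constructing the two return types directly (a sketch; the tool name and values are illustrative):

    from langchain.schema import AgentAction, AgentFinish

    action = AgentAction(
        tool="search",
        tool_input="weather in San Francisco",
        log="I should look up the current weather.",
    )
    finish = AgentFinish(return_values={"output": "It is sunny."}, log="Done.")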
Source code for langchain.schema.retriever

from __future__ import annotations

import warnings
from abc import ABC, abstractmethod
from inspect import signature
from typing import TYPE_CHECKING, Any, Dict, List, Optional

from langchain.load.dump import dumpd
from langchain.load.serializable import Serializable
from langchain.schema.document import Document

if TYPE_CHECKING:
    from langchain.callbacks.manager import (
        AsyncCallbackManagerForRetrieverRun,
        CallbackManagerForRetrieverRun,
        Callbacks,
    )


class BaseRetriever(Serializable, ABC):
    """Abstract base class for a Document retrieval system.

    A retrieval system is defined as something that can take string queries and return
    the most 'relevant' Documents from some source.

    Example:
        .. code-block:: python

            class TFIDFRetriever(BaseRetriever, BaseModel):
                vectorizer: Any
                docs: List[Document]
                tfidf_array: Any
                k: int = 4

                class Config:
                    arbitrary_types_allowed = True

                def get_relevant_documents(self, query: str) -> List[Document]:
                    from sklearn.metrics.pairwise import cosine_similarity

                    # Ip -- (n_docs,x), Op -- (n_docs,n_Feats)
                    query_vec = self.vectorizer.transform([query])
                    # Op -- (n_docs,1) -- Cosine Sim with each doc
                    results = cosine_similarity(self.tfidf_array, query_vec).reshape((-1,))
                    return [self.docs[i] for i in results.argsort()[-self.k :][::-1]]

                async def aget_relevant_documents(self, query: str) -> List[Document]:
                    raise NotImplementedError
    """  # noqa: E501

    class Config:
        """Configuration for this pydantic object."""

        arbitrary_types_allowed = True

    _new_arg_supported: bool = False
    _expects_other_args: bool = False
    tags: Optional[List[str]] = None
    """Optional list of tags associated with the retriever. Defaults to None.
    These tags will be associated with each call to this retriever,
    and passed as arguments to the handlers defined in `callbacks`.
    You can use these to, e.g., identify a specific instance of a retriever with its
    use case.
    """
    metadata: Optional[Dict[str, Any]] = None
    """Optional metadata associated with the retriever. Defaults to None.
    This metadata will be associated with each call to this retriever,
    and passed as arguments to the handlers defined in `callbacks`.
    You can use these to, e.g., identify a specific instance of a retriever with its
    use case.
    """
    def __init_subclass__(cls, **kwargs: Any) -> None:
        super().__init_subclass__(**kwargs)
        # Version upgrade for old retrievers that implemented the public
        # methods directly.
        if cls.get_relevant_documents != BaseRetriever.get_relevant_documents:
            warnings.warn(
                "Retrievers must implement abstract `_get_relevant_documents` method"
                " instead of `get_relevant_documents`",
                DeprecationWarning,
            )
            swap = cls.get_relevant_documents
            cls.get_relevant_documents = (  # type: ignore[assignment]
                BaseRetriever.get_relevant_documents
            )
            cls._get_relevant_documents = swap  # type: ignore[assignment]
        if (
            hasattr(cls, "aget_relevant_documents")
            and cls.aget_relevant_documents != BaseRetriever.aget_relevant_documents
        ):
            warnings.warn(
                "Retrievers must implement abstract `_aget_relevant_documents` method"
                " instead of `aget_relevant_documents`",
                DeprecationWarning,
            )
            aswap = cls.aget_relevant_documents
            cls.aget_relevant_documents = (  # type: ignore[assignment]
                BaseRetriever.aget_relevant_documents
            )
            cls._aget_relevant_documents = aswap  # type: ignore[assignment]
        parameters = signature(cls._get_relevant_documents).parameters
        cls._new_arg_supported = parameters.get("run_manager") is not None
        # If a V1 retriever broke the interface and expects additional arguments
        cls._expects_other_args = (
            len(set(parameters.keys()) - {"self", "query", "run_manager"}) > 0
        )

    @abstractmethod
    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        """Get documents relevant to a query.

        Args:
            query: String to find relevant documents for
            run_manager: The callbacks handler to use

        Returns:
            List of relevant documents
        """

    @abstractmethod
    async def _aget_relevant_documents(
        self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun
    ) -> List[Document]:
        """Asynchronously get documents relevant to a query.

        Args:
            query: String to find relevant documents for
            run_manager: The callbacks handler to use

        Returns:
            List of relevant documents
        """
    def get_relevant_documents(
        self,
        query: str,
        *,
        callbacks: Callbacks = None,
        tags: Optional[List[str]] = None,
        metadata: Optional[Dict[str, Any]] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Retrieve documents relevant to a query.

        Args:
            query: string to find relevant documents for
            callbacks: Callback manager or list of callbacks
            tags: Optional list of tags associated with the retriever. Defaults to None.
                These tags will be associated with each call to this retriever,
                and passed as arguments to the handlers defined in `callbacks`.
            metadata: Optional metadata associated with the retriever. Defaults to None.
                This metadata will be associated with each call to this retriever,
                and passed as arguments to the handlers defined in `callbacks`.

        Returns:
            List of relevant documents
        """
        from langchain.callbacks.manager import CallbackManager

        callback_manager = CallbackManager.configure(
            callbacks,
            None,
            verbose=kwargs.get("verbose", False),
            inheritable_tags=tags,
            local_tags=self.tags,
            inheritable_metadata=metadata,
            local_metadata=self.metadata,
        )
        run_manager = callback_manager.on_retriever_start(
            dumpd(self),
            query,
            **kwargs,
        )
        try:
            _kwargs = kwargs if self._expects_other_args else {}
            if self._new_arg_supported:
                result = self._get_relevant_documents(
                    query, run_manager=run_manager, **_kwargs
                )
            else:
                result = self._get_relevant_documents(query, **_kwargs)
        except Exception as e:
            run_manager.on_retriever_error(e)
            raise e
        else:
            run_manager.on_retriever_end(
                result,
                **kwargs,
            )
            return result
    async def aget_relevant_documents(
        self,
        query: str,
        *,
        callbacks: Callbacks = None,
        tags: Optional[List[str]] = None,
        metadata: Optional[Dict[str, Any]] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Asynchronously get documents relevant to a query.

        Args:
            query: string to find relevant documents for
            callbacks: Callback manager or list of callbacks
            tags: Optional list of tags associated with the retriever. Defaults to None.
                These tags will be associated with each call to this retriever,
                and passed as arguments to the handlers defined in `callbacks`.
            metadata: Optional metadata associated with the retriever. Defaults to None.
                This metadata will be associated with each call to this retriever,
                and passed as arguments to the handlers defined in `callbacks`.

        Returns:
            List of relevant documents
        """
        from langchain.callbacks.manager import AsyncCallbackManager

        callback_manager = AsyncCallbackManager.configure(
            callbacks,
            None,
            verbose=kwargs.get("verbose", False),
            inheritable_tags=tags,
            local_tags=self.tags,
            inheritable_metadata=metadata,
            local_metadata=self.metadata,
        )
        run_manager = await callback_manager.on_retriever_start(
            dumpd(self),
            query,
            **kwargs,
        )
        try:
            _kwargs = kwargs if self._expects_other_args else {}
            if self._new_arg_supported:
                result = await self._aget_relevant_documents(
                    query, run_manager=run_manager, **_kwargs
                )
            else:
                result = await self._aget_relevant_documents(query, **_kwargs)
        except Exception as e:
            await run_manager.on_retriever_error(e)
            raise e
        else:
            await run_manager.on_retriever_end(
                result,
                **kwargs,
            )
            return result
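A minimal conforming subclass (a sketch, not from the source; it implements the two abstract hooks and inherits the callback plumbing from the public methods):

    from typing import List

    from langchain.callbacks.manager import (
        AsyncCallbackManagerForRetrieverRun,
        CallbackManagerForRetrieverRun,
    )
    from langchain.schema import BaseRetriever, Document


    class KeywordRetriever(BaseRetriever):
        """Toy retriever: returns stored documents containing the query string."""

        docs: List[Document]
        k: int = 4

        def _get_relevant_documents(
            self, query: str, *, run_manager: CallbackManagerForRetrieverRun
        ) -> List[Document]:
            matches = [d for d in self.docs if query.lower() in d.page_content.lower()]
            return matches[: self.k]

        async def _aget_relevant_documents(
            self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun
        ) -> List[Document]:
            raise NotImplementedError


    retriever = KeywordRetriever(docs=[Document(page_content="LangChain retrievers")])
    print(retriever.get_relevant_documents("langchain"))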
Source code for langchain.schema.messages

from __future__ import annotations

from abc import abstractmethod
from typing import List, Sequence

from pydantic import Field

from langchain.load.serializable import Serializable


def get_buffer_string(
    messages: Sequence[BaseMessage], human_prefix: str = "Human", ai_prefix: str = "AI"
) -> str:
    """Convert sequence of Messages to strings and concatenate them into one string.

    Args:
        messages: Messages to be converted to strings.
        human_prefix: The prefix to prepend to contents of HumanMessages.
        ai_prefix: The prefix to prepend to contents of AIMessages.

    Returns:
        A single string concatenation of all input messages.

    Example:
        .. code-block:: python

            from langchain.schema import AIMessage, HumanMessage

            messages = [
                HumanMessage(content="Hi, how are you?"),
                AIMessage(content="Good, how are you?"),
            ]
            get_buffer_string(messages)
            # -> "Human: Hi, how are you?\nAI: Good, how are you?"
    """
    string_messages = []
    for m in messages:
        if isinstance(m, HumanMessage):
            role = human_prefix
        elif isinstance(m, AIMessage):
            role = ai_prefix
        elif isinstance(m, SystemMessage):
            role = "System"
        elif isinstance(m, FunctionMessage):
            role = "Function"
        elif isinstance(m, ChatMessage):
            role = m.role
        else:
            raise ValueError(f"Got unsupported message type: {m}")
        message = f"{role}: {m.content}"
        if isinstance(m, AIMessage) and "function_call" in m.additional_kwargs:
            message += f"{m.additional_kwargs['function_call']}"
        string_messages.append(message)
    return "\n".join(string_messages)


class BaseMessage(Serializable):
    """The base abstract Message class.

    Messages are the inputs and outputs of ChatModels.
    """

    content: str
    """The string contents of the message."""

    additional_kwargs: dict = Field(default_factory=dict)
    """Any additional information."""

    @property
    @abstractmethod
    def type(self) -> str:
        """Type of the Message, used for serialization."""

    @property
    def lc_serializable(self) -> bool:
        """Whether this class is LangChain serializable."""
        return True


class HumanMessage(BaseMessage):
    """A Message from a human."""

    example: bool = False
    """Whether this Message is being passed in to the model as part of an example
    conversation.
    """

    @property
    def type(self) -> str:
        """Type of the message, used for serialization."""
        return "human"


class AIMessage(BaseMessage):
    """A Message from an AI."""

    example: bool = False
    """Whether this Message is being passed in to the model as part of an example
    conversation.
    """

    @property
    def type(self) -> str:
        """Type of the message, used for serialization."""
        return "ai"


class SystemMessage(BaseMessage):
    """A Message for priming AI behavior, usually passed in as the first of a sequence
    of input messages.
    """

    @property
    def type(self) -> str:
        """Type of the message, used for serialization."""
        return "system"


class FunctionMessage(BaseMessage):
    """A Message for passing the result of executing a function back to a model."""

    name: str
    """The name of the function that was executed."""

    @property
    def type(self) -> str:
        """Type of the message, used for serialization."""
        return "function"
role).\"\"\"\n role: str\n \"\"\"The speaker / role of the Message.\"\"\"\n @property\n def type(self) -> str:\n \"\"\"Type of the message, used for serialization.\"\"\"\n return \"chat\"\ndef _message_to_dict(message: BaseMessage) -> dict:\n return {\"type\": message.type, \"data\": message.dict()}\n[docs]def messages_to_dict(messages: Sequence[BaseMessage]) -> List[dict]:\n \"\"\"Convert a sequence of Messages to a list of dictionaries.\n Args:\n messages: Sequence of messages (as BaseMessages) to convert.\n Returns:\n List of messages as dicts.\n \"\"\"\n return [_message_to_dict(m) for m in messages]\ndef _message_from_dict(message: dict) -> BaseMessage:\n _type = message[\"type\"]\n if _type == \"human\":\n return HumanMessage(**message[\"data\"])\n elif _type == \"ai\":\n return AIMessage(**message[\"data\"])\n elif _type == \"system\":\n return SystemMessage(**message[\"data\"])\n elif _type == \"chat\":\n return ChatMessage(**message[\"data\"])\n else:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/schema/messages.html"} {"id": "b639417a1f42-3", "text": "return ChatMessage(**message[\"data\"])\n else:\n raise ValueError(f\"Got unexpected message type: {_type}\")\n[docs]def messages_from_dict(messages: List[dict]) -> List[BaseMessage]:\n \"\"\"Convert a sequence of messages from dicts to Message objects.\n Args:\n messages: Sequence of messages (as dicts) to convert.\n Returns:\n List of messages (BaseMessages).\n \"\"\"\n return [_message_from_dict(m) for m in messages]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/schema/messages.html"} {"id": "d10d14d3b239-0", "text": "Source code for langchain.schema.output_parser\nfrom __future__ import annotations\nfrom abc import ABC, abstractmethod\nfrom typing import Any, Dict, Generic, List, Optional, TypeVar\nfrom langchain.load.serializable import Serializable\nfrom langchain.schema.output import Generation\nfrom langchain.schema.prompt import PromptValue\nT = TypeVar(\"T\")\n[docs]class BaseLLMOutputParser(Serializable, ABC, Generic[T]):\n \"\"\"Abstract base class for parsing the outputs of a model.\"\"\"\n[docs] @abstractmethod\n def parse_result(self, result: List[Generation]) -> T:\n \"\"\"Parse a list of candidate model Generations into a specific format.\n Args:\n result: A list of Generations to be parsed. The Generations are assumed\n to be different candidate outputs for a single model input.\n Returns:\n Structured output.\n \"\"\"\n[docs]class BaseOutputParser(BaseLLMOutputParser, ABC, Generic[T]):\n \"\"\"Class to parse the output of an LLM call.\n Output parsers help structure language model responses.\n Example:\n .. code-block:: python\n class BooleanOutputParser(BaseOutputParser[bool]):\n true_val: str = \"YES\"\n false_val: str = \"NO\"\n def parse(self, text: str) -> bool:\n cleaned_text = text.strip().upper()\n if cleaned_text not in (self.true_val.upper(), self.false_val.upper()):\n raise OutputParserException(\n f\"BooleanOutputParser expected output value to either be \"\n f\"{self.true_val} or {self.false_val} (case-insensitive). 
\"\n f\"Received {cleaned_text}.\"\n )\n return cleaned_text == self.true_val.upper()\n @property\n def _type(self) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/schema/output_parser.html"} {"id": "d10d14d3b239-1", "text": "@property\n def _type(self) -> str:\n return \"boolean_output_parser\"\n \"\"\" # noqa: E501\n[docs] def parse_result(self, result: List[Generation]) -> T:\n \"\"\"Parse a list of candidate model Generations into a specific format.\n The return value is parsed from only the first Generation in the result, which\n is assumed to be the highest-likelihood Generation.\n Args:\n result: A list of Generations to be parsed. The Generations are assumed\n to be different candidate outputs for a single model input.\n Returns:\n Structured output.\n \"\"\"\n return self.parse(result[0].text)\n[docs] @abstractmethod\n def parse(self, text: str) -> T:\n \"\"\"Parse a single string model output into some structure.\n Args:\n text: String output of language model.\n Returns:\n Structured output.\n \"\"\"\n # TODO: rename 'completion' -> 'text'.\n[docs] def parse_with_prompt(self, completion: str, prompt: PromptValue) -> Any:\n \"\"\"Parse the output of an LLM call with the input prompt for context.\n The prompt is largely provided in the event the OutputParser wants\n to retry or fix the output in some way, and needs information from\n the prompt to do so.\n Args:\n completion: String output of language model.\n prompt: Input PromptValue.\n Returns:\n Structured output\n \"\"\"\n return self.parse(completion)\n[docs] def get_format_instructions(self) -> str:\n \"\"\"Instructions on how the LLM output should be formatted.\"\"\"\n raise NotImplementedError\n @property\n def _type(self) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/schema/output_parser.html"} {"id": "d10d14d3b239-2", "text": "raise NotImplementedError\n @property\n def _type(self) -> str:\n \"\"\"Return the output parser type for serialization.\"\"\"\n raise NotImplementedError(\n f\"_type property is not implemented in class {self.__class__.__name__}.\"\n \" This is required for serialization.\"\n )\n[docs] def dict(self, **kwargs: Any) -> Dict:\n \"\"\"Return dictionary representation of output parser.\"\"\"\n output_parser_dict = super().dict(**kwargs)\n output_parser_dict[\"_type\"] = self._type\n return output_parser_dict\n[docs]class NoOpOutputParser(BaseOutputParser[str]):\n \"\"\"'No operation' OutputParser that returns the text as is.\"\"\"\n @property\n def lc_serializable(self) -> bool:\n \"\"\"Whether the class LangChain serializable.\"\"\"\n return True\n @property\n def _type(self) -> str:\n \"\"\"Return the output parser type for serialization.\"\"\"\n return \"default\"\n[docs] def parse(self, text: str) -> str:\n \"\"\"Returns the input text with no changes.\"\"\"\n return text\n[docs]class OutputParserException(ValueError):\n \"\"\"Exception that output parsers should raise to signify a parsing error.\n This exists to differentiate parsing errors from other code or execution errors\n that also may arise inside the output parser. 
class OutputParserException(ValueError):
    """Exception that output parsers should raise to signify a parsing error.

    This exists to differentiate parsing errors from other code or execution errors
    that also may arise inside the output parser. OutputParserExceptions will be
    available to catch and handle in ways to fix the parsing error, while other
    errors will be raised.

    Args:
        error: The error that's being re-raised or an error message.
        observation: String explanation of error which can be passed to a
            model to try and remediate the issue.
        llm_output: String model output which is error-ing.
        send_to_llm: Whether to send the observation and llm_output back to an Agent
            after an OutputParserException has been raised. This gives the underlying
            model driving the agent the context that the previous output was improperly
            structured, in the hopes that it will update the output to the correct
            format.
    """

    def __init__(
        self,
        error: Any,
        observation: Optional[str] = None,
        llm_output: Optional[str] = None,
        send_to_llm: bool = False,
    ):
        super(OutputParserException, self).__init__(error)
        if send_to_llm:
            if observation is None or llm_output is None:
                raise ValueError(
                    "Arguments 'observation' & 'llm_output'"
                    " are required if 'send_to_llm' is True"
                )
        self.observation = observation
        self.llm_output = llm_output
        self.send_to_llm = send_to_llm
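A small concrete parser in the same spirit as the BooleanOutputParser example above (a sketch):

    from typing import List

    from langchain.schema import BaseOutputParser, OutputParserException


    class CommaSeparatedListOutputParser(BaseOutputParser[List[str]]):
        """Parse a comma-separated model output into a list of strings."""

        def parse(self, text: str) -> List[str]:
            if not text.strip():
                raise OutputParserException("Received empty model output.")
            return [part.strip() for part in text.split(",")]


    parser = CommaSeparatedListOutputParser()
    assert parser.parse("red, green, blue") == ["red", "green", "blue"]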
Source code for langchain.schema.language_model

from __future__ import annotations

from abc import ABC, abstractmethod
from typing import TYPE_CHECKING, Any, List, Optional, Sequence, Set

from langchain.load.serializable import Serializable
from langchain.schema.messages import BaseMessage, get_buffer_string
from langchain.schema.output import LLMResult
from langchain.schema.prompt import PromptValue

if TYPE_CHECKING:
    from langchain.callbacks.manager import Callbacks


def _get_token_ids_default_method(text: str) -> List[int]:
    """Encode the text into token IDs."""
    # TODO: this method may not be exact.
    # TODO: this method may differ based on model (e.g. codex).
    try:
        from transformers import GPT2TokenizerFast
    except ImportError:
        raise ImportError(
            "Could not import transformers python package. "
            "This is needed in order to calculate get_token_ids. "
            "Please install it with `pip install transformers`."
        )
    # create a GPT-2 tokenizer instance
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    # tokenize the text using the GPT-2 tokenizer
    return tokenizer.encode(text)


class BaseLanguageModel(Serializable, ABC):
    """Abstract base class for interfacing with language models.

    All language model wrappers inherit from BaseLanguageModel.

    Exposes three main methods:
    - generate_prompt: generate language model outputs for a sequence of prompt
        values. A prompt value is a model input that can be converted to any language
        model input format (string or messages).
    - predict: pass in a single string to a language model and return a string
        prediction.
    - predict_messages: pass in a sequence of BaseMessages (corresponding to a single
        model call) to a language model and return a BaseMessage prediction.

    Each of these has an equivalent asynchronous method.
    """

    @abstractmethod
    def generate_prompt(
        self,
        prompts: List[PromptValue],
        stop: Optional[List[str]] = None,
        callbacks: Callbacks = None,
        **kwargs: Any,
    ) -> LLMResult:
        """Pass a sequence of prompts to the model and return model generations.

        This method should make use of batched calls for models that expose a batched
        API.

        Use this method when you want to:
            1. take advantage of batched calls,
            2. need more output from the model than just the top generated value,
            3. are building chains that are agnostic to the underlying language model
                type (e.g., pure text completion models vs chat models).

        Args:
            prompts: List of PromptValues. A PromptValue is an object that can be
                converted to match the format of any language model (string for pure
                text generation models and BaseMessages for chat models).
            stop: Stop words to use when generating. Model output is cut off at the
                first occurrence of any of these substrings.
            callbacks: Callbacks to pass through. Used for executing additional
                functionality, such as logging or streaming, throughout generation.
            **kwargs: Arbitrary additional keyword arguments. These are usually passed
                to the model provider API call.

        Returns:
            An LLMResult, which contains a list of candidate Generations for each input
                prompt and additional model provider-specific output.
        """
    @abstractmethod
    async def agenerate_prompt(
        self,
        prompts: List[PromptValue],
        stop: Optional[List[str]] = None,
        callbacks: Callbacks = None,
        **kwargs: Any,
    ) -> LLMResult:
        """Asynchronously pass a sequence of prompts and return model generations.

        This method should make use of batched calls for models that expose a batched
        API.

        Use this method when you want to:
            1. take advantage of batched calls,
            2. need more output from the model than just the top generated value,
            3. are building chains that are agnostic to the underlying language model
                type (e.g., pure text completion models vs chat models).

        Args:
            prompts: List of PromptValues. A PromptValue is an object that can be
                converted to match the format of any language model (string for pure
                text generation models and BaseMessages for chat models).
            stop: Stop words to use when generating. Model output is cut off at the
                first occurrence of any of these substrings.
            callbacks: Callbacks to pass through. Used for executing additional
                functionality, such as logging or streaming, throughout generation.
            **kwargs: Arbitrary additional keyword arguments. These are usually passed
                to the model provider API call.

        Returns:
            An LLMResult, which contains a list of candidate Generations for each input
                prompt and additional model provider-specific output.
        """

    @abstractmethod
    def predict(
        self, text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any
    ) -> str:
        """Pass a single string input to the model and return a string prediction.

        Use this method when passing in raw text. If you want to pass in specific
        types of chat messages, use predict_messages.

        Args:
            text: String input to pass to the model.
            stop: Stop words to use when generating. Model output is cut off at the
                first occurrence of any of these substrings.
            **kwargs: Arbitrary additional keyword arguments. These are usually passed
                to the model provider API call.

        Returns:
            Top model prediction as a string.
        """

    @abstractmethod
    def predict_messages(
        self,
        messages: List[BaseMessage],
        *,
        stop: Optional[Sequence[str]] = None,
        **kwargs: Any,
    ) -> BaseMessage:
        """Pass a message sequence to the model and return a message prediction.

        Use this method when passing in chat messages. If you want to pass in raw text,
        use predict.

        Args:
            messages: A sequence of chat messages corresponding to a single model input.
            stop: Stop words to use when generating. Model output is cut off at the
                first occurrence of any of these substrings.
            **kwargs: Arbitrary additional keyword arguments. These are usually passed
                to the model provider API call.

        Returns:
            Top model prediction as a message.
        """

    @abstractmethod
    async def apredict(
        self, text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any
    ) -> str:
        """Asynchronously pass a string to the model and return a string prediction.

        Use this method when calling pure text generation models and only the top
        candidate generation is needed.

        Args:
            text: String input to pass to the model.
            stop: Stop words to use when generating. Model output is cut off at the
                first occurrence of any of these substrings.
            **kwargs: Arbitrary additional keyword arguments. These are usually passed
                to the model provider API call.

        Returns:
            Top model prediction as a string.
        """
    @abstractmethod
    async def apredict_messages(
        self,
        messages: List[BaseMessage],
        *,
        stop: Optional[Sequence[str]] = None,
        **kwargs: Any,
    ) -> BaseMessage:
        """Asynchronously pass messages to the model and return a message prediction.

        Use this method when calling chat models and only the top
        candidate generation is needed.

        Args:
            messages: A sequence of chat messages corresponding to a single model input.
            stop: Stop words to use when generating. Model output is cut off at the
                first occurrence of any of these substrings.
            **kwargs: Arbitrary additional keyword arguments. These are usually passed
                to the model provider API call.

        Returns:
            Top model prediction as a message.
        """

    def get_token_ids(self, text: str) -> List[int]:
        """Return the ordered ids of the tokens in a text.

        Args:
            text: The string input to tokenize.

        Returns:
            A list of ids corresponding to the tokens in the text, in order they occur
                in the text.
        """
        return _get_token_ids_default_method(text)

    def get_num_tokens(self, text: str) -> int:
        """Get the number of tokens present in the text.

        Useful for checking if an input will fit in a model's context window.

        Args:
            text: The string input to tokenize.

        Returns:
            The integer number of tokens in the text.
        """
        return len(self.get_token_ids(text))

    def get_num_tokens_from_messages(self, messages: List[BaseMessage]) -> int:
        """Get the number of tokens in the messages.

        Useful for checking if an input will fit in a model's context window.

        Args:
            messages: The message inputs to tokenize.

        Returns:
            The sum of the number of tokens across the messages.
        """
        return sum([self.get_num_tokens(get_buffer_string([m])) for m in messages])

    @classmethod
    def _all_required_field_names(cls) -> Set:
        all_required_field_names = set()
        for field in cls.__fields__.values():
            all_required_field_names.add(field.name)
            if field.has_alias:
                all_required_field_names.add(field.alias)
        return all_required_field_names


Source code for langchain.chains.prompt_selector

from abc import ABC, abstractmethod
from typing import Callable, List, Tuple

from pydantic import BaseModel, Field

from langchain.chat_models.base import BaseChatModel
from langchain.llms.base import BaseLLM
from langchain.schema import BasePromptTemplate
from langchain.schema.language_model import BaseLanguageModel


class BasePromptSelector(BaseModel, ABC):
    @abstractmethod
    def get_prompt(self, llm: BaseLanguageModel) -> BasePromptTemplate:
        """Get default prompt for a language model."""


class ConditionalPromptSelector(BasePromptSelector):
    """Prompt collection that goes through conditionals."""

    default_prompt: BasePromptTemplate
    conditionals: List[
        Tuple[Callable[[BaseLanguageModel], bool], BasePromptTemplate]
    ] = Field(default_factory=list)

    def get_prompt(self, llm: BaseLanguageModel) -> BasePromptTemplate:
        for condition, prompt in self.conditionals:
            if condition(llm):
                return prompt
        return self.default_prompt


def is_llm(llm: BaseLanguageModel) -> bool:
    """Check if the language model is a LLM.

    Args:
        llm: Language model to check.

    Returns:
        True if the language model is a BaseLLM model, False otherwise.
    """
    return isinstance(llm, BaseLLM)


def is_chat_model(llm: BaseLanguageModel) -> bool:
    """Check if the language model is a chat model.

    Args:
        llm: Language model to check.

    Returns:
        True if the language model is a BaseChatModel model, False otherwise.
    """
    return isinstance(llm, BaseChatModel)
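A usage sketch for the selector (illustrative prompts; the chat variant is chosen whenever is_chat_model returns True for the model passed in):

    from langchain.chains.prompt_selector import ConditionalPromptSelector, is_chat_model
    from langchain.prompts import PromptTemplate
    from langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate

    default_prompt = PromptTemplate.from_template("Summarize: {text}")
    chat_prompt = ChatPromptTemplate.from_messages(
        [HumanMessagePromptTemplate.from_template("Summarize: {text}")]
    )
    selector = ConditionalPromptSelector(
        default_prompt=default_prompt,
        conditionals=[(is_chat_model, chat_prompt)],
    )
    # selector.get_prompt(llm) returns chat_prompt for chat models, else default_prompt.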
just formats a prompt and calls an LLM.\"\"\"\nfrom __future__ import annotations\nimport warnings\nfrom typing import Any, Dict, List, Optional, Sequence, Tuple, Union\nfrom pydantic import Extra, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManager,\n AsyncCallbackManagerForChainRun,\n CallbackManager,\n CallbackManagerForChainRun,\n Callbacks,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.input import get_colored_text\nfrom langchain.load.dump import dumpd\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.schema import (\n BaseLLMOutputParser,\n BasePromptTemplate,\n LLMResult,\n NoOpOutputParser,\n PromptValue,\n)\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]class LLMChain(Chain):\n \"\"\"Chain to run queries against LLMs.\n Example:\n .. code-block:: python\n from langchain import LLMChain, OpenAI, PromptTemplate\n prompt_template = \"Tell me a {adjective} joke\"\n prompt = PromptTemplate(\n input_variables=[\"adjective\"], template=prompt_template\n )\n llm = LLMChain(llm=OpenAI(), prompt=prompt)\n \"\"\"\n @property\n def lc_serializable(self) -> bool:\n return True\n prompt: BasePromptTemplate\n \"\"\"Prompt object to use.\"\"\"\n llm: BaseLanguageModel\n \"\"\"Language model to call.\"\"\"\n output_key: str = \"text\" #: :meta private:\n output_parser: BaseLLMOutputParser = Field(default_factory=NoOpOutputParser)\n \"\"\"Output parser to use.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm.html"} {"id": "f044d180e9cd-1", "text": "\"\"\"Output parser to use.\n Defaults to one that takes the most likely string but does not change it \n otherwise.\"\"\"\n return_final_only: bool = True\n \"\"\"Whether to return only the final parsed result. 
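Continuing the class docstring example above (the variable is renamed to `chain` for clarity), a minimal sketch of invoking the chain; the prompt expects an `adjective` key. Note that `run` returns only the parsed value under `output_key`, and that attaching an `output_parser` at construction time is the supported replacement for the deprecated `predict_and_parse` shown further below:

.. code-block:: python

    chain = LLMChain(llm=OpenAI(), prompt=prompt)

    chain.run(adjective="funny")   # -> "Why did the ..." (string output)
    chain({"adjective": "funny"})  # -> {"adjective": "funny", "text": "..."}

    # One-liner construction from a bare template string:
    joke_chain = LLMChain.from_string(llm=OpenAI(), template="Tell me a {adjective} joke")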
Defaults to True.\n If false, will return a bunch of extra information about the generation.\"\"\"\n llm_kwargs: dict = Field(default_factory=dict)\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Will be whatever keys the prompt expects.\n :meta private:\n \"\"\"\n return self.prompt.input_variables\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Will always return text key.\n :meta private:\n \"\"\"\n if self.return_final_only:\n return [self.output_key]\n else:\n return [self.output_key, \"full_generation\"]\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n response = self.generate([inputs], run_manager=run_manager)\n return self.create_outputs(response)[0]\n[docs] def generate(\n self,\n input_list: List[Dict[str, Any]],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> LLMResult:\n \"\"\"Generate LLM result from inputs.\"\"\"\n prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)\n return self.llm.generate_prompt(\n prompts,\n stop,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm.html"} {"id": "f044d180e9cd-2", "text": "return self.llm.generate_prompt(\n prompts,\n stop,\n callbacks=run_manager.get_child() if run_manager else None,\n **self.llm_kwargs,\n )\n[docs] async def agenerate(\n self,\n input_list: List[Dict[str, Any]],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> LLMResult:\n \"\"\"Generate LLM result from inputs.\"\"\"\n prompts, stop = await self.aprep_prompts(input_list, run_manager=run_manager)\n return await self.llm.agenerate_prompt(\n prompts,\n stop,\n callbacks=run_manager.get_child() if run_manager else None,\n **self.llm_kwargs,\n )\n[docs] def prep_prompts(\n self,\n input_list: List[Dict[str, Any]],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Tuple[List[PromptValue], Optional[List[str]]]:\n \"\"\"Prepare prompts from inputs.\"\"\"\n stop = None\n if \"stop\" in input_list[0]:\n stop = input_list[0][\"stop\"]\n prompts = []\n for inputs in input_list:\n selected_inputs = {k: inputs[k] for k in self.prompt.input_variables}\n prompt = self.prompt.format_prompt(**selected_inputs)\n _colored_text = get_colored_text(prompt.to_string(), \"green\")\n _text = \"Prompt after formatting:\\n\" + _colored_text\n if run_manager:\n run_manager.on_text(_text, end=\"\\n\", verbose=self.verbose)\n if \"stop\" in inputs and inputs[\"stop\"] != stop:\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm.html"} {"id": "f044d180e9cd-3", "text": "raise ValueError(\n \"If `stop` is present in any inputs, should be present in all.\"\n )\n prompts.append(prompt)\n return prompts, stop\n[docs] async def aprep_prompts(\n self,\n input_list: List[Dict[str, Any]],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Tuple[List[PromptValue], Optional[List[str]]]:\n \"\"\"Prepare prompts from inputs.\"\"\"\n stop = None\n if \"stop\" in input_list[0]:\n stop = input_list[0][\"stop\"]\n prompts = []\n for inputs in input_list:\n selected_inputs = {k: inputs[k] for k in self.prompt.input_variables}\n prompt = self.prompt.format_prompt(**selected_inputs)\n _colored_text = get_colored_text(prompt.to_string(), \"green\")\n _text = \"Prompt after formatting:\\n\" + _colored_text\n 
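For illustration, a sketch of batching several inputs through `generate` (which returns the raw `LLMResult`) and `apply` (which returns parsed output dicts), continuing the joke-chain example above:

.. code-block:: python

    result = chain.generate([{"adjective": "funny"}, {"adjective": "dry"}])
    # LLMResult: one list of candidate Generations per input dictionary.
    for candidates in result.generations:
        print(candidates[0].text)

    outputs = chain.apply([{"adjective": "funny"}, {"adjective": "dry"}])
    # -> [{"text": "..."}, {"text": "..."}]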
if run_manager:\n await run_manager.on_text(_text, end=\"\\n\", verbose=self.verbose)\n if \"stop\" in inputs and inputs[\"stop\"] != stop:\n raise ValueError(\n \"If `stop` is present in any inputs, should be present in all.\"\n )\n prompts.append(prompt)\n return prompts, stop\n[docs] def apply(\n self, input_list: List[Dict[str, Any]], callbacks: Callbacks = None\n ) -> List[Dict[str, str]]:\n \"\"\"Utilize the LLM generate method for speed gains.\"\"\"\n callback_manager = CallbackManager.configure(\n callbacks, self.callbacks, self.verbose\n )\n run_manager = callback_manager.on_chain_start(\n dumpd(self),\n {\"input_list\": input_list},\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm.html"} {"id": "f044d180e9cd-4", "text": "dumpd(self),\n {\"input_list\": input_list},\n )\n try:\n response = self.generate(input_list, run_manager=run_manager)\n except (KeyboardInterrupt, Exception) as e:\n run_manager.on_chain_error(e)\n raise e\n outputs = self.create_outputs(response)\n run_manager.on_chain_end({\"outputs\": outputs})\n return outputs\n[docs] async def aapply(\n self, input_list: List[Dict[str, Any]], callbacks: Callbacks = None\n ) -> List[Dict[str, str]]:\n \"\"\"Utilize the LLM generate method for speed gains.\"\"\"\n callback_manager = AsyncCallbackManager.configure(\n callbacks, self.callbacks, self.verbose\n )\n run_manager = await callback_manager.on_chain_start(\n dumpd(self),\n {\"input_list\": input_list},\n )\n try:\n response = await self.agenerate(input_list, run_manager=run_manager)\n except (KeyboardInterrupt, Exception) as e:\n await run_manager.on_chain_error(e)\n raise e\n outputs = self.create_outputs(response)\n await run_manager.on_chain_end({\"outputs\": outputs})\n return outputs\n @property\n def _run_output_key(self) -> str:\n return self.output_key\n[docs] def create_outputs(self, llm_result: LLMResult) -> List[Dict[str, Any]]:\n \"\"\"Create outputs from response.\"\"\"\n result = [\n # Get the text of the top generated string.\n {\n self.output_key: self.output_parser.parse_result(generation),\n \"full_generation\": generation,\n }\n for generation in llm_result.generations\n ]\n if self.return_final_only:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm.html"} {"id": "f044d180e9cd-5", "text": "]\n if self.return_final_only:\n result = [{self.output_key: r[self.output_key]} for r in result]\n return result\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n response = await self.agenerate([inputs], run_manager=run_manager)\n return self.create_outputs(response)[0]\n[docs] def predict(self, callbacks: Callbacks = None, **kwargs: Any) -> str:\n \"\"\"Format prompt with kwargs and pass to LLM.\n Args:\n callbacks: Callbacks to pass to LLMChain\n **kwargs: Keys to pass to prompt template.\n Returns:\n Completion from LLM.\n Example:\n .. code-block:: python\n completion = llm.predict(adjective=\"funny\")\n \"\"\"\n return self(kwargs, callbacks=callbacks)[self.output_key]\n[docs] async def apredict(self, callbacks: Callbacks = None, **kwargs: Any) -> str:\n \"\"\"Format prompt with kwargs and pass to LLM.\n Args:\n callbacks: Callbacks to pass to LLMChain\n **kwargs: Keys to pass to prompt template.\n Returns:\n Completion from LLM.\n Example:\n .. 
code-block:: python\n completion = llm.predict(adjective=\"funny\")\n \"\"\"\n return (await self.acall(kwargs, callbacks=callbacks))[self.output_key]\n[docs] def predict_and_parse(\n self, callbacks: Callbacks = None, **kwargs: Any\n ) -> Union[str, List[str], Dict[str, Any]]:\n \"\"\"Call predict and then parse the results.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm.html"} {"id": "f044d180e9cd-6", "text": "\"\"\"Call predict and then parse the results.\"\"\"\n warnings.warn(\n \"The predict_and_parse method is deprecated, \"\n \"instead pass an output parser directly to LLMChain.\"\n )\n result = self.predict(callbacks=callbacks, **kwargs)\n if self.prompt.output_parser is not None:\n return self.prompt.output_parser.parse(result)\n else:\n return result\n[docs] async def apredict_and_parse(\n self, callbacks: Callbacks = None, **kwargs: Any\n ) -> Union[str, List[str], Dict[str, str]]:\n \"\"\"Call apredict and then parse the results.\"\"\"\n warnings.warn(\n \"The apredict_and_parse method is deprecated, \"\n \"instead pass an output parser directly to LLMChain.\"\n )\n result = await self.apredict(callbacks=callbacks, **kwargs)\n if self.prompt.output_parser is not None:\n return self.prompt.output_parser.parse(result)\n else:\n return result\n[docs] def apply_and_parse(\n self, input_list: List[Dict[str, Any]], callbacks: Callbacks = None\n ) -> Sequence[Union[str, List[str], Dict[str, str]]]:\n \"\"\"Call apply and then parse the results.\"\"\"\n warnings.warn(\n \"The apply_and_parse method is deprecated, \"\n \"instead pass an output parser directly to LLMChain.\"\n )\n result = self.apply(input_list, callbacks=callbacks)\n return self._parse_generation(result)\n def _parse_generation(\n self, generation: List[Dict[str, str]]\n ) -> Sequence[Union[str, List[str], Dict[str, str]]]:\n if self.prompt.output_parser is not None:\n return [", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm.html"} {"id": "f044d180e9cd-7", "text": "if self.prompt.output_parser is not None:\n return [\n self.prompt.output_parser.parse(res[self.output_key])\n for res in generation\n ]\n else:\n return generation\n[docs] async def aapply_and_parse(\n self, input_list: List[Dict[str, Any]], callbacks: Callbacks = None\n ) -> Sequence[Union[str, List[str], Dict[str, str]]]:\n \"\"\"Call apply and then parse the results.\"\"\"\n warnings.warn(\n \"The aapply_and_parse method is deprecated, \"\n \"instead pass an output parser directly to LLMChain.\"\n )\n result = await self.aapply(input_list, callbacks=callbacks)\n return self._parse_generation(result)\n @property\n def _chain_type(self) -> str:\n return \"llm_chain\"\n[docs] @classmethod\n def from_string(cls, llm: BaseLanguageModel, template: str) -> LLMChain:\n \"\"\"Create LLMChain from LLM and template.\"\"\"\n prompt_template = PromptTemplate.from_template(template)\n return cls(llm=llm, prompt=prompt_template)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm.html"} {"id": "c5831400bed0-0", "text": "Source code for langchain.chains.base\n\"\"\"Base interface that all chains should implement.\"\"\"\nimport inspect\nimport json\nimport logging\nimport warnings\nfrom abc import ABC, abstractmethod\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Optional, Union\nimport yaml\nfrom pydantic import Field, root_validator, validator\nimport langchain\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom 
langchain.callbacks.manager import (\n AsyncCallbackManager,\n AsyncCallbackManagerForChainRun,\n CallbackManager,\n CallbackManagerForChainRun,\n Callbacks,\n)\nfrom langchain.load.dump import dumpd\nfrom langchain.load.serializable import Serializable\nfrom langchain.schema import RUN_KEY, BaseMemory, RunInfo\nlogger = logging.getLogger(__name__)\ndef _get_verbosity() -> bool:\n return langchain.verbose\n[docs]class Chain(Serializable, ABC):\n \"\"\"Abstract base class for creating structured sequences of calls to components.\n Chains should be used to encode a sequence of calls to components like\n models, document retrievers, other chains, etc., and provide a simple interface\n to this sequence.\n The Chain interface makes it easy to create apps that are:\n - Stateful: add Memory to any Chain to give it state,\n - Observable: pass Callbacks to a Chain to execute additional functionality,\n like logging, outside the main sequence of component calls,\n - Composable: the Chain API is flexible enough that it is easy to combine\n Chains with other components, including other Chains.\n The main methods exposed by chains are:\n - `__call__`: Chains are callable. The `__call__` method is the primary way to\n execute a Chain. This takes inputs as a dictionary and returns a\n dictionary output.\n - `run`: A convenience method that takes inputs as args/kwargs and returns the\n output as a string. This method can only be used for a subset of chains and\n cannot return as rich an output as `__call__`.\n \"\"\"\n memory: Optional[BaseMemory] = None\n \"\"\"Optional memory object. Defaults to None.\n Memory is a class that gets called at the start\n and at the end of every chain. At the start, memory loads variables and passes\n them along in the chain. At the end, it saves any returned variables.\n There are many different types of memory - please see memory docs\n for the full catalog.\"\"\"\n callbacks: Callbacks = Field(default=None, exclude=True)\n \"\"\"Optional list of callback handlers (or callback manager). Defaults to None.\n Callback handlers are called throughout the lifecycle of a call to a chain,\n starting with on_chain_start, ending with on_chain_end or on_chain_error.\n Each custom chain can optionally call additional callback methods; see Callback docs\n for full details.\"\"\"\n callback_manager: Optional[BaseCallbackManager] = Field(default=None, exclude=True)\n \"\"\"Deprecated, use `callbacks` instead.\"\"\"\n verbose: bool = Field(default_factory=_get_verbosity)\n \"\"\"Whether or not to run in verbose mode. In verbose mode, some intermediate logs\n will be printed to the console. Defaults to the `langchain.verbose` value.\"\"\"\n tags: Optional[List[str]] = None\n \"\"\"Optional list of tags associated with the chain. Defaults to None.\n These tags will be associated with each call to this chain,\n and passed as arguments to the handlers defined in `callbacks`.\n You can use these to, e.g., identify a specific instance of a chain with its use case.\n \"\"\"\n metadata: Optional[Dict[str, Any]] = None\n \"\"\"Optional metadata associated with the chain. 
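A sketch of those fields in practice, wiring memory, callbacks, tags, and verbosity into a concrete chain (assuming `ConversationChain` and an OpenAI key are available; the tag value is illustrative):

.. code-block:: python

    from langchain.callbacks import StdOutCallbackHandler
    from langchain.chains import ConversationChain
    from langchain.llms import OpenAI
    from langchain.memory import ConversationBufferMemory

    chain = ConversationChain(
        llm=OpenAI(),
        memory=ConversationBufferMemory(),    # stateful across calls
        callbacks=[StdOutCallbackHandler()],  # observable lifecycle events
        tags=["docs-demo"],
        verbose=True,
    )
    chain.run("Hi there!")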
Defaults to None\n This metadata will be associated with each call to this chain,\n and passed as arguments to the handlers defined in `callbacks`.\n You can use these to eg identify a specific instance of a chain with its use case.\n \"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n @property\n def _chain_type(self) -> str:\n raise NotImplementedError(\"Saving not supported for this chain type.\")\n[docs] @root_validator()\n def raise_callback_manager_deprecation(cls, values: Dict) -> Dict:\n \"\"\"Raise deprecation warning if callback_manager is used.\"\"\"\n if values.get(\"callback_manager\") is not None:\n warnings.warn(\n \"callback_manager is deprecated. Please use callbacks instead.\",\n DeprecationWarning,\n )\n values[\"callbacks\"] = values.pop(\"callback_manager\", None)\n return values\n[docs] @validator(\"verbose\", pre=True, always=True)\n def set_verbose(cls, verbose: Optional[bool]) -> bool:\n \"\"\"Set the chain verbosity.\n Defaults to the global setting if not specified by the user.\n \"\"\"\n if verbose is None:\n return _get_verbosity()\n else:\n return verbose\n @property\n @abstractmethod\n def input_keys(self) -> List[str]:\n \"\"\"Return the keys expected to be in the chain input.\"\"\"\n @property\n @abstractmethod\n def output_keys(self) -> List[str]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/base.html"} {"id": "c5831400bed0-3", "text": "@property\n @abstractmethod\n def output_keys(self) -> List[str]:\n \"\"\"Return the keys expected to be in the chain output.\"\"\"\n def _validate_inputs(self, inputs: Dict[str, Any]) -> None:\n \"\"\"Check that all inputs are present.\"\"\"\n missing_keys = set(self.input_keys).difference(inputs)\n if missing_keys:\n raise ValueError(f\"Missing some input keys: {missing_keys}\")\n def _validate_outputs(self, outputs: Dict[str, Any]) -> None:\n missing_keys = set(self.output_keys).difference(outputs)\n if missing_keys:\n raise ValueError(f\"Missing some output keys: {missing_keys}\")\n @abstractmethod\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"Execute the chain.\n This is a private method that is not user-facing. It is only called within\n `Chain.__call__`, which is the user-facing wrapper method that handles\n callbacks configuration and some input/output processing.\n Args:\n inputs: A dict of named inputs to the chain. Assumed to contain all inputs\n specified in `Chain.input_keys`, including any inputs added by memory.\n run_manager: The callbacks manager that contains the callback handlers for\n this run of the chain.\n Returns:\n A dict of named outputs. Should contain all outputs specified in\n `Chain.output_keys`.\n \"\"\"\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"Asynchronously execute the chain.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/base.html"} {"id": "c5831400bed0-4", "text": ") -> Dict[str, Any]:\n \"\"\"Asynchronously execute the chain.\n This is a private method that is not user-facing. It is only called within\n `Chain.acall`, which is the user-facing wrapper method that handles\n callbacks configuration and some input/output processing.\n Args:\n inputs: A dict of named inputs to the chain. 
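To implement a custom chain, subclass `Chain`, declare the input/output keys, and override `_call` (and optionally `_acall`). A toy sketch:

.. code-block:: python

    from typing import Any, Dict, List, Optional

    from langchain.callbacks.manager import CallbackManagerForChainRun
    from langchain.chains.base import Chain

    class ConcatChain(Chain):
        """Toy chain that concatenates its two string inputs."""

        @property
        def input_keys(self) -> List[str]:
            return ["a", "b"]

        @property
        def output_keys(self) -> List[str]:
            return ["ab"]

        def _call(
            self,
            inputs: Dict[str, Any],
            run_manager: Optional[CallbackManagerForChainRun] = None,
        ) -> Dict[str, Any]:
            return {"ab": inputs["a"] + inputs["b"]}

    ConcatChain()({"a": "foo", "b": "bar"})
    # -> {"a": "foo", "b": "bar", "ab": "foobar"}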
Assumed to contain all inputs\n specified in `Chain.input_keys`, including any inputs added by memory.\n run_manager: The callbacks manager that contains the callback handlers for\n this run of the chain.\n Returns:\n A dict of named outputs. Should contain all outputs specified in\n `Chain.output_keys`.\n \"\"\"\n raise NotImplementedError(\"Async call not supported for this chain type.\")\n[docs] def __call__(\n self,\n inputs: Union[Dict[str, Any], Any],\n return_only_outputs: bool = False,\n callbacks: Callbacks = None,\n *,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n include_run_info: bool = False,\n ) -> Dict[str, Any]:\n \"\"\"Execute the chain.\n Args:\n inputs: Dictionary of inputs, or single input if chain expects\n only one param. Should contain all inputs specified in\n `Chain.input_keys` except for inputs that will be set by the chain's\n memory.\n return_only_outputs: Whether to return only outputs in the\n response. If True, only new keys generated by this chain will be\n returned. If False, both input keys and new keys generated by this\n chain will be returned. Defaults to False.\n callbacks: Callbacks to use for this chain run. These will be called in", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/base.html"} {"id": "c5831400bed0-5", "text": "callbacks: Callbacks to use for this chain run. These will be called in\n addition to callbacks passed to the chain during construction, but only\n these runtime callbacks will propagate to calls to other objects.\n tags: List of string tags to pass to all callbacks. These will be passed in\n addition to tags passed to the chain during construction, but only\n these runtime tags will propagate to calls to other objects.\n metadata: Optional metadata associated with the chain. Defaults to None\n include_run_info: Whether to include run info in the response. Defaults\n to False.\n Returns:\n A dict of named outputs. Should contain all outputs specified in\n `Chain.output_keys`.\n \"\"\"\n inputs = self.prep_inputs(inputs)\n callback_manager = CallbackManager.configure(\n callbacks,\n self.callbacks,\n self.verbose,\n tags,\n self.tags,\n metadata,\n self.metadata,\n )\n new_arg_supported = inspect.signature(self._call).parameters.get(\"run_manager\")\n run_manager = callback_manager.on_chain_start(\n dumpd(self),\n inputs,\n )\n try:\n outputs = (\n self._call(inputs, run_manager=run_manager)\n if new_arg_supported\n else self._call(inputs)\n )\n except (KeyboardInterrupt, Exception) as e:\n run_manager.on_chain_error(e)\n raise e\n run_manager.on_chain_end(outputs)\n final_outputs: Dict[str, Any] = self.prep_outputs(\n inputs, outputs, return_only_outputs\n )\n if include_run_info:\n final_outputs[RUN_KEY] = RunInfo(run_id=run_manager.run_id)\n return final_outputs\n[docs] async def acall(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/base.html"} {"id": "c5831400bed0-6", "text": "return final_outputs\n[docs] async def acall(\n self,\n inputs: Union[Dict[str, Any], Any],\n return_only_outputs: bool = False,\n callbacks: Callbacks = None,\n *,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n include_run_info: bool = False,\n ) -> Dict[str, Any]:\n \"\"\"Asynchronously execute the chain.\n Args:\n inputs: Dictionary of inputs, or single input if chain expects\n only one param. 
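The `return_only_outputs` and `include_run_info` flags change the shape of the returned dict; for instance, continuing the joke-chain example:

.. code-block:: python

    from langchain.schema import RUN_KEY

    out = chain({"adjective": "funny"}, return_only_outputs=True)
    # -> {"text": "..."} (input keys omitted)

    out = chain({"adjective": "funny"}, include_run_info=True)
    run_id = out[RUN_KEY].run_id  # RunInfo for this execution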
Should contain all inputs specified in\n `Chain.input_keys` except for inputs that will be set by the chain's\n memory.\n return_only_outputs: Whether to return only outputs in the\n response. If True, only new keys generated by this chain will be\n returned. If False, both input keys and new keys generated by this\n chain will be returned. Defaults to False.\n callbacks: Callbacks to use for this chain run. These will be called in\n addition to callbacks passed to the chain during construction, but only\n these runtime callbacks will propagate to calls to other objects.\n tags: List of string tags to pass to all callbacks. These will be passed in\n addition to tags passed to the chain during construction, but only\n these runtime tags will propagate to calls to other objects.\n metadata: Optional metadata associated with the chain. Defaults to None\n include_run_info: Whether to include run info in the response. Defaults\n to False.\n Returns:\n A dict of named outputs. Should contain all outputs specified in\n `Chain.output_keys`.\n \"\"\"\n inputs = self.prep_inputs(inputs)\n callback_manager = AsyncCallbackManager.configure(\n callbacks,\n self.callbacks,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/base.html"} {"id": "c5831400bed0-7", "text": "callback_manager = AsyncCallbackManager.configure(\n callbacks,\n self.callbacks,\n self.verbose,\n tags,\n self.tags,\n metadata,\n self.metadata,\n )\n new_arg_supported = inspect.signature(self._acall).parameters.get(\"run_manager\")\n run_manager = await callback_manager.on_chain_start(\n dumpd(self),\n inputs,\n )\n try:\n outputs = (\n await self._acall(inputs, run_manager=run_manager)\n if new_arg_supported\n else await self._acall(inputs)\n )\n except (KeyboardInterrupt, Exception) as e:\n await run_manager.on_chain_error(e)\n raise e\n await run_manager.on_chain_end(outputs)\n final_outputs: Dict[str, Any] = self.prep_outputs(\n inputs, outputs, return_only_outputs\n )\n if include_run_info:\n final_outputs[RUN_KEY] = RunInfo(run_id=run_manager.run_id)\n return final_outputs\n[docs] def prep_outputs(\n self,\n inputs: Dict[str, str],\n outputs: Dict[str, str],\n return_only_outputs: bool = False,\n ) -> Dict[str, str]:\n \"\"\"Validate and prepare chain outputs, and save info about this run to memory.\n Args:\n inputs: Dictionary of chain inputs, including any inputs added by chain\n memory.\n outputs: Dictionary of initial chain outputs.\n return_only_outputs: Whether to only return the chain outputs. If False,\n inputs are also added to the final outputs.\n Returns:\n A dict of the final chain outputs.\n \"\"\"\n self._validate_outputs(outputs)\n if self.memory is not None:\n self.memory.save_context(inputs, outputs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/base.html"} {"id": "c5831400bed0-8", "text": "if self.memory is not None:\n self.memory.save_context(inputs, outputs)\n if return_only_outputs:\n return outputs\n else:\n return {**inputs, **outputs}\n[docs] def prep_inputs(self, inputs: Union[Dict[str, Any], Any]) -> Dict[str, str]:\n \"\"\"Validate and prepare chain inputs, including adding inputs from memory.\n Args:\n inputs: Dictionary of raw inputs, or single input if chain expects\n only one param. 
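`acall` mirrors `__call__` for async contexts; a minimal sketch:

.. code-block:: python

    import asyncio

    async def main() -> None:
        out = await chain.acall({"adjective": "funny"})
        print(out["text"])

    asyncio.run(main())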
Should contain all inputs specified in\n `Chain.input_keys` except for inputs that will be set by the chain's\n memory.\n Returns:\n A dictionary of all inputs, including those added by the chain's memory.\n \"\"\"\n if not isinstance(inputs, dict):\n _input_keys = set(self.input_keys)\n if self.memory is not None:\n # If there are multiple input keys, but some get set by memory so that\n # only one is not set, we can still figure out which key it is.\n _input_keys = _input_keys.difference(self.memory.memory_variables)\n if len(_input_keys) != 1:\n raise ValueError(\n f\"A single string input was passed in, but this chain expects \"\n f\"multiple inputs ({_input_keys}). When a chain expects \"\n f\"multiple inputs, please call it by passing in a dictionary, \"\n \"eg `chain({'foo': 1, 'bar': 2})`\"\n )\n inputs = {list(_input_keys)[0]: inputs}\n if self.memory is not None:\n external_context = self.memory.load_memory_variables(inputs)\n inputs = dict(inputs, **external_context)\n self._validate_inputs(inputs)\n return inputs\n @property", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/base.html"} {"id": "c5831400bed0-9", "text": "self._validate_inputs(inputs)\n return inputs\n @property\n def _run_output_key(self) -> str:\n if len(self.output_keys) != 1:\n raise ValueError(\n f\"`run` not supported when there is not exactly \"\n f\"one output key. Got {self.output_keys}.\"\n )\n return self.output_keys[0]\n[docs] def run(\n self,\n *args: Any,\n callbacks: Callbacks = None,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Convenience method for executing chain when there's a single string output.\n The main difference between this method and `Chain.__call__` is that this method\n can only be used for chains that return a single string output. If a Chain\n has more outputs, a non-string output, or you want to return the inputs/run\n info along with the outputs, use `Chain.__call__`.\n The other difference is that this method expects inputs to be passed directly in\n as positional arguments or keyword arguments, whereas `Chain.__call__` expects\n a single input dictionary with all the inputs.\n Args:\n *args: If the chain expects a single input, it can be passed in as the\n sole positional argument.\n callbacks: Callbacks to use for this chain run. These will be called in\n addition to callbacks passed to the chain during construction, but only\n these runtime callbacks will propagate to calls to other objects.\n tags: List of string tags to pass to all callbacks. These will be passed in\n addition to tags passed to the chain during construction, but only", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/base.html"} {"id": "c5831400bed0-10", "text": "addition to tags passed to the chain during construction, but only\n these runtime tags will propagate to calls to other objects.\n **kwargs: If the chain expects multiple inputs, they can be passed in\n directly as keyword arguments.\n Returns:\n The chain output as a string.\n Example:\n .. 
code-block:: python\n # Suppose we have a single-input chain that takes a 'question' string:\n chain.run(\"What's the temperature in Boise, Idaho?\")\n # -> \"The temperature in Boise is...\"\n # Suppose we have a multi-input chain that takes a 'question' string\n # and 'context' string:\n question = \"What's the temperature in Boise, Idaho?\"\n context = \"Weather report for Boise, Idaho on 07/03/23...\"\n chain.run(question=question, context=context)\n # -> \"The temperature in Boise is...\"\n \"\"\"\n # Run at start to make sure this is possible/defined\n _output_key = self._run_output_key\n if args and not kwargs:\n if len(args) != 1:\n raise ValueError(\"`run` supports only one positional argument.\")\n return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[\n _output_key\n ]\n if kwargs and not args:\n return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[\n _output_key\n ]\n if not kwargs and not args:\n raise ValueError(\n \"`run` supported with either positional arguments or keyword arguments,\"\n \" but none were provided.\"\n )\n else:\n raise ValueError(\n f\"`run` supported with either positional arguments or keyword arguments\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/base.html"} {"id": "c5831400bed0-11", "text": "raise ValueError(\n f\"`run` supported with either positional arguments or keyword arguments\"\n f\" but not both. Got args: {args} and kwargs: {kwargs}.\"\n )\n[docs] async def arun(\n self,\n *args: Any,\n callbacks: Callbacks = None,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Convenience method for executing chain when there's a single string output.\n The main difference between this method and `Chain.__call__` is that this method\n can only be used for chains that return a single string output. If a Chain\n has more outputs, a non-string output, or you want to return the inputs/run\n info along with the outputs, use `Chain.__call__`.\n The other difference is that this method expects inputs to be passed directly in\n as positional arguments or keyword arguments, whereas `Chain.__call__` expects\n a single input dictionary with all the inputs.\n Args:\n *args: If the chain expects a single input, it can be passed in as the\n sole positional argument.\n callbacks: Callbacks to use for this chain run. These will be called in\n addition to callbacks passed to the chain during construction, but only\n these runtime callbacks will propagate to calls to other objects.\n tags: List of string tags to pass to all callbacks. These will be passed in\n addition to tags passed to the chain during construction, but only\n these runtime tags will propagate to calls to other objects.\n **kwargs: If the chain expects multiple inputs, they can be passed in\n directly as keyword arguments.\n Returns:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/base.html"} {"id": "c5831400bed0-12", "text": "directly as keyword arguments.\n Returns:\n The chain output as a string.\n Example:\n .. 
code-block:: python\n # Suppose we have a single-input chain that takes a 'question' string:\n await chain.arun(\"What's the temperature in Boise, Idaho?\")\n # -> \"The temperature in Boise is...\"\n # Suppose we have a multi-input chain that takes a 'question' string\n # and 'context' string:\n question = \"What's the temperature in Boise, Idaho?\"\n context = \"Weather report for Boise, Idaho on 07/03/23...\"\n await chain.arun(question=question, context=context)\n # -> \"The temperature in Boise is...\"\n \"\"\"\n if len(self.output_keys) != 1:\n raise ValueError(\n f\"`run` not supported when there is not exactly \"\n f\"one output key. Got {self.output_keys}.\"\n )\n elif args and not kwargs:\n if len(args) != 1:\n raise ValueError(\"`run` supports only one positional argument.\")\n return (\n await self.acall(\n args[0], callbacks=callbacks, tags=tags, metadata=metadata\n )\n )[self.output_keys[0]]\n if kwargs and not args:\n return (\n await self.acall(\n kwargs, callbacks=callbacks, tags=tags, metadata=metadata\n )\n )[self.output_keys[0]]\n raise ValueError(\n f\"`run` supported with either positional arguments or keyword arguments\"\n f\" but not both. Got args: {args} and kwargs: {kwargs}.\"\n )\n[docs] def dict(self, **kwargs: Any) -> Dict:\n \"\"\"Return dictionary representation of chain.\n Expects `Chain._chain_type` property to be implemented and for memory to be\n null.\n Args:\n **kwargs: Keyword arguments passed to default `pydantic.BaseModel.dict`\n method.\n Returns:\n A dictionary representation of the chain.\n Example:\n .. code-block:: python\n chain.dict(exclude_unset=True)\n # -> {\"_type\": \"foo\", \"verbose\": False, ...}\n \"\"\"\n if self.memory is not None:\n raise ValueError(\"Saving of memory is not yet supported.\")\n _dict = super().dict(**kwargs)\n _dict[\"_type\"] = self._chain_type\n return _dict\n[docs] def save(self, file_path: Union[Path, str]) -> None:\n \"\"\"Save the chain.\n Expects `Chain._chain_type` property to be implemented and for memory to be\n null.\n Args:\n file_path: Path to file to save the chain to.\n Example:\n .. 
code-block:: python\n chain.save(file_path=\"path/chain.yaml\")\n \"\"\"\n # Convert file to Path object.\n if isinstance(file_path, str):\n save_path = Path(file_path)\n else:\n save_path = file_path\n directory_path = save_path.parent\n directory_path.mkdir(parents=True, exist_ok=True)\n # Fetch dictionary to save\n chain_dict = self.dict()\n if save_path.suffix == \".json\":\n with open(file_path, \"w\") as f:\n json.dump(chain_dict, f, indent=4)\n elif save_path.suffix == \".yaml\":\n with open(file_path, \"w\") as f:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/base.html"} {"id": "c5831400bed0-14", "text": "with open(file_path, \"w\") as f:\n yaml.dump(chain_dict, f, default_flow_style=False)\n else:\n raise ValueError(f\"{save_path} must be json or yaml\")\n[docs] def apply(\n self, input_list: List[Dict[str, Any]], callbacks: Callbacks = None\n ) -> List[Dict[str, str]]:\n \"\"\"Call the chain on all inputs in the list.\"\"\"\n return [self(inputs, callbacks=callbacks) for inputs in input_list]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/base.html"} {"id": "5271085360bf-0", "text": "Source code for langchain.chains.mapreduce\n\"\"\"Map-reduce chain.\nSplits up a document, sends the smaller parts to the LLM with one prompt,\nthen combines the results with another one.\n\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra\nfrom langchain.callbacks.manager import CallbackManagerForChainRun, Callbacks\nfrom langchain.chains import ReduceDocumentsChain\nfrom langchain.chains.base import Chain\nfrom langchain.chains.combine_documents.base import BaseCombineDocumentsChain\nfrom langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain\nfrom langchain.chains.combine_documents.stuff import StuffDocumentsChain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.docstore.document import Document\nfrom langchain.schema import BasePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.text_splitter import TextSplitter\n[docs]class MapReduceChain(Chain):\n \"\"\"Map-reduce chain.\"\"\"\n combine_documents_chain: BaseCombineDocumentsChain\n \"\"\"Chain to use to combine documents.\"\"\"\n text_splitter: TextSplitter\n \"\"\"Text splitter to use.\"\"\"\n input_key: str = \"input_text\" #: :meta private:\n output_key: str = \"output_text\" #: :meta private:\n[docs] @classmethod\n def from_params(\n cls,\n llm: BaseLanguageModel,\n prompt: BasePromptTemplate,\n text_splitter: TextSplitter,\n callbacks: Callbacks = None,\n combine_chain_kwargs: Optional[Mapping[str, Any]] = None,\n reduce_chain_kwargs: Optional[Mapping[str, Any]] = None,\n **kwargs: Any,\n ) -> MapReduceChain:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/mapreduce.html"} {"id": "5271085360bf-1", "text": "**kwargs: Any,\n ) -> MapReduceChain:\n \"\"\"Construct a map-reduce chain that uses the chain for map and reduce.\"\"\"\n llm_chain = LLMChain(llm=llm, prompt=prompt, callbacks=callbacks)\n stuff_chain = StuffDocumentsChain(\n llm_chain=llm_chain,\n callbacks=callbacks,\n **(reduce_chain_kwargs if reduce_chain_kwargs else {}),\n )\n reduce_documents_chain = ReduceDocumentsChain(\n combine_documents_chain=stuff_chain\n )\n combine_documents_chain = MapReduceDocumentsChain(\n llm_chain=llm_chain,\n reduce_documents_chain=reduce_documents_chain,\n callbacks=callbacks,\n 
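A sketch of the save/load round trip for a memory-free chain (the file name is illustrative; `.json` works as well as `.yaml`):

.. code-block:: python

    from langchain.chains.loading import load_chain

    chain.save("joke_chain.yaml")
    restored = load_chain("joke_chain.yaml")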
**(combine_chain_kwargs if combine_chain_kwargs else {}),\n )\n return cls(\n combine_documents_chain=combine_documents_chain,\n text_splitter=text_splitter,\n callbacks=callbacks,\n **kwargs,\n )\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return output key.\n :meta private:\n \"\"\"\n return [self.output_key]\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n # Split the larger text into smaller chunks.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/mapreduce.html"} {"id": "5271085360bf-2", "text": "# Split the larger text into smaller chunks.\n doc_text = inputs.pop(self.input_key)\n texts = self.text_splitter.split_text(doc_text)\n docs = [Document(page_content=text) for text in texts]\n _inputs: Dict[str, Any] = {\n **inputs,\n self.combine_documents_chain.input_key: docs,\n }\n outputs = self.combine_documents_chain.run(\n _inputs, callbacks=_run_manager.get_child()\n )\n return {self.output_key: outputs}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/mapreduce.html"} {"id": "1becbe332613-0", "text": "Source code for langchain.chains.llm_requests\n\"\"\"Chain that hits a URL and then uses an LLM to parse results.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains import LLMChain\nfrom langchain.chains.base import Chain\nfrom langchain.requests import TextRequestsWrapper\nDEFAULT_HEADERS = {\n \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36\" # noqa: E501\n}\n[docs]class LLMRequestsChain(Chain):\n \"\"\"Chain that hits a URL and then uses an LLM to parse results.\"\"\"\n llm_chain: LLMChain\n requests_wrapper: TextRequestsWrapper = Field(\n default_factory=lambda: TextRequestsWrapper(headers=DEFAULT_HEADERS),\n exclude=True,\n )\n text_length: int = 8000\n requests_key: str = \"requests_result\" #: :meta private:\n input_key: str = \"url\" #: :meta private:\n output_key: str = \"output\" #: :meta private:\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Will be whatever keys the prompt expects.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Will always return text key.\n :meta private:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_requests.html"} {"id": "1becbe332613-1", "text": "\"\"\"Will always return text key.\n :meta private:\n \"\"\"\n return [self.output_key]\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n try:\n from bs4 import BeautifulSoup # noqa: F401\n except ImportError:\n raise ValueError(\n \"Could not import bs4 python package. 
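A sketch of building a summarization pipeline with `MapReduceChain.from_params` (the prompt wording and splitter choice are illustrative; `long_document` is assumed to be a string defined elsewhere):

.. code-block:: python

    from langchain.chains.mapreduce import MapReduceChain
    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate
    from langchain.text_splitter import CharacterTextSplitter

    prompt = PromptTemplate.from_template(
        "Write a concise summary of the following:\n{text}"
    )
    chain = MapReduceChain.from_params(
        llm=OpenAI(),
        prompt=prompt,
        text_splitter=CharacterTextSplitter(),
    )
    summary = chain.run(input_text=long_document)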
\"\n \"Please install it with `pip install bs4`.\"\n )\n return values\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n from bs4 import BeautifulSoup\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n # Other keys are assumed to be needed for LLM prediction\n other_keys = {k: v for k, v in inputs.items() if k != self.input_key}\n url = inputs[self.input_key]\n res = self.requests_wrapper.get(url)\n # extract the text from the html\n soup = BeautifulSoup(res, \"html.parser\")\n other_keys[self.requests_key] = soup.get_text()[: self.text_length]\n result = self.llm_chain.predict(\n callbacks=_run_manager.get_child(), **other_keys\n )\n return {self.output_key: result}\n @property\n def _chain_type(self) -> str:\n return \"llm_requests_chain\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_requests.html"} {"id": "ef7f29b335f8-0", "text": "Source code for langchain.chains.loading\n\"\"\"Functionality for loading chains.\"\"\"\nimport json\nfrom pathlib import Path\nfrom typing import Any, Union\nimport yaml\nfrom langchain.chains import ReduceDocumentsChain\nfrom langchain.chains.api.base import APIChain\nfrom langchain.chains.base import Chain\nfrom langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain\nfrom langchain.chains.combine_documents.map_rerank import MapRerankDocumentsChain\nfrom langchain.chains.combine_documents.refine import RefineDocumentsChain\nfrom langchain.chains.combine_documents.stuff import StuffDocumentsChain\nfrom langchain.chains.graph_qa.cypher import GraphCypherQAChain\nfrom langchain.chains.hyde.base import HypotheticalDocumentEmbedder\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.llm_bash.base import LLMBashChain\nfrom langchain.chains.llm_checker.base import LLMCheckerChain\nfrom langchain.chains.llm_math.base import LLMMathChain\nfrom langchain.chains.llm_requests import LLMRequestsChain\nfrom langchain.chains.pal.base import PALChain\nfrom langchain.chains.qa_with_sources.base import QAWithSourcesChain\nfrom langchain.chains.qa_with_sources.vector_db import VectorDBQAWithSourcesChain\nfrom langchain.chains.retrieval_qa.base import RetrievalQA, VectorDBQA\nfrom langchain.chains.sql_database.base import SQLDatabaseChain\nfrom langchain.llms.loading import load_llm, load_llm_from_config\nfrom langchain.prompts.loading import (\n _load_output_parser,\n load_prompt,\n load_prompt_from_config,\n)\nfrom langchain.utilities.loading import try_load_from_hub\nURL_BASE = \"https://raw.githubusercontent.com/hwchase17/langchain-hub/master/chains/\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} {"id": "ef7f29b335f8-1", "text": "def _load_llm_chain(config: dict, **kwargs: Any) -> LLMChain:\n \"\"\"Load LLM chain from config dict.\"\"\"\n if \"llm\" in config:\n llm_config = config.pop(\"llm\")\n llm = load_llm_from_config(llm_config)\n elif \"llm_path\" in config:\n llm = load_llm(config.pop(\"llm_path\"))\n else:\n raise ValueError(\"One of `llm` or `llm_path` must be present.\")\n if \"prompt\" in config:\n prompt_config = config.pop(\"prompt\")\n prompt = load_prompt_from_config(prompt_config)\n elif \"prompt_path\" in config:\n prompt = load_prompt(config.pop(\"prompt_path\"))\n else:\n raise ValueError(\"One of `prompt` or `prompt_path` must be present.\")\n _load_output_parser(config)\n return LLMChain(llm=llm, prompt=prompt, 
**config)\ndef _load_hyde_chain(config: dict, **kwargs: Any) -> HypotheticalDocumentEmbedder:\n \"\"\"Load hypothetical document embedder chain from config dict.\"\"\"\n if \"llm_chain\" in config:\n llm_chain_config = config.pop(\"llm_chain\")\n llm_chain = load_chain_from_config(llm_chain_config)\n elif \"llm_chain_path\" in config:\n llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n else:\n raise ValueError(\"One of `llm_chain` or `llm_chain_path` must be present.\")\n if \"embeddings\" in kwargs:\n embeddings = kwargs.pop(\"embeddings\")\n else:\n raise ValueError(\"`embeddings` must be present.\")\n return HypotheticalDocumentEmbedder(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} {"id": "ef7f29b335f8-2", "text": "return HypotheticalDocumentEmbedder(\n llm_chain=llm_chain, base_embeddings=embeddings, **config\n )\ndef _load_stuff_documents_chain(config: dict, **kwargs: Any) -> StuffDocumentsChain:\n if \"llm_chain\" in config:\n llm_chain_config = config.pop(\"llm_chain\")\n llm_chain = load_chain_from_config(llm_chain_config)\n elif \"llm_chain_path\" in config:\n llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n else:\n raise ValueError(\"One of `llm_chain` or `llm_chain_config` must be present.\")\n if not isinstance(llm_chain, LLMChain):\n raise ValueError(f\"Expected LLMChain, got {llm_chain}\")\n if \"document_prompt\" in config:\n prompt_config = config.pop(\"document_prompt\")\n document_prompt = load_prompt_from_config(prompt_config)\n elif \"document_prompt_path\" in config:\n document_prompt = load_prompt(config.pop(\"document_prompt_path\"))\n else:\n raise ValueError(\n \"One of `document_prompt` or `document_prompt_path` must be present.\"\n )\n return StuffDocumentsChain(\n llm_chain=llm_chain, document_prompt=document_prompt, **config\n )\ndef _load_map_reduce_documents_chain(\n config: dict, **kwargs: Any\n) -> MapReduceDocumentsChain:\n if \"llm_chain\" in config:\n llm_chain_config = config.pop(\"llm_chain\")\n llm_chain = load_chain_from_config(llm_chain_config)\n elif \"llm_chain_path\" in config:\n llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n else:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} {"id": "ef7f29b335f8-3", "text": "llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n else:\n raise ValueError(\"One of `llm_chain` or `llm_chain_config` must be present.\")\n if not isinstance(llm_chain, LLMChain):\n raise ValueError(f\"Expected LLMChain, got {llm_chain}\")\n if \"combine_document_chain\" in config:\n combine_document_chain_config = config.pop(\"combine_document_chain\")\n combine_documents_chain = load_chain_from_config(combine_document_chain_config)\n elif \"combine_document_chain_path\" in config:\n combine_documents_chain = load_chain(config.pop(\"combine_document_chain_path\"))\n else:\n raise ValueError(\n \"One of `combine_document_chain` or \"\n \"`combine_document_chain_path` must be present.\"\n )\n if \"collapse_document_chain\" in config:\n collapse_document_chain_config = config.pop(\"collapse_document_chain\")\n if collapse_document_chain_config is None:\n collapse_documents_chain = None\n else:\n collapse_documents_chain = load_chain_from_config(\n collapse_document_chain_config\n )\n elif \"collapse_document_chain_path\" in config:\n collapse_documents_chain = load_chain(\n config.pop(\"collapse_document_chain_path\")\n )\n else:\n collapse_documents_chain = None\n reduce_documents_chain = 
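As the source above shows, `_load_llm_chain` accepts either inline `llm`/`prompt` configs or `llm_path`/`prompt_path` references to previously saved components. A hedged sketch of a config it could consume (the file names are hypothetical and would come from earlier `llm.save(...)`/`prompt.save(...)` calls):

.. code-block:: python

    from langchain.chains.loading import load_chain_from_config

    config = {
        "_type": "llm_chain",
        "llm_path": "llm.yaml",        # written earlier by llm.save("llm.yaml")
        "prompt_path": "prompt.yaml",  # written earlier by prompt.save("prompt.yaml")
    }
    chain = load_chain_from_config(config)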
ReduceDocumentsChain(\n combine_documents_chain=combine_documents_chain,\n collapse_documents_chain=collapse_documents_chain,\n )\n return MapReduceDocumentsChain(\n llm_chain=llm_chain,\n reduce_documents_chain=reduce_documents_chain,\n **config,\n )\ndef _load_llm_bash_chain(config: dict, **kwargs: Any) -> LLMBashChain:\n llm_chain = None\n if \"llm_chain\" in config:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} {"id": "ef7f29b335f8-4", "text": "llm_chain = None\n if \"llm_chain\" in config:\n llm_chain_config = config.pop(\"llm_chain\")\n llm_chain = load_chain_from_config(llm_chain_config)\n elif \"llm_chain_path\" in config:\n llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n # llm attribute is deprecated in favor of llm_chain, here to support old configs\n elif \"llm\" in config:\n llm_config = config.pop(\"llm\")\n llm = load_llm_from_config(llm_config)\n # llm_path attribute is deprecated in favor of llm_chain_path,\n # its to support old configs\n elif \"llm_path\" in config:\n llm = load_llm(config.pop(\"llm_path\"))\n else:\n raise ValueError(\"One of `llm_chain` or `llm_chain_path` must be present.\")\n if \"prompt\" in config:\n prompt_config = config.pop(\"prompt\")\n prompt = load_prompt_from_config(prompt_config)\n elif \"prompt_path\" in config:\n prompt = load_prompt(config.pop(\"prompt_path\"))\n if llm_chain:\n return LLMBashChain(llm_chain=llm_chain, prompt=prompt, **config)\n else:\n return LLMBashChain(llm=llm, prompt=prompt, **config)\ndef _load_llm_checker_chain(config: dict, **kwargs: Any) -> LLMCheckerChain:\n if \"llm\" in config:\n llm_config = config.pop(\"llm\")\n llm = load_llm_from_config(llm_config)\n elif \"llm_path\" in config:\n llm = load_llm(config.pop(\"llm_path\"))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} {"id": "ef7f29b335f8-5", "text": "llm = load_llm(config.pop(\"llm_path\"))\n else:\n raise ValueError(\"One of `llm` or `llm_path` must be present.\")\n if \"create_draft_answer_prompt\" in config:\n create_draft_answer_prompt_config = config.pop(\"create_draft_answer_prompt\")\n create_draft_answer_prompt = load_prompt_from_config(\n create_draft_answer_prompt_config\n )\n elif \"create_draft_answer_prompt_path\" in config:\n create_draft_answer_prompt = load_prompt(\n config.pop(\"create_draft_answer_prompt_path\")\n )\n if \"list_assertions_prompt\" in config:\n list_assertions_prompt_config = config.pop(\"list_assertions_prompt\")\n list_assertions_prompt = load_prompt_from_config(list_assertions_prompt_config)\n elif \"list_assertions_prompt_path\" in config:\n list_assertions_prompt = load_prompt(config.pop(\"list_assertions_prompt_path\"))\n if \"check_assertions_prompt\" in config:\n check_assertions_prompt_config = config.pop(\"check_assertions_prompt\")\n check_assertions_prompt = load_prompt_from_config(\n check_assertions_prompt_config\n )\n elif \"check_assertions_prompt_path\" in config:\n check_assertions_prompt = load_prompt(\n config.pop(\"check_assertions_prompt_path\")\n )\n if \"revised_answer_prompt\" in config:\n revised_answer_prompt_config = config.pop(\"revised_answer_prompt\")\n revised_answer_prompt = load_prompt_from_config(revised_answer_prompt_config)\n elif \"revised_answer_prompt_path\" in config:\n revised_answer_prompt = load_prompt(config.pop(\"revised_answer_prompt_path\"))\n return LLMCheckerChain(\n llm=llm,\n create_draft_answer_prompt=create_draft_answer_prompt,", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} {"id": "ef7f29b335f8-6", "text": "llm=llm,\n create_draft_answer_prompt=create_draft_answer_prompt,\n list_assertions_prompt=list_assertions_prompt,\n check_assertions_prompt=check_assertions_prompt,\n revised_answer_prompt=revised_answer_prompt,\n **config,\n )\ndef _load_llm_math_chain(config: dict, **kwargs: Any) -> LLMMathChain:\n llm_chain = None\n if \"llm_chain\" in config:\n llm_chain_config = config.pop(\"llm_chain\")\n llm_chain = load_chain_from_config(llm_chain_config)\n elif \"llm_chain_path\" in config:\n llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n # llm attribute is deprecated in favor of llm_chain, here to support old configs\n elif \"llm\" in config:\n llm_config = config.pop(\"llm\")\n llm = load_llm_from_config(llm_config)\n # llm_path attribute is deprecated in favor of llm_chain_path,\n # its to support old configs\n elif \"llm_path\" in config:\n llm = load_llm(config.pop(\"llm_path\"))\n else:\n raise ValueError(\"One of `llm_chain` or `llm_chain_path` must be present.\")\n if \"prompt\" in config:\n prompt_config = config.pop(\"prompt\")\n prompt = load_prompt_from_config(prompt_config)\n elif \"prompt_path\" in config:\n prompt = load_prompt(config.pop(\"prompt_path\"))\n if llm_chain:\n return LLMMathChain(llm_chain=llm_chain, prompt=prompt, **config)\n else:\n return LLMMathChain(llm=llm, prompt=prompt, **config)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} {"id": "ef7f29b335f8-7", "text": "return LLMMathChain(llm=llm, prompt=prompt, **config)\ndef _load_map_rerank_documents_chain(\n config: dict, **kwargs: Any\n) -> MapRerankDocumentsChain:\n if \"llm_chain\" in config:\n llm_chain_config = config.pop(\"llm_chain\")\n llm_chain = load_chain_from_config(llm_chain_config)\n elif \"llm_chain_path\" in config:\n llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n else:\n raise ValueError(\"One of `llm_chain` or `llm_chain_config` must be present.\")\n return MapRerankDocumentsChain(llm_chain=llm_chain, **config)\ndef _load_pal_chain(config: dict, **kwargs: Any) -> PALChain:\n llm_chain = None\n if \"llm_chain\" in config:\n llm_chain_config = config.pop(\"llm_chain\")\n llm_chain = load_chain_from_config(llm_chain_config)\n elif \"llm_chain_path\" in config:\n llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n # llm attribute is deprecated in favor of llm_chain, here to support old configs\n elif \"llm\" in config:\n llm_config = config.pop(\"llm\")\n llm = load_llm_from_config(llm_config)\n # llm_path attribute is deprecated in favor of llm_chain_path,\n # its to support old configs\n elif \"llm_path\" in config:\n llm = load_llm(config.pop(\"llm_path\"))\n else:\n raise ValueError(\"One of `llm_chain` or `llm_chain_path` must be present.\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} {"id": "ef7f29b335f8-8", "text": "if \"prompt\" in config:\n prompt_config = config.pop(\"prompt\")\n prompt = load_prompt_from_config(prompt_config)\n elif \"prompt_path\" in config:\n prompt = load_prompt(config.pop(\"prompt_path\"))\n else:\n raise ValueError(\"One of `prompt` or `prompt_path` must be present.\")\n if llm_chain:\n return PALChain(llm_chain=llm_chain, prompt=prompt, **config)\n else:\n return PALChain(llm=llm, prompt=prompt, **config)\ndef _load_refine_documents_chain(config: dict, **kwargs: Any) -> RefineDocumentsChain:\n if \"initial_llm_chain\" in 
config:\n initial_llm_chain_config = config.pop(\"initial_llm_chain\")\n initial_llm_chain = load_chain_from_config(initial_llm_chain_config)\n elif \"initial_llm_chain_path\" in config:\n initial_llm_chain = load_chain(config.pop(\"initial_llm_chain_path\"))\n else:\n raise ValueError(\n \"One of `initial_llm_chain` or `initial_llm_chain_config` must be present.\"\n )\n if \"refine_llm_chain\" in config:\n refine_llm_chain_config = config.pop(\"refine_llm_chain\")\n refine_llm_chain = load_chain_from_config(refine_llm_chain_config)\n elif \"refine_llm_chain_path\" in config:\n refine_llm_chain = load_chain(config.pop(\"refine_llm_chain_path\"))\n else:\n raise ValueError(\n \"One of `refine_llm_chain` or `refine_llm_chain_config` must be present.\"\n )\n if \"document_prompt\" in config:\n prompt_config = config.pop(\"document_prompt\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} {"id": "ef7f29b335f8-9", "text": "prompt_config = config.pop(\"document_prompt\")\n document_prompt = load_prompt_from_config(prompt_config)\n elif \"document_prompt_path\" in config:\n document_prompt = load_prompt(config.pop(\"document_prompt_path\"))\n return RefineDocumentsChain(\n initial_llm_chain=initial_llm_chain,\n refine_llm_chain=refine_llm_chain,\n document_prompt=document_prompt,\n **config,\n )\ndef _load_qa_with_sources_chain(config: dict, **kwargs: Any) -> QAWithSourcesChain:\n if \"combine_documents_chain\" in config:\n combine_documents_chain_config = config.pop(\"combine_documents_chain\")\n combine_documents_chain = load_chain_from_config(combine_documents_chain_config)\n elif \"combine_documents_chain_path\" in config:\n combine_documents_chain = load_chain(config.pop(\"combine_documents_chain_path\"))\n else:\n raise ValueError(\n \"One of `combine_documents_chain` or \"\n \"`combine_documents_chain_path` must be present.\"\n )\n return QAWithSourcesChain(combine_documents_chain=combine_documents_chain, **config)\ndef _load_sql_database_chain(config: dict, **kwargs: Any) -> SQLDatabaseChain:\n if \"database\" in kwargs:\n database = kwargs.pop(\"database\")\n else:\n raise ValueError(\"`database` must be present.\")\n if \"llm\" in config:\n llm_config = config.pop(\"llm\")\n llm = load_llm_from_config(llm_config)\n elif \"llm_path\" in config:\n llm = load_llm(config.pop(\"llm_path\"))\n else:\n raise ValueError(\"One of `llm` or `llm_path` must be present.\")\n if \"prompt\" in config:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} {"id": "ef7f29b335f8-10", "text": "if \"prompt\" in config:\n prompt_config = config.pop(\"prompt\")\n prompt = load_prompt_from_config(prompt_config)\n else:\n prompt = None\n return SQLDatabaseChain.from_llm(llm, database, prompt=prompt, **config)\ndef _load_vector_db_qa_with_sources_chain(\n config: dict, **kwargs: Any\n) -> VectorDBQAWithSourcesChain:\n if \"vectorstore\" in kwargs:\n vectorstore = kwargs.pop(\"vectorstore\")\n else:\n raise ValueError(\"`vectorstore` must be present.\")\n if \"combine_documents_chain\" in config:\n combine_documents_chain_config = config.pop(\"combine_documents_chain\")\n combine_documents_chain = load_chain_from_config(combine_documents_chain_config)\n elif \"combine_documents_chain_path\" in config:\n combine_documents_chain = load_chain(config.pop(\"combine_documents_chain_path\"))\n else:\n raise ValueError(\n \"One of `combine_documents_chain` or \"\n \"`combine_documents_chain_path` must be present.\"\n )\n return 
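Loaders with non-serializable dependencies (databases, retrievers, vector stores, graphs) expect them as runtime keyword arguments, which `load_chain` forwards through to the loader. For example (the file names and `db`/`retriever` objects are assumed to exist):

.. code-block:: python

    from langchain.chains.loading import load_chain

    sql_chain = load_chain("sql_chain.yaml", database=db)            # SQLDatabaseChain
    qa_chain = load_chain("retrieval_qa.json", retriever=retriever)  # RetrievalQA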
VectorDBQAWithSourcesChain(\n combine_documents_chain=combine_documents_chain,\n vectorstore=vectorstore,\n **config,\n )\ndef _load_retrieval_qa(config: dict, **kwargs: Any) -> RetrievalQA:\n if \"retriever\" in kwargs:\n retriever = kwargs.pop(\"retriever\")\n else:\n raise ValueError(\"`retriever` must be present.\")\n if \"combine_documents_chain\" in config:\n combine_documents_chain_config = config.pop(\"combine_documents_chain\")\n combine_documents_chain = load_chain_from_config(combine_documents_chain_config)\n elif \"combine_documents_chain_path\" in config:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} {"id": "ef7f29b335f8-11", "text": "elif \"combine_documents_chain_path\" in config:\n combine_documents_chain = load_chain(config.pop(\"combine_documents_chain_path\"))\n else:\n raise ValueError(\n \"One of `combine_documents_chain` or \"\n \"`combine_documents_chain_path` must be present.\"\n )\n return RetrievalQA(\n combine_documents_chain=combine_documents_chain,\n retriever=retriever,\n **config,\n )\ndef _load_vector_db_qa(config: dict, **kwargs: Any) -> VectorDBQA:\n if \"vectorstore\" in kwargs:\n vectorstore = kwargs.pop(\"vectorstore\")\n else:\n raise ValueError(\"`vectorstore` must be present.\")\n if \"combine_documents_chain\" in config:\n combine_documents_chain_config = config.pop(\"combine_documents_chain\")\n combine_documents_chain = load_chain_from_config(combine_documents_chain_config)\n elif \"combine_documents_chain_path\" in config:\n combine_documents_chain = load_chain(config.pop(\"combine_documents_chain_path\"))\n else:\n raise ValueError(\n \"One of `combine_documents_chain` or \"\n \"`combine_documents_chain_path` must be present.\"\n )\n return VectorDBQA(\n combine_documents_chain=combine_documents_chain,\n vectorstore=vectorstore,\n **config,\n )\ndef _load_graph_cypher_chain(config: dict, **kwargs: Any) -> GraphCypherQAChain:\n if \"graph\" in kwargs:\n graph = kwargs.pop(\"graph\")\n else:\n raise ValueError(\"`graph` must be present.\")\n if \"cypher_generation_chain\" in config:\n cypher_generation_chain_config = config.pop(\"cypher_generation_chain\")\n cypher_generation_chain = load_chain_from_config(cypher_generation_chain_config)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} {"id": "ef7f29b335f8-12", "text": "cypher_generation_chain = load_chain_from_config(cypher_generation_chain_config)\n else:\n raise ValueError(\"`cypher_generation_chain` must be present.\")\n if \"qa_chain\" in config:\n qa_chain_config = config.pop(\"qa_chain\")\n qa_chain = load_chain_from_config(qa_chain_config)\n else:\n raise ValueError(\"`qa_chain` must be present.\")\n return GraphCypherQAChain(\n graph=graph,\n cypher_generation_chain=cypher_generation_chain,\n qa_chain=qa_chain,\n **config,\n )\ndef _load_api_chain(config: dict, **kwargs: Any) -> APIChain:\n if \"api_request_chain\" in config:\n api_request_chain_config = config.pop(\"api_request_chain\")\n api_request_chain = load_chain_from_config(api_request_chain_config)\n elif \"api_request_chain_path\" in config:\n api_request_chain = load_chain(config.pop(\"api_request_chain_path\"))\n else:\n raise ValueError(\n \"One of `api_request_chain` or `api_request_chain_path` must be present.\"\n )\n if \"api_answer_chain\" in config:\n api_answer_chain_config = config.pop(\"api_answer_chain\")\n api_answer_chain = load_chain_from_config(api_answer_chain_config)\n elif \"api_answer_chain_path\" in config:\n 
api_answer_chain = load_chain(config.pop(\"api_answer_chain_path\"))\n else:\n raise ValueError(\n \"One of `api_answer_chain` or `api_answer_chain_path` must be present.\"\n )\n if \"requests_wrapper\" in kwargs:\n requests_wrapper = kwargs.pop(\"requests_wrapper\")\n else:\n raise ValueError(\"`requests_wrapper` must be present.\")\n return APIChain(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} {"id": "ef7f29b335f8-13", "text": "raise ValueError(\"`requests_wrapper` must be present.\")\n return APIChain(\n api_request_chain=api_request_chain,\n api_answer_chain=api_answer_chain,\n requests_wrapper=requests_wrapper,\n **config,\n )\ndef _load_llm_requests_chain(config: dict, **kwargs: Any) -> LLMRequestsChain:\n if \"llm_chain\" in config:\n llm_chain_config = config.pop(\"llm_chain\")\n llm_chain = load_chain_from_config(llm_chain_config)\n elif \"llm_chain_path\" in config:\n llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n else:\n raise ValueError(\"One of `llm_chain` or `llm_chain_path` must be present.\")\n if \"requests_wrapper\" in kwargs:\n requests_wrapper = kwargs.pop(\"requests_wrapper\")\n return LLMRequestsChain(\n llm_chain=llm_chain, requests_wrapper=requests_wrapper, **config\n )\n else:\n return LLMRequestsChain(llm_chain=llm_chain, **config)\ntype_to_loader_dict = {\n \"api_chain\": _load_api_chain,\n \"hyde_chain\": _load_hyde_chain,\n \"llm_chain\": _load_llm_chain,\n \"llm_bash_chain\": _load_llm_bash_chain,\n \"llm_checker_chain\": _load_llm_checker_chain,\n \"llm_math_chain\": _load_llm_math_chain,\n \"llm_requests_chain\": _load_llm_requests_chain,\n \"pal_chain\": _load_pal_chain,\n \"qa_with_sources_chain\": _load_qa_with_sources_chain,\n \"stuff_documents_chain\": _load_stuff_documents_chain,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} {"id": "ef7f29b335f8-14", "text": "\"stuff_documents_chain\": _load_stuff_documents_chain,\n \"map_reduce_documents_chain\": _load_map_reduce_documents_chain,\n \"map_rerank_documents_chain\": _load_map_rerank_documents_chain,\n \"refine_documents_chain\": _load_refine_documents_chain,\n \"sql_database_chain\": _load_sql_database_chain,\n \"vector_db_qa_with_sources_chain\": _load_vector_db_qa_with_sources_chain,\n \"vector_db_qa\": _load_vector_db_qa,\n \"retrieval_qa\": _load_retrieval_qa,\n \"graph_cypher_chain\": _load_graph_cypher_chain,\n}\n[docs]def load_chain_from_config(config: dict, **kwargs: Any) -> Chain:\n \"\"\"Load chain from Config Dict.\"\"\"\n if \"_type\" not in config:\n raise ValueError(\"Must specify a chain Type in config\")\n config_type = config.pop(\"_type\")\n if config_type not in type_to_loader_dict:\n raise ValueError(f\"Loading {config_type} chain not supported\")\n chain_loader = type_to_loader_dict[config_type]\n return chain_loader(config, **kwargs)\n[docs]def load_chain(path: Union[str, Path], **kwargs: Any) -> Chain:\n \"\"\"Unified method for loading a chain from LangChainHub or local fs.\"\"\"\n if hub_result := try_load_from_hub(\n path, _load_chain_from_file, \"chains\", {\"json\", \"yaml\"}, **kwargs\n ):\n return hub_result\n else:\n return _load_chain_from_file(path, **kwargs)\ndef _load_chain_from_file(file: Union[str, Path], **kwargs: Any) -> Chain:\n \"\"\"Load chain from file.\"\"\"\n # Convert file to Path object.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} {"id": "ef7f29b335f8-15", "text": "\"\"\"Load chain from 
file.\"\"\"\n # Convert file to Path object.\n if isinstance(file, str):\n file_path = Path(file)\n else:\n file_path = file\n # Load from either json or yaml.\n if file_path.suffix == \".json\":\n with open(file_path) as f:\n config = json.load(f)\n elif file_path.suffix == \".yaml\":\n with open(file_path, \"r\") as f:\n config = yaml.safe_load(f)\n else:\n raise ValueError(\"File type must be json or yaml\")\n # Override default 'verbose' and 'memory' for the chain\n if \"verbose\" in kwargs:\n config[\"verbose\"] = kwargs.pop(\"verbose\")\n if \"memory\" in kwargs:\n config[\"memory\"] = kwargs.pop(\"memory\")\n # Load the chain from the config now.\n return load_chain_from_config(config, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} {"id": "e033f9fe64cf-0", "text": "Source code for langchain.chains.moderation\n\"\"\"Pass input through a moderation endpoint.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.utils import get_from_dict_or_env\n[docs]class OpenAIModerationChain(Chain):\n \"\"\"Pass input through a moderation endpoint.\n To use, you should have the ``openai`` python package installed, and the\n environment variable ``OPENAI_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the openai.create call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. code-block:: python\n from langchain.chains import OpenAIModerationChain\n moderation = OpenAIModerationChain()\n \"\"\"\n client: Any #: :meta private:\n model_name: Optional[str] = None\n \"\"\"Moderation model name to use.\"\"\"\n error: bool = False\n \"\"\"Whether or not to error if bad content was found.\"\"\"\n input_key: str = \"input\" #: :meta private:\n output_key: str = \"output\" #: :meta private:\n openai_api_key: Optional[str] = None\n openai_organization: Optional[str] = None\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n openai_api_key = get_from_dict_or_env(\n values, \"openai_api_key\", \"OPENAI_API_KEY\"\n )\n openai_organization = get_from_dict_or_env(\n values,\n \"openai_organization\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/moderation.html"} {"id": "e033f9fe64cf-1", "text": "values,\n \"openai_organization\",\n \"OPENAI_ORGANIZATION\",\n default=\"\",\n )\n try:\n import openai\n openai.api_key = openai_api_key\n if openai_organization:\n openai.organization = openai_organization\n values[\"client\"] = openai.Moderation\n except ImportError:\n raise ImportError(\n \"Could not import openai python package. 
\"\n \"Please install it with `pip install openai`.\"\n )\n return values\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return output key.\n :meta private:\n \"\"\"\n return [self.output_key]\n def _moderate(self, text: str, results: dict) -> str:\n if results[\"flagged\"]:\n error_str = \"Text was found that violates OpenAI's content policy.\"\n if self.error:\n raise ValueError(error_str)\n else:\n return error_str\n return text\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n text = inputs[self.input_key]\n results = self.client.create(text)\n output = self._moderate(text, results[\"results\"][0])\n return {self.output_key: output}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/moderation.html"} {"id": "cb9e98f888fc-0", "text": "Source code for langchain.chains.sequential\n\"\"\"Chain pipeline where the outputs of one step feed directly into next.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.input import get_color_mapping\n[docs]class SequentialChain(Chain):\n \"\"\"Chain where the outputs of one chain feed directly into next.\"\"\"\n chains: List[Chain]\n input_variables: List[str]\n output_variables: List[str] #: :meta private:\n return_all: bool = False\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return expected input keys to the chain.\n :meta private:\n \"\"\"\n return self.input_variables\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return output key.\n :meta private:\n \"\"\"\n return self.output_variables\n[docs] @root_validator(pre=True)\n def validate_chains(cls, values: Dict) -> Dict:\n \"\"\"Validate that the correct inputs exist for all chains.\"\"\"\n chains = values[\"chains\"]\n input_variables = values[\"input_variables\"]\n memory_keys = list()\n if \"memory\" in values and values[\"memory\"] is not None:\n \"\"\"Validate that prompt input variables are consistent.\"\"\"\n memory_keys = values[\"memory\"].memory_variables\n if set(input_variables).intersection(set(memory_keys)):\n overlapping_keys = set(input_variables) & set(memory_keys)\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/sequential.html"} {"id": "cb9e98f888fc-1", "text": "overlapping_keys = set(input_variables) & set(memory_keys)\n raise ValueError(\n f\"The the input key(s) {''.join(overlapping_keys)} are found \"\n f\"in the Memory keys ({memory_keys}) - please use input and \"\n f\"memory keys that don't overlap.\"\n )\n known_variables = set(input_variables + memory_keys)\n for chain in chains:\n missing_vars = set(chain.input_keys).difference(known_variables)\n if missing_vars:\n raise ValueError(\n f\"Missing required input keys: {missing_vars}, \"\n f\"only had {known_variables}\"\n )\n overlapping_keys = known_variables.intersection(chain.output_keys)\n if overlapping_keys:\n raise ValueError(\n f\"Chain returned keys that already exist: {overlapping_keys}\"\n )\n known_variables |= set(chain.output_keys)\n if 
\"output_variables\" not in values:\n if values.get(\"return_all\", False):\n output_keys = known_variables.difference(input_variables)\n else:\n output_keys = chains[-1].output_keys\n values[\"output_variables\"] = output_keys\n else:\n missing_vars = set(values[\"output_variables\"]).difference(known_variables)\n if missing_vars:\n raise ValueError(\n f\"Expected output variables that were not found: {missing_vars}.\"\n )\n return values\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n known_values = inputs.copy()\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n for i, chain in enumerate(self.chains):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/sequential.html"} {"id": "cb9e98f888fc-2", "text": "for i, chain in enumerate(self.chains):\n callbacks = _run_manager.get_child()\n outputs = chain(known_values, return_only_outputs=True, callbacks=callbacks)\n known_values.update(outputs)\n return {k: known_values[k] for k in self.output_variables}\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n known_values = inputs.copy()\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n for i, chain in enumerate(self.chains):\n outputs = await chain.acall(\n known_values, return_only_outputs=True, callbacks=callbacks\n )\n known_values.update(outputs)\n return {k: known_values[k] for k in self.output_variables}\n[docs]class SimpleSequentialChain(Chain):\n \"\"\"Simple chain where the outputs of one step feed directly into next.\"\"\"\n chains: List[Chain]\n strip_outputs: bool = False\n input_key: str = \"input\" #: :meta private:\n output_key: str = \"output\" #: :meta private:\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return output key.\n :meta private:\n \"\"\"\n return [self.output_key]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/sequential.html"} {"id": "cb9e98f888fc-3", "text": ":meta private:\n \"\"\"\n return [self.output_key]\n[docs] @root_validator()\n def validate_chains(cls, values: Dict) -> Dict:\n \"\"\"Validate that chains are all single input/output.\"\"\"\n for chain in values[\"chains\"]:\n if len(chain.input_keys) != 1:\n raise ValueError(\n \"Chains used in SimplePipeline should all have one input, got \"\n f\"{chain} with {len(chain.input_keys)} inputs.\"\n )\n if len(chain.output_keys) != 1:\n raise ValueError(\n \"Chains used in SimplePipeline should all have one output, got \"\n f\"{chain} with {len(chain.output_keys)} outputs.\"\n )\n return values\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n _input = inputs[self.input_key]\n color_mapping = get_color_mapping([str(i) for i in range(len(self.chains))])\n for i, chain in enumerate(self.chains):\n _input = chain.run(_input, callbacks=_run_manager.get_child(f\"step_{i+1}\"))\n if self.strip_outputs:\n _input = _input.strip()\n _run_manager.on_text(\n 
_input, color=color_mapping[str(i)], end=\"\\n\", verbose=self.verbose\n            )\n        return {self.output_key: _input}\n    async def _acall(\n        self,\n        inputs: Dict[str, Any],\n        run_manager: Optional[AsyncCallbackManagerForChainRun] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/sequential.html"} {"id": "cb9e98f888fc-4", "text": "run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n    ) -> Dict[str, Any]:\n        _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n        callbacks = _run_manager.get_child()\n        _input = inputs[self.input_key]\n        color_mapping = get_color_mapping([str(i) for i in range(len(self.chains))])\n        for i, chain in enumerate(self.chains):\n            _input = await chain.arun(_input, callbacks=callbacks)\n            if self.strip_outputs:\n                _input = _input.strip()\n            await _run_manager.on_text(\n                _input, color=color_mapping[str(i)], end=\"\\n\", verbose=self.verbose\n            )\n        return {self.output_key: _input}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/sequential.html"} {"id": "f84871e73673-0", "text": "Source code for langchain.chains.transform\n\"\"\"Chain that runs an arbitrary python function.\"\"\"\nfrom typing import Callable, Dict, List, Optional\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\n[docs]class TransformChain(Chain):\n    \"\"\"Chain that transforms chain output.\n    Example:\n        .. code-block:: python\n            from langchain import TransformChain\n            transform_chain = TransformChain(input_variables=[\"text\"],\n                output_variables=[\"entities\"], transform=func)\n    \"\"\"\n    input_variables: List[str]\n    output_variables: List[str]\n    transform: Callable[[Dict[str, str]], Dict[str, str]]\n    @property\n    def input_keys(self) -> List[str]:\n        \"\"\"Expect input keys.\n        :meta private:\n        \"\"\"\n        return self.input_variables\n    @property\n    def output_keys(self) -> List[str]:\n        \"\"\"Return output keys.\n        :meta private:\n        \"\"\"\n        return self.output_variables\n    def _call(\n        self,\n        inputs: Dict[str, str],\n        run_manager: Optional[CallbackManagerForChainRun] = None,\n    ) -> Dict[str, str]:\n        return self.transform(inputs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/transform.html"} {"id": "15c66a42dbf8-0", "text": "Source code for langchain.chains.llm_bash.prompt\n# flake8: noqa\nfrom __future__ import annotations\nimport re\nfrom typing import List\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.schema import BaseOutputParser, OutputParserException\n_PROMPT_TEMPLATE = \"\"\"If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put \"#!/bin/bash\" in your answer. Make sure to reason step by step, using this format:\nQuestion: \"copy the files in the directory named 'target' into a new directory at the same level as target called 'myNewDirectory'\"\nI need to take the following actions:\n- List all files in the directory\n- Create a new directory\n- Copy the files from the first directory into the second directory\n```bash\nls\nmkdir myNewDirectory\ncp -r target/* myNewDirectory\n```\nThat is the format. Begin!\nQuestion: {question}\"\"\"\n[docs]class BashOutputParser(BaseOutputParser):\n    \"\"\"Parser for bash output.\"\"\"\n[docs]    def parse(self, text: str) -> List[str]:\n        if \"```bash\" in text:\n            return self.get_code_blocks(text)\n        else:\n            raise OutputParserException(\n                f\"Failed to parse bash output. 
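A small sketch tying the two modules above together: two single-input/single-output ``TransformChain`` instances composed with ``SimpleSequentialChain`` (the lambdas and strings are illustrative):

.. code-block:: python

    from langchain.chains import SimpleSequentialChain, TransformChain

    # Each TransformChain exposes exactly one input and one output key,
    # which is what SimpleSequentialChain's validator requires.
    shout = TransformChain(
        input_variables=["text"],
        output_variables=["shouted"],
        transform=lambda inputs: {"shouted": inputs["text"].upper()},
    )
    trim = TransformChain(
        input_variables=["shouted"],
        output_variables=["trimmed"],
        transform=lambda inputs: {"trimmed": inputs["shouted"].strip()},
    )
    pipeline = SimpleSequentialChain(chains=[shout, trim])
    pipeline.run("  hello  ")  # -> "HELLO"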
Got: {text}\",\n )\n[docs] @staticmethod\n def get_code_blocks(t: str) -> List[str]:\n \"\"\"Get multiple code blocks from the LLM result.\"\"\"\n code_blocks: List[str] = []\n # Bash markdown code blocks\n pattern = re.compile(r\"```bash(.*?)(?:\\n\\s*)```\", re.DOTALL)\n for match in pattern.finditer(t):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_bash/prompt.html"} {"id": "15c66a42dbf8-1", "text": "for match in pattern.finditer(t):\n matched = match.group(1).strip()\n if matched:\n code_blocks.extend(\n [line for line in matched.split(\"\\n\") if line.strip()]\n )\n return code_blocks\n @property\n def _type(self) -> str:\n return \"bash\"\nPROMPT = PromptTemplate(\n input_variables=[\"question\"],\n template=_PROMPT_TEMPLATE,\n output_parser=BashOutputParser(),\n)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_bash/prompt.html"} {"id": "8bbac8a08644-0", "text": "Source code for langchain.chains.llm_bash.base\n\"\"\"Chain that interprets a prompt and executes bash code to perform bash operations.\"\"\"\nfrom __future__ import annotations\nimport logging\nimport warnings\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.llm_bash.prompt import PROMPT\nfrom langchain.schema import BasePromptTemplate, OutputParserException\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.utilities.bash import BashProcess\nlogger = logging.getLogger(__name__)\n[docs]class LLMBashChain(Chain):\n \"\"\"Chain that interprets a prompt and executes bash code to perform bash operations.\n Example:\n .. code-block:: python\n from langchain import LLMBashChain, OpenAI\n llm_bash = LLMBashChain.from_llm(OpenAI())\n \"\"\"\n llm_chain: LLMChain\n llm: Optional[BaseLanguageModel] = None\n \"\"\"[Deprecated] LLM wrapper to use.\"\"\"\n input_key: str = \"question\" #: :meta private:\n output_key: str = \"answer\" #: :meta private:\n prompt: BasePromptTemplate = PROMPT\n \"\"\"[Deprecated]\"\"\"\n bash_process: BashProcess = Field(default_factory=BashProcess) #: :meta private:\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n[docs] @root_validator(pre=True)\n def raise_deprecation(cls, values: Dict) -> Dict:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_bash/base.html"} {"id": "8bbac8a08644-1", "text": "def raise_deprecation(cls, values: Dict) -> Dict:\n if \"llm\" in values:\n warnings.warn(\n \"Directly instantiating an LLMBashChain with an llm is deprecated. 
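To make the parser's contract concrete, a sketch of how ``BashOutputParser`` handles a typical LLM reply (the reply text is illustrative):

.. code-block:: python

    from langchain.chains.llm_bash.prompt import BashOutputParser

    parser = BashOutputParser()
    reply = "I need to list files.\n```bash\nls\npwd\n```"
    parser.parse(reply)  # -> ["ls", "pwd"]
    # A reply containing no ```bash fence raises OutputParserException instead.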
\"\n \"Please instantiate with llm_chain or using the from_llm class method.\"\n )\n if \"llm_chain\" not in values and values[\"llm\"] is not None:\n prompt = values.get(\"prompt\", PROMPT)\n values[\"llm_chain\"] = LLMChain(llm=values[\"llm\"], prompt=prompt)\n return values\n[docs] @root_validator\n def validate_prompt(cls, values: Dict) -> Dict:\n if values[\"llm_chain\"].prompt.output_parser is None:\n raise ValueError(\n \"The prompt used by llm_chain is expected to have an output_parser.\"\n )\n return values\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Expect output key.\n :meta private:\n \"\"\"\n return [self.output_key]\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n _run_manager.on_text(inputs[self.input_key], verbose=self.verbose)\n t = self.llm_chain.predict(\n question=inputs[self.input_key], callbacks=_run_manager.get_child()\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_bash/base.html"} {"id": "8bbac8a08644-2", "text": "question=inputs[self.input_key], callbacks=_run_manager.get_child()\n )\n _run_manager.on_text(t, color=\"green\", verbose=self.verbose)\n t = t.strip()\n try:\n parser = self.llm_chain.prompt.output_parser\n command_list = parser.parse(t) # type: ignore[union-attr]\n except OutputParserException as e:\n _run_manager.on_chain_error(e, verbose=self.verbose)\n raise e\n if self.verbose:\n _run_manager.on_text(\"\\nCode: \", verbose=self.verbose)\n _run_manager.on_text(\n str(command_list), color=\"yellow\", verbose=self.verbose\n )\n output = self.bash_process.run(command_list)\n _run_manager.on_text(\"\\nAnswer: \", verbose=self.verbose)\n _run_manager.on_text(output, color=\"yellow\", verbose=self.verbose)\n return {self.output_key: output}\n @property\n def _chain_type(self) -> str:\n return \"llm_bash_chain\"\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n prompt: BasePromptTemplate = PROMPT,\n **kwargs: Any,\n ) -> LLMBashChain:\n llm_chain = LLMChain(llm=llm, prompt=prompt)\n return cls(llm_chain=llm_chain, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_bash/base.html"} {"id": "9337aa86b748-0", "text": "Source code for langchain.chains.api.base\n\"\"\"Chain that makes API calls and summarizes the responses to answer a question.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Field, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n)\nfrom langchain.chains.api.prompt import API_RESPONSE_PROMPT, API_URL_PROMPT\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.requests import TextRequestsWrapper\nfrom langchain.schema import BasePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]class APIChain(Chain):\n \"\"\"Chain that makes API calls and summarizes the responses to answer a question.\"\"\"\n api_request_chain: LLMChain\n api_answer_chain: LLMChain\n requests_wrapper: TextRequestsWrapper = Field(exclude=True)\n api_docs: str\n question_key: str = \"question\" #: :meta private:\n output_key: str = \"output\" #: :meta private:\n 
@property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.question_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Expect output key.\n :meta private:\n \"\"\"\n return [self.output_key]\n[docs] @root_validator(pre=True)\n def validate_api_request_prompt(cls, values: Dict) -> Dict:\n \"\"\"Check that api request prompt expects the right variables.\"\"\"\n input_vars = values[\"api_request_chain\"].prompt.input_variables\n expected_vars = {\"question\", \"api_docs\"}\n if set(input_vars) != expected_vars:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/api/base.html"} {"id": "9337aa86b748-1", "text": "if set(input_vars) != expected_vars:\n raise ValueError(\n f\"Input variables should be {expected_vars}, got {input_vars}\"\n )\n return values\n[docs] @root_validator(pre=True)\n def validate_api_answer_prompt(cls, values: Dict) -> Dict:\n \"\"\"Check that api answer prompt expects the right variables.\"\"\"\n input_vars = values[\"api_answer_chain\"].prompt.input_variables\n expected_vars = {\"question\", \"api_docs\", \"api_url\", \"api_response\"}\n if set(input_vars) != expected_vars:\n raise ValueError(\n f\"Input variables should be {expected_vars}, got {input_vars}\"\n )\n return values\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n question = inputs[self.question_key]\n api_url = self.api_request_chain.predict(\n question=question,\n api_docs=self.api_docs,\n callbacks=_run_manager.get_child(),\n )\n _run_manager.on_text(api_url, color=\"green\", end=\"\\n\", verbose=self.verbose)\n api_url = api_url.strip()\n api_response = self.requests_wrapper.get(api_url)\n _run_manager.on_text(\n api_response, color=\"yellow\", end=\"\\n\", verbose=self.verbose\n )\n answer = self.api_answer_chain.predict(\n question=question,\n api_docs=self.api_docs,\n api_url=api_url,\n api_response=api_response,\n callbacks=_run_manager.get_child(),\n )\n return {self.output_key: answer}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/api/base.html"} {"id": "9337aa86b748-2", "text": ")\n return {self.output_key: answer}\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n question = inputs[self.question_key]\n api_url = await self.api_request_chain.apredict(\n question=question,\n api_docs=self.api_docs,\n callbacks=_run_manager.get_child(),\n )\n await _run_manager.on_text(\n api_url, color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n api_url = api_url.strip()\n api_response = await self.requests_wrapper.aget(api_url)\n await _run_manager.on_text(\n api_response, color=\"yellow\", end=\"\\n\", verbose=self.verbose\n )\n answer = await self.api_answer_chain.apredict(\n question=question,\n api_docs=self.api_docs,\n api_url=api_url,\n api_response=api_response,\n callbacks=_run_manager.get_child(),\n )\n return {self.output_key: answer}\n[docs] @classmethod\n def from_llm_and_api_docs(\n cls,\n llm: BaseLanguageModel,\n api_docs: str,\n headers: Optional[dict] = None,\n api_url_prompt: BasePromptTemplate = API_URL_PROMPT,\n api_response_prompt: BasePromptTemplate = API_RESPONSE_PROMPT,\n **kwargs: Any,\n ) -> APIChain:\n \"\"\"Load chain 
from just an LLM and the api docs.\"\"\"\n get_request_chain = LLMChain(llm=llm, prompt=api_url_prompt)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/api/base.html"} {"id": "9337aa86b748-3", "text": "requests_wrapper = TextRequestsWrapper(headers=headers)\n get_answer_chain = LLMChain(llm=llm, prompt=api_response_prompt)\n return cls(\n api_request_chain=get_request_chain,\n api_answer_chain=get_answer_chain,\n requests_wrapper=requests_wrapper,\n api_docs=api_docs,\n **kwargs,\n )\n @property\n def _chain_type(self) -> str:\n return \"api_chain\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/api/base.html"} {"id": "48e986f39841-0", "text": "Source code for langchain.chains.api.openapi.chain\n\"\"\"Chain that makes API calls and summarizes the responses to answer a question.\"\"\"\nfrom __future__ import annotations\nimport json\nfrom typing import Any, Dict, List, NamedTuple, Optional, cast\nfrom pydantic import BaseModel, Field\nfrom requests import Response\nfrom langchain.callbacks.manager import CallbackManagerForChainRun, Callbacks\nfrom langchain.chains.api.openapi.requests_chain import APIRequesterChain\nfrom langchain.chains.api.openapi.response_chain import APIResponderChain\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.requests import Requests\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.tools.openapi.utils.api_models import APIOperation\nclass _ParamMapping(NamedTuple):\n \"\"\"Mapping from parameter name to parameter value.\"\"\"\n query_params: List[str]\n body_params: List[str]\n path_params: List[str]\n[docs]class OpenAPIEndpointChain(Chain, BaseModel):\n \"\"\"Chain interacts with an OpenAPI endpoint using natural language.\"\"\"\n api_request_chain: LLMChain\n api_response_chain: Optional[LLMChain]\n api_operation: APIOperation\n requests: Requests = Field(exclude=True, default_factory=Requests)\n param_mapping: _ParamMapping = Field(alias=\"param_mapping\")\n return_intermediate_steps: bool = False\n instructions_key: str = \"instructions\" #: :meta private:\n output_key: str = \"output\" #: :meta private:\n max_text_length: Optional[int] = Field(ge=0) #: :meta private:\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.instructions_key]\n @property", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/chain.html"} {"id": "48e986f39841-1", "text": "\"\"\"\n return [self.instructions_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Expect output key.\n :meta private:\n \"\"\"\n if not self.return_intermediate_steps:\n return [self.output_key]\n else:\n return [self.output_key, \"intermediate_steps\"]\n def _construct_path(self, args: Dict[str, str]) -> str:\n \"\"\"Construct the path from the deserialized input.\"\"\"\n path = self.api_operation.base_url + self.api_operation.path\n for param in self.param_mapping.path_params:\n path = path.replace(f\"{{{param}}}\", str(args.pop(param, \"\")))\n return path\n def _extract_query_params(self, args: Dict[str, str]) -> Dict[str, str]:\n \"\"\"Extract the query params from the deserialized input.\"\"\"\n query_params = {}\n for param in self.param_mapping.query_params:\n if param in args:\n query_params[param] = args.pop(param)\n return query_params\n def _extract_body_params(self, args: Dict[str, str]) -> Optional[Dict[str, str]]:\n \"\"\"Extract the 
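A sketch of the ``from_llm_and_api_docs`` constructor just defined, using the Open-Meteo API docs bundled with the library (the question and model settings are illustrative):

.. code-block:: python

    from langchain import OpenAI
    from langchain.chains import APIChain
    from langchain.chains.api import open_meteo_docs

    chain = APIChain.from_llm_and_api_docs(
        OpenAI(temperature=0),
        open_meteo_docs.OPEN_METEO_DOCS,
        verbose=True,
    )
    # First LLM call builds the request URL; second summarizes the response.
    chain.run("What is the current temperature in Munich, Germany, in degrees Celsius?")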
request body params from the deserialized input.\"\"\"\n body_params = None\n if self.param_mapping.body_params:\n body_params = {}\n for param in self.param_mapping.body_params:\n if param in args:\n body_params[param] = args.pop(param)\n return body_params\n[docs] def deserialize_json_input(self, serialized_args: str) -> dict:\n \"\"\"Use the serialized typescript dictionary.\n Resolve the path, query params dict, and optional requestBody dict.\n \"\"\"\n args: dict = json.loads(serialized_args)\n path = self._construct_path(args)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/chain.html"} {"id": "48e986f39841-2", "text": "path = self._construct_path(args)\n body_params = self._extract_body_params(args)\n query_params = self._extract_query_params(args)\n return {\n \"url\": path,\n \"data\": body_params,\n \"params\": query_params,\n }\n def _get_output(self, output: str, intermediate_steps: dict) -> dict:\n \"\"\"Return the output from the API call.\"\"\"\n if self.return_intermediate_steps:\n return {\n self.output_key: output,\n \"intermediate_steps\": intermediate_steps,\n }\n else:\n return {self.output_key: output}\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n intermediate_steps = {}\n instructions = inputs[self.instructions_key]\n instructions = instructions[: self.max_text_length]\n _api_arguments = self.api_request_chain.predict_and_parse(\n instructions=instructions, callbacks=_run_manager.get_child()\n )\n api_arguments = cast(str, _api_arguments)\n intermediate_steps[\"request_args\"] = api_arguments\n _run_manager.on_text(\n api_arguments, color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n if api_arguments.startswith(\"ERROR\"):\n return self._get_output(api_arguments, intermediate_steps)\n elif api_arguments.startswith(\"MESSAGE:\"):\n return self._get_output(\n api_arguments[len(\"MESSAGE:\") :], intermediate_steps\n )\n try:\n request_args = self.deserialize_json_input(api_arguments)\n method = getattr(self.requests, self.api_operation.method.value)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/chain.html"} {"id": "48e986f39841-3", "text": "method = getattr(self.requests, self.api_operation.method.value)\n api_response: Response = method(**request_args)\n if api_response.status_code != 200:\n method_str = str(self.api_operation.method.value)\n response_text = (\n f\"{api_response.status_code}: {api_response.reason}\"\n + f\"\\nFor {method_str.upper()} {request_args['url']}\\n\"\n + f\"Called with args: {request_args['params']}\"\n )\n else:\n response_text = api_response.text\n except Exception as e:\n response_text = f\"Error with message {str(e)}\"\n response_text = response_text[: self.max_text_length]\n intermediate_steps[\"response_text\"] = response_text\n _run_manager.on_text(\n response_text, color=\"blue\", end=\"\\n\", verbose=self.verbose\n )\n if self.api_response_chain is not None:\n _answer = self.api_response_chain.predict_and_parse(\n response=response_text,\n instructions=instructions,\n callbacks=_run_manager.get_child(),\n )\n answer = cast(str, _answer)\n _run_manager.on_text(answer, color=\"yellow\", end=\"\\n\", verbose=self.verbose)\n return self._get_output(answer, intermediate_steps)\n else:\n return self._get_output(response_text, intermediate_steps)\n[docs] @classmethod\n def 
from_url_and_method(\n cls,\n spec_url: str,\n path: str,\n method: str,\n llm: BaseLanguageModel,\n requests: Optional[Requests] = None,\n return_intermediate_steps: bool = False,\n **kwargs: Any\n # TODO: Handle async\n ) -> \"OpenAPIEndpointChain\":", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/chain.html"} {"id": "48e986f39841-4", "text": "# TODO: Handle async\n ) -> \"OpenAPIEndpointChain\":\n \"\"\"Create an OpenAPIEndpoint from a spec at the specified url.\"\"\"\n operation = APIOperation.from_openapi_url(spec_url, path, method)\n return cls.from_api_operation(\n operation,\n requests=requests,\n llm=llm,\n return_intermediate_steps=return_intermediate_steps,\n **kwargs,\n )\n[docs] @classmethod\n def from_api_operation(\n cls,\n operation: APIOperation,\n llm: BaseLanguageModel,\n requests: Optional[Requests] = None,\n verbose: bool = False,\n return_intermediate_steps: bool = False,\n raw_response: bool = False,\n callbacks: Callbacks = None,\n **kwargs: Any\n # TODO: Handle async\n ) -> \"OpenAPIEndpointChain\":\n \"\"\"Create an OpenAPIEndpointChain from an operation and a spec.\"\"\"\n param_mapping = _ParamMapping(\n query_params=operation.query_params,\n body_params=operation.body_params,\n path_params=operation.path_params,\n )\n requests_chain = APIRequesterChain.from_llm_and_typescript(\n llm,\n typescript_definition=operation.to_typescript(),\n verbose=verbose,\n callbacks=callbacks,\n )\n if raw_response:\n response_chain = None\n else:\n response_chain = APIResponderChain.from_llm(\n llm, verbose=verbose, callbacks=callbacks\n )\n _requests = requests or Requests()\n return cls(\n api_request_chain=requests_chain,\n api_response_chain=response_chain,\n api_operation=operation,\n requests=_requests,\n param_mapping=param_mapping,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/chain.html"} {"id": "48e986f39841-5", "text": "requests=_requests,\n param_mapping=param_mapping,\n verbose=verbose,\n return_intermediate_steps=return_intermediate_steps,\n callbacks=callbacks,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/chain.html"} {"id": "5ceea30fac7a-0", "text": "Source code for langchain.chains.api.openapi.response_chain\n\"\"\"Response parser.\"\"\"\nimport json\nimport re\nfrom typing import Any\nfrom langchain.chains.api.openapi.prompts import RESPONSE_TEMPLATE\nfrom langchain.chains.llm import LLMChain\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.schema import BaseOutputParser\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]class APIResponderOutputParser(BaseOutputParser):\n \"\"\"Parse the response and error tags.\"\"\"\n def _load_json_block(self, serialized_block: str) -> str:\n try:\n response_content = json.loads(serialized_block, strict=False)\n return response_content.get(\"response\", \"ERROR parsing response.\")\n except json.JSONDecodeError:\n return \"ERROR parsing response.\"\n except:\n raise\n[docs] def parse(self, llm_output: str) -> str:\n \"\"\"Parse the response and error tags.\"\"\"\n json_match = re.search(r\"```json(.*?)```\", llm_output, re.DOTALL)\n if json_match:\n return self._load_json_block(json_match.group(1).strip())\n else:\n raise ValueError(f\"No response found in output: {llm_output}.\")\n @property\n def _type(self) -> str:\n return \"api_responder\"\n[docs]class APIResponderChain(LLMChain):\n \"\"\"Get the response 
parser.\"\"\"\n[docs] @classmethod\n def from_llm(\n cls, llm: BaseLanguageModel, verbose: bool = True, **kwargs: Any\n ) -> LLMChain:\n \"\"\"Get the response parser.\"\"\"\n output_parser = APIResponderOutputParser()\n prompt = PromptTemplate(\n template=RESPONSE_TEMPLATE,\n output_parser=output_parser,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/response_chain.html"} {"id": "5ceea30fac7a-1", "text": "template=RESPONSE_TEMPLATE,\n output_parser=output_parser,\n input_variables=[\"response\", \"instructions\"],\n )\n return cls(prompt=prompt, llm=llm, verbose=verbose, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/response_chain.html"} {"id": "13813f44e0a7-0", "text": "Source code for langchain.chains.api.openapi.requests_chain\n\"\"\"request parser.\"\"\"\nimport json\nimport re\nfrom typing import Any\nfrom langchain.chains.api.openapi.prompts import REQUEST_TEMPLATE\nfrom langchain.chains.llm import LLMChain\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.schema import BaseOutputParser\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]class APIRequesterOutputParser(BaseOutputParser):\n \"\"\"Parse the request and error tags.\"\"\"\n def _load_json_block(self, serialized_block: str) -> str:\n try:\n return json.dumps(json.loads(serialized_block, strict=False))\n except json.JSONDecodeError:\n return \"ERROR serializing request.\"\n[docs] def parse(self, llm_output: str) -> str:\n \"\"\"Parse the request and error tags.\"\"\"\n json_match = re.search(r\"```json(.*?)```\", llm_output, re.DOTALL)\n if json_match:\n return self._load_json_block(json_match.group(1).strip())\n message_match = re.search(r\"```text(.*?)```\", llm_output, re.DOTALL)\n if message_match:\n return f\"MESSAGE: {message_match.group(1).strip()}\"\n return \"ERROR making request\"\n @property\n def _type(self) -> str:\n return \"api_requester\"\n[docs]class APIRequesterChain(LLMChain):\n \"\"\"Get the request parser.\"\"\"\n[docs] @classmethod\n def from_llm_and_typescript(\n cls,\n llm: BaseLanguageModel,\n typescript_definition: str,\n verbose: bool = True,\n **kwargs: Any,\n ) -> LLMChain:\n \"\"\"Get the request parser.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/requests_chain.html"} {"id": "13813f44e0a7-1", "text": ") -> LLMChain:\n \"\"\"Get the request parser.\"\"\"\n output_parser = APIRequesterOutputParser()\n prompt = PromptTemplate(\n template=REQUEST_TEMPLATE,\n output_parser=output_parser,\n partial_variables={\"schema\": typescript_definition},\n input_variables=[\"instructions\"],\n )\n return cls(prompt=prompt, llm=llm, verbose=verbose, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/requests_chain.html"} {"id": "b014d86906e2-0", "text": "Source code for langchain.chains.retrieval_qa.base\n\"\"\"Chain for question-answering against a vector database.\"\"\"\nfrom __future__ import annotations\nimport inspect\nimport warnings\nfrom abc import abstractmethod\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.chains.combine_documents.base import BaseCombineDocumentsChain\nfrom langchain.chains.combine_documents.stuff import 
StuffDocumentsChain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.question_answering import load_qa_chain\nfrom langchain.chains.question_answering.stuff_prompt import PROMPT_SELECTOR\nfrom langchain.prompts import PromptTemplate\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.vectorstores.base import VectorStore\n[docs]class BaseRetrievalQA(Chain):\n combine_documents_chain: BaseCombineDocumentsChain\n \"\"\"Chain to use to combine the documents.\"\"\"\n input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n return_source_documents: bool = False\n \"\"\"Return the source documents.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n allow_population_by_field_name = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/retrieval_qa/base.html"} {"id": "b014d86906e2-1", "text": "@property\n def output_keys(self) -> List[str]:\n \"\"\"Return the output keys.\n :meta private:\n \"\"\"\n _output_keys = [self.output_key]\n if self.return_source_documents:\n _output_keys = _output_keys + [\"source_documents\"]\n return _output_keys\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n prompt: Optional[PromptTemplate] = None,\n **kwargs: Any,\n ) -> BaseRetrievalQA:\n \"\"\"Initialize from LLM.\"\"\"\n _prompt = prompt or PROMPT_SELECTOR.get_prompt(llm)\n llm_chain = LLMChain(llm=llm, prompt=_prompt)\n document_prompt = PromptTemplate(\n input_variables=[\"page_content\"], template=\"Context:\\n{page_content}\"\n )\n combine_documents_chain = StuffDocumentsChain(\n llm_chain=llm_chain,\n document_variable_name=\"context\",\n document_prompt=document_prompt,\n )\n return cls(combine_documents_chain=combine_documents_chain, **kwargs)\n[docs] @classmethod\n def from_chain_type(\n cls,\n llm: BaseLanguageModel,\n chain_type: str = \"stuff\",\n chain_type_kwargs: Optional[dict] = None,\n **kwargs: Any,\n ) -> BaseRetrievalQA:\n \"\"\"Load chain from chain type.\"\"\"\n _chain_type_kwargs = chain_type_kwargs or {}\n combine_documents_chain = load_qa_chain(\n llm, chain_type=chain_type, **_chain_type_kwargs\n )\n return cls(combine_documents_chain=combine_documents_chain, **kwargs)\n @abstractmethod\n def _get_docs(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/retrieval_qa/base.html"} {"id": "b014d86906e2-2", "text": "@abstractmethod\n def _get_docs(\n self,\n question: str,\n *,\n run_manager: CallbackManagerForChainRun,\n ) -> List[Document]:\n \"\"\"Get documents to do question answering over.\"\"\"\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"Run get_relevant_text and llm on input query.\n If chain has 'return_source_documents' as 'True', returns\n the retrieved documents as well under the key 'source_documents'.\n Example:\n .. 
code-block:: python\n res = indexqa({'query': 'This is my query'})\n answer, docs = res['result'], res['source_documents']\n \"\"\"\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n question = inputs[self.input_key]\n accepts_run_manager = (\n \"run_manager\" in inspect.signature(self._get_docs).parameters\n )\n if accepts_run_manager:\n docs = self._get_docs(question, run_manager=_run_manager)\n else:\n docs = self._get_docs(question) # type: ignore[call-arg]\n answer = self.combine_documents_chain.run(\n input_documents=docs, question=question, callbacks=_run_manager.get_child()\n )\n if self.return_source_documents:\n return {self.output_key: answer, \"source_documents\": docs}\n else:\n return {self.output_key: answer}\n @abstractmethod\n async def _aget_docs(\n self,\n question: str,\n *,\n run_manager: AsyncCallbackManagerForChainRun,\n ) -> List[Document]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/retrieval_qa/base.html"} {"id": "b014d86906e2-3", "text": "run_manager: AsyncCallbackManagerForChainRun,\n ) -> List[Document]:\n \"\"\"Get documents to do question answering over.\"\"\"\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"Run get_relevant_text and llm on input query.\n If chain has 'return_source_documents' as 'True', returns\n the retrieved documents as well under the key 'source_documents'.\n Example:\n .. code-block:: python\n res = indexqa({'query': 'This is my query'})\n answer, docs = res['result'], res['source_documents']\n \"\"\"\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n question = inputs[self.input_key]\n accepts_run_manager = (\n \"run_manager\" in inspect.signature(self._aget_docs).parameters\n )\n if accepts_run_manager:\n docs = await self._aget_docs(question, run_manager=_run_manager)\n else:\n docs = await self._aget_docs(question) # type: ignore[call-arg]\n answer = await self.combine_documents_chain.arun(\n input_documents=docs, question=question, callbacks=_run_manager.get_child()\n )\n if self.return_source_documents:\n return {self.output_key: answer, \"source_documents\": docs}\n else:\n return {self.output_key: answer}\n[docs]class RetrievalQA(BaseRetrievalQA):\n \"\"\"Chain for question-answering against an index.\n Example:\n .. 
code-block:: python\n            from langchain.llms import OpenAI\n            from langchain.chains import RetrievalQA", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/retrieval_qa/base.html"} {"id": "b014d86906e2-4", "text": "from langchain.chains import RetrievalQA\n            from langchain.vectorstores import FAISS\n            from langchain.vectorstores.base import VectorStoreRetriever\n            retriever = VectorStoreRetriever(vectorstore=FAISS(...))\n            retrievalQA = RetrievalQA.from_llm(llm=OpenAI(), retriever=retriever)\n    \"\"\"\n    retriever: BaseRetriever = Field(exclude=True)\n    def _get_docs(\n        self,\n        question: str,\n        *,\n        run_manager: CallbackManagerForChainRun,\n    ) -> List[Document]:\n        \"\"\"Get docs.\"\"\"\n        return self.retriever.get_relevant_documents(\n            question, callbacks=run_manager.get_child()\n        )\n    async def _aget_docs(\n        self,\n        question: str,\n        *,\n        run_manager: AsyncCallbackManagerForChainRun,\n    ) -> List[Document]:\n        \"\"\"Get docs.\"\"\"\n        return await self.retriever.aget_relevant_documents(\n            question, callbacks=run_manager.get_child()\n        )\n    @property\n    def _chain_type(self) -> str:\n        \"\"\"Return the chain type.\"\"\"\n        return \"retrieval_qa\"\n[docs]class VectorDBQA(BaseRetrievalQA):\n    \"\"\"Chain for question-answering against a vector database.\"\"\"\n    vectorstore: VectorStore = Field(exclude=True, alias=\"vectorstore\")\n    \"\"\"Vector Database to connect to.\"\"\"\n    k: int = 4\n    \"\"\"Number of documents to query for.\"\"\"\n    search_type: str = \"similarity\"\n    \"\"\"Search type to use over vectorstore. `similarity` or `mmr`.\"\"\"\n    search_kwargs: Dict[str, Any] = Field(default_factory=dict)\n    \"\"\"Extra search args.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/retrieval_qa/base.html"} {"id": "b014d86906e2-5", "text": "\"\"\"Extra search args.\"\"\"\n[docs]    @root_validator()\n    def raise_deprecation(cls, values: Dict) -> Dict:\n        warnings.warn(\n            \"`VectorDBQA` is deprecated - \"\n            \"please use `from langchain.chains import RetrievalQA`\"\n        )\n        return values\n[docs]    @root_validator()\n    def validate_search_type(cls, values: Dict) -> Dict:\n        \"\"\"Validate search type.\"\"\"\n        if \"search_type\" in values:\n            search_type = values[\"search_type\"]\n            if search_type not in (\"similarity\", \"mmr\"):\n                raise ValueError(f\"search_type of {search_type} not allowed.\")\n        return values\n    def _get_docs(\n        self,\n        question: str,\n        *,\n        run_manager: CallbackManagerForChainRun,\n    ) -> List[Document]:\n        \"\"\"Get docs.\"\"\"\n        if self.search_type == \"similarity\":\n            docs = self.vectorstore.similarity_search(\n                question, k=self.k, **self.search_kwargs\n            )\n        elif self.search_type == \"mmr\":\n            docs = self.vectorstore.max_marginal_relevance_search(\n                question, k=self.k, **self.search_kwargs\n            )\n        else:\n            raise ValueError(f\"search_type of {self.search_type} not allowed.\")\n        return docs\n    async def _aget_docs(\n        self,\n        question: str,\n        *,\n        run_manager: AsyncCallbackManagerForChainRun,\n    ) -> List[Document]:\n        \"\"\"Get docs.\"\"\"\n        raise NotImplementedError(\"VectorDBQA does not support async\")\n    @property\n    def _chain_type(self) -> str:\n        \"\"\"Return the chain type.\"\"\"\n        return \"vector_db_qa\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/retrieval_qa/base.html"} {"id": "fb5adb5ba392-0", "text": "Source code for langchain.chains.sql_database.base\n\"\"\"Chain for interacting with SQL Database.\"\"\"\nfrom __future__ import annotations\nimport warnings\nfrom typing import Any, Dict, List, Optional\nfrom pydantic 
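A compact end-to-end sketch of ``RetrievalQA.from_chain_type`` as defined above (the embedded text, model, and store choices are illustrative):

.. code-block:: python

    from langchain.chains import RetrievalQA
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.llms import OpenAI
    from langchain.vectorstores import FAISS

    # Build a tiny in-memory index to retrieve from.
    store = FAISS.from_texts(
        ["LangChain chains compose LLM calls with tools and data."],
        OpenAIEmbeddings(),
    )
    qa = RetrievalQA.from_chain_type(
        llm=OpenAI(),
        chain_type="stuff",
        retriever=store.as_retriever(),
        return_source_documents=True,
    )
    res = qa({"query": "What do chains compose?"})
    answer, docs = res["result"], res["source_documents"]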
import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.sql_database.prompt import DECIDER_PROMPT, PROMPT, SQL_PROMPTS\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.schema import BasePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.sql_database import SQLDatabase\nfrom langchain.tools.sql_database.prompt import QUERY_CHECKER\nINTERMEDIATE_STEPS_KEY = \"intermediate_steps\"\n[docs]class SQLDatabaseChain(Chain):\n \"\"\"Chain for interacting with SQL Database.\n Example:\n .. code-block:: python\n from langchain import SQLDatabaseChain, OpenAI, SQLDatabase\n db = SQLDatabase(...)\n db_chain = SQLDatabaseChain.from_llm(OpenAI(), db)\n \"\"\"\n llm_chain: LLMChain\n llm: Optional[BaseLanguageModel] = None\n \"\"\"[Deprecated] LLM wrapper to use.\"\"\"\n database: SQLDatabase = Field(exclude=True)\n \"\"\"SQL Database to connect to.\"\"\"\n prompt: Optional[BasePromptTemplate] = None\n \"\"\"[Deprecated] Prompt to use to translate natural language to SQL.\"\"\"\n top_k: int = 5\n \"\"\"Number of results to return from the query\"\"\"\n input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n return_intermediate_steps: bool = False", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html"} {"id": "fb5adb5ba392-1", "text": "return_intermediate_steps: bool = False\n \"\"\"Whether or not to return the intermediate steps along with the final answer.\"\"\"\n return_direct: bool = False\n \"\"\"Whether or not to return the result of querying the SQL table directly.\"\"\"\n use_query_checker: bool = False\n \"\"\"Whether or not the query checker tool should be used to attempt \n to fix the initial SQL from the LLM.\"\"\"\n query_checker_prompt: Optional[BasePromptTemplate] = None\n \"\"\"The prompt template that should be used by the query checker\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n[docs] @root_validator(pre=True)\n def raise_deprecation(cls, values: Dict) -> Dict:\n if \"llm\" in values:\n warnings.warn(\n \"Directly instantiating an SQLDatabaseChain with an llm is deprecated. 
\"\n \"Please instantiate with llm_chain argument or using the from_llm \"\n \"class method.\"\n )\n if \"llm_chain\" not in values and values[\"llm\"] is not None:\n database = values[\"database\"]\n prompt = values.get(\"prompt\") or SQL_PROMPTS.get(\n database.dialect, PROMPT\n )\n values[\"llm_chain\"] = LLMChain(llm=values[\"llm\"], prompt=prompt)\n return values\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the singular input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the singular output key.\n :meta private:\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html"} {"id": "fb5adb5ba392-2", "text": "\"\"\"Return the singular output key.\n :meta private:\n \"\"\"\n if not self.return_intermediate_steps:\n return [self.output_key]\n else:\n return [self.output_key, INTERMEDIATE_STEPS_KEY]\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n input_text = f\"{inputs[self.input_key]}\\nSQLQuery:\"\n _run_manager.on_text(input_text, verbose=self.verbose)\n # If not present, then defaults to None which is all tables.\n table_names_to_use = inputs.get(\"table_names_to_use\")\n table_info = self.database.get_table_info(table_names=table_names_to_use)\n llm_inputs = {\n \"input\": input_text,\n \"top_k\": str(self.top_k),\n \"dialect\": self.database.dialect,\n \"table_info\": table_info,\n \"stop\": [\"\\nSQLResult:\"],\n }\n intermediate_steps: List = []\n try:\n intermediate_steps.append(llm_inputs) # input: sql generation\n sql_cmd = self.llm_chain.predict(\n callbacks=_run_manager.get_child(),\n **llm_inputs,\n ).strip()\n if not self.use_query_checker:\n _run_manager.on_text(sql_cmd, color=\"green\", verbose=self.verbose)\n intermediate_steps.append(\n sql_cmd\n ) # output: sql generation (no checker)\n intermediate_steps.append({\"sql_cmd\": sql_cmd}) # input: sql exec\n result = self.database.run(sql_cmd)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html"} {"id": "fb5adb5ba392-3", "text": "result = self.database.run(sql_cmd)\n intermediate_steps.append(str(result)) # output: sql exec\n else:\n query_checker_prompt = self.query_checker_prompt or PromptTemplate(\n template=QUERY_CHECKER, input_variables=[\"query\", \"dialect\"]\n )\n query_checker_chain = LLMChain(\n llm=self.llm_chain.llm, prompt=query_checker_prompt\n )\n query_checker_inputs = {\n \"query\": sql_cmd,\n \"dialect\": self.database.dialect,\n }\n checked_sql_command: str = query_checker_chain.predict(\n callbacks=_run_manager.get_child(), **query_checker_inputs\n ).strip()\n intermediate_steps.append(\n checked_sql_command\n ) # output: sql generation (checker)\n _run_manager.on_text(\n checked_sql_command, color=\"green\", verbose=self.verbose\n )\n intermediate_steps.append(\n {\"sql_cmd\": checked_sql_command}\n ) # input: sql exec\n result = self.database.run(checked_sql_command)\n intermediate_steps.append(str(result)) # output: sql exec\n sql_cmd = checked_sql_command\n _run_manager.on_text(\"\\nSQLResult: \", verbose=self.verbose)\n _run_manager.on_text(result, color=\"yellow\", verbose=self.verbose)\n # If return direct, we just set the final result equal to\n # the result of the sql query result, otherwise try to get a human readable\n # final answer\n 
if self.return_direct:\n final_result = result\n else:\n _run_manager.on_text(\"\\nAnswer:\", verbose=self.verbose)\n input_text += f\"{sql_cmd}\\nSQLResult: {result}\\nAnswer:\"\n llm_inputs[\"input\"] = input_text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html"} {"id": "fb5adb5ba392-4", "text": "llm_inputs[\"input\"] = input_text\n intermediate_steps.append(llm_inputs) # input: final answer\n final_result = self.llm_chain.predict(\n callbacks=_run_manager.get_child(),\n **llm_inputs,\n ).strip()\n intermediate_steps.append(final_result) # output: final answer\n _run_manager.on_text(final_result, color=\"green\", verbose=self.verbose)\n chain_result: Dict[str, Any] = {self.output_key: final_result}\n if self.return_intermediate_steps:\n chain_result[INTERMEDIATE_STEPS_KEY] = intermediate_steps\n return chain_result\n except Exception as exc:\n # Append intermediate steps to exception, to aid in logging and later\n # improvement of few shot prompt seeds\n exc.intermediate_steps = intermediate_steps # type: ignore\n raise exc\n @property\n def _chain_type(self) -> str:\n return \"sql_database_chain\"\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n db: SQLDatabase,\n prompt: Optional[BasePromptTemplate] = None,\n **kwargs: Any,\n ) -> SQLDatabaseChain:\n prompt = prompt or SQL_PROMPTS.get(db.dialect, PROMPT)\n llm_chain = LLMChain(llm=llm, prompt=prompt)\n return cls(llm_chain=llm_chain, database=db, **kwargs)\n[docs]class SQLDatabaseSequentialChain(Chain):\n \"\"\"Chain for querying a SQL database, implemented as a sequential chain.\n The chain is as follows:\n 1. Based on the query, determine which tables to use.\n 2. Based on those tables, call the normal SQL database chain.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html"}
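
A minimal usage sketch for the two chains above; the SQLite URI, the questions, and the flags are illustrative placeholders rather than required values.

.. code-block:: python

    from langchain import OpenAI, SQLDatabase
    from langchain.chains import SQLDatabaseChain, SQLDatabaseSequentialChain

    # Placeholder database; any SQLAlchemy-compatible URI works here.
    db = SQLDatabase.from_uri("sqlite:///Chinook.db")
    llm = OpenAI(temperature=0)

    # One LLM call writes the SQL (stopping at "\nSQLResult:"), the query is
    # executed, then a second LLM call phrases the answer unless
    # return_direct=True. use_query_checker enables the checker pass in _call.
    chain = SQLDatabaseChain.from_llm(llm, db, use_query_checker=True)
    chain.run("How many employees are there?")

    # The sequential variant first picks relevant tables, then delegates to
    # the chain above with table_names_to_use restricted accordingly.
    seq_chain = SQLDatabaseSequentialChain.from_llm(llm, db)
    seq_chain.run("How many employees are also customers?")

{"id": "fb5adb5ba392-5", "text": "2. 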
Based on those tables, call the normal SQL database chain.\n This is useful in cases where the number of tables in the database is large.\n \"\"\"\n decider_chain: LLMChain\n sql_chain: SQLDatabaseChain\n input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n return_intermediate_steps: bool = False\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n database: SQLDatabase,\n query_prompt: BasePromptTemplate = PROMPT,\n decider_prompt: BasePromptTemplate = DECIDER_PROMPT,\n **kwargs: Any,\n ) -> SQLDatabaseSequentialChain:\n \"\"\"Load the necessary chains.\"\"\"\n sql_chain = SQLDatabaseChain.from_llm(\n llm, database, prompt=query_prompt, **kwargs\n )\n decider_chain = LLMChain(\n llm=llm, prompt=decider_prompt, output_key=\"table_names\"\n )\n return cls(sql_chain=sql_chain, decider_chain=decider_chain, **kwargs)\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the singular input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the singular output key.\n :meta private:\n \"\"\"\n if not self.return_intermediate_steps:\n return [self.output_key]\n else:\n return [self.output_key, INTERMEDIATE_STEPS_KEY]\n def _call(\n self,\n inputs: Dict[str, Any],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html"} {"id": "fb5adb5ba392-6", "text": "def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n _table_names = self.sql_chain.database.get_usable_table_names()\n table_names = \", \".join(_table_names)\n llm_inputs = {\n \"query\": inputs[self.input_key],\n \"table_names\": table_names,\n }\n _lowercased_table_names = [name.lower() for name in _table_names]\n table_names_from_chain = self.decider_chain.predict_and_parse(**llm_inputs)\n table_names_to_use = [\n name\n for name in table_names_from_chain\n if name.lower() in _lowercased_table_names\n ]\n _run_manager.on_text(\"Table names to use:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n str(table_names_to_use), color=\"yellow\", verbose=self.verbose\n )\n new_inputs = {\n self.sql_chain.input_key: inputs[self.input_key],\n \"table_names_to_use\": table_names_to_use,\n }\n return self.sql_chain(\n new_inputs, callbacks=_run_manager.get_child(), return_only_outputs=True\n )\n @property\n def _chain_type(self) -> str:\n return \"sql_database_sequential_chain\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html"} {"id": "4ac72f613a9f-0", "text": "Source code for langchain.chains.natbot.base\n\"\"\"Implement an LLM driven browser.\"\"\"\nfrom __future__ import annotations\nimport warnings\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.natbot.prompt import PROMPT\nfrom langchain.llms.openai import OpenAI\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]class NatBotChain(Chain):\n \"\"\"Implement an LLM driven browser.\n Example:\n .. 
code-block:: python\n from langchain import NatBotChain\n natbot = NatBotChain.from_default(\"Buy me a new hat.\")\n \"\"\"\n llm_chain: LLMChain\n objective: str\n \"\"\"Objective that NatBot is tasked with completing.\"\"\"\n llm: Optional[BaseLanguageModel] = None\n \"\"\"[Deprecated] LLM wrapper to use.\"\"\"\n input_url_key: str = \"url\" #: :meta private:\n input_browser_content_key: str = \"browser_content\" #: :meta private:\n previous_command: str = \"\" #: :meta private:\n output_key: str = \"command\" #: :meta private:\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n[docs] @root_validator(pre=True)\n def raise_deprecation(cls, values: Dict) -> Dict:\n if \"llm\" in values:\n warnings.warn(\n \"Directly instantiating a NatBotChain with an llm is deprecated. \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/natbot/base.html"} {"id": "4ac72f613a9f-1", "text": "\"Directly instantiating a NatBotChain with an llm is deprecated. \"\n \"Please instantiate with llm_chain argument or using the from_llm \"\n \"class method.\"\n )\n if \"llm_chain\" not in values and values[\"llm\"] is not None:\n values[\"llm_chain\"] = LLMChain(llm=values[\"llm\"], prompt=PROMPT)\n return values\n[docs] @classmethod\n def from_default(cls, objective: str, **kwargs: Any) -> NatBotChain:\n \"\"\"Load with default LLMChain.\"\"\"\n llm = OpenAI(temperature=0.5, best_of=10, n=3, max_tokens=50)\n return cls.from_llm(llm, objective, **kwargs)\n[docs] @classmethod\n def from_llm(\n cls, llm: BaseLanguageModel, objective: str, **kwargs: Any\n ) -> NatBotChain:\n \"\"\"Load from LLM.\"\"\"\n llm_chain = LLMChain(llm=llm, prompt=PROMPT)\n return cls(llm_chain=llm_chain, objective=objective, **kwargs)\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect url and browser content.\n :meta private:\n \"\"\"\n return [self.input_url_key, self.input_browser_content_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return command.\n :meta private:\n \"\"\"\n return [self.output_key]\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/natbot/base.html"} {"id": "4ac72f613a9f-2", "text": ") -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n url = inputs[self.input_url_key]\n browser_content = inputs[self.input_browser_content_key]\n llm_cmd = self.llm_chain.predict(\n objective=self.objective,\n url=url[:100],\n previous_command=self.previous_command,\n browser_content=browser_content[:4500],\n callbacks=_run_manager.get_child(),\n )\n llm_cmd = llm_cmd.strip()\n self.previous_command = llm_cmd\n return {self.output_key: llm_cmd}\n
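
A hedged sketch of the driver loop around the `execute` helper that follows; the Crawler class is the one defined further down this page, and command dispatch is deliberately left out because the command format depends on the prompt used.

.. code-block:: python

    from langchain.chains.natbot.base import NatBotChain
    from langchain.chains.natbot.crawler import Crawler

    crawler = Crawler()               # launches headful Chromium via Playwright
    crawler.go_to_page("google.com")  # placeholder URL

    natbot = NatBotChain.from_default("Buy me a new hat.")
    for _ in range(3):
        # crawl() returns strings like '<button id=1>Search' and fills
        # crawler.page_element_buffer so ids can be clicked or typed into.
        browser_content = "\n".join(crawler.crawl())
        command = natbot.execute(crawler.page.url, browser_content)
        print(command)
        # Dispatch command to crawler.click / crawler.type / crawler.scroll
        # here; parsing of the command string is omitted in this sketch.

[docs] def execute(self, url: str, browser_content: str) -> str:\n \"\"\"Figure out next browser command to run.\n Args:\n url: URL of the site currently on.\n browser_content: Content of the page as currently displayed by the browser.\n Returns:\n Next browser command to run.\n Example:\n .. 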
code-block:: python\n browser_content = \"....\"\n llm_command = natbot.run(\"www.google.com\", browser_content)\n \"\"\"\n _inputs = {\n self.input_url_key: url,\n self.input_browser_content_key: browser_content,\n }\n return self(_inputs)[self.output_key]\n @property\n def _chain_type(self) -> str:\n return \"nat_bot_chain\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/natbot/base.html"} {"id": "c371265949cf-0", "text": "Source code for langchain.chains.natbot.crawler\n# flake8: noqa\nimport time\nfrom sys import platform\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Dict,\n Iterable,\n List,\n Optional,\n Set,\n Tuple,\n TypedDict,\n Union,\n)\nif TYPE_CHECKING:\n from playwright.sync_api import Browser, CDPSession, Page, sync_playwright\nblack_listed_elements: Set[str] = {\n \"html\",\n \"head\",\n \"title\",\n \"meta\",\n \"iframe\",\n \"body\",\n \"script\",\n \"style\",\n \"path\",\n \"svg\",\n \"br\",\n \"::marker\",\n}\n[docs]class ElementInViewPort(TypedDict):\n \"\"\"A typed dictionary containing information about elements in the viewport.\"\"\"\n node_index: str\n backend_node_id: int\n node_name: Optional[str]\n node_value: Optional[str]\n node_meta: List[str]\n is_clickable: bool\n origin_x: int\n origin_y: int\n center_x: int\n center_y: int\nclass Crawler:\n def __init__(self) -> None:\n try:\n from playwright.sync_api import sync_playwright\n except ImportError:\n raise ImportError(\n \"Could not import playwright python package. \"\n \"Please install it with `pip install playwright`.\"\n )\n self.browser: Browser = (\n sync_playwright().start().chromium.launch(headless=False)\n )\n self.page: Page = self.browser.new_page()\n self.page.set_viewport_size({\"width\": 1280, \"height\": 1080})", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/natbot/crawler.html"} {"id": "c371265949cf-1", "text": "self.page_element_buffer: Dict[int, ElementInViewPort]\n self.client: CDPSession\n def go_to_page(self, url: str) -> None:\n self.page.goto(url=url if \"://\" in url else \"http://\" + url)\n self.client = self.page.context.new_cdp_session(self.page)\n self.page_element_buffer = {}\n def scroll(self, direction: str) -> None:\n if direction == \"up\":\n self.page.evaluate(\n \"(document.scrollingElement || document.body).scrollTop = (document.scrollingElement || document.body).scrollTop - window.innerHeight;\"\n )\n elif direction == \"down\":\n self.page.evaluate(\n \"(document.scrollingElement || document.body).scrollTop = (document.scrollingElement || document.body).scrollTop + window.innerHeight;\"\n )\n def click(self, id: Union[str, int]) -> None:\n # Inject javascript into the page which removes the target= attribute from all links\n js = \"\"\"\n\t\tlinks = document.getElementsByTagName(\"a\");\n\t\tfor (var i = 0; i < links.length; i++) {\n\t\t\tlinks[i].removeAttribute(\"target\");\n\t\t}\n\t\t\"\"\"\n self.page.evaluate(js)\n element = self.page_element_buffer.get(int(id))\n if element:\n x: float = element[\"center_x\"]\n y: float = element[\"center_y\"]\n self.page.mouse.click(x, y)\n else:\n print(\"Could not find element\")\n def type(self, id: Union[str, int], text: str) -> None:\n self.click(id)\n self.page.keyboard.type(text)\n def enter(self) -> None:\n self.page.keyboard.press(\"Enter\")\n def crawl(self) -> List[str]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/natbot/crawler.html"} {"id": "c371265949cf-2", "text": "self.page.keyboard.press(\"Enter\")\n def 
crawl(self) -> List[str]:\n page = self.page\n page_element_buffer = self.page_element_buffer\n start = time.time()\n page_state_as_text = []\n device_pixel_ratio: float = page.evaluate(\"window.devicePixelRatio\")\n if platform == \"darwin\" and device_pixel_ratio == 1: # lies\n device_pixel_ratio = 2\n win_upper_bound: float = page.evaluate(\"window.pageYOffset\")\n win_left_bound: float = page.evaluate(\"window.pageXOffset\")\n win_width: float = page.evaluate(\"window.screen.width\")\n win_height: float = page.evaluate(\"window.screen.height\")\n win_right_bound: float = win_left_bound + win_width\n win_lower_bound: float = win_upper_bound + win_height\n # \t\tpercentage_progress_start = (win_upper_bound / document_scroll_height) * 100\n # \t\tpercentage_progress_end = (\n # \t\t\t(win_height + win_upper_bound) / document_scroll_height\n # \t\t) * 100\n percentage_progress_start = 1\n percentage_progress_end = 2\n page_state_as_text.append(\n {\n \"x\": 0,\n \"y\": 0,\n \"text\": \"[scrollbar {:0.2f}-{:0.2f}%]\".format(\n round(percentage_progress_start, 2), round(percentage_progress_end)\n ),\n }\n )\n tree = self.client.send(\n \"DOMSnapshot.captureSnapshot\",\n {\"computedStyles\": [], \"includeDOMRects\": True, \"includePaintOrder\": True},\n )\n strings: Dict[int, str] = tree[\"strings\"]\n document: Dict[str, Any] = tree[\"documents\"][0]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/natbot/crawler.html"} {"id": "c371265949cf-3", "text": "document: Dict[str, Any] = tree[\"documents\"][0]\n nodes: Dict[str, Any] = document[\"nodes\"]\n backend_node_id: Dict[int, int] = nodes[\"backendNodeId\"]\n attributes: Dict[int, Dict[int, Any]] = nodes[\"attributes\"]\n node_value: Dict[int, int] = nodes[\"nodeValue\"]\n parent: Dict[int, int] = nodes[\"parentIndex\"]\n node_names: Dict[int, int] = nodes[\"nodeName\"]\n is_clickable: Set[int] = set(nodes[\"isClickable\"][\"index\"])\n input_value: Dict[str, Any] = nodes[\"inputValue\"]\n input_value_index: List[int] = input_value[\"index\"]\n input_value_values: List[int] = input_value[\"value\"]\n layout: Dict[str, Any] = document[\"layout\"]\n layout_node_index: List[int] = layout[\"nodeIndex\"]\n bounds: Dict[int, List[float]] = layout[\"bounds\"]\n cursor: int = 0\n child_nodes: Dict[str, List[Dict[str, Any]]] = {}\n elements_in_view_port: List[ElementInViewPort] = []\n anchor_ancestry: Dict[str, Tuple[bool, Optional[int]]] = {\"-1\": (False, None)}\n button_ancestry: Dict[str, Tuple[bool, Optional[int]]] = {\"-1\": (False, None)}\n def convert_name(\n node_name: Optional[str], has_click_handler: Optional[bool]\n ) -> str:\n if node_name == \"a\":\n return \"link\"\n if node_name == \"input\":\n return \"input\"\n if node_name == \"img\":\n return \"img\"\n if (", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/natbot/crawler.html"} {"id": "c371265949cf-4", "text": "if node_name == \"img\":\n return \"img\"\n if (\n node_name == \"button\" or has_click_handler\n ): # found pages that needed this quirk\n return \"button\"\n else:\n return \"text\"\n def find_attributes(\n attributes: Dict[int, Any], keys: List[str]\n ) -> Dict[str, str]:\n values = {}\n for [key_index, value_index] in zip(*(iter(attributes),) * 2):\n if value_index < 0:\n continue\n key = strings[key_index]\n value = strings[value_index]\n if key in keys:\n values[key] = value\n keys.remove(key)\n if not keys:\n return values\n return values\n def add_to_hash_tree(\n hash_tree: Dict[str, Tuple[bool, 
Optional[int]]],\n tag: str,\n node_id: int,\n node_name: Optional[str],\n parent_id: int,\n ) -> Tuple[bool, Optional[int]]:\n parent_id_str = str(parent_id)\n if not parent_id_str in hash_tree:\n parent_name = strings[node_names[parent_id]].lower()\n grand_parent_id = parent[parent_id]\n add_to_hash_tree(\n hash_tree, tag, parent_id, parent_name, grand_parent_id\n )\n is_parent_desc_anchor, anchor_id = hash_tree[parent_id_str]\n # even if the anchor is nested in another anchor, we set the \"root\" for all descendants to be ::Self\n if node_name == tag:\n value: Tuple[bool, Optional[int]] = (True, node_id)\n elif (\n is_parent_desc_anchor", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/natbot/crawler.html"} {"id": "c371265949cf-5", "text": "elif (\n is_parent_desc_anchor\n ): # reuse the parent's anchor_id (which could be much higher in the tree)\n value = (True, anchor_id)\n else:\n value = (\n False,\n None,\n ) # not a descendant of an anchor, most likely it will become text, an interactive element or discarded\n hash_tree[str(node_id)] = value\n return value\n for index, node_name_index in enumerate(node_names):\n node_parent = parent[index]\n node_name: Optional[str] = strings[node_name_index].lower()\n is_ancestor_of_anchor, anchor_id = add_to_hash_tree(\n anchor_ancestry, \"a\", index, node_name, node_parent\n )\n is_ancestor_of_button, button_id = add_to_hash_tree(\n button_ancestry, \"button\", index, node_name, node_parent\n )\n try:\n cursor = layout_node_index.index(\n index\n ) # todo replace this with proper cursoring, ignoring the fact this is O(n^2) for the moment\n except:\n continue\n if node_name in black_listed_elements:\n continue\n [x, y, width, height] = bounds[cursor]\n x /= device_pixel_ratio\n y /= device_pixel_ratio\n width /= device_pixel_ratio\n height /= device_pixel_ratio\n elem_left_bound = x\n elem_top_bound = y\n elem_right_bound = x + width\n elem_lower_bound = y + height\n partially_is_in_viewport = (\n elem_left_bound < win_right_bound\n and elem_right_bound >= win_left_bound\n and elem_top_bound < win_lower_bound\n and elem_lower_bound >= win_upper_bound\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/natbot/crawler.html"} {"id": "c371265949cf-6", "text": "and elem_lower_bound >= win_upper_bound\n )\n if not partially_is_in_viewport:\n continue\n meta_data: List[str] = []\n # inefficient to grab the same set of keys for kinds of objects, but it's fine for now\n element_attributes = find_attributes(\n attributes[index], [\"type\", \"placeholder\", \"aria-label\", \"title\", \"alt\"]\n )\n ancestor_exception = is_ancestor_of_anchor or is_ancestor_of_button\n ancestor_node_key = (\n None\n if not ancestor_exception\n else str(anchor_id)\n if is_ancestor_of_anchor\n else str(button_id)\n )\n ancestor_node = (\n None\n if not ancestor_exception\n else child_nodes.setdefault(str(ancestor_node_key), [])\n )\n if node_name == \"#text\" and ancestor_exception and ancestor_node:\n text = strings[node_value[index]]\n if text == \"|\" or text == \"\u2022\":\n continue\n ancestor_node.append({\"type\": \"type\", \"value\": text})\n else:\n if (\n node_name == \"input\" and element_attributes.get(\"type\") == \"submit\"\n ) or node_name == \"button\":\n node_name = \"button\"\n element_attributes.pop(\n \"type\", None\n ) # prevent [button ... 
(button)..]\n for key in element_attributes:\n if ancestor_exception and ancestor_node:\n ancestor_node.append(\n {\n \"type\": \"attribute\",\n \"key\": key,\n \"value\": element_attributes[key],\n }\n )\n else:\n meta_data.append(element_attributes[key])\n element_node_value = None\n if node_value[index] >= 0:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/natbot/crawler.html"} {"id": "c371265949cf-7", "text": "element_node_value = None\n if node_value[index] >= 0:\n element_node_value = strings[node_value[index]]\n if (\n element_node_value == \"|\"\n ): # commonly used as a separator, does not add much context - let's save ourselves some token space\n continue\n elif (\n node_name == \"input\"\n and index in input_value_index\n and element_node_value is None\n ):\n node_input_text_index = input_value_index.index(index)\n text_index = input_value_values[node_input_text_index]\n if node_input_text_index >= 0 and text_index >= 0:\n element_node_value = strings[text_index]\n # remove redundant elements\n if ancestor_exception and (node_name != \"a\" and node_name != \"button\"):\n continue\n elements_in_view_port.append(\n {\n \"node_index\": str(index),\n \"backend_node_id\": backend_node_id[index],\n \"node_name\": node_name,\n \"node_value\": element_node_value,\n \"node_meta\": meta_data,\n \"is_clickable\": index in is_clickable,\n \"origin_x\": int(x),\n \"origin_y\": int(y),\n \"center_x\": int(x + (width / 2)),\n \"center_y\": int(y + (height / 2)),\n }\n )\n # let's filter further to remove anything that does not hold any text nor has click handlers + merge text from leaf#text nodes with the parent\n elements_of_interest = []\n id_counter = 0\n for element in elements_in_view_port:\n node_index = element.get(\"node_index\")\n node_name = element.get(\"node_name\")\n element_node_value = element.get(\"node_value\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/natbot/crawler.html"} {"id": "c371265949cf-8", "text": "element_node_value = element.get(\"node_value\")\n node_is_clickable = element.get(\"is_clickable\")\n node_meta_data: Optional[List[str]] = element.get(\"node_meta\")\n inner_text = f\"{element_node_value} \" if element_node_value else \"\"\n meta = \"\"\n if node_index in child_nodes:\n for child in child_nodes[node_index]:\n entry_type = child.get(\"type\")\n entry_value = child.get(\"value\")\n if entry_type == \"attribute\" and node_meta_data:\n entry_key = child.get(\"key\")\n node_meta_data.append(f'{entry_key}=\"{entry_value}\"')\n else:\n inner_text += f\"{entry_value} \"\n if node_meta_data:\n meta_string = \" \".join(node_meta_data)\n meta = f\" {meta_string}\"\n if inner_text != \"\":\n inner_text = f\"{inner_text.strip()}\"\n converted_node_name = convert_name(node_name, node_is_clickable)\n # not very elegant, more like a placeholder\n if (\n (converted_node_name != \"button\" or meta == \"\")\n and converted_node_name != \"link\"\n and converted_node_name != \"input\"\n and converted_node_name != \"img\"\n and converted_node_name != \"textarea\"\n ) and inner_text.strip() == \"\":\n continue\n page_element_buffer[id_counter] = element\n if inner_text != \"\":\n elements_of_interest.append(\n f\"\"\"<{converted_node_name} id={id_counter}{meta}>{inner_text}\"\"\"\n )\n else:\n elements_of_interest.append(\n f\"\"\"<{converted_node_name} id={id_counter}{meta}/>\"\"\"\n )\n id_counter += 1", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/natbot/crawler.html"}
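
For orientation, the `elements_of_interest` strings built above look roughly like the lines below (values are made up; note that elements with inner text deliberately get no closing tag). The `id` values are the `id_counter` keys stored in `page_element_buffer`, which is what `click` and `type` later resolve to screen coordinates.

.. code-block:: python

    # Illustrative crawl() output for a search page (hypothetical values):
    # <input id=0 type="text" aria-label="Search"/>
    # <button id=1>Search
    # <link id=2>Images
    #
    # crawler.click(1) looks id 1 up in page_element_buffer and clicks the
    # element's center_x / center_y position.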
{"id": "c371265949cf-9", "text": ")\n id_counter += 1\n print(\"Parsing time: {:0.2f} seconds\".format(time.time() - start))\n return elements_of_interest", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/natbot/crawler.html"} {"id": "48a91ec154cb-0", "text": "Source code for langchain.chains.combine_documents.map_reduce\n\"\"\"Combining documents by mapping a chain over them first, then combining results.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional, Tuple\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import Callbacks\nfrom langchain.chains.combine_documents.base import BaseCombineDocumentsChain\nfrom langchain.chains.combine_documents.reduce import ReduceDocumentsChain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.docstore.document import Document\n[docs]class MapReduceDocumentsChain(BaseCombineDocumentsChain):\n \"\"\"Combining documents by mapping a chain over them, then combining results.\n We first call `llm_chain` on each document individually, passing in the\n `page_content` and any other kwargs. This is the `map` step.\n We then process the results of that `map` step in a `reduce` step. This should\n likely be a ReduceDocumentsChain.\n Example:\n .. code-block:: python\n from langchain.chains import (\n StuffDocumentsChain,\n LLMChain,\n ReduceDocumentsChain,\n MapReduceDocumentsChain,\n )\n from langchain.prompts import PromptTemplate\n from langchain.llms import OpenAI\n # This controls how each document will be formatted. Specifically,\n # it will be passed to `format_document` - see that function for more\n # details.\n document_prompt = PromptTemplate(\n input_variables=[\"page_content\"],\n template=\"{page_content}\"\n )\n document_variable_name = \"context\"\n llm = OpenAI()\n # The prompt here should take as an input variable the\n # `document_variable_name`\n prompt = PromptTemplate.from_template(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/map_reduce.html"} {"id": "48a91ec154cb-1", "text": "# `document_variable_name`\n prompt = PromptTemplate.from_template(\n \"Summarize this content: {context}\"\n )\n llm_chain = LLMChain(llm=llm, prompt=prompt)\n # We now define how to combine these summaries\n reduce_prompt = PromptTemplate.from_template(\n \"Combine these summaries: {context}\"\n )\n reduce_llm_chain = LLMChain(llm=llm, prompt=reduce_prompt)\n combine_documents_chain = StuffDocumentsChain(\n llm_chain=reduce_llm_chain,\n document_prompt=document_prompt,\n document_variable_name=document_variable_name\n )\n reduce_documents_chain = ReduceDocumentsChain(\n combine_documents_chain=combine_documents_chain,\n )\n chain = MapReduceDocumentsChain(\n llm_chain=llm_chain,\n reduce_documents_chain=reduce_documents_chain,\n )\n # If we wanted to, we could also pass in collapse_documents_chain\n # which is specifically aimed at collapsing documents BEFORE\n # the final call.\n prompt = PromptTemplate.from_template(\n \"Collapse this content: {context}\"\n )\n llm_chain = LLMChain(llm=llm, prompt=prompt)\n collapse_documents_chain = StuffDocumentsChain(\n llm_chain=llm_chain,\n document_prompt=document_prompt,\n document_variable_name=document_variable_name\n )\n reduce_documents_chain = ReduceDocumentsChain(\n combine_documents_chain=combine_documents_chain,\n collapse_documents_chain=collapse_documents_chain,\n )\n chain = MapReduceDocumentsChain(\n llm_chain=llm_chain,\n 
reduce_documents_chain=reduce_documents_chain,\n )\n \"\"\"\n llm_chain: LLMChain", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/map_reduce.html"} {"id": "48a91ec154cb-2", "text": ")\n \"\"\"\n llm_chain: LLMChain\n \"\"\"Chain to apply to each document individually.\"\"\"\n reduce_documents_chain: BaseCombineDocumentsChain\n \"\"\"Chain to use to reduce the results of applying `llm_chain` to each doc.\n This is typically either a ReduceDocumentsChain or a StuffDocumentsChain.\"\"\"\n document_variable_name: str\n \"\"\"The variable name in the llm_chain to put the documents in.\n If only one variable in the llm_chain, this need not be provided.\"\"\"\n return_intermediate_steps: bool = False\n \"\"\"Return the results of the map steps in the output.\"\"\"\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n _output_keys = super().output_keys\n if self.return_intermediate_steps:\n _output_keys = _output_keys + [\"intermediate_steps\"]\n return _output_keys\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n[docs] @root_validator(pre=True)\n def get_reduce_chain(cls, values: Dict) -> Dict:\n \"\"\"For backwards compatibility.\"\"\"\n if \"combine_document_chain\" in values:\n if \"reduce_documents_chain\" in values:\n raise ValueError(\n \"Both `reduce_documents_chain` and `combine_document_chain` \"\n \"cannot be provided at the same time. `combine_document_chain` \"\n \"is deprecated, please only provide `reduce_documents_chain`\"\n )\n combine_chain = values[\"combine_document_chain\"]\n collapse_chain = values.get(\"collapse_document_chain\")\n reduce_chain = ReduceDocumentsChain(\n combine_documents_chain=combine_chain,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/map_reduce.html"} {"id": "48a91ec154cb-3", "text": "reduce_chain = ReduceDocumentsChain(\n combine_documents_chain=combine_chain,\n collapse_documents_chain=collapse_chain,\n )\n values[\"reduce_documents_chain\"] = reduce_chain\n del values[\"combine_document_chain\"]\n if \"collapse_document_chain\" in values:\n del values[\"collapse_document_chain\"]\n return values\n[docs] @root_validator(pre=True)\n def get_return_intermediate_steps(cls, values: Dict) -> Dict:\n \"\"\"For backwards compatibility.\"\"\"\n if \"return_map_steps\" in values:\n values[\"return_intermediate_steps\"] = values[\"return_map_steps\"]\n del values[\"return_map_steps\"]\n return values\n[docs] @root_validator(pre=True)\n def get_default_document_variable_name(cls, values: Dict) -> Dict:\n \"\"\"Get default document variable name, if not provided.\"\"\"\n if \"document_variable_name\" not in values:\n llm_chain_variables = values[\"llm_chain\"].prompt.input_variables\n if len(llm_chain_variables) == 1:\n values[\"document_variable_name\"] = llm_chain_variables[0]\n else:\n raise ValueError(\n \"document_variable_name must be provided if there are \"\n \"multiple llm_chain input_variables\"\n )\n else:\n llm_chain_variables = values[\"llm_chain\"].prompt.input_variables\n if values[\"document_variable_name\"] not in llm_chain_variables:\n raise ValueError(\n f\"document_variable_name {values['document_variable_name']} was \"\n f\"not found in llm_chain input_variables: {llm_chain_variables}\"\n )\n return values\n
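
Once assembled as in the class docstring above, the chain is invoked with a list of Documents under its `input_documents` key; a brief sketch where `chain` refers to the object built in that example:

.. code-block:: python

    from langchain.docstore.document import Document

    docs = [Document(page_content=t) for t in ("first text", "second text")]
    summary = chain.run(input_documents=docs)  # combined string only
    outputs = chain({"input_documents": docs})  # dict with "output_text", plus
    # "intermediate_steps" when return_intermediate_steps=True was set.

 @property\n def collapse_document_chain(self) -> BaseCombineDocumentsChain:\n \"\"\"Kept for backward 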
compatibility.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/map_reduce.html"} {"id": "48a91ec154cb-4", "text": "\"\"\"Kept for backward compatibility.\"\"\"\n if isinstance(self.reduce_documents_chain, ReduceDocumentsChain):\n if self.reduce_documents_chain.collapse_documents_chain:\n return self.reduce_documents_chain.collapse_documents_chain\n else:\n return self.reduce_documents_chain.combine_documents_chain\n else:\n raise ValueError(\n f\"`reduce_documents_chain` is of type \"\n f\"{type(self.reduce_documents_chain)} so it does not have \"\n f\"this attribute.\"\n )\n @property\n def combine_document_chain(self) -> BaseCombineDocumentsChain:\n \"\"\"Kept for backward compatibility.\"\"\"\n if isinstance(self.reduce_documents_chain, ReduceDocumentsChain):\n return self.reduce_documents_chain.combine_documents_chain\n else:\n raise ValueError(\n f\"`reduce_documents_chain` is of type \"\n f\"{type(self.reduce_documents_chain)} so it does not have \"\n f\"this attribute.\"\n )\n[docs] def combine_docs(\n self,\n docs: List[Document],\n token_max: Optional[int] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Tuple[str, dict]:\n \"\"\"Combine documents in a map reduce manner.\n Combine by mapping first chain over all documents, then reducing the results.\n This reducing can be done recursively if needed (if there are many documents).\n \"\"\"\n map_results = self.llm_chain.apply(\n # FYI - this is parallelized and so it is fast.\n [{self.document_variable_name: d.page_content, **kwargs} for d in docs],\n callbacks=callbacks,\n )\n question_result_key = self.llm_chain.output_key\n result_docs = [\n Document(page_content=r[question_result_key], metadata=docs[i].metadata)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/map_reduce.html"} {"id": "48a91ec154cb-5", "text": "Document(page_content=r[question_result_key], metadata=docs[i].metadata)\n # This uses metadata from the docs, and the textual results from `results`\n for i, r in enumerate(map_results)\n ]\n result, extra_return_dict = self.reduce_documents_chain.combine_docs(\n result_docs, token_max=token_max, callbacks=callbacks, **kwargs\n )\n if self.return_intermediate_steps:\n intermediate_steps = [r[question_result_key] for r in map_results]\n extra_return_dict[\"intermediate_steps\"] = intermediate_steps\n return result, extra_return_dict\n[docs] async def acombine_docs(\n self,\n docs: List[Document],\n token_max: Optional[int] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Tuple[str, dict]:\n \"\"\"Combine documents in a map reduce manner.\n Combine by mapping first chain over all documents, then reducing the results.\n This reducing can be done recursively if needed (if there are many documents).\n \"\"\"\n map_results = await self.llm_chain.aapply(\n # FYI - this is parallelized and so it is fast.\n [{**{self.document_variable_name: d.page_content}, **kwargs} for d in docs],\n callbacks=callbacks,\n )\n question_result_key = self.llm_chain.output_key\n result_docs = [\n Document(page_content=r[question_result_key], metadata=docs[i].metadata)\n # This uses metadata from the docs, and the textual results from `results`\n for i, r in enumerate(map_results)\n ]\n result, extra_return_dict = await self.reduce_documents_chain.acombine_docs(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/map_reduce.html"} {"id": "48a91ec154cb-6", "text": "]\n 
result, extra_return_dict = await self.reduce_documents_chain.acombine_docs(\n result_docs, token_max=token_max, callbacks=callbacks, **kwargs\n )\n if self.return_intermediate_steps:\n intermediate_steps = [r[question_result_key] for r in map_results]\n extra_return_dict[\"intermediate_steps\"] = intermediate_steps\n return result, extra_return_dict\n @property\n def _chain_type(self) -> str:\n return \"map_reduce_documents_chain\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/map_reduce.html"} {"id": "8c6aaf2e7b0b-0", "text": "Source code for langchain.chains.combine_documents.reduce\n\"\"\"Combine many documents together by recursively reducing them.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Callable, List, Optional, Protocol, Tuple\nfrom pydantic import Extra\nfrom langchain.callbacks.manager import Callbacks\nfrom langchain.chains.combine_documents.base import BaseCombineDocumentsChain\nfrom langchain.docstore.document import Document\n[docs]class CombineDocsProtocol(Protocol):\n \"\"\"Interface for the combine_docs method.\"\"\"\n[docs] def __call__(self, docs: List[Document], **kwargs: Any) -> str:\n \"\"\"Interface for the combine_docs method.\"\"\"\n[docs]class AsyncCombineDocsProtocol(Protocol):\n \"\"\"Interface for the combine_docs method.\"\"\"\n[docs] async def __call__(self, docs: List[Document], **kwargs: Any) -> str:\n \"\"\"Async interface for the combine_docs method.\"\"\"\ndef _split_list_of_docs(\n docs: List[Document], length_func: Callable, token_max: int, **kwargs: Any\n) -> List[List[Document]]:\n new_result_doc_list = []\n _sub_result_docs = []\n for doc in docs:\n _sub_result_docs.append(doc)\n _num_tokens = length_func(_sub_result_docs, **kwargs)\n if _num_tokens > token_max:\n if len(_sub_result_docs) == 1:\n raise ValueError(\n \"A single document was longer than the context length,\"\n \" we cannot handle this.\"\n )\n new_result_doc_list.append(_sub_result_docs[:-1])\n _sub_result_docs = _sub_result_docs[-1:]\n new_result_doc_list.append(_sub_result_docs)\n return new_result_doc_list\ndef _collapse_docs(\n docs: List[Document],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/reduce.html"} {"id": "8c6aaf2e7b0b-1", "text": "def _collapse_docs(\n docs: List[Document],\n combine_document_func: CombineDocsProtocol,\n **kwargs: Any,\n) -> Document:\n result = combine_document_func(docs, **kwargs)\n combined_metadata = {k: str(v) for k, v in docs[0].metadata.items()}\n for doc in docs[1:]:\n for k, v in doc.metadata.items():\n if k in combined_metadata:\n combined_metadata[k] += f\", {v}\"\n else:\n combined_metadata[k] = str(v)\n return Document(page_content=result, metadata=combined_metadata)\nasync def _acollapse_docs(\n docs: List[Document],\n combine_document_func: AsyncCombineDocsProtocol,\n **kwargs: Any,\n) -> Document:\n result = await combine_document_func(docs, **kwargs)\n combined_metadata = {k: str(v) for k, v in docs[0].metadata.items()}\n for doc in docs[1:]:\n for k, v in doc.metadata.items():\n if k in combined_metadata:\n combined_metadata[k] += f\", {v}\"\n else:\n combined_metadata[k] = str(v)\n return Document(page_content=result, metadata=combined_metadata)\n
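
The grouping helper above is easiest to see with a toy length function; a hedged sketch (the word-count `length_func` stands in for the real prompt-length callable, and `_split_list_of_docs` is a private helper imported here only for illustration):

.. code-block:: python

    from langchain.chains.combine_documents.reduce import _split_list_of_docs
    from langchain.docstore.document import Document

    def length_func(docs, **kwargs):
        # Stand-in for combine_documents_chain.prompt_length: count words.
        return sum(len(d.page_content.split()) for d in docs)

    docs = [Document(page_content="word " * 40) for _ in range(3)]
    groups = _split_list_of_docs(docs, length_func, token_max=100)
    # Greedy packing: the first two docs total 80 "tokens", adding the third
    # would exceed 100, so it starts a new group -> group sizes [2, 1].
    assert [len(g) for g in groups] == [2, 1]

[docs]class ReduceDocumentsChain(BaseCombineDocumentsChain):\n \"\"\"Combining documents by recursively reducing them.\n This involves\n - combine_documents_chain\n - collapse_documents_chain\n `combine_documents_chain` is ALWAYS provided. 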
This is the final chain that is called.\n We pass all previous results to this chain, and the output of this chain is\n returned as a final result.\n `collapse_documents_chain` is used if the documents passed in are too many to all", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/reduce.html"} {"id": "8c6aaf2e7b0b-2", "text": "`collapse_documents_chain` is used if the documents passed in are too many to all\n be passed to `combine_documents_chain` in one go. In this case,\n `collapse_documents_chain` is called recursively on groups of documents\n as large as allowed.\n Example:\n .. code-block:: python\n from langchain.chains import (\n StuffDocumentsChain, LLMChain, ReduceDocumentsChain\n )\n from langchain.prompts import PromptTemplate\n from langchain.llms import OpenAI\n # This controls how each document will be formatted. Specifically,\n # it will be passed to `format_document` - see that function for more\n # details.\n document_prompt = PromptTemplate(\n input_variables=[\"page_content\"],\n template=\"{page_content}\"\n )\n document_variable_name = \"context\"\n llm = OpenAI()\n # The prompt here should take as an input variable the\n # `document_variable_name`\n prompt = PromptTemplate.from_template(\n \"Summarize this content: {context}\"\n )\n llm_chain = LLMChain(llm=llm, prompt=prompt)\n combine_documents_chain = StuffDocumentsChain(\n llm_chain=llm_chain,\n document_prompt=document_prompt,\n document_variable_name=document_variable_name\n )\n chain = ReduceDocumentsChain(\n combine_documents_chain=combine_documents_chain,\n )\n # If we wanted to, we could also pass in collapse_documents_chain\n # which is specifically aimed at collapsing documents BEFORE\n # the final call.\n prompt = PromptTemplate.from_template(\n \"Collapse this content: {context}\"\n )\n llm_chain = LLMChain(llm=llm, prompt=prompt)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/reduce.html"} {"id": "8c6aaf2e7b0b-3", "text": "llm_chain = LLMChain(llm=llm, prompt=prompt)\n collapse_documents_chain = StuffDocumentsChain(\n llm_chain=llm_chain,\n document_prompt=document_prompt,\n document_variable_name=document_variable_name\n )\n chain = ReduceDocumentsChain(\n combine_documents_chain=combine_documents_chain,\n collapse_documents_chain=collapse_documents_chain,\n )\n \"\"\"\n combine_documents_chain: BaseCombineDocumentsChain\n \"\"\"Final chain to call to combine documents.\n This is typically a StuffDocumentsChain.\"\"\"\n collapse_documents_chain: Optional[BaseCombineDocumentsChain] = None\n \"\"\"Chain to use to collapse documents if needed until they can all fit.\n If None, will use the combine_documents_chain.\n This is typically a StuffDocumentsChain.\"\"\"\n
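
The grouping threshold declared just below is an ordinary pydantic field, so it can be tuned per instance; a short sketch reusing the names from the docstring example above:

.. code-block:: python

    # For smaller-context models, lower the grouping threshold:
    chain = ReduceDocumentsChain(
        combine_documents_chain=combine_documents_chain,
        collapse_documents_chain=collapse_documents_chain,
        token_max=1000,
    )

 token_max: int = 3000\n \"\"\"The maximum number of tokens to group documents into. 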
For example, if\n set to 3000 then documents will be grouped into chunks of no greater than\n 3000 tokens before trying to combine them into a smaller chunk.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def _collapse_chain(self) -> BaseCombineDocumentsChain:\n if self.collapse_documents_chain is not None:\n return self.collapse_documents_chain\n else:\n return self.combine_documents_chain\n[docs] def combine_docs(\n self,\n docs: List[Document],\n token_max: Optional[int] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Tuple[str, dict]:\n \"\"\"Combine multiple documents recursively.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/reduce.html"} {"id": "8c6aaf2e7b0b-4", "text": "\"\"\"Combine multiple documents recursively.\n Args:\n docs: List of documents to combine, assumed that each one is less than\n `token_max`.\n token_max: Recursively creates groups of documents less than this number\n of tokens.\n callbacks: Callbacks to be passed through\n **kwargs: additional parameters to be passed to LLM calls (like other\n input variables besides the documents)\n Returns:\n The first element returned is the single string output. The second\n element returned is a dictionary of other keys to return.\n \"\"\"\n result_docs, extra_return_dict = self._collapse(\n docs, token_max=token_max, callbacks=callbacks, **kwargs\n )\n return self.combine_documents_chain.combine_docs(\n docs=result_docs, callbacks=callbacks, **kwargs\n )\n[docs] async def acombine_docs(\n self,\n docs: List[Document],\n token_max: Optional[int] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Tuple[str, dict]:\n \"\"\"Combine multiple documents recursively.\n Args:\n docs: List of documents to combine, assumed that each one is less than\n `token_max`.\n token_max: Recursively creates groups of documents less than this number\n of tokens.\n callbacks: Callbacks to be passed through\n **kwargs: additional parameters to be passed to LLM calls (like other\n input variables besides the documents)\n Returns:\n The first element returned is the single string output. 
The second\n element returned is a dictionary of other keys to return.\n \"\"\"\n result_docs, extra_return_dict = await self._acollapse(\n docs, token_max=token_max, callbacks=callbacks, **kwargs\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/reduce.html"} {"id": "8c6aaf2e7b0b-5", "text": "docs, token_max=token_max, callbacks=callbacks, **kwargs\n )\n return await self.combine_documents_chain.acombine_docs(\n docs=result_docs, callbacks=callbacks, **kwargs\n )\n def _collapse(\n self,\n docs: List[Document],\n token_max: Optional[int] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Tuple[List[Document], dict]:\n result_docs = docs\n length_func = self.combine_documents_chain.prompt_length\n num_tokens = length_func(result_docs, **kwargs)\n def _collapse_docs_func(docs: List[Document], **kwargs: Any) -> str:\n return self._collapse_chain.run(\n input_documents=docs, callbacks=callbacks, **kwargs\n )\n _token_max = token_max or self.token_max\n while num_tokens is not None and num_tokens > _token_max:\n new_result_doc_list = _split_list_of_docs(\n result_docs, length_func, _token_max, **kwargs\n )\n result_docs = []\n for docs in new_result_doc_list:\n new_doc = _collapse_docs(docs, _collapse_docs_func, **kwargs)\n result_docs.append(new_doc)\n num_tokens = length_func(result_docs, **kwargs)\n return result_docs, {}\n async def _acollapse(\n self,\n docs: List[Document],\n token_max: Optional[int] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Tuple[List[Document], dict]:\n result_docs = docs\n length_func = self.combine_documents_chain.prompt_length\n num_tokens = length_func(result_docs, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/reduce.html"} {"id": "8c6aaf2e7b0b-6", "text": "num_tokens = length_func(result_docs, **kwargs)\n async def _collapse_docs_func(docs: List[Document], **kwargs: Any) -> str:\n return await self._collapse_chain.arun(\n input_documents=docs, callbacks=callbacks, **kwargs\n )\n _token_max = token_max or self.token_max\n while num_tokens is not None and num_tokens > _token_max:\n new_result_doc_list = _split_list_of_docs(\n result_docs, length_func, _token_max, **kwargs\n )\n result_docs = []\n for docs in new_result_doc_list:\n new_doc = await _acollapse_docs(docs, _collapse_docs_func, **kwargs)\n result_docs.append(new_doc)\n num_tokens = length_func(result_docs, **kwargs)\n return result_docs, {}\n @property\n def _chain_type(self) -> str:\n return \"reduce_documents_chain\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/reduce.html"} {"id": "87cb4277f29a-0", "text": "Source code for langchain.chains.combine_documents.base\n\"\"\"Base interface for chains combining documents.\"\"\"\nfrom abc import ABC, abstractmethod\nfrom typing import Any, Dict, List, Optional, Tuple\nfrom pydantic import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.docstore.document import Document\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter, TextSplitter\n[docs]class BaseCombineDocumentsChain(Chain, ABC):\n \"\"\"Base interface for chains combining documents.\n Subclasses of this chain deal with combining documents in a variety of\n ways. This base class exists to add some uniformity in the interface these types\n of chains should expose. 
Namely, they expect an input key related to the documents\n to use (default `input_documents`), and then also expose a method to calculate\n the length of a prompt from documents (useful for outside callers to use to\n determine whether it's safe to pass a list of documents into this chain or whether\n that will be longer than the context length).\n \"\"\"\n input_key: str = \"input_documents\" #: :meta private:\n output_key: str = \"output_text\" #: :meta private:\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return output key.\n :meta private:\n \"\"\"\n return [self.output_key]\n[docs] def prompt_length(self, docs: List[Document], **kwargs: Any) -> Optional[int]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/base.html"} {"id": "87cb4277f29a-1", "text": "\"\"\"Return the prompt length given the documents passed in.\n This can be used by a caller to determine whether passing in a list\n of documents would exceed a certain prompt length. This is useful when\n trying to ensure that the size of a prompt remains below a certain\n context limit.\n Args:\n docs: List[Document], a list of documents to use to calculate the\n total prompt length.\n Returns:\n Returns None if the method does not depend on the prompt length,\n otherwise the length of the prompt in tokens.\n \"\"\"\n return None\n[docs] @abstractmethod\n def combine_docs(self, docs: List[Document], **kwargs: Any) -> Tuple[str, dict]:\n \"\"\"Combine documents into a single string.\n Args:\n docs: List[Document], the documents to combine\n **kwargs: Other parameters to use in combining documents, often\n other inputs to the prompt.\n Returns:\n The first element returned is the single string output. The second\n element returned is a dictionary of other keys to return.\n \"\"\"\n[docs] @abstractmethod\n async def acombine_docs(\n self, docs: List[Document], **kwargs: Any\n ) -> Tuple[str, dict]:\n \"\"\"Combine documents into a single string.\n Args:\n docs: List[Document], the documents to combine\n **kwargs: Other parameters to use in combining documents, often\n other inputs to the prompt.\n Returns:\n The first element returned is the single string output. 
The second\n element returned is a dictionary of other keys to return.\n \"\"\"\n def _call(\n self,\n inputs: Dict[str, List[Document]],\n run_manager: Optional[CallbackManagerForChainRun] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/base.html"} {"id": "87cb4277f29a-2", "text": "run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n \"\"\"Prepare inputs, call combine docs, prepare outputs.\"\"\"\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n docs = inputs[self.input_key]\n # Other keys are assumed to be needed for LLM prediction\n other_keys = {k: v for k, v in inputs.items() if k != self.input_key}\n output, extra_return_dict = self.combine_docs(\n docs, callbacks=_run_manager.get_child(), **other_keys\n )\n extra_return_dict[self.output_key] = output\n return extra_return_dict\n async def _acall(\n self,\n inputs: Dict[str, List[Document]],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n \"\"\"Prepare inputs, call combine docs, prepare outputs.\"\"\"\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n docs = inputs[self.input_key]\n # Other keys are assumed to be needed for LLM prediction\n other_keys = {k: v for k, v in inputs.items() if k != self.input_key}\n output, extra_return_dict = await self.acombine_docs(\n docs, callbacks=_run_manager.get_child(), **other_keys\n )\n extra_return_dict[self.output_key] = output\n return extra_return_dict\n[docs]class AnalyzeDocumentChain(Chain):\n \"\"\"Chain that splits a document, then analyzes it in pieces.\n This chain is parameterized by a TextSplitter and a CombineDocumentsChain.\n This chain takes a single document as input, and then splits it up into chunks", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/base.html"} {"id": "87cb4277f29a-3", "text": "This chain takes a single document as input, and then splits it up into chunks\n and then passes those chunks to the CombineDocumentsChain.\n \"\"\"\n input_key: str = \"input_document\" #: :meta private:\n text_splitter: TextSplitter = Field(default_factory=RecursiveCharacterTextSplitter)\n combine_docs_chain: BaseCombineDocumentsChain\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return output key.\n :meta private:\n \"\"\"\n return self.combine_docs_chain.output_keys\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n \"\"\"Split document into chunks and pass to CombineDocumentsChain.\"\"\"\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n document = inputs[self.input_key]\n docs = self.text_splitter.create_documents([document])\n # Other keys are assumed to be needed for LLM prediction\n other_keys: Dict = {k: v for k, v in inputs.items() if k != self.input_key}\n other_keys[self.combine_docs_chain.input_key] = docs\n return self.combine_docs_chain(\n other_keys, return_only_outputs=True, callbacks=_run_manager.get_child()\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/base.html"}
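
AnalyzeDocumentChain above is a thin wrapper that splits one long string and hands the chunks to any CombineDocumentsChain; a hedged sketch (load_summarize_chain is just one convenient way to obtain such a chain, and very_long_text is a placeholder string):

.. code-block:: python

    from langchain.chains import AnalyzeDocumentChain
    from langchain.chains.summarize import load_summarize_chain
    from langchain.llms import OpenAI

    llm = OpenAI(temperature=0)
    summarize_chain = load_summarize_chain(llm, chain_type="map_reduce")
    chain = AnalyzeDocumentChain(combine_docs_chain=summarize_chain)
    # The single input key is "input_document"; the splitter defaults to
    # RecursiveCharacterTextSplitter as declared above.
    chain.run(input_document=very_long_text)

{"id": "ddd8a194a0e5-0", "text": "Source code for langchain.chains.combine_documents.refine\n\"\"\"Combining documents by doing a first pass and then refining on 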
more documents.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Tuple\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import Callbacks\nfrom langchain.chains.combine_documents.base import (\n BaseCombineDocumentsChain,\n)\nfrom langchain.chains.llm import LLMChain\nfrom langchain.docstore.document import Document\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.schema import BasePromptTemplate, format_document\ndef _get_default_document_prompt() -> PromptTemplate:\n return PromptTemplate(input_variables=[\"page_content\"], template=\"{page_content}\")\n[docs]class RefineDocumentsChain(BaseCombineDocumentsChain):\n \"\"\"Combine documents by doing a first pass and then refining on more documents.\n This algorithm first calls `initial_llm_chain` on the first document, passing\n that first document in with the variable name `document_variable_name`, and\n produces a new variable with the variable name `initial_response_name`.\n Then, it loops over every remaining document. This is called the \"refine\" step.\n It calls `refine_llm_chain`,\n passing in that document with the variable name `document_variable_name`\n as well as the previous response with the variable name `initial_response_name`.\n Example:\n .. code-block:: python\n from langchain.chains import RefineDocumentsChain, LLMChain\n from langchain.prompts import PromptTemplate\n from langchain.llms import OpenAI\n # This controls how each document will be formatted. Specifically,\n # it will be passed to `format_document` - see that function for more\n # details.\n document_prompt = PromptTemplate(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/refine.html"} {"id": "ddd8a194a0e5-1", "text": "# details.\n document_prompt = PromptTemplate(\n input_variables=[\"page_content\"],\n template=\"{page_content}\"\n )\n document_variable_name = \"context\"\n llm = OpenAI()\n # The prompt here should take as an input variable the\n # `document_variable_name`\n prompt = PromptTemplate.from_template(\n \"Summarize this content: {context}\"\n )\n llm_chain = LLMChain(llm=llm, prompt=prompt)\n initial_response_name = \"prev_response\"\n # The prompt here should take as an input variable the\n # `document_variable_name` as well as `initial_response_name`\n prompt_refine = PromptTemplate.from_template(\n \"Here's your first summary: {prev_response}. 
\"\n \"Now add to it based on the following context: {context}\"\n )\n llm_chain_refine = LLMChain(llm=llm, prompt=prompt_refine)\n chain = RefineDocumentsChain(\n initial_llm_chain=initial_llm_chain,\n refine_llm_chain=refine_llm_chain,\n document_prompt=document_prompt,\n document_variable_name=document_variable_name,\n initial_response_name=initial_response_name,\n )\n \"\"\"\n initial_llm_chain: LLMChain\n \"\"\"LLM chain to use on initial document.\"\"\"\n refine_llm_chain: LLMChain\n \"\"\"LLM chain to use when refining.\"\"\"\n document_variable_name: str\n \"\"\"The variable name in the initial_llm_chain to put the documents in.\n If only one variable in the initial_llm_chain, this need not be provided.\"\"\"\n initial_response_name: str\n \"\"\"The variable name to format the initial response in when refining.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/refine.html"} {"id": "ddd8a194a0e5-2", "text": "\"\"\"The variable name to format the initial response in when refining.\"\"\"\n document_prompt: BasePromptTemplate = Field(\n default_factory=_get_default_document_prompt\n )\n \"\"\"Prompt to use to format each document, gets passed to `format_document`.\"\"\"\n return_intermediate_steps: bool = False\n \"\"\"Return the results of the refine steps in the output.\"\"\"\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n _output_keys = super().output_keys\n if self.return_intermediate_steps:\n _output_keys = _output_keys + [\"intermediate_steps\"]\n return _output_keys\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n[docs] @root_validator(pre=True)\n def get_return_intermediate_steps(cls, values: Dict) -> Dict:\n \"\"\"For backwards compatibility.\"\"\"\n if \"return_refine_steps\" in values:\n values[\"return_intermediate_steps\"] = values[\"return_refine_steps\"]\n del values[\"return_refine_steps\"]\n return values\n[docs] @root_validator(pre=True)\n def get_default_document_variable_name(cls, values: Dict) -> Dict:\n \"\"\"Get default document variable name, if not provided.\"\"\"\n if \"document_variable_name\" not in values:\n llm_chain_variables = values[\"initial_llm_chain\"].prompt.input_variables\n if len(llm_chain_variables) == 1:\n values[\"document_variable_name\"] = llm_chain_variables[0]\n else:\n raise ValueError(\n \"document_variable_name must be provided if there are \"\n \"multiple llm_chain input_variables\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/refine.html"} {"id": "ddd8a194a0e5-3", "text": "\"multiple llm_chain input_variables\"\n )\n else:\n llm_chain_variables = values[\"initial_llm_chain\"].prompt.input_variables\n if values[\"document_variable_name\"] not in llm_chain_variables:\n raise ValueError(\n f\"document_variable_name {values['document_variable_name']} was \"\n f\"not found in llm_chain input_variables: {llm_chain_variables}\"\n )\n return values\n[docs] def combine_docs(\n self, docs: List[Document], callbacks: Callbacks = None, **kwargs: Any\n ) -> Tuple[str, dict]:\n \"\"\"Combine by mapping first chain over all, then stuffing into final chain.\n Args:\n docs: List of documents to combine\n callbacks: Callbacks to be passed through\n **kwargs: additional parameters to be passed to LLM calls (like other\n input variables besides the documents)\n Returns:\n The first element returned is 
the single string output. The second\n element returned is a dictionary of other keys to return.\n \"\"\"\n inputs = self._construct_initial_inputs(docs, **kwargs)\n res = self.initial_llm_chain.predict(callbacks=callbacks, **inputs)\n refine_steps = [res]\n for doc in docs[1:]:\n base_inputs = self._construct_refine_inputs(doc, res)\n inputs = {**base_inputs, **kwargs}\n res = self.refine_llm_chain.predict(callbacks=callbacks, **inputs)\n refine_steps.append(res)\n return self._construct_result(refine_steps, res)\n[docs] async def acombine_docs(\n self, docs: List[Document], callbacks: Callbacks = None, **kwargs: Any\n ) -> Tuple[str, dict]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/refine.html"} {"id": "ddd8a194a0e5-4", "text": ") -> Tuple[str, dict]:\n \"\"\"Combine documents by running an initial pass on the first document,\n then refining on each remaining document.\n Args:\n docs: List of documents to combine\n callbacks: Callbacks to be passed through\n **kwargs: additional parameters to be passed to LLM calls (like other\n input variables besides the documents)\n Returns:\n The first element returned is the single string output. The second\n element returned is a dictionary of other keys to return.\n \"\"\"\n inputs = self._construct_initial_inputs(docs, **kwargs)\n res = await self.initial_llm_chain.apredict(callbacks=callbacks, **inputs)\n refine_steps = [res]\n for doc in docs[1:]:\n base_inputs = self._construct_refine_inputs(doc, res)\n inputs = {**base_inputs, **kwargs}\n res = await self.refine_llm_chain.apredict(callbacks=callbacks, **inputs)\n refine_steps.append(res)\n return self._construct_result(refine_steps, res)\n def _construct_result(self, refine_steps: List[str], res: str) -> Tuple[str, dict]:\n if self.return_intermediate_steps:\n extra_return_dict = {\"intermediate_steps\": refine_steps}\n else:\n extra_return_dict = {}\n return res, extra_return_dict\n def _construct_refine_inputs(self, doc: Document, res: str) -> Dict[str, Any]:\n return {\n self.document_variable_name: format_document(doc, self.document_prompt),\n self.initial_response_name: res,\n }\n def _construct_initial_inputs(\n self, docs: List[Document], **kwargs: Any\n ) -> Dict[str, Any]:\n base_info = {\"page_content\": docs[0].page_content}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/refine.html"} {"id": "ddd8a194a0e5-5", "text": "base_info = {\"page_content\": docs[0].page_content}\n base_info.update(docs[0].metadata)\n document_info = {k: base_info[k] for k in self.document_prompt.input_variables}\n base_inputs: dict = {\n self.document_variable_name: self.document_prompt.format(**document_info)\n }\n inputs = {**base_inputs, **kwargs}\n return inputs\n @property\n def _chain_type(self) -> str:\n return \"refine_documents_chain\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/refine.html"} {"id": "861b94ca08fe-0", "text": "Source code for langchain.chains.combine_documents.map_rerank\n\"\"\"Combining documents by mapping a chain over them first, then reranking results.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional, Sequence, Tuple, Union, cast\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import Callbacks\nfrom langchain.chains.combine_documents.base import BaseCombineDocumentsChain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.docstore.document import Document\nfrom 
langchain.output_parsers.regex import RegexParser\n[docs]class MapRerankDocumentsChain(BaseCombineDocumentsChain):\n \"\"\"Combining documents by mapping a chain over them, then reranking results.\n This algorithm calls an LLMChain on each input document. The LLMChain is expected\n to have an OutputParser that parses the result into both an answer (`answer_key`)\n and a score (`rank_key`). The answer with the highest score is then returned.\n Example:\n .. code-block:: python\n from langchain.chains import StuffDocumentsChain, LLMChain\n from langchain.prompts import PromptTemplate\n from langchain.llms import OpenAI\n from langchain.output_parsers.regex import RegexParser\n document_variable_name = \"context\"\n llm = OpenAI()\n # The prompt here should take as an input variable the\n # `document_variable_name`\n # The actual prompt will need to be a lot more complex, this is just\n # an example.\n prompt_template = (\n \"Use the following context to tell me the chemical formula \"\n \"for water. Output both your answer and a score of how confident \"\n \"you are. Context: {content}\"\n )\n output_parser = RegexParser(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/map_rerank.html"} {"id": "861b94ca08fe-1", "text": ")\n output_parser = RegexParser(\n regex=r\"(.*?)\\nScore: (.*)\",\n output_keys=[\"answer\", \"score\"],\n )\n prompt = PromptTemplate(\n template=prompt_template,\n input_variables=[\"context\"],\n output_parser=output_parser,\n )\n llm_chain = LLMChain(llm=llm, prompt=prompt)\n chain = MapRerankDocumentsChain(\n llm_chain=llm_chain,\n document_variable_name=document_variable_name,\n rank_key=\"score\",\n answer_key=\"answer\",\n )\n \"\"\"\n llm_chain: LLMChain\n \"\"\"Chain to apply to each document individually.\"\"\"\n document_variable_name: str\n \"\"\"The variable name in the llm_chain to put the documents in.\n If only one variable in the llm_chain, this need not be provided.\"\"\"\n rank_key: str\n \"\"\"Key in output of llm_chain to rank on.\"\"\"\n answer_key: str\n \"\"\"Key in output of llm_chain to return as answer.\"\"\"\n metadata_keys: Optional[List[str]] = None\n \"\"\"Additional metadata from the chosen document to return.\"\"\"\n return_intermediate_steps: bool = False\n \"\"\"Return intermediate steps.\n Intermediate steps include the results of calling llm_chain on each document.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n _output_keys = super().output_keys\n if self.return_intermediate_steps:\n _output_keys = _output_keys + [\"intermediate_steps\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/map_rerank.html"} {"id": "861b94ca08fe-2", "text": "_output_keys = _output_keys + [\"intermediate_steps\"]\n if self.metadata_keys is not None:\n _output_keys += self.metadata_keys\n return _output_keys\n[docs] @root_validator()\n def validate_llm_output(cls, values: Dict) -> Dict:\n \"\"\"Validate that the combine chain outputs a dictionary.\"\"\"\n output_parser = values[\"llm_chain\"].prompt.output_parser\n if not isinstance(output_parser, RegexParser):\n raise ValueError(\n \"Output parser of llm_chain should be a RegexParser,\"\n f\" got {output_parser}\"\n )\n output_keys = output_parser.output_keys\n if values[\"rank_key\"] not in output_keys:\n raise 
ValueError(\n f\"Got {values['rank_key']} as key to rank on, but did not find \"\n f\"it in the llm_chain output keys ({output_keys})\"\n )\n if values[\"answer_key\"] not in output_keys:\n raise ValueError(\n f\"Got {values['answer_key']} as key to return, but did not find \"\n f\"it in the llm_chain output keys ({output_keys})\"\n )\n return values\n[docs] @root_validator(pre=True)\n def get_default_document_variable_name(cls, values: Dict) -> Dict:\n \"\"\"Get default document variable name, if not provided.\"\"\"\n if \"document_variable_name\" not in values:\n llm_chain_variables = values[\"llm_chain\"].prompt.input_variables\n if len(llm_chain_variables) == 1:\n values[\"document_variable_name\"] = llm_chain_variables[0]\n else:\n raise ValueError(\n \"document_variable_name must be provided if there are \"\n \"multiple llm_chain input_variables\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/map_rerank.html"} {"id": "861b94ca08fe-3", "text": "\"multiple llm_chain input_variables\"\n )\n else:\n llm_chain_variables = values[\"llm_chain\"].prompt.input_variables\n if values[\"document_variable_name\"] not in llm_chain_variables:\n raise ValueError(\n f\"document_variable_name {values['document_variable_name']} was \"\n f\"not found in llm_chain input_variables: {llm_chain_variables}\"\n )\n return values\n[docs] def combine_docs(\n self, docs: List[Document], callbacks: Callbacks = None, **kwargs: Any\n ) -> Tuple[str, dict]:\n \"\"\"Combine documents in a map rerank manner.\n Combine by mapping first chain over all documents, then reranking the results.\n Args:\n docs: List of documents to combine\n callbacks: Callbacks to be passed through\n **kwargs: additional parameters to be passed to LLM calls (like other\n input variables besides the documents)\n Returns:\n The first element returned is the single string output. The second\n element returned is a dictionary of other keys to return.\n \"\"\"\n results = self.llm_chain.apply_and_parse(\n # FYI - this is parallelized and so it is fast.\n [{**{self.document_variable_name: d.page_content}, **kwargs} for d in docs],\n callbacks=callbacks,\n )\n return self._process_results(docs, results)\n[docs] async def acombine_docs(\n self, docs: List[Document], callbacks: Callbacks = None, **kwargs: Any\n ) -> Tuple[str, dict]:\n \"\"\"Combine documents in a map rerank manner.\n Combine by mapping first chain over all documents, then reranking the results.\n Args:\n docs: List of documents to combine", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/map_rerank.html"} {"id": "861b94ca08fe-4", "text": "Args:\n docs: List of documents to combine\n callbacks: Callbacks to be passed through\n **kwargs: additional parameters to be passed to LLM calls (like other\n input variables besides the documents)\n Returns:\n The first element returned is the single string output. 
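A concrete sketch of the reranking step that `_process_results` (below) performs: the parsed outputs are paired with their documents, sorted in descending order of the integer value under `rank_key`, and the highest-scoring answer is returned. The data here is hypothetical:
.. code-block:: python

    # Hypothetical parsed outputs from llm_chain.apply_and_parse.
    results = [
        {"answer": "H2O", "score": "92"},
        {"answer": "HO2", "score": "17"},
    ]
    docs = ["doc-a", "doc-b"]  # stand-ins for Document objects

    # Sort descending by score, exactly as _process_results does.
    sorted_res = sorted(zip(results, docs), key=lambda x: -int(x[0]["score"]))
    best_output, best_doc = sorted_res[0]
    assert best_output["answer"] == "H2O"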
The second\n element returned is a dictionary of other keys to return.\n \"\"\"\n results = await self.llm_chain.aapply_and_parse(\n # FYI - this is parallelized and so it is fast.\n [{**{self.document_variable_name: d.page_content}, **kwargs} for d in docs],\n callbacks=callbacks,\n )\n return self._process_results(docs, results)\n def _process_results(\n self,\n docs: List[Document],\n results: Sequence[Union[str, List[str], Dict[str, str]]],\n ) -> Tuple[str, dict]:\n typed_results = cast(List[dict], results)\n sorted_res = sorted(\n zip(typed_results, docs), key=lambda x: -int(x[0][self.rank_key])\n )\n output, document = sorted_res[0]\n extra_info = {}\n if self.metadata_keys is not None:\n for key in self.metadata_keys:\n extra_info[key] = document.metadata[key]\n if self.return_intermediate_steps:\n extra_info[\"intermediate_steps\"] = results\n return output[self.answer_key], extra_info\n @property\n def _chain_type(self) -> str:\n return \"map_rerank_documents_chain\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/map_rerank.html"} {"id": "815bbe17da0b-0", "text": "Source code for langchain.chains.combine_documents.stuff\n\"\"\"Chain that combines documents by stuffing into context.\"\"\"\nfrom typing import Any, Dict, List, Optional, Tuple\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import Callbacks\nfrom langchain.chains.combine_documents.base import (\n BaseCombineDocumentsChain,\n)\nfrom langchain.chains.llm import LLMChain\nfrom langchain.docstore.document import Document\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.schema import BasePromptTemplate, format_document\ndef _get_default_document_prompt() -> PromptTemplate:\n return PromptTemplate(input_variables=[\"page_content\"], template=\"{page_content}\")\n[docs]class StuffDocumentsChain(BaseCombineDocumentsChain):\n \"\"\"Chain that combines documents by stuffing into context.\n This chain takes a list of documents and first combines them into a single string.\n It does this by formatting each document into a string with the `document_prompt`\n and then joining them together with `document_separator`. It then adds that new\n string to the inputs with the variable name set by `document_variable_name`.\n Those inputs are then passed to the `llm_chain`.\n Example:\n .. code-block:: python\n from langchain.chains import StuffDocumentsChain, LLMChain\n from langchain.prompts import PromptTemplate\n from langchain.llms import OpenAI\n # This controls how each document will be formatted. 
Specifically,\n # it will be passed to `format_document` - see that function for more\n # details.\n document_prompt = PromptTemplate(\n input_variables=[\"page_content\"],\n template=\"{page_content}\"\n )\n document_variable_name = \"context\"\n llm = OpenAI()\n # The prompt here should take as an input variable the", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/stuff.html"} {"id": "815bbe17da0b-1", "text": "# The prompt here should take as an input variable the\n # `document_variable_name`\n prompt = PromptTemplate.from_template(\n \"Summarize this content: {context}\"\n )\n llm_chain = LLMChain(llm=llm, prompt=prompt)\n chain = StuffDocumentsChain(\n llm_chain=llm_chain,\n document_prompt=document_prompt,\n document_variable_name=document_variable_name\n )\n \"\"\"\n llm_chain: LLMChain\n \"\"\"LLM chain which is called with the formatted document string,\n along with any other inputs.\"\"\"\n document_prompt: BasePromptTemplate = Field(\n default_factory=_get_default_document_prompt\n )\n \"\"\"Prompt to use to format each document, gets passed to `format_document`.\"\"\"\n document_variable_name: str\n \"\"\"The variable name in the llm_chain to put the documents in.\n If only one variable in the llm_chain, this need not be provided.\"\"\"\n document_separator: str = \"\\n\\n\"\n \"\"\"The string with which to join the formatted documents\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n[docs] @root_validator(pre=True)\n def get_default_document_variable_name(cls, values: Dict) -> Dict:\n \"\"\"Get default document variable name, if not provided.\n If only one variable is present in the llm_chain.prompt,\n we can infer that the formatted documents should be passed in\n with this variable name.\n \"\"\"\n llm_chain_variables = values[\"llm_chain\"].prompt.input_variables\n if \"document_variable_name\" not in values:\n if len(llm_chain_variables) == 1:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/stuff.html"} {"id": "815bbe17da0b-2", "text": "if len(llm_chain_variables) == 1:\n values[\"document_variable_name\"] = llm_chain_variables[0]\n else:\n raise ValueError(\n \"document_variable_name must be provided if there are \"\n \"multiple llm_chain_variables\"\n )\n else:\n if values[\"document_variable_name\"] not in llm_chain_variables:\n raise ValueError(\n f\"document_variable_name {values['document_variable_name']} was \"\n f\"not found in llm_chain input_variables: {llm_chain_variables}\"\n )\n return values\n def _get_inputs(self, docs: List[Document], **kwargs: Any) -> dict:\n \"\"\"Construct inputs from kwargs and docs.\n Format and then join all the documents together into one input with name\n `self.document_variable_name`. 
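Concretely, the formatting-and-joining described here amounts to the following standalone sketch; the documents and separator are illustrative stand-ins:
.. code-block:: python

    from langchain.docstore.document import Document
    from langchain.prompts.prompt import PromptTemplate
    from langchain.schema import format_document

    docs = [
        Document(page_content="First doc."),
        Document(page_content="Second doc."),
    ]
    document_prompt = PromptTemplate(
        input_variables=["page_content"], template="{page_content}"
    )
    # Format each document, then join with the separator, as _get_inputs does.
    doc_strings = [format_document(d, document_prompt) for d in docs]
    context = "\n\n".join(doc_strings)
    # context == "First doc.\n\nSecond doc."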
Then pluck any additional variables\n from **kwargs.\n Args:\n docs: List of documents to format and then join into single input\n **kwargs: additional inputs to chain, will pluck any other required\n arguments from here.\n Returns:\n dictionary of inputs to LLMChain\n \"\"\"\n # Format each document according to the prompt\n doc_strings = [format_document(doc, self.document_prompt) for doc in docs]\n # Join the documents together to put them in the prompt.\n inputs = {\n k: v\n for k, v in kwargs.items()\n if k in self.llm_chain.prompt.input_variables\n }\n inputs[self.document_variable_name] = self.document_separator.join(doc_strings)\n return inputs\n[docs] def prompt_length(self, docs: List[Document], **kwargs: Any) -> Optional[int]:\n \"\"\"Return the prompt length given the documents passed in.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/stuff.html"} {"id": "815bbe17da0b-3", "text": "\"\"\"Return the prompt length given the documents passed in.\n This can be used by a caller to determine whether passing in a list\n of documents would exceed a certain prompt length. This is useful when\n trying to ensure that the size of a prompt remains below a certain\n context limit.\n Args:\n docs: List[Document], a list of documents to use to calculate the\n total prompt length.\n Returns:\n Returns None if the method does not depend on the prompt length,\n otherwise the length of the prompt in tokens.\n \"\"\"\n inputs = self._get_inputs(docs, **kwargs)\n prompt = self.llm_chain.prompt.format(**inputs)\n return self.llm_chain.llm.get_num_tokens(prompt)\n[docs] def combine_docs(\n self, docs: List[Document], callbacks: Callbacks = None, **kwargs: Any\n ) -> Tuple[str, dict]:\n \"\"\"Stuff all documents into one prompt and pass to LLM.\n Args:\n docs: List of documents to join together into one variable\n callbacks: Optional callbacks to pass along\n **kwargs: additional parameters to use to get inputs to LLMChain.\n Returns:\n The first element returned is the single string output. The second\n element returned is a dictionary of other keys to return.\n \"\"\"\n inputs = self._get_inputs(docs, **kwargs)\n # Call predict on the LLM.\n return self.llm_chain.predict(callbacks=callbacks, **inputs), {}\n[docs] async def acombine_docs(\n self, docs: List[Document], callbacks: Callbacks = None, **kwargs: Any\n ) -> Tuple[str, dict]:\n \"\"\"Stuff all documents into one prompt and pass to LLM.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/stuff.html"} {"id": "815bbe17da0b-4", "text": "\"\"\"Stuff all documents into one prompt and pass to LLM.\n Args:\n docs: List of documents to join together into one variable\n callbacks: Optional callbacks to pass along\n **kwargs: additional parameters to use to get inputs to LLMChain.\n Returns:\n The first element returned is the single string output. 
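Before stuffing, `prompt_length` (defined above) can guard against overlong prompts. A hedged sketch: `chain` is assumed to be a `StuffDocumentsChain`, `docs` a list of `Document` objects, and the 4000-token budget is an illustrative assumption, not part of the API:
.. code-block:: python

    # Drop trailing documents until the stuffed prompt fits the budget.
    while len(docs) > 1:
        length = chain.prompt_length(docs)
        if length is None or length <= 4000:  # assumed context budget
            break
        docs = docs[:-1]
    output, _ = chain.combine_docs(docs)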
The second\n element returned is a dictionary of other keys to return.\n \"\"\"\n inputs = self._get_inputs(docs, **kwargs)\n # Call predict on the LLM.\n return await self.llm_chain.apredict(callbacks=callbacks, **inputs), {}\n @property\n def _chain_type(self) -> str:\n return \"stuff_documents_chain\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/stuff.html"} {"id": "ba77ad202374-0", "text": "Source code for langchain.chains.graph_qa.cypher\n\"\"\"Question answering over a graph.\"\"\"\nfrom __future__ import annotations\nimport re\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.graph_qa.prompts import CYPHER_GENERATION_PROMPT, CYPHER_QA_PROMPT\nfrom langchain.chains.llm import LLMChain\nfrom langchain.graphs.neo4j_graph import Neo4jGraph\nfrom langchain.schema import BasePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\nINTERMEDIATE_STEPS_KEY = \"intermediate_steps\"\n[docs]def extract_cypher(text: str) -> str:\n \"\"\"\n Extract Cypher code from a text.\n Args:\n text: Text to extract Cypher code from.\n Returns:\n Cypher code extracted from the text.\n \"\"\"\n # The pattern to find Cypher code enclosed in triple backticks\n pattern = r\"```(.*?)```\"\n # Find all matches in the input text\n matches = re.findall(pattern, text, re.DOTALL)\n return matches[0] if matches else text\n[docs]class GraphCypherQAChain(Chain):\n \"\"\"Chain for question-answering against a graph by generating Cypher statements.\"\"\"\n graph: Neo4jGraph = Field(exclude=True)\n cypher_generation_chain: LLMChain\n qa_chain: LLMChain\n input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n top_k: int = 10\n \"\"\"Number of results to return from the query\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/cypher.html"} {"id": "ba77ad202374-1", "text": "\"\"\"Number of results to return from the query\"\"\"\n return_intermediate_steps: bool = False\n \"\"\"Whether or not to return the intermediate steps along with the final answer.\"\"\"\n return_direct: bool = False\n \"\"\"Whether or not to return the result of querying the graph directly.\"\"\"\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the output keys.\n :meta private:\n \"\"\"\n _output_keys = [self.output_key]\n return _output_keys\n @property\n def _chain_type(self) -> str:\n return \"graph_cypher_chain\"\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n *,\n qa_prompt: BasePromptTemplate = CYPHER_QA_PROMPT,\n cypher_prompt: BasePromptTemplate = CYPHER_GENERATION_PROMPT,\n **kwargs: Any,\n ) -> GraphCypherQAChain:\n \"\"\"Initialize from LLM.\"\"\"\n qa_chain = LLMChain(llm=llm, prompt=qa_prompt)\n cypher_generation_chain = LLMChain(llm=llm, prompt=cypher_prompt)\n return cls(\n qa_chain=qa_chain,\n cypher_generation_chain=cypher_generation_chain,\n **kwargs,\n )\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/cypher.html"} {"id": "ba77ad202374-2", "text": ") -> Dict[str, Any]:\n 
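A minimal usage sketch for `GraphCypherQAChain` ahead of the `_call` implementation below; the connection details and model choice are assumptions for illustration:
.. code-block:: python

    from langchain.chains import GraphCypherQAChain
    from langchain.chat_models import ChatOpenAI
    from langchain.graphs import Neo4jGraph

    # Assumed local Neo4j credentials - replace with real values.
    graph = Neo4jGraph(
        url="bolt://localhost:7687", username="neo4j", password="password"
    )
    chain = GraphCypherQAChain.from_llm(
        ChatOpenAI(temperature=0), graph=graph, verbose=True
    )
    chain.run("Who played in Top Gun?")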
\"\"\"Generate Cypher statement, use it to look up in db and answer question.\"\"\"\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n question = inputs[self.input_key]\n intermediate_steps: List = []\n generated_cypher = self.cypher_generation_chain.run(\n {\"question\": question, \"schema\": self.graph.get_schema}, callbacks=callbacks\n )\n # Extract Cypher code if it is wrapped in backticks\n generated_cypher = extract_cypher(generated_cypher)\n _run_manager.on_text(\"Generated Cypher:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n generated_cypher, color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n intermediate_steps.append({\"query\": generated_cypher})\n # Retrieve and limit the number of results\n context = self.graph.query(generated_cypher)[: self.top_k]\n if self.return_direct:\n final_result = context\n else:\n _run_manager.on_text(\"Full Context:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n str(context), color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n intermediate_steps.append({\"context\": context})\n result = self.qa_chain(\n {\"question\": question, \"context\": context},\n callbacks=callbacks,\n )\n final_result = result[self.qa_chain.output_key]\n chain_result: Dict[str, Any] = {self.output_key: final_result}\n if self.return_intermediate_steps:\n chain_result[INTERMEDIATE_STEPS_KEY] = intermediate_steps\n return chain_result", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/cypher.html"} {"id": "f47e45a24cfe-0", "text": "Source code for langchain.chains.graph_qa.nebulagraph\n\"\"\"Question answering over a graph.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.graph_qa.prompts import CYPHER_QA_PROMPT, NGQL_GENERATION_PROMPT\nfrom langchain.chains.llm import LLMChain\nfrom langchain.graphs.nebula_graph import NebulaGraph\nfrom langchain.schema import BasePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]class NebulaGraphQAChain(Chain):\n \"\"\"Chain for question-answering against a graph by generating nGQL statements.\"\"\"\n graph: NebulaGraph = Field(exclude=True)\n ngql_generation_chain: LLMChain\n qa_chain: LLMChain\n input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the output keys.\n :meta private:\n \"\"\"\n _output_keys = [self.output_key]\n return _output_keys\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n *,\n qa_prompt: BasePromptTemplate = CYPHER_QA_PROMPT,\n ngql_prompt: BasePromptTemplate = NGQL_GENERATION_PROMPT,\n **kwargs: Any,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/nebulagraph.html"} {"id": "f47e45a24cfe-1", "text": "**kwargs: Any,\n ) -> NebulaGraphQAChain:\n \"\"\"Initialize from LLM.\"\"\"\n qa_chain = LLMChain(llm=llm, prompt=qa_prompt)\n ngql_generation_chain = LLMChain(llm=llm, prompt=ngql_prompt)\n return cls(\n qa_chain=qa_chain,\n ngql_generation_chain=ngql_generation_chain,\n **kwargs,\n )\n def _call(\n self,\n inputs: Dict[str, Any],\n 
run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n \"\"\"Generate nGQL statement, use it to look up in db and answer question.\"\"\"\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n question = inputs[self.input_key]\n generated_ngql = self.ngql_generation_chain.run(\n {\"question\": question, \"schema\": self.graph.get_schema}, callbacks=callbacks\n )\n _run_manager.on_text(\"Generated nGQL:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n generated_ngql, color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n context = self.graph.query(generated_ngql)\n _run_manager.on_text(\"Full Context:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n str(context), color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n result = self.qa_chain(\n {\"question\": question, \"context\": context},\n callbacks=callbacks,\n )\n return {self.output_key: result[self.qa_chain.output_key]}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/nebulagraph.html"} {"id": "ab6e54f449c0-0", "text": "Source code for langchain.chains.graph_qa.sparql\n\"\"\"\nQuestion answering over an RDF or OWL graph using SPARQL.\n\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.graph_qa.prompts import (\n SPARQL_GENERATION_SELECT_PROMPT,\n SPARQL_GENERATION_UPDATE_PROMPT,\n SPARQL_INTENT_PROMPT,\n SPARQL_QA_PROMPT,\n)\nfrom langchain.chains.llm import LLMChain\nfrom langchain.graphs.rdf_graph import RdfGraph\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]class GraphSparqlQAChain(Chain):\n \"\"\"\n Chain for question-answering against an RDF or OWL graph by generating\n SPARQL statements.\n \"\"\"\n graph: RdfGraph = Field(exclude=True)\n sparql_generation_select_chain: LLMChain\n sparql_generation_update_chain: LLMChain\n sparql_intent_chain: LLMChain\n qa_chain: LLMChain\n input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n @property\n def input_keys(self) -> List[str]:\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n _output_keys = [self.output_key]\n return _output_keys\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n *,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/sparql.html"} {"id": "ab6e54f449c0-1", "text": "cls,\n llm: BaseLanguageModel,\n *,\n qa_prompt: BasePromptTemplate = SPARQL_QA_PROMPT,\n sparql_select_prompt: BasePromptTemplate = SPARQL_GENERATION_SELECT_PROMPT,\n sparql_update_prompt: BasePromptTemplate = SPARQL_GENERATION_UPDATE_PROMPT,\n sparql_intent_prompt: BasePromptTemplate = SPARQL_INTENT_PROMPT,\n **kwargs: Any,\n ) -> GraphSparqlQAChain:\n \"\"\"Initialize from LLM.\"\"\"\n qa_chain = LLMChain(llm=llm, prompt=qa_prompt)\n sparql_generation_select_chain = LLMChain(llm=llm, prompt=sparql_select_prompt)\n sparql_generation_update_chain = LLMChain(llm=llm, prompt=sparql_update_prompt)\n sparql_intent_chain = LLMChain(llm=llm, prompt=sparql_intent_prompt)\n return cls(\n qa_chain=qa_chain,\n sparql_generation_select_chain=sparql_generation_select_chain,\n sparql_generation_update_chain=sparql_generation_update_chain,\n 
sparql_intent_chain=sparql_intent_chain,\n **kwargs,\n )\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n \"\"\"\n Generate SPARQL query, use it to retrieve a response from the gdb and answer\n the question.\n \"\"\"\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n prompt = inputs[self.input_key]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/sparql.html"} {"id": "ab6e54f449c0-2", "text": "callbacks = _run_manager.get_child()\n prompt = inputs[self.input_key]\n _intent = self.sparql_intent_chain.run({\"prompt\": prompt}, callbacks=callbacks)\n intent = _intent.strip()\n if intent == \"SELECT\":\n sparql_generation_chain = self.sparql_generation_select_chain\n elif intent == \"UPDATE\":\n sparql_generation_chain = self.sparql_generation_update_chain\n else:\n raise ValueError(\n \"I am sorry, but this prompt seems to fit none of the currently \"\n \"supported SPARQL query types, i.e., SELECT and UPDATE.\"\n )\n _run_manager.on_text(\"Identified intent:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(intent, color=\"green\", end=\"\\n\", verbose=self.verbose)\n generated_sparql = sparql_generation_chain.run(\n {\"prompt\": prompt, \"schema\": self.graph.get_schema}, callbacks=callbacks\n )\n _run_manager.on_text(\"Generated SPARQL:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n generated_sparql, color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n if intent == \"SELECT\":\n context = self.graph.query(generated_sparql)\n _run_manager.on_text(\"Full Context:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n str(context), color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n result = self.qa_chain(\n {\"prompt\": prompt, \"context\": context},\n callbacks=callbacks,\n )\n res = result[self.qa_chain.output_key]\n elif intent == \"UPDATE\":\n self.graph.update(generated_sparql)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/sparql.html"} {"id": "ab6e54f449c0-3", "text": "elif intent == \"UPDATE\":\n self.graph.update(generated_sparql)\n res = \"Successfully inserted triples into the graph.\"\n else:\n raise ValueError(\"Unsupported SPARQL query type.\")\n return {self.output_key: res}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/sparql.html"} {"id": "e296abdf8a19-0", "text": "Source code for langchain.chains.graph_qa.kuzu\n\"\"\"Question answering over a graph.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.graph_qa.prompts import CYPHER_QA_PROMPT, KUZU_GENERATION_PROMPT\nfrom langchain.chains.llm import LLMChain\nfrom langchain.graphs.kuzu_graph import KuzuGraph\nfrom langchain.schema import BasePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]class KuzuQAChain(Chain):\n \"\"\"Chain for question-answering against a graph by generating Cypher statements for\n K\u00f9zu.\n \"\"\"\n graph: KuzuGraph = Field(exclude=True)\n cypher_generation_chain: LLMChain\n qa_chain: LLMChain\n input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return 
the input keys.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the output keys.\n :meta private:\n \"\"\"\n _output_keys = [self.output_key]\n return _output_keys\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n *,\n qa_prompt: BasePromptTemplate = CYPHER_QA_PROMPT,\n cypher_prompt: BasePromptTemplate = KUZU_GENERATION_PROMPT,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/kuzu.html"} {"id": "e296abdf8a19-1", "text": "cypher_prompt: BasePromptTemplate = KUZU_GENERATION_PROMPT,\n **kwargs: Any,\n ) -> KuzuQAChain:\n \"\"\"Initialize from LLM.\"\"\"\n qa_chain = LLMChain(llm=llm, prompt=qa_prompt)\n cypher_generation_chain = LLMChain(llm=llm, prompt=cypher_prompt)\n return cls(\n qa_chain=qa_chain,\n cypher_generation_chain=cypher_generation_chain,\n **kwargs,\n )\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n \"\"\"Generate Cypher statement, use it to look up in db and answer question.\"\"\"\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n question = inputs[self.input_key]\n generated_cypher = self.cypher_generation_chain.run(\n {\"question\": question, \"schema\": self.graph.get_schema}, callbacks=callbacks\n )\n _run_manager.on_text(\"Generated Cypher:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n generated_cypher, color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n context = self.graph.query(generated_cypher)\n _run_manager.on_text(\"Full Context:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n str(context), color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n result = self.qa_chain(\n {\"question\": question, \"context\": context},\n callbacks=callbacks,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/kuzu.html"} {"id": "e296abdf8a19-2", "text": "callbacks=callbacks,\n )\n return {self.output_key: result[self.qa_chain.output_key]}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/kuzu.html"} {"id": "4759bb6270b7-0", "text": "Source code for langchain.chains.graph_qa.base\n\"\"\"Question answering over a graph.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.graph_qa.prompts import ENTITY_EXTRACTION_PROMPT, GRAPH_QA_PROMPT\nfrom langchain.chains.llm import LLMChain\nfrom langchain.graphs.networkx_graph import NetworkxEntityGraph, get_entities\nfrom langchain.schema import BasePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]class GraphQAChain(Chain):\n \"\"\"Chain for question-answering against a graph.\"\"\"\n graph: NetworkxEntityGraph = Field(exclude=True)\n entity_extraction_chain: LLMChain\n qa_chain: LLMChain\n input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the output keys.\n :meta private:\n \"\"\"\n _output_keys = [self.output_key]\n return _output_keys\n[docs] @classmethod\n def from_llm(\n 
cls,\n llm: BaseLanguageModel,\n qa_prompt: BasePromptTemplate = GRAPH_QA_PROMPT,\n entity_prompt: BasePromptTemplate = ENTITY_EXTRACTION_PROMPT,\n **kwargs: Any,\n ) -> GraphQAChain:\n \"\"\"Initialize from LLM.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/base.html"} {"id": "4759bb6270b7-1", "text": ") -> GraphQAChain:\n \"\"\"Initialize from LLM.\"\"\"\n qa_chain = LLMChain(llm=llm, prompt=qa_prompt)\n entity_chain = LLMChain(llm=llm, prompt=entity_prompt)\n return cls(\n qa_chain=qa_chain,\n entity_extraction_chain=entity_chain,\n **kwargs,\n )\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n \"\"\"Extract entities, look up info and answer question.\"\"\"\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n question = inputs[self.input_key]\n entity_string = self.entity_extraction_chain.run(question)\n _run_manager.on_text(\"Entities Extracted:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n entity_string, color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n entities = get_entities(entity_string)\n context = \"\"\n all_triplets = []\n for entity in entities:\n all_triplets.extend(self.graph.get_entity_knowledge(entity))\n context = \"\\n\".join(all_triplets)\n _run_manager.on_text(\"Full Context:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(context, color=\"green\", end=\"\\n\", verbose=self.verbose)\n result = self.qa_chain(\n {\"question\": question, \"context\": context},\n callbacks=_run_manager.get_child(),\n )\n return {self.output_key: result[self.qa_chain.output_key]}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/base.html"} {"id": "f61eab44334b-0", "text": "Source code for langchain.chains.graph_qa.hugegraph\n\"\"\"Question answering over a graph.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.graph_qa.prompts import (\n CYPHER_QA_PROMPT,\n GREMLIN_GENERATION_PROMPT,\n)\nfrom langchain.chains.llm import LLMChain\nfrom langchain.graphs.hugegraph import HugeGraph\nfrom langchain.schema import BasePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]class HugeGraphQAChain(Chain):\n \"\"\"Chain for question-answering against a graph by generating gremlin statements.\"\"\"\n graph: HugeGraph = Field(exclude=True)\n gremlin_generation_chain: LLMChain\n qa_chain: LLMChain\n input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the output keys.\n :meta private:\n \"\"\"\n _output_keys = [self.output_key]\n return _output_keys\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n *,\n qa_prompt: BasePromptTemplate = CYPHER_QA_PROMPT,\n gremlin_prompt: BasePromptTemplate = GREMLIN_GENERATION_PROMPT,\n **kwargs: Any,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/hugegraph.html"} {"id": "f61eab44334b-1", "text": "**kwargs: Any,\n ) -> HugeGraphQAChain:\n \"\"\"Initialize from LLM.\"\"\"\n qa_chain = LLMChain(llm=llm, 
prompt=qa_prompt)\n gremlin_generation_chain = LLMChain(llm=llm, prompt=gremlin_prompt)\n return cls(\n qa_chain=qa_chain,\n gremlin_generation_chain=gremlin_generation_chain,\n **kwargs,\n )\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n \"\"\"Generate gremlin statement, use it to look up in db and answer question.\"\"\"\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n question = inputs[self.input_key]\n generated_gremlin = self.gremlin_generation_chain.run(\n {\"question\": question, \"schema\": self.graph.get_schema}, callbacks=callbacks\n )\n _run_manager.on_text(\"Generated gremlin:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n generated_gremlin, color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n context = self.graph.query(generated_gremlin)\n _run_manager.on_text(\"Full Context:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n str(context), color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n result = self.qa_chain(\n {\"question\": question, \"context\": context},\n callbacks=callbacks,\n )\n return {self.output_key: result[self.qa_chain.output_key]}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/hugegraph.html"} {"id": "736539c20337-0", "text": "Source code for langchain.chains.llm_math.base\n\"\"\"Chain that interprets a prompt and executes python code to do math.\"\"\"\nfrom __future__ import annotations\nimport math\nimport re\nimport warnings\nfrom typing import Any, Dict, List, Optional\nimport numexpr\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.llm_math.prompt import PROMPT\nfrom langchain.schema import BasePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]class LLMMathChain(Chain):\n \"\"\"Chain that interprets a prompt and executes python code to do math.\n Example:\n .. code-block:: python\n from langchain import LLMMathChain, OpenAI\n llm_math = LLMMathChain.from_llm(OpenAI())\n \"\"\"\n llm_chain: LLMChain\n llm: Optional[BaseLanguageModel] = None\n \"\"\"[Deprecated] LLM wrapper to use.\"\"\"\n prompt: BasePromptTemplate = PROMPT\n \"\"\"[Deprecated] Prompt to use to translate to python if necessary.\"\"\"\n input_key: str = \"question\" #: :meta private:\n output_key: str = \"answer\" #: :meta private:\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n[docs] @root_validator(pre=True)\n def raise_deprecation(cls, values: Dict) -> Dict:\n if \"llm\" in values:\n warnings.warn(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_math/base.html"} {"id": "736539c20337-1", "text": "if \"llm\" in values:\n warnings.warn(\n \"Directly instantiating an LLMMathChain with an llm is deprecated. 
\"\n \"Please instantiate with llm_chain argument or using the from_llm \"\n \"class method.\"\n )\n if \"llm_chain\" not in values and values[\"llm\"] is not None:\n prompt = values.get(\"prompt\", PROMPT)\n values[\"llm_chain\"] = LLMChain(llm=values[\"llm\"], prompt=prompt)\n return values\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Expect output key.\n :meta private:\n \"\"\"\n return [self.output_key]\n def _evaluate_expression(self, expression: str) -> str:\n try:\n local_dict = {\"pi\": math.pi, \"e\": math.e}\n output = str(\n numexpr.evaluate(\n expression.strip(),\n global_dict={}, # restrict access to globals\n local_dict=local_dict, # add common mathematical functions\n )\n )\n except Exception as e:\n raise ValueError(\n f'LLMMathChain._evaluate(\"{expression}\") raised error: {e}.'\n \" Please try again with a valid numerical expression\"\n )\n # Remove any leading and trailing brackets from the output\n return re.sub(r\"^\\[|\\]$\", \"\", output)\n def _process_llm_result(\n self, llm_output: str, run_manager: CallbackManagerForChainRun\n ) -> Dict[str, str]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_math/base.html"} {"id": "736539c20337-2", "text": ") -> Dict[str, str]:\n run_manager.on_text(llm_output, color=\"green\", verbose=self.verbose)\n llm_output = llm_output.strip()\n text_match = re.search(r\"^```text(.*?)```\", llm_output, re.DOTALL)\n if text_match:\n expression = text_match.group(1)\n output = self._evaluate_expression(expression)\n run_manager.on_text(\"\\nAnswer: \", verbose=self.verbose)\n run_manager.on_text(output, color=\"yellow\", verbose=self.verbose)\n answer = \"Answer: \" + output\n elif llm_output.startswith(\"Answer:\"):\n answer = llm_output\n elif \"Answer:\" in llm_output:\n answer = \"Answer: \" + llm_output.split(\"Answer:\")[-1]\n else:\n raise ValueError(f\"unknown format from LLM: {llm_output}\")\n return {self.output_key: answer}\n async def _aprocess_llm_result(\n self,\n llm_output: str,\n run_manager: AsyncCallbackManagerForChainRun,\n ) -> Dict[str, str]:\n await run_manager.on_text(llm_output, color=\"green\", verbose=self.verbose)\n llm_output = llm_output.strip()\n text_match = re.search(r\"^```text(.*?)```\", llm_output, re.DOTALL)\n if text_match:\n expression = text_match.group(1)\n output = self._evaluate_expression(expression)\n await run_manager.on_text(\"\\nAnswer: \", verbose=self.verbose)\n await run_manager.on_text(output, color=\"yellow\", verbose=self.verbose)\n answer = \"Answer: \" + output\n elif llm_output.startswith(\"Answer:\"):\n answer = llm_output", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_math/base.html"} {"id": "736539c20337-3", "text": "elif llm_output.startswith(\"Answer:\"):\n answer = llm_output\n elif \"Answer:\" in llm_output:\n answer = \"Answer: \" + llm_output.split(\"Answer:\")[-1]\n else:\n raise ValueError(f\"unknown format from LLM: {llm_output}\")\n return {self.output_key: answer}\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n _run_manager.on_text(inputs[self.input_key])\n llm_output = self.llm_chain.predict(\n question=inputs[self.input_key],\n stop=[\"```output\"],\n callbacks=_run_manager.get_child(),\n )\n 
return self._process_llm_result(llm_output, _run_manager)\n async def _acall(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n await _run_manager.on_text(inputs[self.input_key])\n llm_output = await self.llm_chain.apredict(\n question=inputs[self.input_key],\n stop=[\"```output\"],\n callbacks=_run_manager.get_child(),\n )\n return await self._aprocess_llm_result(llm_output, _run_manager)\n @property\n def _chain_type(self) -> str:\n return \"llm_math_chain\"\n[docs] @classmethod\n def from_llm(\n cls,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_math/base.html"} {"id": "736539c20337-4", "text": "[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n prompt: BasePromptTemplate = PROMPT,\n **kwargs: Any,\n ) -> LLMMathChain:\n llm_chain = LLMChain(llm=llm, prompt=prompt)\n return cls(llm_chain=llm_chain, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_math/base.html"} {"id": "4ed665721fe7-0", "text": "Source code for langchain.chains.pal.base\n\"\"\"Implements Program-Aided Language Models.\nAs in https://arxiv.org/pdf/2211.10435.pdf.\n\"\"\"\nfrom __future__ import annotations\nimport warnings\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.pal.colored_object_prompt import COLORED_OBJECT_PROMPT\nfrom langchain.chains.pal.math_prompt import MATH_PROMPT\nfrom langchain.schema import BasePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.utilities import PythonREPL\n[docs]class PALChain(Chain):\n \"\"\"Implements Program-Aided Language Models.\"\"\"\n llm_chain: LLMChain\n llm: Optional[BaseLanguageModel] = None\n \"\"\"[Deprecated]\"\"\"\n prompt: BasePromptTemplate = MATH_PROMPT\n \"\"\"[Deprecated]\"\"\"\n stop: str = \"\\n\\n\"\n get_answer_expr: str = \"print(solution())\"\n python_globals: Optional[Dict[str, Any]] = None\n python_locals: Optional[Dict[str, Any]] = None\n output_key: str = \"result\" #: :meta private:\n return_intermediate_steps: bool = False\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n[docs] @root_validator(pre=True)\n def raise_deprecation(cls, values: Dict) -> Dict:\n if \"llm\" in values:\n warnings.warn(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/pal/base.html"} {"id": "4ed665721fe7-1", "text": "if \"llm\" in values:\n warnings.warn(\n \"Directly instantiating a PALChain with an llm is deprecated. \"\n \"Please instantiate with llm_chain argument or using one of \"\n \"the class method constructors from_math_prompt, \"\n \"from_colored_object_prompt.\"\n )\n if \"llm_chain\" not in values and values[\"llm\"] is not None:\n values[\"llm_chain\"] = LLMChain(llm=values[\"llm\"], prompt=MATH_PROMPT)\n return values\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the singular input key.\n :meta private:\n \"\"\"\n return self.prompt.input_variables\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the singular output key.\n :meta private:\n \"\"\"\n if not self.return_intermediate_steps:\n return [self.output_key]\n else:\n return [self.output_key, \"intermediate_steps\"]\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n code = self.llm_chain.predict(\n stop=[self.stop], callbacks=_run_manager.get_child(), **inputs\n )\n _run_manager.on_text(code, color=\"green\", end=\"\\n\", verbose=self.verbose)\n repl = PythonREPL(_globals=self.python_globals, _locals=self.python_locals)\n res = repl.run(code + f\"\\n{self.get_answer_expr}\")\n output = {self.output_key: res.strip()}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/pal/base.html"} {"id": "4ed665721fe7-2", "text": "output = {self.output_key: res.strip()}\n if self.return_intermediate_steps:\n output[\"intermediate_steps\"] = code\n return output\n[docs] @classmethod\n def from_math_prompt(cls, llm: BaseLanguageModel, **kwargs: Any) -> PALChain:\n \"\"\"Load PAL from math prompt.\"\"\"\n llm_chain = LLMChain(llm=llm, prompt=MATH_PROMPT)\n return cls(\n llm_chain=llm_chain,\n stop=\"\\n\\n\",\n get_answer_expr=\"print(solution())\",\n **kwargs,\n )\n[docs] @classmethod\n def from_colored_object_prompt(\n cls, llm: BaseLanguageModel, **kwargs: Any\n ) -> PALChain:\n \"\"\"Load PAL from colored object prompt.\"\"\"\n llm_chain = LLMChain(llm=llm, prompt=COLORED_OBJECT_PROMPT)\n return cls(\n llm_chain=llm_chain,\n stop=\"\\n\\n\\n\",\n get_answer_expr=\"print(answer)\",\n **kwargs,\n )\n @property\n def _chain_type(self) -> str:\n return \"pal_chain\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/pal/base.html"} {"id": "df850a2051b0-0", "text": "Source code for langchain.chains.conversation.base\n\"\"\"Chain that carries on a conversation and calls an LLM.\"\"\"\nfrom typing import Dict, List\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.chains.conversation.prompt import PROMPT\nfrom langchain.chains.llm import LLMChain\nfrom langchain.memory.buffer import ConversationBufferMemory\nfrom langchain.schema import BaseMemory, BasePromptTemplate\n[docs]class ConversationChain(LLMChain):\n \"\"\"Chain to have a conversation and load context from memory.\n Example:\n .. 
code-block:: python\n from langchain import ConversationChain, OpenAI\n conversation = ConversationChain(llm=OpenAI())\n \"\"\"\n memory: BaseMemory = Field(default_factory=ConversationBufferMemory)\n \"\"\"Default memory store.\"\"\"\n prompt: BasePromptTemplate = PROMPT\n \"\"\"Default conversation prompt to use.\"\"\"\n input_key: str = \"input\" #: :meta private:\n output_key: str = \"response\" #: :meta private:\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Use this since some prompt vars come from history.\"\"\"\n return [self.input_key]\n[docs] @root_validator()\n def validate_prompt_input_variables(cls, values: Dict) -> Dict:\n \"\"\"Validate that prompt input variables are consistent.\"\"\"\n memory_keys = values[\"memory\"].memory_variables\n input_key = values[\"input_key\"]\n if input_key in memory_keys:\n raise ValueError(\n f\"The input key {input_key} was also found in the memory keys \"\n f\"({memory_keys}) - please provide keys that don't overlap.\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/conversation/base.html"} {"id": "df850a2051b0-1", "text": "f\"({memory_keys}) - please provide keys that don't overlap.\"\n )\n prompt_variables = values[\"prompt\"].input_variables\n expected_keys = memory_keys + [input_key]\n if set(expected_keys) != set(prompt_variables):\n raise ValueError(\n \"Got unexpected prompt input variables. The prompt expects \"\n f\"{prompt_variables}, but got {memory_keys} as inputs from \"\n f\"memory, and {input_key} as the normal input key.\"\n )\n return values", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/conversation/base.html"}
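To round out `ConversationChain`: a short multi-turn sketch showing the buffer memory feeding history back into the prompt. The inputs are illustrative, and running it requires OpenAI credentials:
.. code-block:: python

    from langchain import ConversationChain, OpenAI

    conversation = ConversationChain(llm=OpenAI(temperature=0))
    conversation.predict(input="Hi there! I'm Bob.")
    # The buffer memory replays the first exchange, so the model can answer.
    conversation.predict(input="What is my name?")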
{"id": "bf6d6b4f1771-0", "text": "Source code for langchain.chains.openai_functions.qa_with_structure\nfrom typing import Any, List, Optional, Type, Union\nfrom pydantic import BaseModel, Field\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.openai_functions.utils import get_llm_kwargs\nfrom langchain.output_parsers.openai_functions import (\n OutputFunctionsParser,\n PydanticOutputFunctionsParser,\n)\nfrom langchain.prompts import PromptTemplate\nfrom langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate\nfrom langchain.schema import BaseLLMOutputParser\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.schema.messages import HumanMessage, SystemMessage\n[docs]class AnswerWithSources(BaseModel):\n \"\"\"An answer to the question being asked, with sources.\"\"\"\n answer: str = Field(..., description=\"Answer to the question that was asked\")\n sources: List[str] = Field(\n ..., description=\"List of sources used to answer the question\"\n )\n[docs]def create_qa_with_structure_chain(\n llm: BaseLanguageModel,\n schema: Union[dict, Type[BaseModel]],\n output_parser: str = \"base\",\n prompt: Optional[Union[PromptTemplate, ChatPromptTemplate]] = None,\n) -> LLMChain:\n \"\"\"Create a question answering chain that returns an answer with sources.\n Args:\n llm: Language model to use for the chain.\n schema: Pydantic schema to use for the output.\n output_parser: Output parser to use. Should be one of `pydantic` or `base`.\n Defaults to `base`.\n prompt: Optional prompt to use for the chain.\n Returns:\n Chain (LLMChain) that can be used to answer questions matching the schema.\n \"\"\"\n if output_parser == \"pydantic\":", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/qa_with_structure.html"} {"id": "bf6d6b4f1771-1", "text": "Returns:\n Chain (LLMChain) that can be used to answer questions matching the schema.\n \"\"\"\n if output_parser == \"pydantic\":\n if not (isinstance(schema, type) and issubclass(schema, BaseModel)):\n raise ValueError(\n \"Must provide a pydantic class for schema when output_parser is \"\n \"'pydantic'.\"\n )\n _output_parser: BaseLLMOutputParser = PydanticOutputFunctionsParser(\n pydantic_schema=schema\n )\n elif output_parser == \"base\":\n _output_parser = OutputFunctionsParser()\n else:\n raise ValueError(\n f\"Got unexpected output_parser: {output_parser}. \"\n f\"Should be one of `pydantic` or `base`.\"\n )\n if isinstance(schema, type) and issubclass(schema, BaseModel):\n schema_dict = schema.schema()\n else:\n schema_dict = schema\n function = {\n \"name\": schema_dict[\"title\"],\n \"description\": schema_dict[\"description\"],\n \"parameters\": schema_dict,\n }\n llm_kwargs = get_llm_kwargs(function)\n messages = [\n SystemMessage(\n content=(\n \"You are a world class algorithm to answer \"\n \"questions in a specific format.\"\n )\n ),\n HumanMessage(content=\"Answer question using the following context\"),\n HumanMessagePromptTemplate.from_template(\"{context}\"),\n HumanMessagePromptTemplate.from_template(\"Question: {question}\"),\n HumanMessage(content=\"Tips: Make sure to answer in the correct format\"),\n ]\n prompt = prompt or ChatPromptTemplate(messages=messages)\n chain = LLMChain(\n llm=llm,\n prompt=prompt,\n llm_kwargs=llm_kwargs,\n output_parser=_output_parser,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/qa_with_structure.html"} {"id": "bf6d6b4f1771-2", "text": "output_parser=_output_parser,\n )\n return chain\n[docs]def create_qa_with_sources_chain(llm: BaseLanguageModel, **kwargs: Any) -> LLMChain:\n \"\"\"Create a question answering chain that returns an answer with sources.\n Args:\n llm: Language model to use for the chain.\n **kwargs: Keyword arguments to pass to `create_qa_with_structure_chain`.\n Returns:\n Chain (LLMChain) that can be used to answer questions with citations.\n \"\"\"\n return create_qa_with_structure_chain(llm, AnswerWithSources, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/qa_with_structure.html"} {"id": "8a37fd126447-0", "text": "Source code for langchain.chains.openai_functions.extraction\nfrom typing import Any, List\nfrom pydantic import BaseModel\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.openai_functions.utils import (\n _convert_schema,\n _resolve_schema_references,\n get_llm_kwargs,\n)\nfrom langchain.output_parsers.openai_functions import (\n JsonKeyOutputFunctionsParser,\n PydanticAttrOutputFunctionsParser,\n)\nfrom langchain.prompts import ChatPromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\ndef _get_extraction_function(entity_schema: dict) -> dict:\n return {\n \"name\": \"information_extraction\",\n \"description\": \"Extracts the relevant information from the passage.\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"info\": {\"type\": \"array\", \"items\": _convert_schema(entity_schema)}\n },\n \"required\": [\"info\"],\n },\n 
}\n_EXTRACTION_TEMPLATE = \"\"\"Extract and save the relevant entities mentioned\\\n in the following passage together with their properties.\nPassage:\n{input}\n\"\"\"\n[docs]def create_extraction_chain(schema: dict, llm: BaseLanguageModel) -> Chain:\n \"\"\"Creates a chain that extracts information from a passage.\n Args:\n schema: The schema of the entities to extract.\n llm: The language model to use.\n Returns:\n Chain that can be used to extract information from a passage.\n \"\"\"\n function = _get_extraction_function(schema)\n prompt = ChatPromptTemplate.from_template(_EXTRACTION_TEMPLATE)\n output_parser = JsonKeyOutputFunctionsParser(key_name=\"info\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/extraction.html"} {"id": "8a37fd126447-1", "text": "output_parser = JsonKeyOutputFunctionsParser(key_name=\"info\")\n llm_kwargs = get_llm_kwargs(function)\n chain = LLMChain(\n llm=llm,\n prompt=prompt,\n llm_kwargs=llm_kwargs,\n output_parser=output_parser,\n )\n return chain\n[docs]def create_extraction_chain_pydantic(\n pydantic_schema: Any, llm: BaseLanguageModel\n) -> Chain:\n \"\"\"Creates a chain that extracts information from a passage using pydantic schema.\n Args:\n pydantic_schema: The pydantic schema of the entities to extract.\n llm: The language model to use.\n Returns:\n Chain that can be used to extract information from a passage.\n \"\"\"\n class PydanticSchema(BaseModel):\n info: List[pydantic_schema] # type: ignore\n openai_schema = pydantic_schema.schema()\n openai_schema = _resolve_schema_references(\n openai_schema, openai_schema.get(\"definitions\", {})\n )\n function = _get_extraction_function(openai_schema)\n prompt = ChatPromptTemplate.from_template(_EXTRACTION_TEMPLATE)\n output_parser = PydanticAttrOutputFunctionsParser(\n pydantic_schema=PydanticSchema, attr_name=\"info\"\n )\n llm_kwargs = get_llm_kwargs(function)\n chain = LLMChain(\n llm=llm,\n prompt=prompt,\n llm_kwargs=llm_kwargs,\n output_parser=output_parser,\n )\n return chain", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/extraction.html"} {"id": "3da4e4ceb9cd-0", "text": "Source code for langchain.chains.openai_functions.base\n\"\"\"Methods for creating chains that use OpenAI function-calling APIs.\"\"\"\nimport inspect\nimport re\nfrom typing import Any, Callable, Dict, List, Optional, Sequence, Tuple, Type, Union\nfrom pydantic import BaseModel\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.chains import LLMChain\nfrom langchain.output_parsers.openai_functions import (\n JsonOutputFunctionsParser,\n PydanticOutputFunctionsParser,\n)\nfrom langchain.prompts import BasePromptTemplate\nfrom langchain.schema import BaseLLMOutputParser\nPYTHON_TO_JSON_TYPES = {\n \"str\": \"string\",\n \"int\": \"number\",\n \"float\": \"number\",\n \"bool\": \"boolean\",\n}\ndef _get_python_function_name(function: Callable) -> str:\n \"\"\"Get the name of a Python function.\"\"\"\n source = inspect.getsource(function)\n return re.search(r\"^def (.*)\\(\", source).groups()[0] # type: ignore\ndef _parse_python_function_docstring(function: Callable) -> Tuple[str, dict]:\n \"\"\"Parse the function and argument descriptions from the docstring of a function.\n Assumes the function docstring follows Google Python style guide.\n \"\"\"\n docstring = inspect.getdoc(function)\n if docstring:\n docstring_blocks = docstring.split(\"\\n\\n\")\n descriptors = []\n args_block = None\n past_descriptors = 
False\n for block in docstring_blocks:\n if block.startswith(\"Args:\"):\n args_block = block\n break\n elif block.startswith(\"Returns:\") or block.startswith(\"Example:\"):\n # Don't break in case Args come after\n past_descriptors = True\n elif not past_descriptors:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/base.html"} {"id": "3da4e4ceb9cd-1", "text": "past_descriptors = True\n elif not past_descriptors:\n descriptors.append(block)\n else:\n continue\n description = \" \".join(descriptors)\n else:\n description = \"\"\n args_block = None\n arg_descriptions = {}\n if args_block:\n arg = None\n for line in args_block.split(\"\\n\")[1:]:\n if \":\" in line:\n arg, desc = line.split(\":\", maxsplit=1)\n arg_descriptions[arg.strip()] = desc.strip()\n elif arg:\n arg_descriptions[arg.strip()] += \" \" + line.strip()\n return description, arg_descriptions\ndef _get_python_function_arguments(function: Callable, arg_descriptions: dict) -> dict:\n \"\"\"Get a JSON schema describing a Python function's arguments.\n Assumes all function arguments are of primitive types (int, float, str, bool) or\n are subclasses of pydantic.BaseModel.\n \"\"\"\n properties = {}\n annotations = inspect.getfullargspec(function).annotations\n for arg, arg_type in annotations.items():\n if arg == \"return\":\n continue\n if isinstance(arg_type, type) and issubclass(arg_type, BaseModel):\n properties[arg] = arg_type.schema()\n elif arg_type.__name__ in PYTHON_TO_JSON_TYPES:\n properties[arg] = {\"type\": PYTHON_TO_JSON_TYPES[arg_type.__name__]}\n if arg in arg_descriptions:\n if arg not in properties:\n properties[arg] = {}\n properties[arg][\"description\"] = arg_descriptions[arg]\n return properties\ndef _get_python_function_required_args(function: Callable) -> List[str]:\n \"\"\"Get the required arguments for a Python function.\"\"\"\n spec = inspect.getfullargspec(function)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/base.html"} {"id": "3da4e4ceb9cd-2", "text": "spec = inspect.getfullargspec(function)\n required = spec.args[: -len(spec.defaults)] if spec.defaults else spec.args\n required += [k for k in spec.kwonlyargs if k not in (spec.kwonlydefaults or {})]\n return required\n[docs]def convert_python_function_to_openai_function(function: Callable) -> Dict[str, Any]:\n \"\"\"Convert a Python function to an OpenAI function-calling API compatible dict.\n Assumes the Python function has type hints and a docstring with a description. 
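For example, a minimal sketch of the conversion (the `multiply` function below is hypothetical, used only to illustrate the shape of the resulting dict):\n .. code-block:: python\n def multiply(a: int, b: int) -> int:\n \\\"\\\"\\\"Multiply two integers.\n\n Args:\n a: The first factor.\n b: The second factor.\n \\\"\\\"\\\"\n return a * b\n fn = convert_python_function_to_openai_function(multiply)\n # Illustrative result:\n # fn == {\n # \"name\": \"multiply\",\n # \"description\": \"Multiply two integers.\",\n # \"parameters\": {\n # \"type\": \"object\",\n # \"properties\": {\n # \"a\": {\"type\": \"number\", \"description\": \"The first factor.\"},\n # \"b\": {\"type\": \"number\", \"description\": \"The second factor.\"},\n # },\n # \"required\": [\"a\", \"b\"],\n # },\n # }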
If\n the docstring has Google Python style argument descriptions, these will be\n included as well.\n \"\"\"\n description, arg_descriptions = _parse_python_function_docstring(function)\n return {\n \"name\": _get_python_function_name(function),\n \"description\": description,\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": _get_python_function_arguments(function, arg_descriptions),\n \"required\": _get_python_function_required_args(function),\n },\n }\n[docs]def convert_to_openai_function(\n function: Union[Dict[str, Any], Type[BaseModel], Callable]\n) -> Dict[str, Any]:\n \"\"\"Convert a raw function/class to an OpenAI function.\n Args:\n function: Either a dictionary, a pydantic.BaseModel class, or a Python function.\n If a dictionary is passed in, it is assumed to already be a valid OpenAI\n function.\n Returns:\n A dict version of the passed in function which is compatible with the\n OpenAI function-calling API.\n \"\"\"\n if isinstance(function, dict):\n return function\n elif isinstance(function, type) and issubclass(function, BaseModel):\n schema = function.schema()\n return {", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/base.html"} {"id": "3da4e4ceb9cd-3", "text": "schema = function.schema()\n return {\n \"name\": schema[\"title\"],\n \"description\": schema[\"description\"],\n \"parameters\": schema,\n }\n elif callable(function):\n return convert_python_function_to_openai_function(function)\n else:\n raise ValueError(\n f\"Unsupported function type {type(function)}. Functions must be passed in\"\n f\" as Dict, pydantic.BaseModel, or Callable.\"\n )\ndef _get_openai_output_parser(\n functions: Sequence[Union[Dict[str, Any], Type[BaseModel], Callable]],\n function_names: Sequence[str],\n) -> BaseLLMOutputParser:\n \"\"\"Get the appropriate function output parser given the user functions.\"\"\"\n if isinstance(functions[0], type) and issubclass(functions[0], BaseModel):\n if len(functions) > 1:\n pydantic_schema: Union[Dict, Type[BaseModel]] = {\n name: fn for name, fn in zip(function_names, functions)\n }\n else:\n pydantic_schema = functions[0]\n output_parser: BaseLLMOutputParser = PydanticOutputFunctionsParser(\n pydantic_schema=pydantic_schema\n )\n else:\n output_parser = JsonOutputFunctionsParser(args_only=len(functions) <= 1)\n return output_parser\n[docs]def create_openai_fn_chain(\n functions: Sequence[Union[Dict[str, Any], Type[BaseModel], Callable]],\n llm: BaseLanguageModel,\n prompt: BasePromptTemplate,\n *,\n output_parser: Optional[BaseLLMOutputParser] = None,\n **kwargs: Any,\n) -> LLMChain:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/base.html"} {"id": "3da4e4ceb9cd-4", "text": "**kwargs: Any,\n) -> LLMChain:\n \"\"\"Create an LLM chain that uses OpenAI functions.\n Args:\n functions: A sequence of either dictionaries, pydantic.BaseModel classes, or\n Python functions. If dictionaries are passed in, they are assumed to\n already be valid OpenAI functions. If only a single\n function is passed in, then the model will be forced to call that\n function. pydantic.BaseModels and Python functions should have docstrings\n describing what the function does. For best results, pydantic.BaseModels\n should have descriptions of the parameters and Python functions should have\n Google Python style args descriptions in the docstring. 
Additionally,\n Python functions should only use primitive types (str, int, float, bool) or\n pydantic.BaseModels for arguments.\n llm: Language model to use, assumed to support the OpenAI function-calling API.\n prompt: BasePromptTemplate to pass to the model.\n output_parser: BaseLLMOutputParser to use for parsing model outputs. By default\n will be inferred from the function types. If pydantic.BaseModels are passed\n in, then the OutputParser will try to parse outputs using those. Otherwise\n model outputs will simply be parsed as JSON. If multiple functions are\n passed in and they are not pydantic.BaseModels, the chain output will\n include both the name of the function that was returned and the arguments\n to pass to the function.\n Returns:\n An LLMChain that will pass in the given functions to the model when run.\n Example:\n .. code-block:: python\n from langchain.chains.openai_functions import create_openai_fn_chain\n from langchain.chat_models import ChatOpenAI", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/base.html"} {"id": "3da4e4ceb9cd-5", "text": "from langchain.chat_models import ChatOpenAI\n from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate\n from langchain.schema.messages import HumanMessage, SystemMessage\n from pydantic import BaseModel, Field\n from typing import Optional\n class RecordPerson(BaseModel):\n \\\"\\\"\\\"Record some identifying information about a person.\\\"\\\"\\\"\n name: str = Field(..., description=\"The person's name\")\n age: int = Field(..., description=\"The person's age\")\n fav_food: Optional[str] = Field(None, description=\"The person's favorite food\")\n class RecordDog(BaseModel):\n \\\"\\\"\\\"Record some identifying information about a dog.\\\"\\\"\\\"\n name: str = Field(..., description=\"The dog's name\")\n color: str = Field(..., description=\"The dog's color\")\n fav_food: Optional[str] = Field(None, description=\"The dog's favorite food\")\n llm = ChatOpenAI(model=\"gpt-3.5-turbo-0613\", temperature=0)\n prompt_msgs = [\n SystemMessage(\n content=\"You are a world class algorithm for recording entities\"\n ),\n HumanMessage(content=\"Make calls to the relevant function to record the entities in the following input:\"),\n HumanMessagePromptTemplate.from_template(\"{input}\"),\n HumanMessage(content=\"Tips: Make sure to answer in the correct format\"),\n ]\n prompt = ChatPromptTemplate(messages=prompt_msgs)\n chain = create_openai_fn_chain([RecordPerson, RecordDog], llm, prompt)\n chain.run(\"Harry was a chubby brown beagle who loved chicken\")\n # -> RecordDog(name=\"Harry\", color=\"brown\", fav_food=\"chicken\")\n \"\"\" # noqa: E501\n if not functions:\n raise ValueError(\"Need to pass in at least one function. 
Received zero.\")\n openai_functions = [convert_to_openai_function(f) for f in functions]\n fn_names = [oai_fn[\"name\"] for oai_fn in openai_functions]\n output_parser = output_parser or _get_openai_output_parser(functions, fn_names)\n llm_kwargs: Dict[str, Any] = {\n \"functions\": openai_functions,\n }\n if len(openai_functions) == 1:\n llm_kwargs[\"function_call\"] = {\"name\": openai_functions[0][\"name\"]}\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n output_parser=output_parser,\n llm_kwargs=llm_kwargs,\n output_key=\"function\",\n **kwargs,\n )\n return llm_chain\n[docs]def create_structured_output_chain(\n output_schema: Union[Dict[str, Any], Type[BaseModel]],\n llm: BaseLanguageModel,\n prompt: BasePromptTemplate,\n *,\n output_parser: Optional[BaseLLMOutputParser] = None,\n **kwargs: Any,\n) -> LLMChain:\n \"\"\"Create an LLMChain that uses an OpenAI function to get a structured output.\n Args:\n output_schema: Either a dictionary or pydantic.BaseModel class. If a dictionary\n is passed in, it's assumed to already be a valid JsonSchema.\n For best results, pydantic.BaseModels should have docstrings describing what\n the schema represents and descriptions for the parameters.\n llm: Language model to use, assumed to support the OpenAI function-calling API.\n prompt: BasePromptTemplate to pass to the model.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/base.html"} {"id": "3da4e4ceb9cd-7", "text": "prompt: BasePromptTemplate to pass to the model.\n output_parser: BaseLLMOutputParser to use for parsing model outputs. By default\n will be inferred from the function types. If pydantic.BaseModels are passed\n in, then the OutputParser will try to parse outputs using those. Otherwise\n model outputs will simply be parsed as JSON.\n Returns:\n An LLMChain that will pass the given function to the model.\n Example:\n .. code-block:: python\n from langchain.chains.openai_functions import create_structured_output_chain\n from langchain.chat_models import ChatOpenAI\n from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate\n from pydantic import BaseModel, Field\n class Dog(BaseModel):\n \\\"\\\"\\\"Identifying information about a dog.\\\"\\\"\\\"\n name: str = Field(..., description=\"The dog's name\")\n color: str = Field(..., description=\"The dog's color\")\n fav_food: Optional[str] = Field(None, description=\"The dog's favorite food\")\n llm = ChatOpenAI(model=\"gpt-3.5-turbo-0613\", temperature=0)\n prompt_msgs = [\n SystemMessage(\n content=\"You are a world class algorithm for extracting information in structured formats.\"\n ),\n HumanMessage(content=\"Use the given format to extract information from the following input:\"),\n HumanMessagePromptTemplate.from_template(\"{input}\"),\n HumanMessage(content=\"Tips: Make sure to answer in the correct format\"),\n ]\n prompt = ChatPromptTemplate(messages=prompt_msgs)\n chain = create_structured_output_chain(Dog, llm, prompt)\n chain.run(\"Harry was a chubby brown beagle who loved chicken\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/base.html"} {"id": "3da4e4ceb9cd-8", "text": "chain.run(\"Harry was a chubby brown beagle who loved chicken\")\n # -> Dog(name=\"Harry\", color=\"brown\", fav_food=\"chicken\")\n \"\"\" # noqa: E501\n function: Dict = {\n \"name\": \"output_formatter\",\n \"description\": (\n \"Output formatter. 
Should always be used to format your response to the\"\n \" user.\"\n ),\n }\n parameters = (\n output_schema if isinstance(output_schema, dict) else output_schema.schema()\n )\n function[\"parameters\"] = parameters\n return create_openai_fn_chain(\n [function], llm, prompt, output_parser=output_parser, **kwargs\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/base.html"} {"id": "efa05a78289a-0", "text": "Source code for langchain.chains.openai_functions.utils\nfrom typing import Any, Dict\ndef _resolve_schema_references(schema: Any, definitions: Dict[str, Any]) -> Any:\n \"\"\"\n Resolves the $ref keys in a JSON schema object using the provided definitions.\n \"\"\"\n if isinstance(schema, list):\n for i, item in enumerate(schema):\n schema[i] = _resolve_schema_references(item, definitions)\n elif isinstance(schema, dict):\n if \"$ref\" in schema:\n ref_key = schema.pop(\"$ref\").split(\"/\")[-1]\n ref = definitions.get(ref_key, {})\n schema.update(ref)\n else:\n for key, value in schema.items():\n schema[key] = _resolve_schema_references(value, definitions)\n return schema\ndef _convert_schema(schema: dict) -> dict:\n props = {k: {\"title\": k, **v} for k, v in schema[\"properties\"].items()}\n return {\n \"type\": \"object\",\n \"properties\": props,\n \"required\": schema.get(\"required\", []),\n }\n[docs]def get_llm_kwargs(function: dict) -> dict:\n \"\"\"Returns the kwargs for the LLMChain constructor.\n Args:\n function: The function to use.\n Returns:\n The kwargs for the LLMChain constructor.\n \"\"\"\n return {\"functions\": [function], \"function_call\": {\"name\": function[\"name\"]}}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/utils.html"} {"id": "ae5d2f747768-0", "text": "Source code for langchain.chains.openai_functions.tagging\nfrom typing import Any\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.openai_functions.utils import _convert_schema, get_llm_kwargs\nfrom langchain.output_parsers.openai_functions import (\n JsonOutputFunctionsParser,\n PydanticOutputFunctionsParser,\n)\nfrom langchain.prompts import ChatPromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\ndef _get_tagging_function(schema: dict) -> dict:\n return {\n \"name\": \"information_extraction\",\n \"description\": \"Extracts the relevant information from the passage.\",\n \"parameters\": _convert_schema(schema),\n }\n_TAGGING_TEMPLATE = \"\"\"Extract the desired information from the following passage.\nPassage:\n{input}\n\"\"\"\n[docs]def create_tagging_chain(schema: dict, llm: BaseLanguageModel) -> Chain:\n \"\"\"Creates a chain that extracts information from a passage.\n Args:\n schema: The schema of the entities to extract.\n llm: The language model to use.\n Returns:\n Chain (LLMChain) that can be used to extract information from a passage.\n \"\"\"\n function = _get_tagging_function(schema)\n prompt = ChatPromptTemplate.from_template(_TAGGING_TEMPLATE)\n output_parser = JsonOutputFunctionsParser()\n llm_kwargs = get_llm_kwargs(function)\n chain = LLMChain(\n llm=llm,\n prompt=prompt,\n llm_kwargs=llm_kwargs,\n output_parser=output_parser,\n )\n return chain\n[docs]def create_tagging_chain_pydantic(\n pydantic_schema: Any, llm: BaseLanguageModel", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/tagging.html"} {"id": "ae5d2f747768-1", "text": "pydantic_schema: Any, 
llm: BaseLanguageModel\n) -> Chain:\n \"\"\"Creates a chain that extracts information from a passage.\n Args:\n pydantic_schema: The pydantic schema of the entities to extract.\n llm: The language model to use.\n Returns:\n Chain (LLMChain) that can be used to extract information from a passage.\n \"\"\"\n openai_schema = pydantic_schema.schema()\n function = _get_tagging_function(openai_schema)\n prompt = ChatPromptTemplate.from_template(_TAGGING_TEMPLATE)\n output_parser = PydanticOutputFunctionsParser(pydantic_schema=pydantic_schema)\n llm_kwargs = get_llm_kwargs(function)\n chain = LLMChain(\n llm=llm,\n prompt=prompt,\n llm_kwargs=llm_kwargs,\n output_parser=output_parser,\n )\n return chain", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/tagging.html"} {"id": "ad032368a7c5-0", "text": "Source code for langchain.chains.openai_functions.openapi\nimport json\nimport re\nfrom collections import defaultdict\nfrom typing import Any, Callable, Dict, List, Optional, Tuple, Union\nimport requests\nfrom openapi_schema_pydantic import Parameter\nfrom requests import Response\nfrom langchain import LLMChain\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.sequential import SequentialChain\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.input import get_colored_text\nfrom langchain.output_parsers.openai_functions import JsonOutputFunctionsParser\nfrom langchain.prompts import ChatPromptTemplate\nfrom langchain.schema import BasePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.tools import APIOperation\nfrom langchain.utilities.openapi import OpenAPISpec\ndef _get_description(o: Any, prefer_short: bool) -> Optional[str]:\n summary = getattr(o, \"summary\", None)\n description = getattr(o, \"description\", None)\n if prefer_short:\n return summary or description\n return description or summary\ndef _format_url(url: str, path_params: dict) -> str:\n expected_path_param = re.findall(r\"{(.*?)}\", url)\n new_params = {}\n for param in expected_path_param:\n clean_param = param.lstrip(\".;\").rstrip(\"*\")\n val = path_params[clean_param]\n if isinstance(val, list):\n if param[0] == \".\":\n sep = \".\" if param[-1] == \"*\" else \",\"\n new_val = \".\" + sep.join(val)\n elif param[0] == \";\":\n sep = f\"{clean_param}=\" if param[-1] == \"*\" else \",\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/openapi.html"} {"id": "ad032368a7c5-1", "text": "sep = f\"{clean_param}=\" if param[-1] == \"*\" else \",\"\n new_val = f\"{clean_param}=\" + sep.join(val)\n else:\n new_val = \",\".join(val)\n elif isinstance(val, dict):\n kv_sep = \"=\" if param[-1] == \"*\" else \",\"\n kv_strs = [kv_sep.join((k, v)) for k, v in val.items()]\n if param[0] == \".\":\n sep = \".\"\n new_val = \".\"\n elif param[0] == \";\":\n sep = \";\"\n new_val = \";\"\n else:\n sep = \",\"\n new_val = \"\"\n new_val += sep.join(kv_strs)\n else:\n if param[0] == \".\":\n new_val = f\".{val}\"\n elif param[0] == \";\":\n new_val = f\";{clean_param}={val}\"\n else:\n new_val = val\n new_params[param] = new_val\n return url.format(**new_params)\ndef _openapi_params_to_json_schema(params: List[Parameter], spec: OpenAPISpec) -> dict:\n properties = {}\n required = []\n for p in params:\n if p.param_schema:\n schema = spec.get_schema(p.param_schema)\n else:\n media_type_schema = 
list(p.content.values())[0].media_type_schema # type: ignore # noqa: E501\n schema = spec.get_schema(media_type_schema)\n if p.description and not schema.description:\n schema.description = p.description\n properties[p.name] = json.loads(schema.json(exclude_none=True))\n if p.required:\n required.append(p.name)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/openapi.html"} {"id": "ad032368a7c5-2", "text": "if p.required:\n required.append(p.name)\n return {\"type\": \"object\", \"properties\": properties, \"required\": required}\n[docs]def openapi_spec_to_openai_fn(\n spec: OpenAPISpec,\n) -> Tuple[List[Dict[str, Any]], Callable]:\n \"\"\"Convert a valid OpenAPI spec to the JSON Schema format expected for OpenAI\n functions.\n Args:\n spec: OpenAPI spec to convert.\n Returns:\n Tuple of the OpenAI functions JSON schema and a default function for executing\n a request based on the OpenAI function schema.\n \"\"\"\n if not spec.paths:\n return [], lambda: None\n functions = []\n _name_to_call_map = {}\n for path in spec.paths:\n path_params = {\n (p.name, p.param_in): p for p in spec.get_parameters_for_path(path)\n }\n for method in spec.get_methods_for_path(path):\n request_args = {}\n op = spec.get_operation(path, method)\n op_params = path_params.copy()\n for param in spec.get_parameters_for_operation(op):\n op_params[(param.name, param.param_in)] = param\n params_by_type = defaultdict(list)\n for name_loc, p in op_params.items():\n params_by_type[name_loc[1]].append(p)\n param_loc_to_arg_name = {\n \"query\": \"params\",\n \"header\": \"headers\",\n \"cookie\": \"cookies\",\n \"path\": \"path_params\",\n }\n for param_loc, arg_name in param_loc_to_arg_name.items():\n if params_by_type[param_loc]:\n request_args[arg_name] = _openapi_params_to_json_schema(\n params_by_type[param_loc], spec\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/openapi.html"} {"id": "ad032368a7c5-3", "text": "params_by_type[param_loc], spec\n )\n request_body = spec.get_request_body_for_operation(op)\n # TODO: Support more MIME types.\n if request_body and request_body.content:\n media_types = {}\n for media_type, media_type_object in request_body.content.items():\n if media_type_object.media_type_schema:\n schema = spec.get_schema(media_type_object.media_type_schema)\n media_types[media_type] = json.loads(\n schema.json(exclude_none=True)\n )\n if len(media_types) == 1:\n media_type, schema_dict = list(media_types.items())[0]\n key = \"json\" if media_type == \"application/json\" else \"data\"\n request_args[key] = schema_dict\n elif len(media_types) > 1:\n request_args[\"data\"] = {\"anyOf\": list(media_types.values())}\n api_op = APIOperation.from_openapi_spec(spec, path, method)\n fn = {\n \"name\": api_op.operation_id,\n \"description\": api_op.description,\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": request_args,\n },\n }\n functions.append(fn)\n _name_to_call_map[fn[\"name\"]] = {\n \"method\": method,\n \"url\": api_op.base_url + api_op.path,\n }\n def default_call_api(\n name: str,\n fn_args: dict,\n headers: Optional[dict] = None,\n params: Optional[dict] = None,\n **kwargs: Any,\n ) -> Any:\n method = _name_to_call_map[name][\"method\"]\n url = _name_to_call_map[name][\"url\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/openapi.html"} {"id": "ad032368a7c5-4", "text": "url = _name_to_call_map[name][\"url\"]\n path_params = 
fn_args.pop(\"path_params\", {})\n url = _format_url(url, path_params)\n if \"data\" in fn_args and isinstance(fn_args[\"data\"], dict):\n fn_args[\"data\"] = json.dumps(fn_args[\"data\"])\n _kwargs = {**fn_args, **kwargs}\n if headers is not None:\n if \"headers\" in _kwargs:\n _kwargs[\"headers\"].update(headers)\n else:\n _kwargs[\"headers\"] = headers\n if params is not None:\n if \"params\" in _kwargs:\n _kwargs[\"params\"].update(params)\n else:\n _kwargs[\"params\"] = params\n return requests.request(method, url, **_kwargs)\n return functions, default_call_api\n[docs]class SimpleRequestChain(Chain):\n request_method: Callable\n output_key: str = \"response\"\n input_key: str = \"function\"\n @property\n def input_keys(self) -> List[str]:\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n return [self.output_key]\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"Run the logic of this chain and return the output.\"\"\"\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n name = inputs[\"function\"].pop(\"name\")\n args = inputs[\"function\"].pop(\"arguments\")\n _pretty_name = get_colored_text(name, \"green\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/openapi.html"} {"id": "ad032368a7c5-5", "text": "_pretty_name = get_colored_text(name, \"green\")\n _pretty_args = get_colored_text(json.dumps(args, indent=2), \"green\")\n _text = f\"Calling endpoint {_pretty_name} with arguments:\\n\" + _pretty_args\n _run_manager.on_text(_text)\n api_response: Response = self.request_method(name, args)\n if api_response.status_code != 200:\n response = (\n f\"{api_response.status_code}: {api_response.reason}\"\n + f\"\\nFor {name} \"\n + f\"Called with args: {args['params']}\"\n )\n else:\n try:\n response = api_response.json()\n except Exception: # noqa: E722\n response = api_response.text\n return {self.output_key: response}\n[docs]def get_openapi_chain(\n spec: Union[OpenAPISpec, str],\n llm: Optional[BaseLanguageModel] = None,\n prompt: Optional[BasePromptTemplate] = None,\n request_chain: Optional[Chain] = None,\n llm_chain_kwargs: Optional[Dict] = None,\n verbose: bool = False,\n headers: Optional[Dict] = None,\n params: Optional[Dict] = None,\n **kwargs: Any,\n) -> SequentialChain:\n \"\"\"Create a chain for querying an API from an OpenAPI spec.\n Args:\n spec: OpenAPISpec or url/file/text string corresponding to one.\n llm: language model, should be an OpenAI function-calling model, e.g.\n `ChatOpenAI(model=\"gpt-3.5-turbo-0613\")`.\n prompt: Main prompt template to use.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/openapi.html"} {"id": "ad032368a7c5-6", "text": "prompt: Main prompt template to use.\n request_chain: Chain for taking the function's output and executing the request.\n \"\"\"\n if isinstance(spec, str):\n for conversion in (\n OpenAPISpec.from_url,\n OpenAPISpec.from_file,\n OpenAPISpec.from_text,\n ):\n try:\n spec = conversion(spec) # type: ignore[arg-type]\n break\n except Exception: # noqa: E722\n pass\n if isinstance(spec, str):\n raise ValueError(f\"Unable to parse spec from source {spec}\")\n openai_fns, call_api_fn = openapi_spec_to_openai_fn(spec)\n llm = llm or ChatOpenAI(\n model=\"gpt-3.5-turbo-0613\",\n )\n prompt = prompt or ChatPromptTemplate.from_template(\n \"Use the provided APIs to respond to this user 
query:\\n\\n{query}\"\n )\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n llm_kwargs={\"functions\": openai_fns},\n output_parser=JsonOutputFunctionsParser(args_only=False),\n output_key=\"function\",\n verbose=verbose,\n **(llm_chain_kwargs or {}),\n )\n request_chain = request_chain or SimpleRequestChain(\n request_method=lambda name, args: call_api_fn(\n name, args, headers=headers, params=params\n ),\n verbose=verbose,\n )\n return SequentialChain(\n chains=[llm_chain, request_chain],\n input_variables=llm_chain.input_keys,\n output_variables=[\"response\"],\n verbose=verbose,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/openapi.html"} {"id": "efe20c58af57-0", "text": "Source code for langchain.chains.openai_functions.citation_fuzzy_match\nfrom typing import Iterator, List\nfrom pydantic import BaseModel, Field\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.openai_functions.utils import get_llm_kwargs\nfrom langchain.output_parsers.openai_functions import (\n PydanticOutputFunctionsParser,\n)\nfrom langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.schema.messages import HumanMessage, SystemMessage\n[docs]class FactWithEvidence(BaseModel):\n \"\"\"Class representing a single statement.\n Each fact has a body and a list of sources.\n If there are multiple facts, make sure to break them apart\n such that each one only uses a set of sources that are relevant to it.\n \"\"\"\n fact: str = Field(..., description=\"Body of the sentence, as part of a response\")\n substring_quote: List[str] = Field(\n ...,\n description=(\n \"Each source should be a direct quote from the context, \"\n \"as a substring of the original content\"\n ),\n )\n def _get_span(self, quote: str, context: str, errs: int = 100) -> Iterator[str]:\n import regex\n minor = quote\n major = context\n errs_ = 0\n s = regex.search(f\"({minor}){{e<={errs_}}}\", major)\n while s is None and errs_ <= errs:\n errs_ += 1\n s = regex.search(f\"({minor}){{e<={errs_}}}\", major)\n if s is not None:\n yield from s.spans()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/citation_fuzzy_match.html"} {"id": "efe20c58af57-1", "text": "if s is not None:\n yield from s.spans()\n[docs] def get_spans(self, context: str) -> Iterator[str]:\n for quote in self.substring_quote:\n yield from self._get_span(quote, context)\n[docs]class QuestionAnswer(BaseModel):\n \"\"\"A question and its answer as a list of facts; each fact should have a source.\n Each fact contains a body and a list of sources.\"\"\"\n question: str = Field(..., description=\"Question that was asked\")\n answer: List[FactWithEvidence] = Field(\n ...,\n description=(\n \"Body of the answer, each fact should be \"\n \"its separate object with a body and a list of sources\"\n ),\n )\n[docs]def create_citation_fuzzy_match_chain(llm: BaseLanguageModel) -> LLMChain:\n \"\"\"Create a citation fuzzy match chain.\n Args:\n llm: Language model to use for the chain.\n Returns:\n Chain (LLMChain) that can be used to answer questions with citations.\n \"\"\"\n output_parser = PydanticOutputFunctionsParser(pydantic_schema=QuestionAnswer)\n schema = QuestionAnswer.schema()\n function = {\n \"name\": schema[\"title\"],\n \"description\": schema[\"description\"],\n \"parameters\": schema,\n }\n llm_kwargs = get_llm_kwargs(function)\n messages = [\n 
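        # Prompt layout: a fixed system instruction, the retrieved context, the\n        # user's question, and a final nudge toward exact, verbatim quotes.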
SystemMessage(\n content=(\n \"You are a world class algorithm to answer \"\n \"questions with correct and exact citations.\"\n )\n ),\n HumanMessage(content=\"Answer question using the following context\"),\n HumanMessagePromptTemplate.from_template(\"{context}\"),\n HumanMessagePromptTemplate.from_template(\"Question: {question}\"),", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/citation_fuzzy_match.html"} {"id": "efe20c58af57-2", "text": "HumanMessagePromptTemplate.from_template(\"Question: {question}\"),\n HumanMessage(\n content=(\n \"Tips: Make sure to cite your sources, \"\n \"and use the exact words from the context.\"\n )\n ),\n ]\n prompt = ChatPromptTemplate(messages=messages)\n chain = LLMChain(\n llm=llm,\n prompt=prompt,\n llm_kwargs=llm_kwargs,\n output_parser=output_parser,\n )\n return chain", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/citation_fuzzy_match.html"} {"id": "015c726e3b2a-0", "text": "Source code for langchain.chains.router.llm_router\n\"\"\"Base classes for LLM-powered router chains.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional, Type, cast\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n)\nfrom langchain.chains import LLMChain\nfrom langchain.chains.router.base import RouterChain\nfrom langchain.output_parsers.json import parse_and_check_json_markdown\nfrom langchain.schema import BaseOutputParser, BasePromptTemplate, OutputParserException\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]class LLMRouterChain(RouterChain):\n \"\"\"A router chain that uses an LLM chain to perform routing.\"\"\"\n llm_chain: LLMChain\n \"\"\"LLM chain used to perform routing\"\"\"\n[docs] @root_validator()\n def validate_prompt(cls, values: dict) -> dict:\n prompt = values[\"llm_chain\"].prompt\n if prompt.output_parser is None:\n raise ValueError(\n \"LLMRouterChain requires base llm_chain prompt to have an output\"\n \" parser that converts LLM text output to a dictionary with keys\"\n \" 'destination' and 'next_inputs'. 
Received a prompt with no output\"\n \" parser.\"\n )\n return values\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Will be whatever keys the LLM chain prompt expects.\n :meta private:\n \"\"\"\n return self.llm_chain.input_keys\n def _validate_outputs(self, outputs: Dict[str, Any]) -> None:\n super()._validate_outputs(outputs)\n if not isinstance(outputs[\"next_inputs\"], dict):\n raise ValueError(\"Expected 'next_inputs' to be a dict.\")\n def _call(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/router/llm_router.html"} {"id": "015c726e3b2a-1", "text": "raise ValueError(\"Expected 'next_inputs' to be a dict.\")\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n output = cast(\n Dict[str, Any],\n self.llm_chain.predict_and_parse(callbacks=callbacks, **inputs),\n )\n return output\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n output = cast(\n Dict[str, Any],\n await self.llm_chain.apredict_and_parse(callbacks=callbacks, **inputs),\n )\n return output\n[docs] @classmethod\n def from_llm(\n cls, llm: BaseLanguageModel, prompt: BasePromptTemplate, **kwargs: Any\n ) -> LLMRouterChain:\n \"\"\"Convenience constructor.\"\"\"\n llm_chain = LLMChain(llm=llm, prompt=prompt)\n return cls(llm_chain=llm_chain, **kwargs)\n[docs]class RouterOutputParser(BaseOutputParser[Dict[str, str]]):\n \"\"\"Parser for output of router chain in the multi-prompt chain.\"\"\"\n default_destination: str = \"DEFAULT\"\n next_inputs_type: Type = str\n next_inputs_inner_key: str = \"input\"\n[docs] def parse(self, text: str) -> Dict[str, Any]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/router/llm_router.html"} {"id": "015c726e3b2a-2", "text": "[docs] def parse(self, text: str) -> Dict[str, Any]:\n try:\n expected_keys = [\"destination\", \"next_inputs\"]\n parsed = parse_and_check_json_markdown(text, expected_keys)\n if not isinstance(parsed[\"destination\"], str):\n raise ValueError(\"Expected 'destination' to be a string.\")\n if not isinstance(parsed[\"next_inputs\"], self.next_inputs_type):\n raise ValueError(\n f\"Expected 'next_inputs' to be {self.next_inputs_type}.\"\n )\n parsed[\"next_inputs\"] = {self.next_inputs_inner_key: parsed[\"next_inputs\"]}\n if (\n parsed[\"destination\"].strip().lower()\n == self.default_destination.lower()\n ):\n parsed[\"destination\"] = None\n else:\n parsed[\"destination\"] = parsed[\"destination\"].strip()\n return parsed\n except Exception as e:\n raise OutputParserException(\n f\"Parsing text\\n{text}\\n raised following error:\\n{e}\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/router/llm_router.html"} {"id": "7c3466a0831a-0", "text": "Source code for langchain.chains.router.multi_prompt\n\"\"\"Use a single chain to route an input to one of multiple llm chains.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom langchain.chains import ConversationChain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.router.base import MultiRouteChain, RouterChain\nfrom langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser\nfrom 
langchain.chains.router.multi_prompt_prompt import MULTI_PROMPT_ROUTER_TEMPLATE\nfrom langchain.prompts import PromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]class MultiPromptChain(MultiRouteChain):\n \"\"\"A multi-route chain that uses an LLM router chain to choose amongst prompts.\"\"\"\n router_chain: RouterChain\n \"\"\"Chain for deciding a destination chain and the input to it.\"\"\"\n destination_chains: Mapping[str, LLMChain]\n \"\"\"Map of name to candidate chains that inputs can be routed to.\"\"\"\n default_chain: LLMChain\n \"\"\"Default chain to use when router doesn't map input to one of the destinations.\"\"\"\n @property\n def output_keys(self) -> List[str]:\n return [\"text\"]\n[docs] @classmethod\n def from_prompts(\n cls,\n llm: BaseLanguageModel,\n prompt_infos: List[Dict[str, str]],\n default_chain: Optional[LLMChain] = None,\n **kwargs: Any,\n ) -> MultiPromptChain:\n \"\"\"Convenience constructor for instantiating from destination prompts.\"\"\"\n destinations = [f\"{p['name']}: {p['description']}\" for p in prompt_infos]\n destinations_str = \"\\n\".join(destinations)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/router/multi_prompt.html"} {"id": "7c3466a0831a-1", "text": "destinations_str = \"\\n\".join(destinations)\n router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(\n destinations=destinations_str\n )\n router_prompt = PromptTemplate(\n template=router_template,\n input_variables=[\"input\"],\n output_parser=RouterOutputParser(),\n )\n router_chain = LLMRouterChain.from_llm(llm, router_prompt)\n destination_chains = {}\n for p_info in prompt_infos:\n name = p_info[\"name\"]\n prompt_template = p_info[\"prompt_template\"]\n prompt = PromptTemplate(template=prompt_template, input_variables=[\"input\"])\n chain = LLMChain(llm=llm, prompt=prompt)\n destination_chains[name] = chain\n _default_chain = default_chain or ConversationChain(llm=llm, output_key=\"text\")\n return cls(\n router_chain=router_chain,\n destination_chains=destination_chains,\n default_chain=_default_chain,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/router/multi_prompt.html"} {"id": "9db2d561949a-0", "text": "Source code for langchain.chains.router.base\n\"\"\"Base classes for chain routing.\"\"\"\nfrom __future__ import annotations\nfrom abc import ABC\nfrom typing import Any, Dict, List, Mapping, NamedTuple, Optional\nfrom pydantic import Extra\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n Callbacks,\n)\nfrom langchain.chains.base import Chain\n[docs]class Route(NamedTuple):\n destination: Optional[str]\n next_inputs: Dict[str, Any]\n[docs]class RouterChain(Chain, ABC):\n \"\"\"Chain that outputs the name of a destination chain and the inputs to it.\"\"\"\n @property\n def output_keys(self) -> List[str]:\n return [\"destination\", \"next_inputs\"]\n[docs] def route(self, inputs: Dict[str, Any], callbacks: Callbacks = None) -> Route:\n result = self(inputs, callbacks=callbacks)\n return Route(result[\"destination\"], result[\"next_inputs\"])\n[docs] async def aroute(\n self, inputs: Dict[str, Any], callbacks: Callbacks = None\n ) -> Route:\n result = await self.acall(inputs, callbacks=callbacks)\n return Route(result[\"destination\"], result[\"next_inputs\"])\n[docs]class MultiRouteChain(Chain):\n \"\"\"Use a single chain to route an input to one of multiple candidate chains.\"\"\"\n router_chain: 
RouterChain\n \"\"\"Chain that routes inputs to destination chains.\"\"\"\n destination_chains: Mapping[str, Chain]\n \"\"\"Chains that return final answer to inputs.\"\"\"\n default_chain: Chain\n \"\"\"Default chain to use when none of the destination chains are suitable.\"\"\"\n silent_errors: bool = False", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/router/base.html"} {"id": "9db2d561949a-1", "text": "silent_errors: bool = False\n \"\"\"If True, use default_chain when an invalid destination name is provided. \n Defaults to False.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Will be whatever keys the router chain prompt expects.\n :meta private:\n \"\"\"\n return self.router_chain.input_keys\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Will always return text key.\n :meta private:\n \"\"\"\n return []\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n route = self.router_chain.route(inputs, callbacks=callbacks)\n _run_manager.on_text(\n str(route.destination) + \": \" + str(route.next_inputs), verbose=self.verbose\n )\n if not route.destination:\n return self.default_chain(route.next_inputs, callbacks=callbacks)\n elif route.destination in self.destination_chains:\n return self.destination_chains[route.destination](\n route.next_inputs, callbacks=callbacks\n )\n elif self.silent_errors:\n return self.default_chain(route.next_inputs, callbacks=callbacks)\n else:\n raise ValueError(\n f\"Received invalid destination chain name '{route.destination}'\"\n )\n async def _acall(\n self,\n inputs: Dict[str, Any],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/router/base.html"} {"id": "9db2d561949a-2", "text": "self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n route = await self.router_chain.aroute(inputs, callbacks=callbacks)\n _run_manager.on_text(\n str(route.destination) + \": \" + str(route.next_inputs), verbose=self.verbose\n )\n if not route.destination:\n return await self.default_chain.acall(\n route.next_inputs, callbacks=callbacks\n )\n elif route.destination in self.destination_chains:\n return await self.destination_chains[route.destination].acall(\n route.next_inputs, callbacks=callbacks\n )\n elif self.silent_errors:\n return await self.default_chain.acall(\n route.next_inputs, callbacks=callbacks\n )\n else:\n raise ValueError(\n f\"Received invalid destination chain name '{route.destination}'\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/router/base.html"} {"id": "27fd5cacf453-0", "text": "Source code for langchain.chains.router.embedding_router\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional, Sequence, Tuple, Type\nfrom pydantic import Extra\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.router.base import RouterChain\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base 
import VectorStore\n[docs]class EmbeddingRouterChain(RouterChain):\n \"\"\"Class that uses embeddings to route between options.\"\"\"\n vectorstore: VectorStore\n routing_keys: List[str] = [\"query\"]\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Will be whatever keys the LLM chain prompt expects.\n :meta private:\n \"\"\"\n return self.routing_keys\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _input = \", \".join([inputs[k] for k in self.routing_keys])\n results = self.vectorstore.similarity_search(_input, k=1)\n return {\"next_inputs\": inputs, \"destination\": results[0].metadata[\"name\"]}\n[docs] @classmethod\n def from_names_and_descriptions(\n cls,\n names_and_descriptions: Sequence[Tuple[str, Sequence[str]]],\n vectorstore_cls: Type[VectorStore],\n embeddings: Embeddings,\n **kwargs: Any,\n ) -> EmbeddingRouterChain:\n \"\"\"Convenience constructor.\"\"\"\n documents = []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/router/embedding_router.html"} {"id": "27fd5cacf453-1", "text": "\"\"\"Convenience constructor.\"\"\"\n documents = []\n for name, descriptions in names_and_descriptions:\n for description in descriptions:\n documents.append(\n Document(page_content=description, metadata={\"name\": name})\n )\n vectorstore = vectorstore_cls.from_documents(documents, embeddings)\n return cls(vectorstore=vectorstore, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/router/embedding_router.html"} {"id": "ea7e5ca0c934-0", "text": "Source code for langchain.chains.router.multi_retrieval_qa\n\"\"\"Use a single chain to route an input to one of multiple retrieval qa chains.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom langchain.chains import ConversationChain\nfrom langchain.chains.base import Chain\nfrom langchain.chains.conversation.prompt import DEFAULT_TEMPLATE\nfrom langchain.chains.retrieval_qa.base import BaseRetrievalQA, RetrievalQA\nfrom langchain.chains.router.base import MultiRouteChain\nfrom langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser\nfrom langchain.chains.router.multi_retrieval_prompt import (\n MULTI_RETRIEVAL_ROUTER_TEMPLATE,\n)\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.prompts import PromptTemplate\nfrom langchain.schema import BaseRetriever\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]class MultiRetrievalQAChain(MultiRouteChain):\n \"\"\"A multi-route chain that uses an LLM router chain to choose amongst retrieval\n qa chains.\"\"\"\n router_chain: LLMRouterChain\n \"\"\"Chain for deciding a destination chain and the input to it.\"\"\"\n destination_chains: Mapping[str, BaseRetrievalQA]\n \"\"\"Map of name to candidate chains that inputs can be routed to.\"\"\"\n default_chain: Chain\n \"\"\"Default chain to use when router doesn't map input to one of the destinations.\"\"\"\n @property\n def output_keys(self) -> List[str]:\n return [\"result\"]\n[docs] @classmethod\n def from_retrievers(\n cls,\n llm: BaseLanguageModel,\n retriever_infos: List[Dict[str, Any]],\n default_retriever: Optional[BaseRetriever] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/router/multi_retrieval_qa.html"} {"id": 
"ea7e5ca0c934-1", "text": "default_retriever: Optional[BaseRetriever] = None,\n default_prompt: Optional[PromptTemplate] = None,\n default_chain: Optional[Chain] = None,\n **kwargs: Any,\n ) -> MultiRetrievalQAChain:\n if default_prompt and not default_retriever:\n raise ValueError(\n \"`default_retriever` must be specified if `default_prompt` is \"\n \"provided. Received only `default_prompt`.\"\n )\n destinations = [f\"{r['name']}: {r['description']}\" for r in retriever_infos]\n destinations_str = \"\\n\".join(destinations)\n router_template = MULTI_RETRIEVAL_ROUTER_TEMPLATE.format(\n destinations=destinations_str\n )\n router_prompt = PromptTemplate(\n template=router_template,\n input_variables=[\"input\"],\n output_parser=RouterOutputParser(next_inputs_inner_key=\"query\"),\n )\n router_chain = LLMRouterChain.from_llm(llm, router_prompt)\n destination_chains = {}\n for r_info in retriever_infos:\n prompt = r_info.get(\"prompt\")\n retriever = r_info[\"retriever\"]\n chain = RetrievalQA.from_llm(llm, prompt=prompt, retriever=retriever)\n name = r_info[\"name\"]\n destination_chains[name] = chain\n if default_chain:\n _default_chain = default_chain\n elif default_retriever:\n _default_chain = RetrievalQA.from_llm(\n llm, prompt=default_prompt, retriever=default_retriever\n )\n else:\n prompt_template = DEFAULT_TEMPLATE.replace(\"input\", \"query\")\n prompt = PromptTemplate(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/router/multi_retrieval_qa.html"} {"id": "ea7e5ca0c934-2", "text": "prompt = PromptTemplate(\n template=prompt_template, input_variables=[\"history\", \"query\"]\n )\n _default_chain = ConversationChain(\n llm=ChatOpenAI(), prompt=prompt, input_key=\"query\", output_key=\"result\"\n )\n return cls(\n router_chain=router_chain,\n destination_chains=destination_chains,\n default_chain=_default_chain,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/router/multi_retrieval_qa.html"} {"id": "9a9274a3e11c-0", "text": "Source code for langchain.chains.qa_with_sources.vector_db\n\"\"\"Question-answering with sources over a vector database.\"\"\"\nimport warnings\nfrom typing import Any, Dict, List\nfrom pydantic import Field, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n)\nfrom langchain.chains.combine_documents.stuff import StuffDocumentsChain\nfrom langchain.chains.qa_with_sources.base import BaseQAWithSourcesChain\nfrom langchain.docstore.document import Document\nfrom langchain.vectorstores.base import VectorStore\n[docs]class VectorDBQAWithSourcesChain(BaseQAWithSourcesChain):\n \"\"\"Question-answering with sources over a vector database.\"\"\"\n vectorstore: VectorStore = Field(exclude=True)\n \"\"\"Vector Database to connect to.\"\"\"\n k: int = 4\n \"\"\"Number of results to return from store\"\"\"\n reduce_k_below_max_tokens: bool = False\n \"\"\"Reduce the number of results to return from store based on tokens limit\"\"\"\n max_tokens_limit: int = 3375\n \"\"\"Restrict the docs to return from store based on tokens,\n enforced only for StuffDocumentChain and if reduce_k_below_max_tokens is to true\"\"\"\n search_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Extra search args.\"\"\"\n def _reduce_tokens_below_limit(self, docs: List[Document]) -> List[Document]:\n num_docs = len(docs)\n if self.reduce_k_below_max_tokens and isinstance(\n self.combine_documents_chain, StuffDocumentsChain\n ):\n 
tokens = [\n self.combine_documents_chain.llm_chain.llm.get_num_tokens(\n doc.page_content\n )\n for doc in docs\n ]\n token_count = sum(tokens[:num_docs])", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/vector_db.html"} {"id": "9a9274a3e11c-1", "text": "for doc in docs\n ]\n token_count = sum(tokens[:num_docs])\n while token_count > self.max_tokens_limit:\n num_docs -= 1\n token_count -= tokens[num_docs]\n return docs[:num_docs]\n def _get_docs(\n self, inputs: Dict[str, Any], *, run_manager: CallbackManagerForChainRun\n ) -> List[Document]:\n question = inputs[self.question_key]\n docs = self.vectorstore.similarity_search(\n question, k=self.k, **self.search_kwargs\n )\n return self._reduce_tokens_below_limit(docs)\n async def _aget_docs(\n self, inputs: Dict[str, Any], *, run_manager: AsyncCallbackManagerForChainRun\n ) -> List[Document]:\n raise NotImplementedError(\"VectorDBQAWithSourcesChain does not support async\")\n[docs] @root_validator()\n def raise_deprecation(cls, values: Dict) -> Dict:\n warnings.warn(\n \"`VectorDBQAWithSourcesChain` is deprecated - \"\n \"please use `from langchain.chains import RetrievalQAWithSourcesChain`\"\n )\n return values\n @property\n def _chain_type(self) -> str:\n return \"vector_db_qa_with_sources_chain\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/vector_db.html"} {"id": "bd6723361e0b-0", "text": "Source code for langchain.chains.qa_with_sources.base\n\"\"\"Question answering with sources over documents.\"\"\"\nfrom __future__ import annotations\nimport inspect\nimport re\nfrom abc import ABC, abstractmethod\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n)\nfrom langchain.chains import ReduceDocumentsChain\nfrom langchain.chains.base import Chain\nfrom langchain.chains.combine_documents.base import BaseCombineDocumentsChain\nfrom langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain\nfrom langchain.chains.combine_documents.stuff import StuffDocumentsChain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.qa_with_sources.loading import load_qa_with_sources_chain\nfrom langchain.chains.qa_with_sources.map_reduce_prompt import (\n COMBINE_PROMPT,\n EXAMPLE_PROMPT,\n QUESTION_PROMPT,\n)\nfrom langchain.docstore.document import Document\nfrom langchain.schema import BasePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]class BaseQAWithSourcesChain(Chain, ABC):\n \"\"\"Question answering with sources over documents.\"\"\"\n combine_documents_chain: BaseCombineDocumentsChain\n \"\"\"Chain to use to combine documents.\"\"\"\n question_key: str = \"question\" #: :meta private:\n input_docs_key: str = \"docs\" #: :meta private:\n answer_key: str = \"answer\" #: :meta private:\n sources_answer_key: str = \"sources\" #: :meta private:\n return_source_documents: bool = False\n \"\"\"Return the source documents.\"\"\"\n[docs] @classmethod\n def from_llm(\n cls,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/base.html"} {"id": "bd6723361e0b-1", "text": "[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n document_prompt: BasePromptTemplate = EXAMPLE_PROMPT,\n question_prompt: BasePromptTemplate = QUESTION_PROMPT,\n combine_prompt: BasePromptTemplate = 
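        # Default combine prompt (imported from map_reduce_prompt above); it merges\n        # the per-document answers into a single, sources-annotated response.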
COMBINE_PROMPT,\n **kwargs: Any,\n ) -> BaseQAWithSourcesChain:\n \"\"\"Construct the chain from an LLM.\"\"\"\n llm_question_chain = LLMChain(llm=llm, prompt=question_prompt)\n llm_combine_chain = LLMChain(llm=llm, prompt=combine_prompt)\n combine_results_chain = StuffDocumentsChain(\n llm_chain=llm_combine_chain,\n document_prompt=document_prompt,\n document_variable_name=\"summaries\",\n )\n reduce_documents_chain = ReduceDocumentsChain(\n combine_documents_chain=combine_results_chain\n )\n combine_documents_chain = MapReduceDocumentsChain(\n llm_chain=llm_question_chain,\n reduce_documents_chain=reduce_documents_chain,\n document_variable_name=\"context\",\n )\n return cls(\n combine_documents_chain=combine_documents_chain,\n **kwargs,\n )\n[docs] @classmethod\n def from_chain_type(\n cls,\n llm: BaseLanguageModel,\n chain_type: str = \"stuff\",\n chain_type_kwargs: Optional[dict] = None,\n **kwargs: Any,\n ) -> BaseQAWithSourcesChain:\n \"\"\"Load chain from chain type.\"\"\"\n _chain_kwargs = chain_type_kwargs or {}\n combine_documents_chain = load_qa_with_sources_chain(\n llm, chain_type=chain_type, **_chain_kwargs\n )\n return cls(combine_documents_chain=combine_documents_chain, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/base.html"} {"id": "bd6723361e0b-2", "text": ")\n return cls(combine_documents_chain=combine_documents_chain, **kwargs)\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.question_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return output key.\n :meta private:\n \"\"\"\n _output_keys = [self.answer_key, self.sources_answer_key]\n if self.return_source_documents:\n _output_keys = _output_keys + [\"source_documents\"]\n return _output_keys\n[docs] @root_validator(pre=True)\n def validate_naming(cls, values: Dict) -> Dict:\n \"\"\"Fix backwards compatibility in naming.\"\"\"\n if \"combine_document_chain\" in values:\n values[\"combine_documents_chain\"] = values.pop(\"combine_document_chain\")\n return values\n @abstractmethod\n def _get_docs(\n self,\n inputs: Dict[str, Any],\n *,\n run_manager: CallbackManagerForChainRun,\n ) -> List[Document]:\n \"\"\"Get docs to run questioning over.\"\"\"\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n accepts_run_manager = (\n \"run_manager\" in inspect.signature(self._get_docs).parameters\n )\n if accepts_run_manager:\n docs = self._get_docs(inputs, run_manager=_run_manager)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/base.html"} {"id": "bd6723361e0b-3", "text": "docs = self._get_docs(inputs, run_manager=_run_manager)\n else:\n docs = self._get_docs(inputs) # type: ignore[call-arg]\n answer = self.combine_documents_chain.run(\n input_documents=docs, callbacks=_run_manager.get_child(), **inputs\n )\n if re.search(r\"SOURCES:\\s\", answer):\n answer, sources = re.split(r\"SOURCES:\\s\", answer)\n else:\n sources = \"\"\n result: Dict[str, Any] = {\n self.answer_key: answer,\n self.sources_answer_key: sources,\n }\n if self.return_source_documents:\n result[\"source_documents\"] = docs\n return result\n @abstractmethod\n async def 
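Both constructors above are easiest to see end to end. A minimal sketch, assuming a single in-memory Document and an OpenAI completion model (both assumptions); `from_llm` would instead wire up the map-reduce pipeline built above.

# Hedged sketch: document text, source name, and model are assumptions.
from langchain.chains import QAWithSourcesChain
from langchain.docstore.document import Document
from langchain.llms import OpenAI

docs = [
    Document(page_content="Ada Lovelace wrote the first program.", metadata={"source": "notes.txt"}),
]
chain = QAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff")
result = chain({"docs": docs, "question": "Who wrote the first program?"})
print(result["answer"], result["sources"])  # answer text plus the cited source string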
_aget_docs(\n self,\n inputs: Dict[str, Any],\n *,\n run_manager: AsyncCallbackManagerForChainRun,\n ) -> List[Document]:\n \"\"\"Get docs to run questioning over.\"\"\"\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n accepts_run_manager = (\n \"run_manager\" in inspect.signature(self._aget_docs).parameters\n )\n if accepts_run_manager:\n docs = await self._aget_docs(inputs, run_manager=_run_manager)\n else:\n docs = await self._aget_docs(inputs) # type: ignore[call-arg]\n answer = await self.combine_documents_chain.arun(\n input_documents=docs, callbacks=_run_manager.get_child(), **inputs\n )\n if re.search(r\"SOURCES:\\s\", answer):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/base.html"} {"id": "bd6723361e0b-4", "text": ")\n if re.search(r\"SOURCES:\\s\", answer):\n answer, sources = re.split(r\"SOURCES:\\s\", answer)\n else:\n sources = \"\"\n result: Dict[str, Any] = {\n self.answer_key: answer,\n self.sources_answer_key: sources,\n }\n if self.return_source_documents:\n result[\"source_documents\"] = docs\n return result\n[docs]class QAWithSourcesChain(BaseQAWithSourcesChain):\n \"\"\"Question answering with sources over documents.\"\"\"\n input_docs_key: str = \"docs\" #: :meta private:\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.input_docs_key, self.question_key]\n def _get_docs(\n self,\n inputs: Dict[str, Any],\n *,\n run_manager: CallbackManagerForChainRun,\n ) -> List[Document]:\n \"\"\"Get docs to run questioning over.\"\"\"\n return inputs.pop(self.input_docs_key)\n async def _aget_docs(\n self,\n inputs: Dict[str, Any],\n *,\n run_manager: AsyncCallbackManagerForChainRun,\n ) -> List[Document]:\n \"\"\"Get docs to run questioning over.\"\"\"\n return inputs.pop(self.input_docs_key)\n @property\n def _chain_type(self) -> str:\n return \"qa_with_sources_chain\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/base.html"} {"id": "75b6fdf48c1d-0", "text": "Source code for langchain.chains.qa_with_sources.retrieval\n\"\"\"Question-answering with sources over an index.\"\"\"\nfrom typing import Any, Dict, List\nfrom pydantic import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n)\nfrom langchain.chains.combine_documents.stuff import StuffDocumentsChain\nfrom langchain.chains.qa_with_sources.base import BaseQAWithSourcesChain\nfrom langchain.docstore.document import Document\nfrom langchain.schema import BaseRetriever\n[docs]class RetrievalQAWithSourcesChain(BaseQAWithSourcesChain):\n \"\"\"Question-answering with sources over an index.\"\"\"\n retriever: BaseRetriever = Field(exclude=True)\n \"\"\"Index to connect to.\"\"\"\n reduce_k_below_max_tokens: bool = False\n \"\"\"Reduce the number of results to return from store based on the token limit\"\"\"\n max_tokens_limit: int = 3375\n \"\"\"Restrict the docs to return from store based on tokens,\n enforced only for StuffDocumentsChain and if reduce_k_below_max_tokens is set to true\"\"\"\n def _reduce_tokens_below_limit(self, docs: List[Document]) -> List[Document]:\n num_docs = len(docs)\n if self.reduce_k_below_max_tokens and isinstance(\n self.combine_documents_chain, StuffDocumentsChain\n ):\n tokens = [\n 
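The `_call`/`_acall` bodies above recover the citation list by splitting the model output on a `SOURCES:` marker. The same parsing in isolation (the answer string is invented):

# Pure-Python illustration of the SOURCES: parsing used in _call above.
import re

answer = "The first program was written by Ada Lovelace.\nSOURCES: notes.txt"
if re.search(r"SOURCES:\s", answer):
    answer, sources = re.split(r"SOURCES:\s", answer)
else:
    sources = ""
print(repr(answer))   # 'The first program was written by Ada Lovelace.\n'
print(repr(sources))  # 'notes.txt'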
self.combine_documents_chain.llm_chain.llm.get_num_tokens(\n doc.page_content\n )\n for doc in docs\n ]\n token_count = sum(tokens[:num_docs])\n while token_count > self.max_tokens_limit:\n num_docs -= 1\n token_count -= tokens[num_docs]\n return docs[:num_docs]\n def _get_docs(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/retrieval.html"} {"id": "75b6fdf48c1d-1", "text": "return docs[:num_docs]\n def _get_docs(\n self, inputs: Dict[str, Any], *, run_manager: CallbackManagerForChainRun\n ) -> List[Document]:\n question = inputs[self.question_key]\n docs = self.retriever.get_relevant_documents(\n question, callbacks=run_manager.get_child()\n )\n return self._reduce_tokens_below_limit(docs)\n async def _aget_docs(\n self, inputs: Dict[str, Any], *, run_manager: AsyncCallbackManagerForChainRun\n ) -> List[Document]:\n question = inputs[self.question_key]\n docs = await self.retriever.aget_relevant_documents(\n question, callbacks=run_manager.get_child()\n )\n return self._reduce_tokens_below_limit(docs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/retrieval.html"} {"id": "a8df00b18bed-0", "text": "Source code for langchain.chains.qa_with_sources.loading\n\"\"\"Load question answering with sources chains.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Mapping, Optional, Protocol\nfrom langchain.chains.combine_documents.base import BaseCombineDocumentsChain\nfrom langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain\nfrom langchain.chains.combine_documents.map_rerank import MapRerankDocumentsChain\nfrom langchain.chains.combine_documents.reduce import ReduceDocumentsChain\nfrom langchain.chains.combine_documents.refine import RefineDocumentsChain\nfrom langchain.chains.combine_documents.stuff import StuffDocumentsChain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.qa_with_sources import (\n map_reduce_prompt,\n refine_prompts,\n stuff_prompt,\n)\nfrom langchain.chains.question_answering.map_rerank_prompt import (\n PROMPT as MAP_RERANK_PROMPT,\n)\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.schema.prompt_template import BasePromptTemplate\n[docs]class LoadingCallable(Protocol):\n \"\"\"Interface for loading the combine documents chain.\"\"\"\n[docs] def __call__(\n self, llm: BaseLanguageModel, **kwargs: Any\n ) -> BaseCombineDocumentsChain:\n \"\"\"Callable to load the combine documents chain.\"\"\"\ndef _load_map_rerank_chain(\n llm: BaseLanguageModel,\n prompt: BasePromptTemplate = MAP_RERANK_PROMPT,\n verbose: bool = False,\n document_variable_name: str = \"context\",\n rank_key: str = \"score\",\n answer_key: str = \"answer\",\n **kwargs: Any,\n) -> MapRerankDocumentsChain:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/loading.html"} {"id": "a8df00b18bed-1", "text": "**kwargs: Any,\n) -> MapRerankDocumentsChain:\n llm_chain = LLMChain(llm=llm, prompt=prompt, verbose=verbose)\n return MapRerankDocumentsChain(\n llm_chain=llm_chain,\n rank_key=rank_key,\n answer_key=answer_key,\n document_variable_name=document_variable_name,\n **kwargs,\n )\ndef _load_stuff_chain(\n llm: BaseLanguageModel,\n prompt: BasePromptTemplate = stuff_prompt.PROMPT,\n document_prompt: BasePromptTemplate = stuff_prompt.EXAMPLE_PROMPT,\n document_variable_name: str = \"summaries\",\n verbose: Optional[bool] = None,\n **kwargs: Any,\n) -> StuffDocumentsChain:\n 
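A minimal sketch of the retriever-backed class above, which is the suggested replacement for the deprecated VectorDBQAWithSourcesChain. The store contents, metadata, and model are assumptions:

# Hedged sketch; texts, source names, and models are assumptions.
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

store = FAISS.from_texts(
    ["Ada Lovelace wrote the first program."],
    OpenAIEmbeddings(),
    metadatas=[{"source": "notes.txt"}],
)
chain = RetrievalQAWithSourcesChain.from_chain_type(
    OpenAI(temperature=0), chain_type="stuff", retriever=store.as_retriever()
)
print(chain({"question": "Who wrote the first program?"}))  # dict with "answer" and "sources"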
llm_chain = LLMChain(llm=llm, prompt=prompt, verbose=verbose)\n return StuffDocumentsChain(\n llm_chain=llm_chain,\n document_variable_name=document_variable_name,\n document_prompt=document_prompt,\n verbose=verbose,\n **kwargs,\n )\ndef _load_map_reduce_chain(\n llm: BaseLanguageModel,\n question_prompt: BasePromptTemplate = map_reduce_prompt.QUESTION_PROMPT,\n combine_prompt: BasePromptTemplate = map_reduce_prompt.COMBINE_PROMPT,\n document_prompt: BasePromptTemplate = map_reduce_prompt.EXAMPLE_PROMPT,\n combine_document_variable_name: str = \"summaries\",\n map_reduce_document_variable_name: str = \"context\",\n collapse_prompt: Optional[BasePromptTemplate] = None,\n reduce_llm: Optional[BaseLanguageModel] = None,\n collapse_llm: Optional[BaseLanguageModel] = None,\n verbose: Optional[bool] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/loading.html"} {"id": "a8df00b18bed-2", "text": "verbose: Optional[bool] = None,\n token_max: int = 3000,\n **kwargs: Any,\n) -> MapReduceDocumentsChain:\n map_chain = LLMChain(llm=llm, prompt=question_prompt, verbose=verbose)\n _reduce_llm = reduce_llm or llm\n reduce_chain = LLMChain(llm=_reduce_llm, prompt=combine_prompt, verbose=verbose)\n combine_documents_chain = StuffDocumentsChain(\n llm_chain=reduce_chain,\n document_variable_name=combine_document_variable_name,\n document_prompt=document_prompt,\n verbose=verbose,\n )\n if collapse_prompt is None:\n collapse_chain = None\n if collapse_llm is not None:\n raise ValueError(\n \"collapse_llm provided, but collapse_prompt was not: please \"\n \"provide one or stop providing collapse_llm.\"\n )\n else:\n _collapse_llm = collapse_llm or llm\n collapse_chain = StuffDocumentsChain(\n llm_chain=LLMChain(\n llm=_collapse_llm,\n prompt=collapse_prompt,\n verbose=verbose,\n ),\n document_variable_name=combine_document_variable_name,\n document_prompt=document_prompt,\n )\n reduce_documents_chain = ReduceDocumentsChain(\n combine_documents_chain=combine_documents_chain,\n collapse_documents_chain=collapse_chain,\n token_max=token_max,\n verbose=verbose,\n )\n return MapReduceDocumentsChain(\n llm_chain=map_chain,\n reduce_documents_chain=reduce_documents_chain,\n document_variable_name=map_reduce_document_variable_name,\n verbose=verbose,\n **kwargs,\n )\ndef _load_refine_chain(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/loading.html"} {"id": "a8df00b18bed-3", "text": "**kwargs,\n )\ndef _load_refine_chain(\n llm: BaseLanguageModel,\n question_prompt: BasePromptTemplate = refine_prompts.DEFAULT_TEXT_QA_PROMPT,\n refine_prompt: BasePromptTemplate = refine_prompts.DEFAULT_REFINE_PROMPT,\n document_prompt: BasePromptTemplate = refine_prompts.EXAMPLE_PROMPT,\n document_variable_name: str = \"context_str\",\n initial_response_name: str = \"existing_answer\",\n refine_llm: Optional[BaseLanguageModel] = None,\n verbose: Optional[bool] = None,\n **kwargs: Any,\n) -> RefineDocumentsChain:\n initial_chain = LLMChain(llm=llm, prompt=question_prompt, verbose=verbose)\n _refine_llm = refine_llm or llm\n refine_chain = LLMChain(llm=_refine_llm, prompt=refine_prompt, verbose=verbose)\n return RefineDocumentsChain(\n initial_llm_chain=initial_chain,\n refine_llm_chain=refine_chain,\n document_variable_name=document_variable_name,\n initial_response_name=initial_response_name,\n document_prompt=document_prompt,\n verbose=verbose,\n **kwargs,\n )\n[docs]def load_qa_with_sources_chain(\n llm: BaseLanguageModel,\n 
chain_type: str = \"stuff\",\n verbose: Optional[bool] = None,\n **kwargs: Any,\n) -> BaseCombineDocumentsChain:\n \"\"\"Load question answering with sources chain.\n Args:\n llm: Language Model to use in the chain.\n chain_type: Type of document combining chain to use. Should be one of \"stuff\",\n \"map_reduce\", \"refine\" and \"map_rerank\".", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/loading.html"} {"id": "a8df00b18bed-4", "text": "\"map_reduce\", \"refine\" and \"map_rerank\".\n verbose: Whether chains should be run in verbose mode or not. Note that this\n applies to all chains that make up the final chain.\n Returns:\n A chain to use for question answering with sources.\n \"\"\"\n loader_mapping: Mapping[str, LoadingCallable] = {\n \"stuff\": _load_stuff_chain,\n \"map_reduce\": _load_map_reduce_chain,\n \"refine\": _load_refine_chain,\n \"map_rerank\": _load_map_rerank_chain,\n }\n if chain_type not in loader_mapping:\n raise ValueError(\n f\"Got unsupported chain type: {chain_type}. \"\n f\"Should be one of {loader_mapping.keys()}\"\n )\n _func: LoadingCallable = loader_mapping[chain_type]\n return _func(llm, verbose=verbose, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/loading.html"} {"id": "98aaf9e3a6cb-0", "text": "Source code for langchain.chains.llm_checker.base\n\"\"\"Chain for question-answering with self-verification.\"\"\"\nfrom __future__ import annotations\nimport warnings\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.llm_checker.prompt import (\n CHECK_ASSERTIONS_PROMPT,\n CREATE_DRAFT_ANSWER_PROMPT,\n LIST_ASSERTIONS_PROMPT,\n REVISED_ANSWER_PROMPT,\n)\nfrom langchain.chains.sequential import SequentialChain\nfrom langchain.prompts import PromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\ndef _load_question_to_checked_assertions_chain(\n llm: BaseLanguageModel,\n create_draft_answer_prompt: PromptTemplate,\n list_assertions_prompt: PromptTemplate,\n check_assertions_prompt: PromptTemplate,\n revised_answer_prompt: PromptTemplate,\n) -> SequentialChain:\n create_draft_answer_chain = LLMChain(\n llm=llm,\n prompt=create_draft_answer_prompt,\n output_key=\"statement\",\n )\n list_assertions_chain = LLMChain(\n llm=llm,\n prompt=list_assertions_prompt,\n output_key=\"assertions\",\n )\n check_assertions_chain = LLMChain(\n llm=llm,\n prompt=check_assertions_prompt,\n output_key=\"checked_assertions\",\n )\n revised_answer_chain = LLMChain(\n llm=llm,\n prompt=revised_answer_prompt,\n output_key=\"revised_statement\",\n )\n chains = [\n create_draft_answer_chain,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_checker/base.html"} {"id": "98aaf9e3a6cb-1", "text": ")\n chains = [\n create_draft_answer_chain,\n list_assertions_chain,\n check_assertions_chain,\n revised_answer_chain,\n ]\n question_to_checked_assertions_chain = SequentialChain(\n chains=chains,\n input_variables=[\"question\"],\n output_variables=[\"revised_statement\"],\n verbose=True,\n )\n return question_to_checked_assertions_chain\n[docs]class LLMCheckerChain(Chain):\n \"\"\"Chain for question-answering with self-verification.\n Example:\n .. 
code-block:: python\n from langchain import OpenAI, LLMCheckerChain\n llm = OpenAI(temperature=0.7)\n checker_chain = LLMCheckerChain.from_llm(llm)\n \"\"\"\n question_to_checked_assertions_chain: SequentialChain\n llm: Optional[BaseLanguageModel] = None\n \"\"\"[Deprecated] LLM wrapper to use.\"\"\"\n create_draft_answer_prompt: PromptTemplate = CREATE_DRAFT_ANSWER_PROMPT\n \"\"\"[Deprecated]\"\"\"\n list_assertions_prompt: PromptTemplate = LIST_ASSERTIONS_PROMPT\n \"\"\"[Deprecated]\"\"\"\n check_assertions_prompt: PromptTemplate = CHECK_ASSERTIONS_PROMPT\n \"\"\"[Deprecated]\"\"\"\n revised_answer_prompt: PromptTemplate = REVISED_ANSWER_PROMPT\n \"\"\"[Deprecated] Prompt to use when questioning the documents.\"\"\"\n input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n[docs] @root_validator(pre=True)\n def raise_deprecation(cls, values: Dict) -> Dict:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_checker/base.html"} {"id": "98aaf9e3a6cb-2", "text": "def raise_deprecation(cls, values: Dict) -> Dict:\n if \"llm\" in values:\n warnings.warn(\n \"Directly instantiating an LLMCheckerChain with an llm is deprecated. \"\n \"Please instantiate with question_to_checked_assertions_chain \"\n \"or using the from_llm class method.\"\n )\n if (\n \"question_to_checked_assertions_chain\" not in values\n and values[\"llm\"] is not None\n ):\n question_to_checked_assertions_chain = (\n _load_question_to_checked_assertions_chain(\n values[\"llm\"],\n values.get(\n \"create_draft_answer_prompt\", CREATE_DRAFT_ANSWER_PROMPT\n ),\n values.get(\"list_assertions_prompt\", LIST_ASSERTIONS_PROMPT),\n values.get(\"check_assertions_prompt\", CHECK_ASSERTIONS_PROMPT),\n values.get(\"revised_answer_prompt\", REVISED_ANSWER_PROMPT),\n )\n )\n values[\n \"question_to_checked_assertions_chain\"\n ] = question_to_checked_assertions_chain\n return values\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the singular input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the singular output key.\n :meta private:\n \"\"\"\n return [self.output_key]\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n question = inputs[self.input_key]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_checker/base.html"} {"id": "98aaf9e3a6cb-3", "text": "question = inputs[self.input_key]\n output = self.question_to_checked_assertions_chain(\n {\"question\": question}, callbacks=_run_manager.get_child()\n )\n return {self.output_key: output[\"revised_statement\"]}\n @property\n def _chain_type(self) -> str:\n return \"llm_checker_chain\"\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n create_draft_answer_prompt: PromptTemplate = CREATE_DRAFT_ANSWER_PROMPT,\n list_assertions_prompt: PromptTemplate = LIST_ASSERTIONS_PROMPT,\n check_assertions_prompt: PromptTemplate = CHECK_ASSERTIONS_PROMPT,\n revised_answer_prompt: PromptTemplate = REVISED_ANSWER_PROMPT,\n **kwargs: Any,\n ) -> LLMCheckerChain:\n question_to_checked_assertions_chain = (\n _load_question_to_checked_assertions_chain(\n llm,\n create_draft_answer_prompt,\n 
list_assertions_prompt,\n check_assertions_prompt,\n revised_answer_prompt,\n )\n )\n return cls(\n question_to_checked_assertions_chain=question_to_checked_assertions_chain,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_checker/base.html"} {"id": "c29ba3dc0d52-0", "text": "Source code for langchain.chains.conversational_retrieval.base\n\"\"\"Chain for chatting with a vector database.\"\"\"\nfrom __future__ import annotations\nimport inspect\nimport warnings\nfrom abc import abstractmethod\nfrom pathlib import Path\nfrom typing import Any, Callable, Dict, List, Optional, Tuple, Union\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n Callbacks,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.chains.combine_documents.base import BaseCombineDocumentsChain\nfrom langchain.chains.combine_documents.stuff import StuffDocumentsChain\nfrom langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.question_answering import load_qa_chain\nfrom langchain.schema import BasePromptTemplate, BaseRetriever, Document\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.schema.messages import BaseMessage\nfrom langchain.vectorstores.base import VectorStore\n# Depending on the memory type and configuration, the chat history format may differ.\n# This needs to be consolidated.\nCHAT_TURN_TYPE = Union[Tuple[str, str], BaseMessage]\n_ROLE_MAP = {\"human\": \"Human: \", \"ai\": \"Assistant: \"}\ndef _get_chat_history(chat_history: List[CHAT_TURN_TYPE]) -> str:\n buffer = \"\"\n for dialogue_turn in chat_history:\n if isinstance(dialogue_turn, BaseMessage):\n role_prefix = _ROLE_MAP.get(dialogue_turn.type, f\"{dialogue_turn.type}: \")\n buffer += f\"\\n{role_prefix}{dialogue_turn.content}\"\n elif isinstance(dialogue_turn, tuple):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html"} {"id": "c29ba3dc0d52-1", "text": "elif isinstance(dialogue_turn, tuple):\n human = \"Human: \" + dialogue_turn[0]\n ai = \"Assistant: \" + dialogue_turn[1]\n buffer += \"\\n\" + \"\\n\".join([human, ai])\n else:\n raise ValueError(\n f\"Unsupported chat history format: {type(dialogue_turn)}.\"\n f\" Full chat history: {chat_history} \"\n )\n return buffer\n[docs]class BaseConversationalRetrievalChain(Chain):\n \"\"\"Chain for chatting with an index.\"\"\"\n combine_docs_chain: BaseCombineDocumentsChain\n \"\"\"The chain used to combine any retrieved documents.\"\"\"\n question_generator: LLMChain\n \"\"\"The chain used to generate a new question for the sake of retrieval.\n This chain will take in the current question (with variable `question`)\n and any chat history (with variable `chat_history`) and will produce\n a new standalone question to be used later on.\"\"\"\n output_key: str = \"answer\"\n \"\"\"The output key to return the final answer of this chain in.\"\"\"\n rephrase_question: bool = True\n \"\"\"Whether or not to pass the new generated question to the combine_docs_chain.\n If True, will pass the new generated question along.\n If False, will only use the new generated question for retrieval and pass the\n original question along to the combine_docs_chain.\"\"\"\n return_source_documents: bool = False\n \"\"\"Return the retrieved source documents as part of the final 
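`from_llm` above wires four LLMChains (draft answer, list assertions, check assertions, revised answer) into one SequentialChain. A minimal usage sketch; the model and question are assumptions:

# Usage sketch for the checker chain assembled above.
from langchain.chains import LLMCheckerChain
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.7)
checker = LLMCheckerChain.from_llm(llm)
# Runs draft -> list assertions -> check assertions -> revised answer.
print(checker.run("What type of mammal lays the biggest eggs?"))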
result.\"\"\"\n return_generated_question: bool = False\n \"\"\"Return the generated question as part of the final result.\"\"\"\n get_chat_history: Optional[Callable[[CHAT_TURN_TYPE], str]] = None\n \"\"\"An optional function to get a string of the chat history.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html"} {"id": "c29ba3dc0d52-2", "text": "\"\"\"An optional function to get a string of the chat history.\n If None is provided, will use a default.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n allow_population_by_field_name = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Input keys.\"\"\"\n return [\"question\", \"chat_history\"]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the output keys.\n :meta private:\n \"\"\"\n _output_keys = [self.output_key]\n if self.return_source_documents:\n _output_keys = _output_keys + [\"source_documents\"]\n if self.return_generated_question:\n _output_keys = _output_keys + [\"generated_question\"]\n return _output_keys\n @abstractmethod\n def _get_docs(\n self,\n question: str,\n inputs: Dict[str, Any],\n *,\n run_manager: CallbackManagerForChainRun,\n ) -> List[Document]:\n \"\"\"Get docs.\"\"\"\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n question = inputs[\"question\"]\n get_chat_history = self.get_chat_history or _get_chat_history\n chat_history_str = get_chat_history(inputs[\"chat_history\"])\n if chat_history_str:\n callbacks = _run_manager.get_child()\n new_question = self.question_generator.run(\n question=question, chat_history=chat_history_str, callbacks=callbacks\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html"} {"id": "c29ba3dc0d52-3", "text": "question=question, chat_history=chat_history_str, callbacks=callbacks\n )\n else:\n new_question = question\n accepts_run_manager = (\n \"run_manager\" in inspect.signature(self._get_docs).parameters\n )\n if accepts_run_manager:\n docs = self._get_docs(new_question, inputs, run_manager=_run_manager)\n else:\n docs = self._get_docs(new_question, inputs) # type: ignore[call-arg]\n new_inputs = inputs.copy()\n if self.rephrase_question:\n new_inputs[\"question\"] = new_question\n new_inputs[\"chat_history\"] = chat_history_str\n answer = self.combine_docs_chain.run(\n input_documents=docs, callbacks=_run_manager.get_child(), **new_inputs\n )\n output: Dict[str, Any] = {self.output_key: answer}\n if self.return_source_documents:\n output[\"source_documents\"] = docs\n if self.return_generated_question:\n output[\"generated_question\"] = new_question\n return output\n @abstractmethod\n async def _aget_docs(\n self,\n question: str,\n inputs: Dict[str, Any],\n *,\n run_manager: AsyncCallbackManagerForChainRun,\n ) -> List[Document]:\n \"\"\"Get docs.\"\"\"\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n question = inputs[\"question\"]\n get_chat_history = self.get_chat_history or _get_chat_history\n chat_history_str = get_chat_history(inputs[\"chat_history\"])\n if chat_history_str:", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html"} {"id": "c29ba3dc0d52-4", "text": "if chat_history_str:\n callbacks = _run_manager.get_child()\n new_question = await self.question_generator.arun(\n question=question, chat_history=chat_history_str, callbacks=callbacks\n )\n else:\n new_question = question\n accepts_run_manager = (\n \"run_manager\" in inspect.signature(self._aget_docs).parameters\n )\n if accepts_run_manager:\n docs = await self._aget_docs(new_question, inputs, run_manager=_run_manager)\n else:\n docs = await self._aget_docs(new_question, inputs) # type: ignore[call-arg]\n new_inputs = inputs.copy()\n if self.rephrase_question:\n new_inputs[\"question\"] = new_question\n new_inputs[\"chat_history\"] = chat_history_str\n answer = await self.combine_docs_chain.arun(\n input_documents=docs, callbacks=_run_manager.get_child(), **new_inputs\n )\n output: Dict[str, Any] = {self.output_key: answer}\n if self.return_source_documents:\n output[\"source_documents\"] = docs\n if self.return_generated_question:\n output[\"generated_question\"] = new_question\n return output\n[docs] def save(self, file_path: Union[Path, str]) -> None:\n if self.get_chat_history:\n raise ValueError(\"Chain not savable when `get_chat_history` is not None.\")\n super().save(file_path)\n[docs]class ConversationalRetrievalChain(BaseConversationalRetrievalChain):\n \"\"\"Chain for having a conversation based on retrieved documents.\n This chain takes in chat history (a list of messages) and new questions,\n and then returns an answer to that question.\n The algorithm for this chain consists of three parts:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html"} {"id": "c29ba3dc0d52-5", "text": "The algorithm for this chain consists of three parts:\n 1. Use the chat history and the new question to create a \"standalone question\".\n This is done so that this question can be passed into the retrieval step to fetch\n relevant documents. If only the new question was passed in, then relevant context\n may be lacking. If the whole conversation was passed into retrieval, there may\n be unnecessary information there that would distract from retrieval.\n 2. This new question is passed to the retriever and relevant documents are\n returned.\n 3. The retrieved documents are passed to an LLM along with either the new question\n (default behavior) or the original question and chat history to generate a final\n response.\n Example:\n .. code-block:: python\n from langchain.chains import (\n StuffDocumentsChain, LLMChain, ConversationalRetrievalChain\n )\n from langchain.prompts import PromptTemplate\n from langchain.llms import OpenAI\n combine_docs_chain = StuffDocumentsChain(...)\n vectorstore = ...\n retriever = vectorstore.as_retriever()\n # This controls how the standalone question is generated.\n # Should take `chat_history` and `question` as input variables.\n template = (\n \"Combine the chat history and follow up question into \"\n \"a standalone question. 
Chat History: {chat_history}\"\n \"Follow up question: {question}\"\n )\n prompt = PromptTemplate.from_template(template)\n llm = OpenAI()\n question_generator = LLMChain(llm=llm, prompt=prompt)\n chain = ConversationalRetrievalChain(\n combine_docs_chain=combine_docs_chain,\n retriever=retriever,\n question_generator=question_generator,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html"} {"id": "c29ba3dc0d52-6", "text": "retriever=retriever,\n question_generator=question_generator,\n )\n \"\"\"\n retriever: BaseRetriever\n \"\"\"Retriever to use to fetch documents.\"\"\"\n max_tokens_limit: Optional[int] = None\n \"\"\"If set, enforces that the documents returned are less than this limit.\n This is only enforced if `combine_docs_chain` is of type StuffDocumentsChain.\"\"\"\n def _reduce_tokens_below_limit(self, docs: List[Document]) -> List[Document]:\n num_docs = len(docs)\n if self.max_tokens_limit and isinstance(\n self.combine_docs_chain, StuffDocumentsChain\n ):\n tokens = [\n self.combine_docs_chain.llm_chain.llm.get_num_tokens(doc.page_content)\n for doc in docs\n ]\n token_count = sum(tokens[:num_docs])\n while token_count > self.max_tokens_limit:\n num_docs -= 1\n token_count -= tokens[num_docs]\n return docs[:num_docs]\n def _get_docs(\n self,\n question: str,\n inputs: Dict[str, Any],\n *,\n run_manager: CallbackManagerForChainRun,\n ) -> List[Document]:\n \"\"\"Get docs.\"\"\"\n docs = self.retriever.get_relevant_documents(\n question, callbacks=run_manager.get_child()\n )\n return self._reduce_tokens_below_limit(docs)\n async def _aget_docs(\n self,\n question: str,\n inputs: Dict[str, Any],\n *,\n run_manager: AsyncCallbackManagerForChainRun,\n ) -> List[Document]:\n \"\"\"Get docs.\"\"\"\n docs = await self.retriever.aget_relevant_documents(\n question, callbacks=run_manager.get_child()\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html"} {"id": "c29ba3dc0d52-7", "text": "question, callbacks=run_manager.get_child()\n )\n return self._reduce_tokens_below_limit(docs)\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n retriever: BaseRetriever,\n condense_question_prompt: BasePromptTemplate = CONDENSE_QUESTION_PROMPT,\n chain_type: str = \"stuff\",\n verbose: bool = False,\n condense_question_llm: Optional[BaseLanguageModel] = None,\n combine_docs_chain_kwargs: Optional[Dict] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> BaseConversationalRetrievalChain:\n \"\"\"Convenience method to load chain from LLM and retriever.\n This provides some logic to create the `question_generator` chain\n as well as the combine_docs_chain.\n Args:\n llm: The default language model to use at every part of this chain\n (eg in both the question generation and the answering)\n retriever: The retriever to use to fetch relevant documents from.\n condense_question_prompt: The prompt to use to condense the chat history\n and new question into a standalone question.\n chain_type: The chain type to use to create the combine_docs_chain, will\n be sent to `load_qa_chain`.\n verbose: Verbosity flag for logging to stdout.\n condense_question_llm: The language model to use for condensing the chat\n history and new question into a standalone question. 
If none is\n provided, will default to `llm`.\n combine_docs_chain_kwargs: Parameters to pass as kwargs to `load_qa_chain`\n when constructing the combine_docs_chain.\n callbacks: Callbacks to pass to all subchains.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html"} {"id": "c29ba3dc0d52-8", "text": "callbacks: Callbacks to pass to all subchains.\n **kwargs: Additional parameters to pass when initializing\n ConversationalRetrievalChain\n \"\"\"\n combine_docs_chain_kwargs = combine_docs_chain_kwargs or {}\n doc_chain = load_qa_chain(\n llm,\n chain_type=chain_type,\n verbose=verbose,\n callbacks=callbacks,\n **combine_docs_chain_kwargs,\n )\n _llm = condense_question_llm or llm\n condense_question_chain = LLMChain(\n llm=_llm,\n prompt=condense_question_prompt,\n verbose=verbose,\n callbacks=callbacks,\n )\n return cls(\n retriever=retriever,\n combine_docs_chain=doc_chain,\n question_generator=condense_question_chain,\n callbacks=callbacks,\n **kwargs,\n )\n[docs]class ChatVectorDBChain(BaseConversationalRetrievalChain):\n \"\"\"Chain for chatting with a vector database.\"\"\"\n vectorstore: VectorStore = Field(alias=\"vectorstore\")\n top_k_docs_for_context: int = 4\n search_kwargs: dict = Field(default_factory=dict)\n @property\n def _chain_type(self) -> str:\n return \"chat-vector-db\"\n[docs] @root_validator()\n def raise_deprecation(cls, values: Dict) -> Dict:\n warnings.warn(\n \"`ChatVectorDBChain` is deprecated - \"\n \"please use `from langchain.chains import ConversationalRetrievalChain`\"\n )\n return values\n def _get_docs(\n self,\n question: str,\n inputs: Dict[str, Any],\n *,\n run_manager: CallbackManagerForChainRun,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html"} {"id": "c29ba3dc0d52-9", "text": "*,\n run_manager: CallbackManagerForChainRun,\n ) -> List[Document]:\n \"\"\"Get docs.\"\"\"\n vectordbkwargs = inputs.get(\"vectordbkwargs\", {})\n full_kwargs = {**self.search_kwargs, **vectordbkwargs}\n return self.vectorstore.similarity_search(\n question, k=self.top_k_docs_for_context, **full_kwargs\n )\n async def _aget_docs(\n self,\n question: str,\n inputs: Dict[str, Any],\n *,\n run_manager: AsyncCallbackManagerForChainRun,\n ) -> List[Document]:\n \"\"\"Get docs.\"\"\"\n raise NotImplementedError(\"ChatVectorDBChain does not support async\")\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n vectorstore: VectorStore,\n condense_question_prompt: BasePromptTemplate = CONDENSE_QUESTION_PROMPT,\n chain_type: str = \"stuff\",\n combine_docs_chain_kwargs: Optional[Dict] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> BaseConversationalRetrievalChain:\n \"\"\"Load chain from LLM.\"\"\"\n combine_docs_chain_kwargs = combine_docs_chain_kwargs or {}\n doc_chain = load_qa_chain(\n llm,\n chain_type=chain_type,\n callbacks=callbacks,\n **combine_docs_chain_kwargs,\n )\n condense_question_chain = LLMChain(\n llm=llm, prompt=condense_question_prompt, callbacks=callbacks\n )\n return cls(\n vectorstore=vectorstore,\n combine_docs_chain=doc_chain,\n question_generator=condense_question_chain,\n callbacks=callbacks,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html"} {"id": "c29ba3dc0d52-10", "text": "question_generator=condense_question_chain,\n callbacks=callbacks,\n **kwargs,\n )", "source": 
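A minimal multi-turn sketch of the `from_llm` convenience path just described. The vector-store contents and models are assumptions:

# Hedged sketch; store texts and models are assumptions.
from langchain.chains import ConversationalRetrievalChain
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

retriever = FAISS.from_texts(
    ["LangChain chains can be composed.", "Retrievers fetch relevant documents."],
    OpenAIEmbeddings(),
).as_retriever()
chain = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), retriever)
chat_history = []
result = chain({"question": "What do retrievers do?", "chat_history": chat_history})
chat_history.append(("What do retrievers do?", result["answer"]))
# The follow-up is condensed into a standalone question before retrieval.
result = chain({"question": "And chains?", "chat_history": chat_history})
print(result["answer"])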
"https://api.python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html"} {"id": "d395a395cf37-0", "text": "Source code for langchain.chains.qa_generation.base\nfrom __future__ import annotations\nimport json\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.qa_generation.prompt import PROMPT_SELECTOR\nfrom langchain.schema import BasePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter, TextSplitter\n[docs]class QAGenerationChain(Chain):\n llm_chain: LLMChain\n text_splitter: TextSplitter = Field(\n default=RecursiveCharacterTextSplitter(chunk_overlap=500)\n )\n input_key: str = \"text\"\n output_key: str = \"questions\"\n k: Optional[int] = None\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n prompt: Optional[BasePromptTemplate] = None,\n **kwargs: Any,\n ) -> QAGenerationChain:\n _prompt = prompt or PROMPT_SELECTOR.get_prompt(llm)\n chain = LLMChain(llm=llm, prompt=_prompt)\n return cls(llm_chain=chain, **kwargs)\n @property\n def _chain_type(self) -> str:\n raise NotImplementedError\n @property\n def input_keys(self) -> List[str]:\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n return [self.output_key]\n def _call(\n self,\n inputs: Dict[str, Any],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_generation/base.html"} {"id": "d395a395cf37-1", "text": "def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, List]:\n docs = self.text_splitter.create_documents([inputs[self.input_key]])\n results = self.llm_chain.generate(\n [{\"text\": d.page_content} for d in docs], run_manager=run_manager\n )\n qa = [json.loads(res[0].text) for res in results.generations]\n return {self.output_key: qa}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_generation/base.html"} {"id": "2baf2b471623-0", "text": "Source code for langchain.chains.flare.base\nfrom __future__ import annotations\nimport re\nfrom abc import abstractmethod\nfrom typing import Any, Dict, List, Optional, Sequence, Tuple\nimport numpy as np\nfrom pydantic import Field\nfrom langchain.callbacks.manager import (\n CallbackManagerForChainRun,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.chains.flare.prompts import (\n PROMPT,\n QUESTION_GENERATOR_PROMPT,\n FinishedOutputParser,\n)\nfrom langchain.chains.llm import LLMChain\nfrom langchain.llms import OpenAI\nfrom langchain.schema import BasePromptTemplate, BaseRetriever, Generation\nfrom langchain.schema.language_model import BaseLanguageModel\nclass _ResponseChain(LLMChain):\n prompt: BasePromptTemplate = PROMPT\n @property\n def input_keys(self) -> List[str]:\n return self.prompt.input_variables\n def generate_tokens_and_log_probs(\n self,\n _input: Dict[str, Any],\n *,\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Tuple[Sequence[str], Sequence[float]]:\n llm_result = self.generate([_input], run_manager=run_manager)\n return self._extract_tokens_and_log_probs(llm_result.generations[0])\n @abstractmethod\n def _extract_tokens_and_log_probs(\n self, generations: List[Generation]\n ) -> Tuple[Sequence[str], Sequence[float]]:\n \"\"\"Extract 
tokens and log probs from response.\"\"\"\nclass _OpenAIResponseChain(_ResponseChain):\n llm: OpenAI = Field(\n default_factory=lambda: OpenAI(\n max_tokens=32, model_kwargs={\"logprobs\": 1}, temperature=0\n )\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/flare/base.html"} {"id": "2baf2b471623-1", "text": ")\n )\n def _extract_tokens_and_log_probs(\n self, generations: List[Generation]\n ) -> Tuple[Sequence[str], Sequence[float]]:\n tokens = []\n log_probs = []\n for gen in generations:\n if gen.generation_info is None:\n raise ValueError\n tokens.extend(gen.generation_info[\"logprobs\"][\"tokens\"])\n log_probs.extend(gen.generation_info[\"logprobs\"][\"token_logprobs\"])\n return tokens, log_probs\n[docs]class QuestionGeneratorChain(LLMChain):\n prompt: BasePromptTemplate = QUESTION_GENERATOR_PROMPT\n @property\n def input_keys(self) -> List[str]:\n return [\"user_input\", \"context\", \"response\"]\ndef _low_confidence_spans(\n tokens: Sequence[str],\n log_probs: Sequence[float],\n min_prob: float,\n min_token_gap: int,\n num_pad_tokens: int,\n) -> List[str]:\n _low_idx = np.where(np.exp(log_probs) < min_prob)[0]\n low_idx = [i for i in _low_idx if re.search(r\"\\w\", tokens[i])]\n if len(low_idx) == 0:\n return []\n spans = [[low_idx[0], low_idx[0] + num_pad_tokens + 1]]\n for i, idx in enumerate(low_idx[1:]):\n end = idx + num_pad_tokens + 1\n if idx - low_idx[i] < min_token_gap:\n spans[-1][1] = end\n else:\n spans.append([idx, end])\n return [\"\".join(tokens[start:end]) for start, end in spans]\n[docs]class FlareChain(Chain):\n question_generator_chain: QuestionGeneratorChain", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/flare/base.html"} {"id": "2baf2b471623-2", "text": "[docs]class FlareChain(Chain):\n question_generator_chain: QuestionGeneratorChain\n response_chain: _ResponseChain = Field(default_factory=_OpenAIResponseChain)\n output_parser: FinishedOutputParser = Field(default_factory=FinishedOutputParser)\n retriever: BaseRetriever\n min_prob: float = 0.2\n min_token_gap: int = 5\n num_pad_tokens: int = 2\n max_iter: int = 10\n start_with_retrieval: bool = True\n @property\n def input_keys(self) -> List[str]:\n return [\"user_input\"]\n @property\n def output_keys(self) -> List[str]:\n return [\"response\"]\n def _do_generation(\n self,\n questions: List[str],\n user_input: str,\n response: str,\n _run_manager: CallbackManagerForChainRun,\n ) -> Tuple[str, bool]:\n callbacks = _run_manager.get_child()\n docs = []\n for question in questions:\n docs.extend(self.retriever.get_relevant_documents(question))\n context = \"\\n\\n\".join(d.page_content for d in docs)\n result = self.response_chain.predict(\n user_input=user_input,\n context=context,\n response=response,\n callbacks=callbacks,\n )\n marginal, finished = self.output_parser.parse(result)\n return marginal, finished\n def _do_retrieval(\n self,\n low_confidence_spans: List[str],\n _run_manager: CallbackManagerForChainRun,\n user_input: str,\n response: str,\n initial_response: str,\n ) -> Tuple[str, bool]:\n question_gen_inputs = [\n {\n \"user_input\": user_input,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/flare/base.html"} {"id": "2baf2b471623-3", "text": "question_gen_inputs = [\n {\n \"user_input\": user_input,\n \"current_response\": initial_response,\n \"uncertain_span\": span,\n }\n for span in low_confidence_spans\n ]\n callbacks = _run_manager.get_child()\n question_gen_outputs = 
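`_low_confidence_spans` above converts per-token log-probabilities into the uncertain text spans that FLARE will research. A toy run, reaching into the module's private helper purely for illustration (the tokens and probabilities are invented):

# Toy illustration of _low_confidence_spans; values are invented.
import numpy as np
from langchain.chains.flare.base import _low_confidence_spans

tokens = ["The", " capital", " is", " Quito", "."]
log_probs = list(np.log([0.9, 0.8, 0.95, 0.05, 0.99]))  # " Quito" falls below min_prob
print(_low_confidence_spans(tokens, log_probs, min_prob=0.2, min_token_gap=5, num_pad_tokens=2))
# -> [' Quito.']  (the low-probability token plus padding tokens after it)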
self.question_generator_chain.apply(\n question_gen_inputs, callbacks=callbacks\n )\n questions = [\n output[self.question_generator_chain.output_keys[0]]\n for output in question_gen_outputs\n ]\n _run_manager.on_text(\n f\"Generated Questions: {questions}\", color=\"yellow\", end=\"\\n\"\n )\n return self._do_generation(questions, user_input, response, _run_manager)\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n user_input = inputs[self.input_keys[0]]\n response = \"\"\n for i in range(self.max_iter):\n _run_manager.on_text(\n f\"Current Response: {response}\", color=\"blue\", end=\"\\n\"\n )\n _input = {\"user_input\": user_input, \"context\": \"\", \"response\": response}\n tokens, log_probs = self.response_chain.generate_tokens_and_log_probs(\n _input, run_manager=_run_manager\n )\n low_confidence_spans = _low_confidence_spans(\n tokens,\n log_probs,\n self.min_prob,\n self.min_token_gap,\n self.num_pad_tokens,\n )\n initial_response = response.strip() + \" \" + \"\".join(tokens)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/flare/base.html"} {"id": "2baf2b471623-4", "text": ")\n initial_response = response.strip() + \" \" + \"\".join(tokens)\n if not low_confidence_spans:\n response = initial_response\n final_response, finished = self.output_parser.parse(response)\n if finished:\n return {self.output_keys[0]: final_response}\n continue\n marginal, finished = self._do_retrieval(\n low_confidence_spans,\n _run_manager,\n user_input,\n response,\n initial_response,\n )\n response = response.strip() + \" \" + marginal\n if finished:\n break\n return {self.output_keys[0]: response}\n[docs] @classmethod\n def from_llm(\n cls, llm: BaseLanguageModel, max_generation_len: int = 32, **kwargs: Any\n ) -> FlareChain:\n question_gen_chain = QuestionGeneratorChain(llm=llm)\n response_llm = OpenAI(\n max_tokens=max_generation_len, model_kwargs={\"logprobs\": 1}, temperature=0\n )\n response_chain = _OpenAIResponseChain(llm=response_llm)\n return cls(\n question_generator_chain=question_gen_chain,\n response_chain=response_chain,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/flare/base.html"} {"id": "5a7422da94e9-0", "text": "Source code for langchain.chains.flare.prompts\nfrom typing import Tuple\nfrom langchain.prompts import PromptTemplate\nfrom langchain.schema import BaseOutputParser\n[docs]class FinishedOutputParser(BaseOutputParser[Tuple[str, bool]]):\n finished_value: str = \"FINISHED\"\n[docs] def parse(self, text: str) -> Tuple[str, bool]:\n cleaned = text.strip()\n finished = self.finished_value in cleaned\n return cleaned.replace(self.finished_value, \"\"), finished\nPROMPT_TEMPLATE = \"\"\"\\\nRespond to the user message using any relevant context. \\\nIf context is provided, you should ground your answer in that context. 
\\\nOnce you're done responding return FINISHED.\n>>> CONTEXT: {context}\n>>> USER INPUT: {user_input}\n>>> RESPONSE: {response}\\\n\"\"\"\nPROMPT = PromptTemplate(\n template=PROMPT_TEMPLATE,\n input_variables=[\"user_input\", \"context\", \"response\"],\n)\nQUESTION_GENERATOR_PROMPT_TEMPLATE = \"\"\"\\\nGiven a user input and an existing partial response as context, \\\nask a question to which the answer is the given term/entity/phrase:\n>>> USER INPUT: {user_input}\n>>> EXISTING PARTIAL RESPONSE: {current_response}\nThe question to which the answer is the term/entity/phrase \"{uncertain_span}\" is:\"\"\"\nQUESTION_GENERATOR_PROMPT = PromptTemplate(\n template=QUESTION_GENERATOR_PROMPT_TEMPLATE,\n input_variables=[\"user_input\", \"current_response\", \"uncertain_span\"],\n)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/flare/prompts.html"} {"id": "a3f66f09faaa-0", "text": "Source code for langchain.chains.question_answering.__init__\n\"\"\"Load question answering chains.\"\"\"\nfrom typing import Any, Mapping, Optional, Protocol\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.callbacks.manager import Callbacks\nfrom langchain.chains import ReduceDocumentsChain\nfrom langchain.chains.combine_documents.base import BaseCombineDocumentsChain\nfrom langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain\nfrom langchain.chains.combine_documents.map_rerank import MapRerankDocumentsChain\nfrom langchain.chains.combine_documents.refine import RefineDocumentsChain\nfrom langchain.chains.combine_documents.stuff import StuffDocumentsChain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.question_answering import (\n map_reduce_prompt,\n refine_prompts,\n stuff_prompt,\n)\nfrom langchain.chains.question_answering.map_rerank_prompt import (\n PROMPT as MAP_RERANK_PROMPT,\n)\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.schema.prompt_template import BasePromptTemplate\n[docs]class LoadingCallable(Protocol):\n \"\"\"Interface for loading the combine documents chain.\"\"\"\n[docs] def __call__(\n self, llm: BaseLanguageModel, **kwargs: Any\n ) -> BaseCombineDocumentsChain:\n \"\"\"Callable to load the combine documents chain.\"\"\"\ndef _load_map_rerank_chain(\n llm: BaseLanguageModel,\n prompt: BasePromptTemplate = MAP_RERANK_PROMPT,\n verbose: bool = False,\n document_variable_name: str = \"context\",\n rank_key: str = \"score\",\n answer_key: str = \"answer\",\n callback_manager: Optional[BaseCallbackManager] = None,\n callbacks: Callbacks = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/question_answering/__init__.html"} {"id": "a3f66f09faaa-1", "text": "callbacks: Callbacks = None,\n **kwargs: Any,\n) -> MapRerankDocumentsChain:\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n verbose=verbose,\n callback_manager=callback_manager,\n callbacks=callbacks,\n )\n return MapRerankDocumentsChain(\n llm_chain=llm_chain,\n rank_key=rank_key,\n answer_key=answer_key,\n document_variable_name=document_variable_name,\n verbose=verbose,\n callback_manager=callback_manager,\n **kwargs,\n )\ndef _load_stuff_chain(\n llm: BaseLanguageModel,\n prompt: Optional[BasePromptTemplate] = None,\n document_variable_name: str = \"context\",\n verbose: Optional[bool] = None,\n callback_manager: Optional[BaseCallbackManager] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n) -> StuffDocumentsChain:\n _prompt = prompt or 
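`FinishedOutputParser.parse` above strips the FINISHED sentinel from the model output and reports whether generation should stop. In isolation:

# Demonstration of the FINISHED-sentinel parsing defined above.
from langchain.chains.flare.prompts import FinishedOutputParser

parser = FinishedOutputParser()
print(parser.parse("Quito is the capital of Ecuador. FINISHED"))
# -> ('Quito is the capital of Ecuador. ', True)
print(parser.parse("Quito is the capital of"))
# -> ('Quito is the capital of', False)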
stuff_prompt.PROMPT_SELECTOR.get_prompt(llm)\n llm_chain = LLMChain(\n llm=llm,\n prompt=_prompt,\n verbose=verbose,\n callback_manager=callback_manager,\n callbacks=callbacks,\n )\n # TODO: document prompt\n return StuffDocumentsChain(\n llm_chain=llm_chain,\n document_variable_name=document_variable_name,\n verbose=verbose,\n callback_manager=callback_manager,\n **kwargs,\n )\ndef _load_map_reduce_chain(\n llm: BaseLanguageModel,\n question_prompt: Optional[BasePromptTemplate] = None,\n combine_prompt: Optional[BasePromptTemplate] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/question_answering/__init__.html"} {"id": "a3f66f09faaa-2", "text": "combine_prompt: Optional[BasePromptTemplate] = None,\n combine_document_variable_name: str = \"summaries\",\n map_reduce_document_variable_name: str = \"context\",\n collapse_prompt: Optional[BasePromptTemplate] = None,\n reduce_llm: Optional[BaseLanguageModel] = None,\n collapse_llm: Optional[BaseLanguageModel] = None,\n verbose: Optional[bool] = None,\n callback_manager: Optional[BaseCallbackManager] = None,\n callbacks: Callbacks = None,\n token_max: int = 3000,\n **kwargs: Any,\n) -> MapReduceDocumentsChain:\n _question_prompt = (\n question_prompt or map_reduce_prompt.QUESTION_PROMPT_SELECTOR.get_prompt(llm)\n )\n _combine_prompt = (\n combine_prompt or map_reduce_prompt.COMBINE_PROMPT_SELECTOR.get_prompt(llm)\n )\n map_chain = LLMChain(\n llm=llm,\n prompt=_question_prompt,\n verbose=verbose,\n callback_manager=callback_manager,\n callbacks=callbacks,\n )\n _reduce_llm = reduce_llm or llm\n reduce_chain = LLMChain(\n llm=_reduce_llm,\n prompt=_combine_prompt,\n verbose=verbose,\n callback_manager=callback_manager,\n callbacks=callbacks,\n )\n # TODO: document prompt\n combine_documents_chain = StuffDocumentsChain(\n llm_chain=reduce_chain,\n document_variable_name=combine_document_variable_name,\n verbose=verbose,\n callback_manager=callback_manager,\n callbacks=callbacks,\n )\n if collapse_prompt is None:\n collapse_chain = None\n if collapse_llm is not None:\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/question_answering/__init__.html"} {"id": "a3f66f09faaa-3", "text": "if collapse_llm is not None:\n raise ValueError(\n \"collapse_llm provided, but collapse_prompt was not: please \"\n \"provide one or stop providing collapse_llm.\"\n )\n else:\n _collapse_llm = collapse_llm or llm\n collapse_chain = StuffDocumentsChain(\n llm_chain=LLMChain(\n llm=_collapse_llm,\n prompt=collapse_prompt,\n verbose=verbose,\n callback_manager=callback_manager,\n callbacks=callbacks,\n ),\n document_variable_name=combine_document_variable_name,\n verbose=verbose,\n callback_manager=callback_manager,\n )\n reduce_documents_chain = ReduceDocumentsChain(\n combine_documents_chain=combine_documents_chain,\n collapse_documents_chain=collapse_chain,\n token_max=token_max,\n verbose=verbose,\n )\n return MapReduceDocumentsChain(\n llm_chain=map_chain,\n document_variable_name=map_reduce_document_variable_name,\n reduce_documents_chain=reduce_documents_chain,\n verbose=verbose,\n callback_manager=callback_manager,\n callbacks=callbacks,\n **kwargs,\n )\ndef _load_refine_chain(\n llm: BaseLanguageModel,\n question_prompt: Optional[BasePromptTemplate] = None,\n refine_prompt: Optional[BasePromptTemplate] = None,\n document_variable_name: str = \"context_str\",\n initial_response_name: str = \"existing_answer\",\n refine_llm: Optional[BaseLanguageModel] = None,\n 
verbose: Optional[bool] = None,\n callback_manager: Optional[BaseCallbackManager] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n) -> RefineDocumentsChain:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/question_answering/__init__.html"} {"id": "a3f66f09faaa-4", "text": "**kwargs: Any,\n) -> RefineDocumentsChain:\n _question_prompt = (\n question_prompt or refine_prompts.QUESTION_PROMPT_SELECTOR.get_prompt(llm)\n )\n _refine_prompt = refine_prompt or refine_prompts.REFINE_PROMPT_SELECTOR.get_prompt(\n llm\n )\n initial_chain = LLMChain(\n llm=llm,\n prompt=_question_prompt,\n verbose=verbose,\n callback_manager=callback_manager,\n callbacks=callbacks,\n )\n _refine_llm = refine_llm or llm\n refine_chain = LLMChain(\n llm=_refine_llm,\n prompt=_refine_prompt,\n verbose=verbose,\n callback_manager=callback_manager,\n callbacks=callbacks,\n )\n return RefineDocumentsChain(\n initial_llm_chain=initial_chain,\n refine_llm_chain=refine_chain,\n document_variable_name=document_variable_name,\n initial_response_name=initial_response_name,\n verbose=verbose,\n callback_manager=callback_manager,\n **kwargs,\n )\n[docs]def load_qa_chain(\n llm: BaseLanguageModel,\n chain_type: str = \"stuff\",\n verbose: Optional[bool] = None,\n callback_manager: Optional[BaseCallbackManager] = None,\n **kwargs: Any,\n) -> BaseCombineDocumentsChain:\n \"\"\"Load question answering chain.\n Args:\n llm: Language Model to use in the chain.\n chain_type: Type of document combining chain to use. Should be one of \"stuff\",\n \"map_reduce\", \"map_rerank\", and \"refine\".", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/question_answering/__init__.html"} {"id": "a3f66f09faaa-5", "text": "\"map_reduce\", \"map_rerank\", and \"refine\".\n verbose: Whether chains should be run in verbose mode or not. Note that this\n applies to all chains that make up the final chain.\n callback_manager: Callback manager to use for the chain.\n Returns:\n A chain to use for question answering.\n \"\"\"\n loader_mapping: Mapping[str, LoadingCallable] = {\n \"stuff\": _load_stuff_chain,\n \"map_reduce\": _load_map_reduce_chain,\n \"refine\": _load_refine_chain,\n \"map_rerank\": _load_map_rerank_chain,\n }\n if chain_type not in loader_mapping:\n raise ValueError(\n f\"Got unsupported chain type: {chain_type}. \"\n f\"Should be one of {loader_mapping.keys()}\"\n )\n return loader_mapping[chain_type](\n llm, verbose=verbose, callback_manager=callback_manager, **kwargs\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/question_answering/__init__.html"} {"id": "817e1fc54b2e-0", "text": "Source code for langchain.chains.constitutional_ai.base\n\"\"\"Chain for applying constitutional principles to the outputs of another chain.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.constitutional_ai.models import ConstitutionalPrinciple\nfrom langchain.chains.constitutional_ai.principles import PRINCIPLES\nfrom langchain.chains.constitutional_ai.prompts import CRITIQUE_PROMPT, REVISION_PROMPT\nfrom langchain.chains.llm import LLMChain\nfrom langchain.schema import BasePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]class ConstitutionalChain(Chain):\n \"\"\"Chain for applying constitutional principles.\n Example:\n .. 
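Unlike the sources variant earlier, these loaders pick prompts via `PROMPT_SELECTOR.get_prompt(llm)`, so completion and chat models each get an appropriate prompt format. A short sketch (model choices are assumptions):

# Sketch: the prompt selectors above adapt to the model type automatically.
from langchain.chains.question_answering import load_qa_chain
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI

stuff_chain = load_qa_chain(OpenAI(temperature=0))             # completion-style prompt
refine_chain = load_qa_chain(ChatOpenAI(), chain_type="refine")  # chat-style prompts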
Source code for langchain.chains.constitutional_ai.base

"""Chain for applying constitutional principles to the outputs of another chain."""
from typing import Any, Dict, List, Optional

from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain.chains.base import Chain
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple
from langchain.chains.constitutional_ai.principles import PRINCIPLES
from langchain.chains.constitutional_ai.prompts import CRITIQUE_PROMPT, REVISION_PROMPT
from langchain.chains.llm import LLMChain
from langchain.schema import BasePromptTemplate
from langchain.schema.language_model import BaseLanguageModel


class ConstitutionalChain(Chain):
    """Chain for applying constitutional principles.

    Example:
        .. code-block:: python

            from langchain.llms import OpenAI
            from langchain.chains import LLMChain, ConstitutionalChain
            from langchain.chains.constitutional_ai.models \
                import ConstitutionalPrinciple

            llm = OpenAI()

            qa_prompt = PromptTemplate(
                template="Q: {question} A:",
                input_variables=["question"],
            )
            qa_chain = LLMChain(llm=llm, prompt=qa_prompt)

            constitutional_chain = ConstitutionalChain.from_llm(
                llm=llm,
                chain=qa_chain,
                constitutional_principles=[
                    ConstitutionalPrinciple(
                        critique_request="Tell if this answer is good.",
                        revision_request="Give a better answer.",
                    )
                ],
            )

            constitutional_chain.run(question="What is the meaning of life?")
    """

    chain: LLMChain
    constitutional_principles: List[ConstitutionalPrinciple]
    critique_chain: LLMChain
    revision_chain: LLMChain
    return_intermediate_steps: bool = False

    @classmethod
    def get_principles(
        cls, names: Optional[List[str]] = None
    ) -> List[ConstitutionalPrinciple]:
        if names is None:
            return list(PRINCIPLES.values())
        else:
            return [PRINCIPLES[name] for name in names]

    @classmethod
    def from_llm(
        cls,
        llm: BaseLanguageModel,
        chain: LLMChain,
        critique_prompt: BasePromptTemplate = CRITIQUE_PROMPT,
        revision_prompt: BasePromptTemplate = REVISION_PROMPT,
        **kwargs: Any,
    ) -> "ConstitutionalChain":
        """Create a chain from an LLM."""
        critique_chain = LLMChain(llm=llm, prompt=critique_prompt)
        revision_chain = LLMChain(llm=llm, prompt=revision_prompt)
        return cls(
            chain=chain,
            critique_chain=critique_chain,
            revision_chain=revision_chain,
            **kwargs,
        )

    @property
    def input_keys(self) -> List[str]:
        """Defines the input keys."""
        return self.chain.input_keys

    @property
    def output_keys(self) -> List[str]:
        """Defines the output keys."""
        if self.return_intermediate_steps:
            return ["output", "critiques_and_revisions", "initial_output"]
        return ["output"]

    def _call(
        self,
        inputs: Dict[str, Any],
        run_manager: Optional[CallbackManagerForChainRun] = None,
    ) -> Dict[str, Any]:
        _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
        response = self.chain.run(
            **inputs,
            callbacks=_run_manager.get_child("original"),
        )
        initial_response = response
        input_prompt = self.chain.prompt.format(**inputs)
        _run_manager.on_text(
            text="Initial response: " + response + "\n\n",
            verbose=self.verbose,
            color="yellow",
        )
        critiques_and_revisions = []
        for constitutional_principle in self.constitutional_principles:
            # Do critique
            raw_critique = self.critique_chain.run(
                input_prompt=input_prompt,
                output_from_model=response,
                critique_request=constitutional_principle.critique_request,
                callbacks=_run_manager.get_child("critique"),
            )
            critique = self._parse_critique(
                output_string=raw_critique,
            ).strip()
            # If the critique contains "No critique needed", then we're done with
            # this principle. In this case, initial_output is the same as output,
            # but we'll keep it for consistency.
            if "no critique needed" in critique.lower():
                critiques_and_revisions.append((critique, ""))
                continue
            # Do revision
            revision = self.revision_chain.run(
                input_prompt=input_prompt,
                output_from_model=response,
                critique_request=constitutional_principle.critique_request,
                critique=critique,
                revision_request=constitutional_principle.revision_request,
                callbacks=_run_manager.get_child("revision"),
            ).strip()
            response = revision
            critiques_and_revisions.append((critique, revision))
            _run_manager.on_text(
                text=f"Applying {constitutional_principle.name}..." + "\n\n",
                verbose=self.verbose,
                color="green",
            )
            _run_manager.on_text(
                text="Critique: " + critique + "\n\n",
                verbose=self.verbose,
                color="blue",
            )
            _run_manager.on_text(
                text="Updated response: " + revision + "\n\n",
                verbose=self.verbose,
                color="yellow",
            )
        final_output: Dict[str, Any] = {"output": response}
        if self.return_intermediate_steps:
            final_output["initial_output"] = initial_response
            final_output["critiques_and_revisions"] = critiques_and_revisions
        return final_output

    @staticmethod
    def _parse_critique(output_string: str) -> str:
        if "Revision request:" not in output_string:
            return output_string
        output_string = output_string.split("Revision request:")[0]
        if "\n\n" in output_string:
            output_string = output_string.split("\n\n")[0]
        return output_string


Source code for langchain.chains.constitutional_ai.models

"""Models for the Constitutional AI chain."""
from pydantic import BaseModel


class ConstitutionalPrinciple(BaseModel):
    """Class for a constitutional principle."""

    critique_request: str
    revision_request: str
    name: str = "Constitutional Principle"
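A short sketch (not from the source) of how ConstitutionalPrinciple instances pair with ConstitutionalChain.get_principles; the registry key "harmful1" is assumed here to exist in the built-in PRINCIPLES mapping:

from langchain.chains.constitutional_ai.base import ConstitutionalChain
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple

# A custom principle: critique_request steers the critique step,
# revision_request steers the rewrite step.
custom = ConstitutionalPrinciple(
    name="Brevity Principle",
    critique_request="Identify any ways in which the answer is needlessly long.",
    revision_request="Rewrite the answer as concisely as possible.",
)
# Built-in principles can be pulled from the registry by name
# ("harmful1" is an assumed key, not verified against the source).
principles = ConstitutionalChain.get_principles(["harmful1"]) + [custom]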
Source code for langchain.chains.query_constructor.ir

"""Internal representation of a structured query language."""
from __future__ import annotations

from abc import ABC, abstractmethod
from enum import Enum
from typing import Any, List, Optional, Sequence, Union

from pydantic import BaseModel


class Visitor(ABC):
    """Defines interface for IR translation using visitor pattern."""

    allowed_comparators: Optional[Sequence[Comparator]] = None
    allowed_operators: Optional[Sequence[Operator]] = None

    def _validate_func(self, func: Union[Operator, Comparator]) -> None:
        if isinstance(func, Operator) and self.allowed_operators is not None:
            if func not in self.allowed_operators:
                raise ValueError(
                    f"Received disallowed operator {func}. Allowed "
                    f"operators are {self.allowed_operators}"
                )
        if isinstance(func, Comparator) and self.allowed_comparators is not None:
            if func not in self.allowed_comparators:
                raise ValueError(
                    f"Received disallowed comparator {func}. Allowed "
                    f"comparators are {self.allowed_comparators}"
                )

    @abstractmethod
    def visit_operation(self, operation: Operation) -> Any:
        """Translate an Operation."""

    @abstractmethod
    def visit_comparison(self, comparison: Comparison) -> Any:
        """Translate a Comparison."""

    @abstractmethod
    def visit_structured_query(self, structured_query: StructuredQuery) -> Any:
        """Translate a StructuredQuery."""


def _to_snake_case(name: str) -> str:
    """Convert a name into snake_case."""
    snake_case = ""
    for i, char in enumerate(name):
        if char.isupper() and i != 0:
            snake_case += "_" + char.lower()
        else:
            snake_case += char.lower()
    return snake_case


class Expr(BaseModel):
    def accept(self, visitor: Visitor) -> Any:
        return getattr(visitor, f"visit_{_to_snake_case(self.__class__.__name__)}")(
            self
        )


class Operator(str, Enum):
    """Enumerator of the operations."""

    AND = "and"
    OR = "or"
    NOT = "not"


class Comparator(str, Enum):
    """Enumerator of the comparison operators."""

    EQ = "eq"
    GT = "gt"
    GTE = "gte"
    LT = "lt"
    LTE = "lte"
    CONTAIN = "contain"
    LIKE = "like"


class FilterDirective(Expr, ABC):
    """A filtering expression."""


class Comparison(FilterDirective):
    """A comparison to a value."""

    comparator: Comparator
    attribute: str
    value: Any


class Operation(FilterDirective):
    """A logical operation over other directives."""

    operator: Operator
    arguments: List[FilterDirective]


class StructuredQuery(Expr):
    query: str
    filter: Optional[FilterDirective]
    limit: Optional[int]


Source code for langchain.chains.query_constructor.schema

from pydantic import BaseModel


class AttributeInfo(BaseModel):
    """Information about a data source attribute."""

    name: str
    description: str
    type: str

    class Config:
        """Configuration for this pydantic object."""

        arbitrary_types_allowed = True
        frozen = True
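A minimal sketch (not from the source) showing how the IR classes above compose; it builds the filter and(eq("genre", "comedy"), gt("year", 2000)) as explicit objects:

from langchain.chains.query_constructor.ir import (
    Comparator,
    Comparison,
    Operation,
    Operator,
    StructuredQuery,
)

filter_ = Operation(
    operator=Operator.AND,
    arguments=[
        Comparison(comparator=Comparator.EQ, attribute="genre", value="comedy"),
        Comparison(comparator=Comparator.GT, attribute="year", value=2000),
    ],
)
query = StructuredQuery(query="funny movies", filter=filter_, limit=5)
# A concrete Visitor subclass would translate this tree via query.accept(visitor).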
Source code for langchain.chains.query_constructor.base

"""LLM Chain for turning a user text query into a structured query."""
from __future__ import annotations

import json
from typing import Any, Callable, List, Optional, Sequence

from langchain import FewShotPromptTemplate, LLMChain
from langchain.chains.query_constructor.ir import (
    Comparator,
    Operator,
    StructuredQuery,
)
from langchain.chains.query_constructor.parser import get_parser
from langchain.chains.query_constructor.prompt import (
    DEFAULT_EXAMPLES,
    DEFAULT_PREFIX,
    DEFAULT_SCHEMA,
    DEFAULT_SUFFIX,
    EXAMPLE_PROMPT,
    EXAMPLES_WITH_LIMIT,
    SCHEMA_WITH_LIMIT,
)
from langchain.chains.query_constructor.schema import AttributeInfo
from langchain.output_parsers.json import parse_and_check_json_markdown
from langchain.schema import BaseOutputParser, BasePromptTemplate, OutputParserException
from langchain.schema.language_model import BaseLanguageModel


class StructuredQueryOutputParser(BaseOutputParser[StructuredQuery]):
    ast_parse: Callable
    """Callable that parses dict into internal representation of query language."""

    def parse(self, text: str) -> StructuredQuery:
        try:
            expected_keys = ["query", "filter"]
            allowed_keys = ["query", "filter", "limit"]
            parsed = parse_and_check_json_markdown(text, expected_keys)
            if len(parsed["query"]) == 0:
                parsed["query"] = " "
            if parsed["filter"] == "NO_FILTER" or not parsed["filter"]:
                parsed["filter"] = None
            else:
                parsed["filter"] = self.ast_parse(parsed["filter"])
            if not parsed.get("limit"):
                parsed.pop("limit", None)
            return StructuredQuery(
                **{k: v for k, v in parsed.items() if k in allowed_keys}
            )
        except Exception as e:
            raise OutputParserException(
                f"Parsing text\n{text}\n raised following error:\n{e}"
            )

    @classmethod
    def from_components(
        cls,
        allowed_comparators: Optional[Sequence[Comparator]] = None,
        allowed_operators: Optional[Sequence[Operator]] = None,
    ) -> StructuredQueryOutputParser:
        ast_parser = get_parser(
            allowed_comparators=allowed_comparators,
            allowed_operators=allowed_operators,
        )
        return cls(ast_parse=ast_parser.parse)


def _format_attribute_info(info: Sequence[AttributeInfo]) -> str:
    info_dicts = {}
    for i in info:
        i_dict = dict(i)
        info_dicts[i_dict.pop("name")] = i_dict
    return json.dumps(info_dicts, indent=4).replace("{", "{{").replace("}", "}}")


def _get_prompt(
    document_contents: str,
    attribute_info: Sequence[AttributeInfo],
    examples: Optional[List] = None,
    allowed_comparators: Optional[Sequence[Comparator]] = None,
    allowed_operators: Optional[Sequence[Operator]] = None,
    enable_limit: bool = False,
) -> BasePromptTemplate:
    attribute_str = _format_attribute_info(attribute_info)
    allowed_comparators = allowed_comparators or list(Comparator)
    allowed_operators = allowed_operators or list(Operator)
    if enable_limit:
        schema = SCHEMA_WITH_LIMIT.format(
            allowed_comparators=" | ".join(allowed_comparators),
            allowed_operators=" | ".join(allowed_operators),
        )
        examples = examples or EXAMPLES_WITH_LIMIT
    else:
        schema = DEFAULT_SCHEMA.format(
            allowed_comparators=" | ".join(allowed_comparators),
            allowed_operators=" | ".join(allowed_operators),
        )
        examples = examples or DEFAULT_EXAMPLES
    prefix = DEFAULT_PREFIX.format(schema=schema)
    suffix = DEFAULT_SUFFIX.format(
        i=len(examples) + 1, content=document_contents, attributes=attribute_str
    )
    output_parser = StructuredQueryOutputParser.from_components(
        allowed_comparators=allowed_comparators, allowed_operators=allowed_operators
    )
    return FewShotPromptTemplate(
        examples=examples,
        example_prompt=EXAMPLE_PROMPT,
        input_variables=["query"],
        suffix=suffix,
        prefix=prefix,
        output_parser=output_parser,
    )


def load_query_constructor_chain(
    llm: BaseLanguageModel,
    document_contents: str,
    attribute_info: List[AttributeInfo],
    examples: Optional[List] = None,
    allowed_comparators: Optional[Sequence[Comparator]] = None,
    allowed_operators: Optional[Sequence[Operator]] = None,
    enable_limit: bool = False,
    **kwargs: Any,
) -> LLMChain:
    """Load a query constructor chain.

    Args:
        llm: BaseLanguageModel to use for the chain.
        document_contents: The contents of the document to be queried.
        attribute_info: A list of AttributeInfo objects describing
            the attributes of the document.
        examples: Optional list of examples to use for the chain.
        allowed_comparators: An optional list of allowed comparators.
        allowed_operators: An optional list of allowed operators.
        enable_limit: Whether to enable the limit operator. Defaults to False.
        **kwargs: Additional arguments passed through to the LLMChain.

    Returns:
        An LLMChain that can be used to construct queries.
    """
    prompt = _get_prompt(
        document_contents,
        attribute_info,
        examples=examples,
        allowed_comparators=allowed_comparators,
        allowed_operators=allowed_operators,
        enable_limit=enable_limit,
    )
    return LLMChain(llm=llm, prompt=prompt, **kwargs)
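A usage sketch for load_query_constructor_chain (illustrative, not from the source; the attribute names and the predict_and_parse call are assumptions for demonstration):

from langchain.chains.query_constructor.base import load_query_constructor_chain
from langchain.chains.query_constructor.schema import AttributeInfo
from langchain.llms import OpenAI

attribute_info = [
    AttributeInfo(name="genre", description="The genre of the movie", type="string"),
    AttributeInfo(name="year", description="The release year", type="integer"),
]
chain = load_query_constructor_chain(
    llm=OpenAI(temperature=0),
    document_contents="Brief summaries of movies",
    attribute_info=attribute_info,
    enable_limit=True,
)
# The prompt's output_parser is a StructuredQueryOutputParser, so something like
# chain.predict_and_parse(query="comedies after 2000, top 5") should yield a
# StructuredQuery with a filter tree and limit=5.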
Source code for langchain.chains.query_constructor.parser

import datetime
from typing import Any, Optional, Sequence, Union

from langchain.utils import check_package_version

try:
    check_package_version("lark", gte_version="1.1.5")
    from lark import Lark, Transformer, v_args
except ImportError:

    def v_args(*args: Any, **kwargs: Any) -> Any:  # type: ignore
        return lambda _: None

    Transformer = object  # type: ignore
    Lark = object  # type: ignore

from langchain.chains.query_constructor.ir import (
    Comparator,
    Comparison,
    FilterDirective,
    Operation,
    Operator,
)

GRAMMAR = """
    ?program: func_call
    ?expr: func_call
        | value

    func_call: CNAME "(" [args] ")"

    ?value: SIGNED_INT -> int
        | SIGNED_FLOAT -> float
        | TIMESTAMP -> timestamp
        | list
        | string
        | ("false" | "False" | "FALSE") -> false
        | ("true" | "True" | "TRUE") -> true

    args: expr ("," expr)*
    TIMESTAMP.2: /["'](\d{4}-[01]\d-[0-3]\d)["']/
    string: /'[^']*'/ | ESCAPED_STRING
    list: "[" [args] "]"

    %import common.CNAME
    %import common.ESCAPED_STRING
    %import common.SIGNED_FLOAT
    %import common.SIGNED_INT
    %import common.WS
    %ignore WS
"""


@v_args(inline=True)
class QueryTransformer(Transformer):
    """Transforms a query string into an IR representation
    (intermediate representation)."""

    def __init__(
        self,
        *args: Any,
        allowed_comparators: Optional[Sequence[Comparator]] = None,
        allowed_operators: Optional[Sequence[Operator]] = None,
        **kwargs: Any,
    ):
        super().__init__(*args, **kwargs)
        self.allowed_comparators = allowed_comparators
        self.allowed_operators = allowed_operators

    def program(self, *items: Any) -> tuple:
        return items

    def func_call(self, func_name: Any, args: list) -> FilterDirective:
        func = self._match_func_name(str(func_name))
        if isinstance(func, Comparator):
            return Comparison(comparator=func, attribute=args[0], value=args[1])
        elif len(args) == 1 and func in (Operator.AND, Operator.OR):
            return args[0]
        else:
            return Operation(operator=func, arguments=args)

    def _match_func_name(self, func_name: str) -> Union[Operator, Comparator]:
        if func_name in set(Comparator):
            if self.allowed_comparators is not None:
                if func_name not in self.allowed_comparators:
                    raise ValueError(
                        f"Received disallowed comparator {func_name}. Allowed "
                        f"comparators are {self.allowed_comparators}"
                    )
            return Comparator(func_name)
        elif func_name in set(Operator):
            if self.allowed_operators is not None:
                if func_name not in self.allowed_operators:
                    raise ValueError(
                        f"Received disallowed operator {func_name}. Allowed operators"
                        f" are {self.allowed_operators}"
                    )
            return Operator(func_name)
        else:
            raise ValueError(
                f"Received unrecognized function {func_name}. Valid functions are "
                f"{list(Operator) + list(Comparator)}"
            )

    def args(self, *items: Any) -> tuple:
        return items

    def false(self) -> bool:
        return False

    def true(self) -> bool:
        return True

    def list(self, item: Any) -> list:
        if item is None:
            return []
        return list(item)

    def int(self, item: Any) -> int:
        return int(item)

    def float(self, item: Any) -> float:
        return float(item)

    def timestamp(self, item: Any) -> datetime.date:
        item = item.replace("'", '"')
        return datetime.datetime.strptime(item, '"%Y-%m-%d"').date()

    def string(self, item: Any) -> str:
        # Remove escaped quotes
        return str(item).strip("\"'")


def get_parser(
    allowed_comparators: Optional[Sequence[Comparator]] = None,
    allowed_operators: Optional[Sequence[Operator]] = None,
) -> Lark:
    """Return a parser for the query language.

    Args:
        allowed_comparators: Optional[Sequence[Comparator]]
        allowed_operators: Optional[Sequence[Operator]]

    Returns:
        Lark parser for the query language.
    """
    transformer = QueryTransformer(
        allowed_comparators=allowed_comparators, allowed_operators=allowed_operators
    )
    return Lark(GRAMMAR, parser="lalr", transformer=transformer, start="program")
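A sketch (not from the source) of the parser on its own; it assumes the lark package is installed, and the filter string follows the GRAMMAR above:

from langchain.chains.query_constructor.parser import get_parser

parser = get_parser()
# A call expression in the grammar: comparators take (attribute, value),
# operators take nested expressions. Because the transformer runs inline,
# parse() returns IR objects rather than a raw parse tree.
filter_ = parser.parse('and(eq("genre", "comedy"), gt("year", 2000))')
# filter_ should be an Operation wrapping two Comparison nodes from the ir module.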
Source code for langchain.chains.summarize.__init__

"""Load summarizing chains."""
from typing import Any, Mapping, Optional, Protocol

from langchain.chains.combine_documents.base import BaseCombineDocumentsChain
from langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain
from langchain.chains.combine_documents.reduce import ReduceDocumentsChain
from langchain.chains.combine_documents.refine import RefineDocumentsChain
from langchain.chains.combine_documents.stuff import StuffDocumentsChain
from langchain.chains.llm import LLMChain
from langchain.chains.summarize import map_reduce_prompt, refine_prompts, stuff_prompt
from langchain.schema import BasePromptTemplate
from langchain.schema.language_model import BaseLanguageModel


class LoadingCallable(Protocol):
    """Interface for loading the combine documents chain."""

    def __call__(
        self, llm: BaseLanguageModel, **kwargs: Any
    ) -> BaseCombineDocumentsChain:
        """Callable to load the combine documents chain."""


def _load_stuff_chain(
    llm: BaseLanguageModel,
    prompt: BasePromptTemplate = stuff_prompt.PROMPT,
    document_variable_name: str = "text",
    verbose: Optional[bool] = None,
    **kwargs: Any,
) -> StuffDocumentsChain:
    llm_chain = LLMChain(llm=llm, prompt=prompt, verbose=verbose)
    # TODO: document prompt
    return StuffDocumentsChain(
        llm_chain=llm_chain,
        document_variable_name=document_variable_name,
        verbose=verbose,
        **kwargs,
    )


def _load_map_reduce_chain(
    llm: BaseLanguageModel,
    map_prompt: BasePromptTemplate = map_reduce_prompt.PROMPT,
    combine_prompt: BasePromptTemplate = map_reduce_prompt.PROMPT,
    combine_document_variable_name: str = "text",
    map_reduce_document_variable_name: str = "text",
    collapse_prompt: Optional[BasePromptTemplate] = None,
    reduce_llm: Optional[BaseLanguageModel] = None,
    collapse_llm: Optional[BaseLanguageModel] = None,
    verbose: Optional[bool] = None,
    token_max: int = 3000,
    **kwargs: Any,
) -> MapReduceDocumentsChain:
    map_chain = LLMChain(llm=llm, prompt=map_prompt, verbose=verbose)
    _reduce_llm = reduce_llm or llm
    reduce_chain = LLMChain(llm=_reduce_llm, prompt=combine_prompt, verbose=verbose)
    # TODO: document prompt
    combine_documents_chain = StuffDocumentsChain(
        llm_chain=reduce_chain,
        document_variable_name=combine_document_variable_name,
        verbose=verbose,
    )
    if collapse_prompt is None:
        collapse_chain = None
        if collapse_llm is not None:
            raise ValueError(
                "collapse_llm provided, but collapse_prompt was not: please "
                "provide one or stop providing collapse_llm."
            )
    else:
        _collapse_llm = collapse_llm or llm
        collapse_chain = StuffDocumentsChain(
            llm_chain=LLMChain(
                llm=_collapse_llm,
                prompt=collapse_prompt,
                verbose=verbose,
            ),
            document_variable_name=combine_document_variable_name,
        )
    reduce_documents_chain = ReduceDocumentsChain(
        combine_documents_chain=combine_documents_chain,
        collapse_documents_chain=collapse_chain,
        token_max=token_max,
        verbose=verbose,
    )
    return MapReduceDocumentsChain(
        llm_chain=map_chain,
        reduce_documents_chain=reduce_documents_chain,
        document_variable_name=map_reduce_document_variable_name,
        verbose=verbose,
        **kwargs,
    )


def _load_refine_chain(
    llm: BaseLanguageModel,
    question_prompt: BasePromptTemplate = refine_prompts.PROMPT,
    refine_prompt: BasePromptTemplate = refine_prompts.REFINE_PROMPT,
    document_variable_name: str = "text",
    initial_response_name: str = "existing_answer",
    refine_llm: Optional[BaseLanguageModel] = None,
    verbose: Optional[bool] = None,
    **kwargs: Any,
) -> RefineDocumentsChain:
    initial_chain = LLMChain(llm=llm, prompt=question_prompt, verbose=verbose)
    _refine_llm = refine_llm or llm
    refine_chain = LLMChain(llm=_refine_llm, prompt=refine_prompt, verbose=verbose)
    return RefineDocumentsChain(
        initial_llm_chain=initial_chain,
        refine_llm_chain=refine_chain,
        document_variable_name=document_variable_name,
        initial_response_name=initial_response_name,
        verbose=verbose,
        **kwargs,
    )


def load_summarize_chain(
    llm: BaseLanguageModel,
    chain_type: str = "stuff",
    verbose: Optional[bool] = None,
    **kwargs: Any,
) -> BaseCombineDocumentsChain:
    """Load summarizing chain.

    Args:
        llm: Language Model to use in the chain.
        chain_type: Type of document combining chain to use. Should be one of "stuff",
            "map_reduce", and "refine".
        verbose: Whether chains should be run in verbose mode or not. Note that this
            applies to all chains that make up the final chain.

    Returns:
        A chain to use for summarizing.
    """
    loader_mapping: Mapping[str, LoadingCallable] = {
        "stuff": _load_stuff_chain,
        "map_reduce": _load_map_reduce_chain,
        "refine": _load_refine_chain,
    }
    if chain_type not in loader_mapping:
        raise ValueError(
            f"Got unsupported chain type: {chain_type}. "
            f"Should be one of {loader_mapping.keys()}"
        )
    return loader_mapping[chain_type](llm, verbose=verbose, **kwargs)
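A minimal usage sketch for load_summarize_chain (illustrative, not part of the module; assumes an OpenAI key is configured):

from langchain.chains.summarize import load_summarize_chain
from langchain.llms import OpenAI
from langchain.schema import Document

chain = load_summarize_chain(OpenAI(temperature=0), chain_type="map_reduce")
docs = [
    Document(page_content="First section of a long report..."),
    Document(page_content="Second section of a long report..."),
]
# Each document is summarized in the map step, then the partial
# summaries are combined in the reduce step.
summary = chain.run(docs)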
Source code for langchain.chains.llm_summarization_checker.base

"""Chain for summarization with self-verification."""
from __future__ import annotations

import warnings
from pathlib import Path
from typing import Any, Dict, List, Optional

from pydantic import Extra, root_validator

from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain.chains.base import Chain
from langchain.chains.llm import LLMChain
from langchain.chains.sequential import SequentialChain
from langchain.prompts.prompt import PromptTemplate
from langchain.schema.language_model import BaseLanguageModel

PROMPTS_DIR = Path(__file__).parent / "prompts"

CREATE_ASSERTIONS_PROMPT = PromptTemplate.from_file(
    PROMPTS_DIR / "create_facts.txt", ["summary"]
)
CHECK_ASSERTIONS_PROMPT = PromptTemplate.from_file(
    PROMPTS_DIR / "check_facts.txt", ["assertions"]
)
REVISED_SUMMARY_PROMPT = PromptTemplate.from_file(
    PROMPTS_DIR / "revise_summary.txt", ["checked_assertions", "summary"]
)
ARE_ALL_TRUE_PROMPT = PromptTemplate.from_file(
    PROMPTS_DIR / "are_all_true_prompt.txt", ["checked_assertions"]
)


def _load_sequential_chain(
    llm: BaseLanguageModel,
    create_assertions_prompt: PromptTemplate,
    check_assertions_prompt: PromptTemplate,
    revised_summary_prompt: PromptTemplate,
    are_all_true_prompt: PromptTemplate,
    verbose: bool = False,
) -> SequentialChain:
    chain = SequentialChain(
        chains=[
            LLMChain(
                llm=llm,
                prompt=create_assertions_prompt,
                output_key="assertions",
                verbose=verbose,
            ),
            LLMChain(
                llm=llm,
                prompt=check_assertions_prompt,
                output_key="checked_assertions",
                verbose=verbose,
            ),
            LLMChain(
                llm=llm,
                prompt=revised_summary_prompt,
                output_key="revised_summary",
                verbose=verbose,
            ),
            LLMChain(
                llm=llm,
                output_key="all_true",
                prompt=are_all_true_prompt,
                verbose=verbose,
            ),
        ],
        input_variables=["summary"],
        output_variables=["all_true", "revised_summary"],
        verbose=verbose,
    )
    return chain


class LLMSummarizationCheckerChain(Chain):
    """Chain for summarization with self-verification.

    Example:
        .. code-block:: python

            from langchain import OpenAI, LLMSummarizationCheckerChain

            llm = OpenAI(temperature=0.0)
            checker_chain = LLMSummarizationCheckerChain.from_llm(llm)
    """

    sequential_chain: SequentialChain
    llm: Optional[BaseLanguageModel] = None
    """[Deprecated] LLM wrapper to use."""
    create_assertions_prompt: PromptTemplate = CREATE_ASSERTIONS_PROMPT
    """[Deprecated]"""
    check_assertions_prompt: PromptTemplate = CHECK_ASSERTIONS_PROMPT
    """[Deprecated]"""
    revised_summary_prompt: PromptTemplate = REVISED_SUMMARY_PROMPT
    """[Deprecated]"""
    are_all_true_prompt: PromptTemplate = ARE_ALL_TRUE_PROMPT
    """[Deprecated]"""

    input_key: str = "query"  #: :meta private:
    output_key: str = "result"  #: :meta private:
    max_checks: int = 2
    """Maximum number of times to check the assertions. Defaults to double-checking."""

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid
        arbitrary_types_allowed = True

    @root_validator(pre=True)
    def raise_deprecation(cls, values: Dict) -> Dict:
        if "llm" in values:
            warnings.warn(
                "Directly instantiating an LLMSummarizationCheckerChain with an llm is "
                "deprecated. Please instantiate with"
                " sequential_chain argument or using the from_llm class method."
            )
            if "sequential_chain" not in values and values["llm"] is not None:
                values["sequential_chain"] = _load_sequential_chain(
                    values["llm"],
                    values.get("create_assertions_prompt", CREATE_ASSERTIONS_PROMPT),
                    values.get("check_assertions_prompt", CHECK_ASSERTIONS_PROMPT),
                    values.get("revised_summary_prompt", REVISED_SUMMARY_PROMPT),
                    values.get("are_all_true_prompt", ARE_ALL_TRUE_PROMPT),
                    verbose=values.get("verbose", False),
                )
        return values

    @property
    def input_keys(self) -> List[str]:
        """Return the singular input key.

        :meta private:
        """
        return [self.input_key]

    @property
    def output_keys(self) -> List[str]:
        """Return the singular output key.

        :meta private:
        """
        return [self.output_key]

    def _call(
        self,
        inputs: Dict[str, Any],
        run_manager: Optional[CallbackManagerForChainRun] = None,
    ) -> Dict[str, str]:
        _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
        all_true = False
        count = 0
        output = None
        original_input = inputs[self.input_key]
        chain_input = original_input
        while not all_true and count < self.max_checks:
            output = self.sequential_chain(
                {"summary": chain_input}, callbacks=_run_manager.get_child()
            )
            count += 1
            if output["all_true"].strip() == "True":
                break
            if self.verbose:
                print(output["revised_summary"])
            chain_input = output["revised_summary"]
        if not output:
            raise ValueError("No output from chain")
        return {self.output_key: output["revised_summary"].strip()}

    @property
    def _chain_type(self) -> str:
        return "llm_summarization_checker_chain"

    @classmethod
    def from_llm(
        cls,
        llm: BaseLanguageModel,
        create_assertions_prompt: PromptTemplate = CREATE_ASSERTIONS_PROMPT,
        check_assertions_prompt: PromptTemplate = CHECK_ASSERTIONS_PROMPT,
        revised_summary_prompt: PromptTemplate = REVISED_SUMMARY_PROMPT,
        are_all_true_prompt: PromptTemplate = ARE_ALL_TRUE_PROMPT,
        verbose: bool = False,
        **kwargs: Any,
    ) -> LLMSummarizationCheckerChain:
        chain = _load_sequential_chain(
            llm,
            create_assertions_prompt,
            check_assertions_prompt,
            revised_summary_prompt,
            are_all_true_prompt,
            verbose=verbose,
        )
        return cls(sequential_chain=chain, verbose=verbose, **kwargs)
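A usage sketch (not from the source) showing the check loop bounded by max_checks; the sample summary text is made up for illustration:

from langchain import OpenAI, LLMSummarizationCheckerChain

llm = OpenAI(temperature=0.0)
# max_checks bounds the create/check/revise loop in _call above;
# the loop also stops early once the "all_true" chain answers "True".
checker_chain = LLMSummarizationCheckerChain.from_llm(llm, max_checks=3, verbose=True)
revised = checker_chain.run("The Greenland shark is the oldest known mammal.")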
Source code for langchain.chains.hyde.base

"""Hypothetical Document Embeddings.

https://arxiv.org/abs/2212.10496
"""
from __future__ import annotations

from typing import Any, Dict, List, Optional

import numpy as np
from pydantic import Extra

from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain.chains.base import Chain
from langchain.chains.hyde.prompts import PROMPT_MAP
from langchain.chains.llm import LLMChain
from langchain.embeddings.base import Embeddings
from langchain.schema.language_model import BaseLanguageModel


class HypotheticalDocumentEmbedder(Chain, Embeddings):
    """Generate a hypothetical document for a query, and then embed that.

    Based on https://arxiv.org/abs/2212.10496
    """

    base_embeddings: Embeddings
    llm_chain: LLMChain

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid
        arbitrary_types_allowed = True

    @property
    def input_keys(self) -> List[str]:
        """Input keys for Hyde's LLM chain."""
        return self.llm_chain.input_keys

    @property
    def output_keys(self) -> List[str]:
        """Output keys for Hyde's LLM chain."""
        return self.llm_chain.output_keys

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        """Call the base embeddings."""
        return self.base_embeddings.embed_documents(texts)

    def combine_embeddings(self, embeddings: List[List[float]]) -> List[float]:
        """Combine embeddings into a final embedding."""
        return list(np.array(embeddings).mean(axis=0))

    def embed_query(self, text: str) -> List[float]:
        """Generate a hypothetical document and embed it."""
        var_name = self.llm_chain.input_keys[0]
        result = self.llm_chain.generate([{var_name: text}])
        documents = [generation.text for generation in result.generations[0]]
        embeddings = self.embed_documents(documents)
        return self.combine_embeddings(embeddings)

    def _call(
        self,
        inputs: Dict[str, Any],
        run_manager: Optional[CallbackManagerForChainRun] = None,
    ) -> Dict[str, str]:
        """Call the internal llm chain."""
        _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
        return self.llm_chain(inputs, callbacks=_run_manager.get_child())

    @classmethod
    def from_llm(
        cls,
        llm: BaseLanguageModel,
        base_embeddings: Embeddings,
        prompt_key: str,
        **kwargs: Any,
    ) -> HypotheticalDocumentEmbedder:
        """Load and use an LLMChain for a specific prompt key."""
        prompt = PROMPT_MAP[prompt_key]
        llm_chain = LLMChain(llm=llm, prompt=prompt)
        return cls(base_embeddings=base_embeddings, llm_chain=llm_chain, **kwargs)

    @property
    def _chain_type(self) -> str:
        return "hyde_chain"


Source code for langchain.docstore.in_memory

"""Simple in-memory docstore in the form of a dict."""
from typing import Dict, Optional, Union

from langchain.docstore.base import AddableMixin, Docstore
from langchain.docstore.document import Document


class InMemoryDocstore(Docstore, AddableMixin):
    """Simple in-memory docstore in the form of a dict."""

    def __init__(self, _dict: Optional[Dict[str, Document]] = None):
        """Initialize with dict."""
        self._dict = _dict if _dict is not None else {}

    def add(self, texts: Dict[str, Document]) -> None:
        """Add texts to the in-memory dictionary.

        Args:
            texts: dictionary of id -> document.

        Returns:
            None
        """
        overlapping = set(texts).intersection(self._dict)
        if overlapping:
            raise ValueError(f"Tried to add ids that already exist: {overlapping}")
        self._dict = {**self._dict, **texts}

    def search(self, search: str) -> Union[str, Document]:
        """Search via direct lookup.

        Args:
            search: id of a document to search for.

        Returns:
            Document if found, else error message.
        """
        if search not in self._dict:
            return f"ID {search} not found."
        else:
            return self._dict[search]


Source code for langchain.docstore.base

"""Interface for a place that stores documents."""
from abc import ABC, abstractmethod
from typing import Dict, Union

from langchain.docstore.document import Document


class Docstore(ABC):
    """Interface for a place that stores documents."""

    @abstractmethod
    def search(self, search: str) -> Union[str, Document]:
        """Search for a document.

        If the page exists, return the page summary and a Document object.
        If the page does not exist, return similar entries.
        """


class AddableMixin(ABC):
    """Mixin class that supports adding texts."""

    @abstractmethod
    def add(self, texts: Dict[str, Document]) -> None:
        """Add more documents."""
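A usage sketch for HypotheticalDocumentEmbedder (illustrative, not part of the source; it assumes "web_search" is one of the keys in PROMPT_MAP and that an OpenAI key is configured):

from langchain.chains import HypotheticalDocumentEmbedder
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI

embedder = HypotheticalDocumentEmbedder.from_llm(
    llm=OpenAI(),
    base_embeddings=OpenAIEmbeddings(),
    prompt_key="web_search",  # assumed PROMPT_MAP key
)
# Generates hypothetical answer document(s) with the LLM, embeds them with the
# base embeddings, and averages the vectors (see combine_embeddings above).
vector = embedder.embed_query("What are the health benefits of green tea?")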
Source code for langchain.docstore.arbitrary_fn

from typing import Callable, Union

from langchain.docstore.base import Docstore
from langchain.schema import Document


class DocstoreFn(Docstore):
    """LangChain Docstore via an arbitrary lookup function.

    This is useful when:
      * it's expensive to construct an InMemoryDocstore/dict
      * you retrieve documents from remote sources
      * you just want to reuse existing objects
    """

    def __init__(
        self,
        lookup_fn: Callable[[str], Union[Document, str]],
    ):
        self._lookup_fn = lookup_fn

    def search(self, search: str) -> Document:
        """Search for a document.

        Args:
            search: search string

        Returns:
            Document if found, else error message.
        """
        r = self._lookup_fn(search)
        if isinstance(r, str):
            # NOTE: assume the search string is the source ID
            return Document(page_content=r, metadata={"source": search})
        elif isinstance(r, Document):
            return r
        raise ValueError(f"Unexpected type of document {type(r)}")


Source code for langchain.docstore.wikipedia

"""Wrapper around the Wikipedia API."""
from typing import Union

from langchain.docstore.base import Docstore
from langchain.docstore.document import Document


class Wikipedia(Docstore):
    """Wrapper around the Wikipedia API."""

    def __init__(self) -> None:
        """Check that the wikipedia package is installed."""
        try:
            import wikipedia  # noqa: F401
        except ImportError:
            raise ImportError(
                "Could not import wikipedia python package. "
                "Please install it with `pip install wikipedia`."
            )

    def search(self, search: str) -> Union[str, Document]:
        """Try to search for a wiki page.

        If the page exists, return the page summary, and a PageWithLookups object.
        If the page does not exist, return similar entries.

        Args:
            search: search string.

        Returns:
            a Document object or error message.
        """
        import wikipedia

        try:
            page_content = wikipedia.page(search).content
            url = wikipedia.page(search).url
            result: Union[str, Document] = Document(
                page_content=page_content, metadata={"page": url}
            )
        except wikipedia.PageError:
            result = f"Could not find [{search}]. Similar: {wikipedia.search(search)}"
        except wikipedia.DisambiguationError:
            result = f"Could not find [{search}]. Similar: {wikipedia.search(search)}"
        return result
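A sketch (not from the source) of DocstoreFn with a file-backed lookup; the ./corpus directory layout is a hypothetical example:

from pathlib import Path

from langchain.docstore.arbitrary_fn import DocstoreFn

# The lookup function treats the search string as a file name under ./corpus
# (hypothetical layout) and returns its text; DocstoreFn then wraps the string
# into a Document, recording the id under metadata["source"].
docstore = DocstoreFn(lookup_fn=lambda doc_id: Path("corpus", doc_id).read_text())
doc = docstore.search("notes.txt")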
Source code for langchain.chat_models.openai

"""OpenAI chat wrapper."""
from __future__ import annotations

import logging
import sys
from typing import (
    TYPE_CHECKING,
    Any,
    Callable,
    Dict,
    List,
    Mapping,
    Optional,
    Tuple,
    Union,
)

from pydantic import Field, root_validator
from tenacity import (
    before_sleep_log,
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_exponential,
)

from langchain.callbacks.manager import (
    AsyncCallbackManagerForLLMRun,
    CallbackManagerForLLMRun,
)
from langchain.chat_models.base import BaseChatModel
from langchain.schema import (
    ChatGeneration,
    ChatResult,
)
from langchain.schema.messages import (
    AIMessage,
    BaseMessage,
    ChatMessage,
    FunctionMessage,
    HumanMessage,
    SystemMessage,
)
from langchain.utils import get_from_dict_or_env

if TYPE_CHECKING:
    import tiktoken

logger = logging.getLogger(__name__)


def _import_tiktoken() -> Any:
    try:
        import tiktoken
    except ImportError:
        raise ValueError(
            "Could not import tiktoken python package. "
            "This is needed in order to calculate get_token_ids. "
            "Please install it with `pip install tiktoken`."
        )
    return tiktoken


def _create_retry_decorator(llm: ChatOpenAI) -> Callable[[Any], Any]:
    import openai

    min_seconds = 1
    max_seconds = 60
    # Wait 2^x * 1 second between each retry starting with
    # 4 seconds, then up to 10 seconds, then 10 seconds afterwards
    return retry(
        reraise=True,
        stop=stop_after_attempt(llm.max_retries),
        wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),
        retry=(
            retry_if_exception_type(openai.error.Timeout)
            | retry_if_exception_type(openai.error.APIError)
            | retry_if_exception_type(openai.error.APIConnectionError)
            | retry_if_exception_type(openai.error.RateLimitError)
            | retry_if_exception_type(openai.error.ServiceUnavailableError)
        ),
        before_sleep=before_sleep_log(logger, logging.WARNING),
    )


async def acompletion_with_retry(llm: ChatOpenAI, **kwargs: Any) -> Any:
    """Use tenacity to retry the async completion call."""
    retry_decorator = _create_retry_decorator(llm)

    @retry_decorator
    async def _completion_with_retry(**kwargs: Any) -> Any:
        # Use OpenAI's async api https://github.com/openai/openai-python#async-api
        return await llm.client.acreate(**kwargs)

    return await _completion_with_retry(**kwargs)


def _convert_dict_to_message(_dict: Mapping[str, Any]) -> BaseMessage:
    role = _dict["role"]
    if role == "user":
        return HumanMessage(content=_dict["content"])
    elif role == "assistant":
        content = _dict["content"] or ""  # OpenAI returns None for tool invocations
        if _dict.get("function_call"):
            additional_kwargs = {"function_call": dict(_dict["function_call"])}
        else:
            additional_kwargs = {}
        return AIMessage(content=content, additional_kwargs=additional_kwargs)
    elif role == "system":
        return SystemMessage(content=_dict["content"])
    elif role == "function":
        return FunctionMessage(content=_dict["content"], name=_dict["name"])
    else:
        return ChatMessage(content=_dict["content"], role=role)


def _convert_message_to_dict(message: BaseMessage) -> dict:
    if isinstance(message, ChatMessage):
        message_dict = {"role": message.role, "content": message.content}
    elif isinstance(message, HumanMessage):
        message_dict = {"role": "user", "content": message.content}
    elif isinstance(message, AIMessage):
        message_dict = {"role": "assistant", "content": message.content}
        if "function_call" in message.additional_kwargs:
            message_dict["function_call"] = message.additional_kwargs["function_call"]
    elif isinstance(message, SystemMessage):
        message_dict = {"role": "system", "content": message.content}
    elif isinstance(message, FunctionMessage):
        message_dict = {
            "role": "function",
            "content": message.content,
            "name": message.name,
        }
    else:
        raise ValueError(f"Got unknown type {message}")
    if "name" in message.additional_kwargs:
        message_dict["name"] = message.additional_kwargs["name"]
    return message_dict


class ChatOpenAI(BaseChatModel):
    """Wrapper around OpenAI Chat large language models.

    To use, you should have the ``openai`` python package installed, and the
    environment variable ``OPENAI_API_KEY`` set with your API key.

    Any parameters that are valid to be passed to the openai.create call can be passed
    in, even if not explicitly saved on this class.

    Example:
        .. code-block:: python

            from langchain.chat_models import ChatOpenAI

            openai = ChatOpenAI(model_name="gpt-3.5-turbo")
    """

    @property
    def lc_secrets(self) -> Dict[str, str]:
        return {"openai_api_key": "OPENAI_API_KEY"}

    @property
    def lc_serializable(self) -> bool:
        return True

    client: Any  #: :meta private:
    model_name: str = Field(default="gpt-3.5-turbo", alias="model")
    """Model name to use."""
    temperature: float = 0.7
    """What sampling temperature to use."""
    model_kwargs: Dict[str, Any] = Field(default_factory=dict)
    """Holds any model parameters valid for `create` call not explicitly specified."""
    openai_api_key: Optional[str] = None
    openai_api_base: Optional[str] = None
    """Base URL path for API requests;
    leave blank if not using a proxy or service emulator."""
    openai_organization: Optional[str] = None
    # to support explicit proxy for OpenAI
    openai_proxy: Optional[str] = None
    request_timeout: Optional[Union[float, Tuple[float, float]]] = None
    """Timeout for requests to the OpenAI completion API. Default is 600 seconds."""
    max_retries: int = 6
    """Maximum number of retries to make when generating."""
    streaming: bool = False
    """Whether to stream the results or not."""
    n: int = 1
    """Number of chat completions to generate for each prompt."""
    max_tokens: Optional[int] = None
    """Maximum number of tokens to generate."""
    tiktoken_model_name: Optional[str] = None
    """The model name to pass to tiktoken when using this class.

    Tiktoken is used to count the number of tokens in documents to constrain
    them to be under a certain limit. By default, when set to None, this will
    be the same as the model name. However, there are some cases
    where you may want to use this class with a model name not
    supported by tiktoken. This can include when using Azure chat models or
    when using one of the many model providers that expose an OpenAI-like
    API but with different models. In those cases, in order to avoid erroring
    when tiktoken is called, you can specify a model name to use here."""

    class Config:
        """Configuration for this pydantic object."""

        allow_population_by_field_name = True
    @root_validator(pre=True)
    def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:
        """Build extra kwargs from additional params that were passed in."""
        all_required_field_names = cls._all_required_field_names()
        extra = values.get("model_kwargs", {})
        for field_name in list(values):
            if field_name in extra:
                raise ValueError(f"Found {field_name} supplied twice.")
            if field_name not in all_required_field_names:
                logger.warning(
                    f"""WARNING! {field_name} is not a default parameter.
                    {field_name} was transferred to model_kwargs.
                    Please confirm that {field_name} is what you intended."""
                )
                extra[field_name] = values.pop(field_name)
        invalid_model_kwargs = all_required_field_names.intersection(extra.keys())
        if invalid_model_kwargs:
            raise ValueError(
                f"Parameters {invalid_model_kwargs} should be specified explicitly. "
                f"Instead they were passed in as part of `model_kwargs` parameter."
            )
        values["model_kwargs"] = extra
        return values

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that the api key and python package exist in the environment."""
        values["openai_api_key"] = get_from_dict_or_env(
            values, "openai_api_key", "OPENAI_API_KEY"
        )
        values["openai_organization"] = get_from_dict_or_env(
            values,
            "openai_organization",
            "OPENAI_ORGANIZATION",
            default="",
        )
        values["openai_api_base"] = get_from_dict_or_env(
            values,
            "openai_api_base",
            "OPENAI_API_BASE",
            default="",
        )
        values["openai_proxy"] = get_from_dict_or_env(
            values,
            "openai_proxy",
            "OPENAI_PROXY",
            default="",
        )
        try:
            import openai
        except ImportError:
            raise ValueError(
                "Could not import openai python package. "
                "Please install it with `pip install openai`."
            )
        try:
            values["client"] = openai.ChatCompletion
        except AttributeError:
            raise ValueError(
                "`openai` has no `ChatCompletion` attribute, this is likely "
                "due to an old version of the openai package. Try upgrading it "
                "with `pip install --upgrade openai`."
            )
        if values["n"] < 1:
            raise ValueError("n must be at least 1.")
        if values["n"] > 1 and values["streaming"]:
            raise ValueError("n must be 1 when streaming.")
        return values
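    # Illustrative aside (not part of the module): validate_environment above reads
    # credentials from the environment when they are not passed explicitly, and the
    # Config's allow_population_by_field_name lets the "model" alias stand in for
    # model_name. The proxy URL below is a hypothetical example.
    #
    #     import os
    #     from langchain.chat_models import ChatOpenAI
    #
    #     os.environ["OPENAI_API_KEY"] = "sk-..."            # read by the validator
    #     os.environ["OPENAI_API_BASE"] = "http://localhost:8000/v1"  # hypothetical
    #     chat = ChatOpenAI(model="gpt-3.5-turbo", request_timeout=30)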
    @property
    def _default_params(self) -> Dict[str, Any]:
        """Get the default parameters for calling the OpenAI API."""
        return {
            "model": self.model_name,
            "request_timeout": self.request_timeout,
            "max_tokens": self.max_tokens,
            "stream": self.streaming,
            "n": self.n,
            "temperature": self.temperature,
            **self.model_kwargs,
        }

    def _create_retry_decorator(self) -> Callable[[Any], Any]:
        import openai

        min_seconds = 1
        max_seconds = 60
        # Wait 2^x * 1 second between each retry starting with
        # 4 seconds, then up to 10 seconds, then 10 seconds afterwards
        return retry(
            reraise=True,
            stop=stop_after_attempt(self.max_retries),
            wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),
            retry=(
                retry_if_exception_type(openai.error.Timeout)
                | retry_if_exception_type(openai.error.APIError)
                | retry_if_exception_type(openai.error.APIConnectionError)
                | retry_if_exception_type(openai.error.RateLimitError)
                | retry_if_exception_type(openai.error.ServiceUnavailableError)
            ),
            before_sleep=before_sleep_log(logger, logging.WARNING),
        )

    def completion_with_retry(self, **kwargs: Any) -> Any:
        """Use tenacity to retry the completion call."""
        retry_decorator = self._create_retry_decorator()

        @retry_decorator
        def _completion_with_retry(**kwargs: Any) -> Any:
            return self.client.create(**kwargs)

        return _completion_with_retry(**kwargs)

    def _combine_llm_outputs(self, llm_outputs: List[Optional[dict]]) -> dict:
        overall_token_usage: dict = {}
        for output in llm_outputs:
            if output is None:
                # Happens in streaming
                continue
            token_usage = output["token_usage"]
            for k, v in token_usage.items():
                if k in overall_token_usage:
                    overall_token_usage[k] += v
                else:
                    overall_token_usage[k] = v
        return {"token_usage": overall_token_usage, "model_name": self.model_name}

    def _generate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        message_dicts, params = self._create_message_dicts(messages, stop)
        params = {**params, **kwargs}
        if self.streaming:
            inner_completion = ""
            role = "assistant"
            params["stream"] = True
            function_call: Optional[dict] = None
            for stream_resp in self.completion_with_retry(
                messages=message_dicts, **params
            ):
                role = stream_resp["choices"][0]["delta"].get("role", role)
                token = stream_resp["choices"][0]["delta"].get("content") or ""
                inner_completion += token
                _function_call = stream_resp["choices"][0]["delta"].get("function_call")
                if _function_call:
                    if function_call is None:
                        function_call = _function_call
                    else:
                        function_call["arguments"] += _function_call["arguments"]
                if run_manager:
                    run_manager.on_llm_new_token(token)
            message = _convert_dict_to_message(
                {
                    "content": inner_completion,
                    "role": role,
                    "function_call": function_call,
                }
            )
            return ChatResult(generations=[ChatGeneration(message=message)])
        response = self.completion_with_retry(messages=message_dicts, **params)
        return self._create_chat_result(response)

    def _create_message_dicts(
        self, messages: List[BaseMessage], stop: Optional[List[str]]
    ) -> Tuple[List[Dict[str, Any]], Dict[str, Any]]:
        params = dict(self._client_params)
        if stop is not None:
            if "stop" in params:
                raise ValueError("`stop` found in both the input and default params.")
            params["stop"] = stop
        message_dicts = [_convert_message_to_dict(m) for m in messages]
        return message_dicts, params

    def _create_chat_result(self, response: Mapping[str, Any]) -> ChatResult:
        generations = []
        for res in response["choices"]:
            message = _convert_dict_to_message(res["message"])
            gen = ChatGeneration(message=message)
            generations.append(gen)
        llm_output = {"token_usage": response["usage"], "model_name": self.model_name}
        return ChatResult(generations=generations, llm_output=llm_output)

    async def _agenerate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        message_dicts, params = self._create_message_dicts(messages, stop)
        params = {**params, **kwargs}
        if self.streaming:
            inner_completion = ""
            role = "assistant"
            params["stream"] = True
            function_call: Optional[dict] = None
            async for stream_resp in await acompletion_with_retry(
                self, messages=message_dicts, **params
            ):
                role = stream_resp["choices"][0]["delta"].get("role", role)
                token = stream_resp["choices"][0]["delta"].get("content", "")
                inner_completion += token or ""
                _function_call = stream_resp["choices"][0]["delta"].get("function_call")
                if _function_call:
                    if function_call is None:
                        function_call = _function_call
                    else:
                        function_call["arguments"] += _function_call["arguments"]
                if run_manager:
                    await run_manager.on_llm_new_token(token)
            message = _convert_dict_to_message(
                {
                    "content": inner_completion,
                    "role": role,
                    "function_call": function_call,
                }
            )
            return ChatResult(generations=[ChatGeneration(message=message)])
        else:
            response = await acompletion_with_retry(
                self, messages=message_dicts, **params
            )
            return self._create_chat_result(response)

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {**{"model_name": self.model_name}, **self._default_params}

    @property
    def _client_params(self) -> Mapping[str, Any]:
        """Get the parameters used for the openai client."""
        openai_creds: Dict[str, Any] = {
            "api_key": self.openai_api_key,
            "api_base": self.openai_api_base,
            "organization": self.openai_organization,
            "model": self.model_name,
        }
        if self.openai_proxy:
            import openai

            openai.proxy = {"http": self.openai_proxy, "https": self.openai_proxy}  # type: ignore[assignment]  # noqa: E501
        return {**openai_creds, **self._default_params}
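    # Illustrative aside (not part of the module): a sketch of the round trip
    # between LangChain message objects, the raw role dicts sent over the wire,
    # and the token counting that follows below. Assumes tiktoken is installed.
    #
    #     from langchain.chat_models import ChatOpenAI
    #     from langchain.schema.messages import AIMessage, HumanMessage, SystemMessage
    #
    #     messages = [
    #         SystemMessage(content="You are a terse assistant."),
    #         HumanMessage(content="Ping?"),
    #     ]
    #     dicts = [_convert_message_to_dict(m) for m in messages]
    #     assert dicts[1] == {"role": "user", "content": "Ping?"}
    #     reply = _convert_dict_to_message({"role": "assistant", "content": "Pong."})
    #     assert isinstance(reply, AIMessage)
    #
    #     chat = ChatOpenAI(model_name="gpt-3.5-turbo")
    #     # Uses tiktoken plus the per-message overhead constants defined in
    #     # get_num_tokens_from_messages below.
    #     n_tokens = chat.get_num_tokens_from_messages(messages)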
_get_invocation_params(\n self, stop: Optional[List[str]] = None, **kwargs: Any\n ) -> Dict[str, Any]:\n \"\"\"Get the parameters used to invoke the model FOR THE CALLBACKS.\"\"\"\n return {\n **super()._get_invocation_params(stop=stop, **kwargs),\n **self._default_params,\n \"model\": self.model_name,\n \"function\": kwargs.get(\"functions\"),\n }\n @property\n def _llm_type(self) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"} {"id": "3a393f75b70a-11", "text": "}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of chat model.\"\"\"\n return \"openai-chat\"\n def _get_encoding_model(self) -> Tuple[str, tiktoken.Encoding]:\n tiktoken_ = _import_tiktoken()\n if self.tiktoken_model_name is not None:\n model = self.tiktoken_model_name\n else:\n model = self.model_name\n if model == \"gpt-3.5-turbo\":\n # gpt-3.5-turbo may change over time.\n # Returning num tokens assuming gpt-3.5-turbo-0301.\n model = \"gpt-3.5-turbo-0301\"\n elif model == \"gpt-4\":\n # gpt-4 may change over time.\n # Returning num tokens assuming gpt-4-0314.\n model = \"gpt-4-0314\"\n # Returns the number of tokens used by a list of messages.\n try:\n encoding = tiktoken_.encoding_for_model(model)\n except KeyError:\n logger.warning(\"Warning: model not found. Using cl100k_base encoding.\")\n model = \"cl100k_base\"\n encoding = tiktoken_.get_encoding(model)\n return model, encoding\n[docs] def get_token_ids(self, text: str) -> List[int]:\n \"\"\"Get the tokens present in the text with tiktoken package.\"\"\"\n # tiktoken NOT supported for Python 3.7 or below\n if sys.version_info[1] <= 7:\n return super().get_token_ids(text)\n _, encoding_model = self._get_encoding_model()\n return encoding_model.encode(text)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"} {"id": "3a393f75b70a-12", "text": "_, encoding_model = self._get_encoding_model()\n return encoding_model.encode(text)\n[docs] def get_num_tokens_from_messages(self, messages: List[BaseMessage]) -> int:\n \"\"\"Calculate num tokens for gpt-3.5-turbo and gpt-4 with tiktoken package.\n Official documentation: https://github.com/openai/openai-cookbook/blob/\n main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb\"\"\"\n if sys.version_info[1] <= 7:\n return super().get_num_tokens_from_messages(messages)\n model, encoding = self._get_encoding_model()\n if model.startswith(\"gpt-3.5-turbo\"):\n # every message follows {role/name}\\n{content}\\n\n tokens_per_message = 4\n # if there's a name, the role is omitted\n tokens_per_name = -1\n elif model.startswith(\"gpt-4\"):\n tokens_per_message = 3\n tokens_per_name = 1\n else:\n raise NotImplementedError(\n f\"get_num_tokens_from_messages() is not presently implemented \"\n f\"for model {model}.\"\n \"See https://github.com/openai/openai-python/blob/main/chatml.md for \"\n \"information on how messages are converted to tokens.\"\n )\n num_tokens = 0\n messages_dict = [_convert_message_to_dict(m) for m in messages]\n for message in messages_dict:\n num_tokens += tokens_per_message\n for key, value in message.items():\n num_tokens += len(encoding.encode(value))\n if key == \"name\":\n num_tokens += tokens_per_name\n # every reply is primed with assistant", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"} {"id": "3a393f75b70a-13", "text": "# every reply is primed with assistant\n num_tokens += 3\n return num_tokens", "source": 
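The per-message arithmetic above (4 tokens per gpt-3.5-turbo message, a -1 adjustment when a name replaces the role, and 3 tokens priming the reply) can be exercised directly. A sketch assuming the tiktoken package and the cl100k_base encoding:

.. code-block:: python

    import tiktoken  # assumed installed: pip install tiktoken

    def count_message_tokens(messages: list) -> int:
        """Token arithmetic for gpt-3.5-turbo-0301-style chat formatting."""
        encoding = tiktoken.get_encoding("cl100k_base")
        tokens_per_message, tokens_per_name = 4, -1
        num_tokens = 0
        for message in messages:
            num_tokens += tokens_per_message
            for key, value in message.items():
                num_tokens += len(encoding.encode(value))
                if key == "name":
                    num_tokens += tokens_per_name
        return num_tokens + 3  # every reply is primed with the assistant role

    print(count_message_tokens([{"role": "user", "content": "Hello!"}]))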
"https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"} {"id": "448fa9c85719-0", "text": "Source code for langchain.chat_models.vertexai\n\"\"\"Wrapper around Google VertexAI chat-based models.\"\"\"\nfrom dataclasses import dataclass, field\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.chat_models.base import BaseChatModel\nfrom langchain.llms.vertexai import _VertexAICommon, is_codey_model\nfrom langchain.schema import (\n ChatGeneration,\n ChatResult,\n)\nfrom langchain.schema.messages import (\n AIMessage,\n BaseMessage,\n HumanMessage,\n SystemMessage,\n)\nfrom langchain.utilities.vertexai import raise_vertex_import_error\n@dataclass\nclass _MessagePair:\n \"\"\"InputOutputTextPair represents a pair of input and output texts.\"\"\"\n question: HumanMessage\n answer: AIMessage\n@dataclass\nclass _ChatHistory:\n \"\"\"InputOutputTextPair represents a pair of input and output texts.\"\"\"\n history: List[_MessagePair] = field(default_factory=list)\n system_message: Optional[SystemMessage] = None\ndef _parse_chat_history(history: List[BaseMessage]) -> _ChatHistory:\n \"\"\"Parse a sequence of messages into history.\n A sequence should be either (SystemMessage, HumanMessage, AIMessage,\n HumanMessage, AIMessage, ...) or (HumanMessage, AIMessage, HumanMessage,\n AIMessage, ...). CodeChat does not support SystemMessage.\n Args:\n history: The list of messages to re-create the history of the chat.\n Returns:\n A parsed chat history.\n Raises:\n ValueError: If a sequence of message is odd, or a human message is not followed", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/vertexai.html"} {"id": "448fa9c85719-1", "text": "ValueError: If a sequence of message is odd, or a human message is not followed\n by a message from AI (e.g., Human, Human, AI or AI, AI, Human).\n \"\"\"\n if not history:\n return _ChatHistory()\n first_message = history[0]\n system_message = first_message if isinstance(first_message, SystemMessage) else None\n chat_history = _ChatHistory(system_message=system_message)\n messages_left = history[1:] if system_message else history\n if len(messages_left) % 2 != 0:\n raise ValueError(\n f\"Amount of messages in history should be even, got {len(messages_left)}!\"\n )\n for question, answer in zip(messages_left[::2], messages_left[1::2]):\n if not isinstance(question, HumanMessage) or not isinstance(answer, AIMessage):\n raise ValueError(\n \"A human message should follow a bot one, \"\n f\"got {question.type}, {answer.type}.\"\n )\n chat_history.history.append(_MessagePair(question=question, answer=answer))\n return chat_history\n[docs]class ChatVertexAI(_VertexAICommon, BaseChatModel):\n \"\"\"Wrapper around Vertex AI large language models.\"\"\"\n model_name: str = \"chat-bison\"\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the python package exists in environment.\"\"\"\n cls._try_init_vertexai(values)\n try:\n if is_codey_model(values[\"model_name\"]):\n from vertexai.preview.language_models import CodeChatModel\n values[\"client\"] = CodeChatModel.from_pretrained(values[\"model_name\"])\n else:\n from vertexai.preview.language_models import ChatModel", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/vertexai.html"} {"id": "448fa9c85719-2", "text": 
"else:\n from vertexai.preview.language_models import ChatModel\n values[\"client\"] = ChatModel.from_pretrained(values[\"model_name\"])\n except ImportError:\n raise_vertex_import_error()\n return values\n def _generate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> ChatResult:\n \"\"\"Generate next turn in the conversation.\n Args:\n messages: The history of the conversation as a list of messages. Code chat\n does not support context.\n stop: The list of stop words (optional).\n run_manager: The CallbackManager for LLM run, it's not used at the moment.\n Returns:\n The ChatResult that contains outputs generated by the model.\n Raises:\n ValueError: if the last message in the list is not from human.\n \"\"\"\n if not messages:\n raise ValueError(\n \"You should provide at least one message to start the chat!\"\n )\n question = messages[-1]\n if not isinstance(question, HumanMessage):\n raise ValueError(\n f\"Last message in the list should be from human, got {question.type}.\"\n )\n history = _parse_chat_history(messages[:-1])\n context = history.system_message.content if history.system_message else None\n params = {**self._default_params, **kwargs}\n if not self.is_codey_model:\n chat = self.client.start_chat(context=context, **params)\n else:\n chat = self.client.start_chat(**params)\n for pair in history.history:\n chat._history.append((pair.question.content, pair.answer.content))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/vertexai.html"} {"id": "448fa9c85719-3", "text": "chat._history.append((pair.question.content, pair.answer.content))\n response = chat.send_message(question.content, **params)\n text = self._enforce_stop_words(response.text, stop)\n return ChatResult(generations=[ChatGeneration(message=AIMessage(content=text))])\n async def _agenerate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> ChatResult:\n raise NotImplementedError(\n \"\"\"Vertex AI doesn't support async requests at the moment.\"\"\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/vertexai.html"} {"id": "a4f4d39e414e-0", "text": "Source code for langchain.chat_models.google_palm\n\"\"\"Wrapper around Google's PaLM Chat API.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import TYPE_CHECKING, Any, Callable, Dict, List, Mapping, Optional\nfrom pydantic import BaseModel, root_validator\nfrom tenacity import (\n before_sleep_log,\n retry,\n retry_if_exception_type,\n stop_after_attempt,\n wait_exponential,\n)\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.chat_models.base import BaseChatModel\nfrom langchain.schema import (\n ChatGeneration,\n ChatResult,\n)\nfrom langchain.schema.messages import (\n AIMessage,\n BaseMessage,\n ChatMessage,\n HumanMessage,\n SystemMessage,\n)\nfrom langchain.utils import get_from_dict_or_env\nif TYPE_CHECKING:\n import google.generativeai as genai\nlogger = logging.getLogger(__name__)\n[docs]class ChatGooglePalmError(Exception):\n \"\"\"Error raised when there is an issue with the Google PaLM API.\"\"\"\n pass\ndef _truncate_at_stop_tokens(\n text: str,\n stop: Optional[List[str]],\n) -> str:\n \"\"\"Truncates text at the earliest stop token found.\"\"\"\n if stop is None:\n 
return text\n for stop_token in stop:\n stop_token_idx = text.find(stop_token)\n if stop_token_idx != -1:\n text = text[:stop_token_idx]\n return text\ndef _response_to_result(\n response: genai.types.ChatResponse,\n stop: Optional[List[str]],\n) -> ChatResult:\n \"\"\"Converts a PaLM API response into a LangChain ChatResult.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/google_palm.html"} {"id": "a4f4d39e414e-1", "text": "\"\"\"Converts a PaLM API response into a LangChain ChatResult.\"\"\"\n if not response.candidates:\n raise ChatGooglePalmError(\"ChatResponse must have at least one candidate.\")\n generations: List[ChatGeneration] = []\n for candidate in response.candidates:\n author = candidate.get(\"author\")\n if author is None:\n raise ChatGooglePalmError(f\"ChatResponse must have an author: {candidate}\")\n content = _truncate_at_stop_tokens(candidate.get(\"content\", \"\"), stop)\n if content is None:\n raise ChatGooglePalmError(f\"ChatResponse must have a content: {candidate}\")\n if author == \"ai\":\n generations.append(\n ChatGeneration(text=content, message=AIMessage(content=content))\n )\n elif author == \"human\":\n generations.append(\n ChatGeneration(\n text=content,\n message=HumanMessage(content=content),\n )\n )\n else:\n generations.append(\n ChatGeneration(\n text=content,\n message=ChatMessage(role=author, content=content),\n )\n )\n return ChatResult(generations=generations)\ndef _messages_to_prompt_dict(\n input_messages: List[BaseMessage],\n) -> genai.types.MessagePromptDict:\n \"\"\"Converts a list of LangChain messages into a PaLM API MessagePrompt structure.\"\"\"\n import google.generativeai as genai\n context: str = \"\"\n examples: List[genai.types.MessageDict] = []\n messages: List[genai.types.MessageDict] = []\n remaining = list(enumerate(input_messages))\n while remaining:\n index, input_message = remaining.pop(0)\n if isinstance(input_message, SystemMessage):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/google_palm.html"} {"id": "a4f4d39e414e-2", "text": "if isinstance(input_message, SystemMessage):\n if index != 0:\n raise ChatGooglePalmError(\"System message must be first input message.\")\n context = input_message.content\n elif isinstance(input_message, HumanMessage) and input_message.example:\n if messages:\n raise ChatGooglePalmError(\n \"Message examples must come before other messages.\"\n )\n _, next_input_message = remaining.pop(0)\n if isinstance(next_input_message, AIMessage) and next_input_message.example:\n examples.extend(\n [\n genai.types.MessageDict(\n author=\"human\", content=input_message.content\n ),\n genai.types.MessageDict(\n author=\"ai\", content=next_input_message.content\n ),\n ]\n )\n else:\n raise ChatGooglePalmError(\n \"Human example message must be immediately followed by an \"\n \" AI example response.\"\n )\n elif isinstance(input_message, AIMessage) and input_message.example:\n raise ChatGooglePalmError(\n \"AI example message must be immediately preceded by a Human \"\n \"example message.\"\n )\n elif isinstance(input_message, AIMessage):\n messages.append(\n genai.types.MessageDict(author=\"ai\", content=input_message.content)\n )\n elif isinstance(input_message, HumanMessage):\n messages.append(\n genai.types.MessageDict(author=\"human\", content=input_message.content)\n )\n elif isinstance(input_message, ChatMessage):\n messages.append(\n genai.types.MessageDict(\n author=input_message.role, content=input_message.content\n 
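`_truncate_at_stop_tokens` above implements client-side stop sequences for an API that lacks them; a self-contained, runnable copy:

.. code-block:: python

    from typing import List, Optional

    def truncate_at_stop_tokens(text: str, stop: Optional[List[str]]) -> str:
        """Truncate text at the earliest stop token found."""
        if stop is None:
            return text
        for stop_token in stop:
            stop_token_idx = text.find(stop_token)
            if stop_token_idx != -1:
                text = text[:stop_token_idx]
        return text

    out = truncate_at_stop_tokens("4\nObservation: done", stop=["\nObservation:"])
    assert out == "4"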
)\n            )\n        else:\n            raise ChatGooglePalmError(\n                \"Messages without an explicit role not supported by PaLM API.\"\n            )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/google_palm.html"} {"id": "a4f4d39e414e-3", "text": "\"Messages without an explicit role not supported by PaLM API.\"\n            )\n    return genai.types.MessagePromptDict(\n        context=context,\n        examples=examples,\n        messages=messages,\n    )\ndef _create_retry_decorator() -> Callable[[Any], Any]:\n    \"\"\"Returns a tenacity retry decorator, preconfigured to handle PaLM exceptions\"\"\"\n    import google.api_core.exceptions\n    multiplier = 2\n    min_seconds = 1\n    max_seconds = 60\n    max_retries = 10\n    return retry(\n        reraise=True,\n        stop=stop_after_attempt(max_retries),\n        wait=wait_exponential(multiplier=multiplier, min=min_seconds, max=max_seconds),\n        retry=(\n            retry_if_exception_type(google.api_core.exceptions.ResourceExhausted)\n            | retry_if_exception_type(google.api_core.exceptions.ServiceUnavailable)\n            | retry_if_exception_type(google.api_core.exceptions.GoogleAPIError)\n        ),\n        before_sleep=before_sleep_log(logger, logging.WARNING),\n    )\n[docs]def chat_with_retry(llm: ChatGooglePalm, **kwargs: Any) -> Any:\n    \"\"\"Use tenacity to retry the completion call.\"\"\"\n    retry_decorator = _create_retry_decorator()\n    @retry_decorator\n    def _chat_with_retry(**kwargs: Any) -> Any:\n        return llm.client.chat(**kwargs)\n    return _chat_with_retry(**kwargs)\nasync def achat_with_retry(llm: ChatGooglePalm, **kwargs: Any) -> Any:\n    \"\"\"Use tenacity to retry the async completion call.\"\"\"\n    retry_decorator = _create_retry_decorator()\n    @retry_decorator\n    async def _achat_with_retry(**kwargs: Any) -> Any:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/google_palm.html"} {"id": "a4f4d39e414e-4", "text": "async def _achat_with_retry(**kwargs: Any) -> Any:\n        # Use the PaLM client's async chat API.\n        return await llm.client.chat_async(**kwargs)\n    return await _achat_with_retry(**kwargs)\n[docs]class ChatGooglePalm(BaseChatModel, BaseModel):\n    \"\"\"Wrapper around Google's PaLM Chat API.\n    To use you must have the google.generativeai Python package installed and\n    either:\n        1. The ``GOOGLE_API_KEY`` environment variable set with your API key, or\n        2. Pass your API key using the google_api_key kwarg to the ChatGooglePalm\n           constructor.\n    Example:\n        .. code-block:: python\n            from langchain.chat_models import ChatGooglePalm\n            chat = ChatGooglePalm()\n    \"\"\"\n    client: Any  #: :meta private:\n    model_name: str = \"models/chat-bison-001\"\n    \"\"\"Model name to use.\"\"\"\n    google_api_key: Optional[str] = None\n    temperature: Optional[float] = None\n    \"\"\"Run inference with this temperature. Must be in the closed\n       interval [0.0, 1.0].\"\"\"\n    top_p: Optional[float] = None\n    \"\"\"Decode using nucleus sampling: consider the smallest set of tokens whose\n       probability sum is at least top_p. Must be in the closed interval [0.0, 1.0].\"\"\"\n    top_k: Optional[int] = None\n    \"\"\"Decode using top-k sampling: consider the set of top_k most probable tokens.\n       Must be positive.\"\"\"\n    n: int = 1\n    \"\"\"Number of chat completions to generate for each prompt. 
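`_messages_to_prompt_dict` above produces PaLM's three-part prompt: a system `context`, few-shot `examples` pairs, and the running `messages`. A sketch of the resulting structure as plain dicts (the values are illustrative):

.. code-block:: python

    prompt = {
        "context": "You are a terse assistant.",
        "examples": [
            {"author": "human", "content": "What is 1 + 1?"},
            {"author": "ai", "content": "2"},
        ],
        "messages": [
            {"author": "human", "content": "What is 2 + 2?"},
        ],
    }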
Note that the API may\n not return the full n completions if duplicates are generated.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/google_palm.html"} {"id": "a4f4d39e414e-5", "text": "not return the full n completions if duplicates are generated.\"\"\"\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate api key, python package exists, temperature, top_p, and top_k.\"\"\"\n google_api_key = get_from_dict_or_env(\n values, \"google_api_key\", \"GOOGLE_API_KEY\"\n )\n try:\n import google.generativeai as genai\n genai.configure(api_key=google_api_key)\n except ImportError:\n raise ChatGooglePalmError(\n \"Could not import google.generativeai python package. \"\n \"Please install it with `pip install google-generativeai`\"\n )\n values[\"client\"] = genai\n if values[\"temperature\"] is not None and not 0 <= values[\"temperature\"] <= 1:\n raise ValueError(\"temperature must be in the range [0.0, 1.0]\")\n if values[\"top_p\"] is not None and not 0 <= values[\"top_p\"] <= 1:\n raise ValueError(\"top_p must be in the range [0.0, 1.0]\")\n if values[\"top_k\"] is not None and values[\"top_k\"] <= 0:\n raise ValueError(\"top_k must be positive\")\n return values\n def _generate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> ChatResult:\n prompt = _messages_to_prompt_dict(messages)\n response: genai.types.ChatResponse = chat_with_retry(\n self,\n model=self.model_name,\n prompt=prompt,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/google_palm.html"} {"id": "a4f4d39e414e-6", "text": "self,\n model=self.model_name,\n prompt=prompt,\n temperature=self.temperature,\n top_p=self.top_p,\n top_k=self.top_k,\n candidate_count=self.n,\n **kwargs,\n )\n return _response_to_result(response, stop)\n async def _agenerate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> ChatResult:\n prompt = _messages_to_prompt_dict(messages)\n response: genai.types.ChatResponse = await achat_with_retry(\n self,\n model=self.model_name,\n prompt=prompt,\n temperature=self.temperature,\n top_p=self.top_p,\n top_k=self.top_k,\n candidate_count=self.n,\n )\n return _response_to_result(response, stop)\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"model_name\": self.model_name,\n \"temperature\": self.temperature,\n \"top_p\": self.top_p,\n \"top_k\": self.top_k,\n \"n\": self.n,\n }\n @property\n def _llm_type(self) -> str:\n return \"google-palm-chat\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/google_palm.html"} {"id": "cfaffbb98640-0", "text": "Source code for langchain.chat_models.promptlayer_openai\n\"\"\"PromptLayer wrapper.\"\"\"\nimport datetime\nfrom typing import Any, List, Mapping, Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.schema import ChatResult\nfrom langchain.schema.messages import BaseMessage\n[docs]class PromptLayerChatOpenAI(ChatOpenAI):\n \"\"\"Wrapper around OpenAI Chat large language models and PromptLayer.\n To use, you should have the ``openai`` and ``promptlayer`` python\n package installed, and 
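The range checks in `validate_environment` above are plain guard clauses; extracted as a standalone sketch:

.. code-block:: python

    from typing import Optional

    def validate_sampling_params(
        temperature: Optional[float] = None,
        top_p: Optional[float] = None,
        top_k: Optional[int] = None,
    ) -> None:
        if temperature is not None and not 0 <= temperature <= 1:
            raise ValueError("temperature must be in the range [0.0, 1.0]")
        if top_p is not None and not 0 <= top_p <= 1:
            raise ValueError("top_p must be in the range [0.0, 1.0]")
        if top_k is not None and top_k <= 0:
            raise ValueError("top_k must be positive")

    validate_sampling_params(temperature=0.2, top_p=0.95, top_k=40)  # ok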
the environment variable ``OPENAI_API_KEY``\n and ``PROMPTLAYER_API_KEY`` set with your openAI API key and\n promptlayer key respectively.\n All parameters that can be passed to the OpenAI LLM can also\n be passed here. The PromptLayerChatOpenAI adds to optional\n parameters:\n ``pl_tags``: List of strings to tag the request with.\n ``return_pl_id``: If True, the PromptLayer request ID will be\n returned in the ``generation_info`` field of the\n ``Generation`` object.\n Example:\n .. code-block:: python\n from langchain.chat_models import PromptLayerChatOpenAI\n openai = PromptLayerChatOpenAI(model_name=\"gpt-3.5-turbo\")\n \"\"\"\n pl_tags: Optional[List[str]]\n return_pl_id: Optional[bool] = False\n def _generate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any\n ) -> ChatResult:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/promptlayer_openai.html"} {"id": "cfaffbb98640-1", "text": "**kwargs: Any\n ) -> ChatResult:\n \"\"\"Call ChatOpenAI generate and then call PromptLayer API to log the request.\"\"\"\n from promptlayer.utils import get_api_key, promptlayer_api_request\n request_start_time = datetime.datetime.now().timestamp()\n generated_responses = super()._generate(messages, stop, run_manager, **kwargs)\n request_end_time = datetime.datetime.now().timestamp()\n message_dicts, params = super()._create_message_dicts(messages, stop)\n for i, generation in enumerate(generated_responses.generations):\n response_dict, params = super()._create_message_dicts(\n [generation.message], stop\n )\n params = {**params, **kwargs}\n pl_request_id = promptlayer_api_request(\n \"langchain.PromptLayerChatOpenAI\",\n \"langchain\",\n message_dicts,\n params,\n self.pl_tags,\n response_dict,\n request_start_time,\n request_end_time,\n get_api_key(),\n return_pl_id=self.return_pl_id,\n )\n if self.return_pl_id:\n if generation.generation_info is None or not isinstance(\n generation.generation_info, dict\n ):\n generation.generation_info = {}\n generation.generation_info[\"pl_request_id\"] = pl_request_id\n return generated_responses\n async def _agenerate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any\n ) -> ChatResult:\n \"\"\"Call ChatOpenAI agenerate and then call PromptLayer to log.\"\"\"\n from promptlayer.utils import get_api_key, promptlayer_api_request_async", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/promptlayer_openai.html"} {"id": "cfaffbb98640-2", "text": "from promptlayer.utils import get_api_key, promptlayer_api_request_async\n request_start_time = datetime.datetime.now().timestamp()\n generated_responses = await super()._agenerate(messages, stop, run_manager)\n request_end_time = datetime.datetime.now().timestamp()\n message_dicts, params = super()._create_message_dicts(messages, stop)\n for i, generation in enumerate(generated_responses.generations):\n response_dict, params = super()._create_message_dicts(\n [generation.message], stop\n )\n params = {**params, **kwargs}\n pl_request_id = await promptlayer_api_request_async(\n \"langchain.PromptLayerChatOpenAI.async\",\n \"langchain\",\n message_dicts,\n params,\n self.pl_tags,\n response_dict,\n request_start_time,\n request_end_time,\n get_api_key(),\n return_pl_id=self.return_pl_id,\n )\n if self.return_pl_id:\n if generation.generation_info is None 
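The PromptLayer wrapper above brackets each generation with start/end timestamps before shipping both to the logging API; the timing idiom in isolation, with a sleep standing in for the model call:

.. code-block:: python

    import datetime
    import time

    def fake_model_call() -> str:  # hypothetical stand-in for _generate
        time.sleep(0.1)
        return "ok"

    request_start_time = datetime.datetime.now().timestamp()
    result = fake_model_call()
    request_end_time = datetime.datetime.now().timestamp()
    print(f"logged call: {request_end_time - request_start_time:.3f}s, {result!r}")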
or not isinstance(\n generation.generation_info, dict\n ):\n generation.generation_info = {}\n generation.generation_info[\"pl_request_id\"] = pl_request_id\n return generated_responses\n @property\n def _llm_type(self) -> str:\n return \"promptlayer-openai-chat\"\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n return {\n **super()._identifying_params,\n \"pl_tags\": self.pl_tags,\n \"return_pl_id\": self.return_pl_id,\n }", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/promptlayer_openai.html"} {"id": "d2b089bd47ce-0", "text": "Source code for langchain.chat_models.anthropic\nfrom typing import Any, Dict, List, Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.chat_models.base import BaseChatModel\nfrom langchain.llms.anthropic import _AnthropicCommon\nfrom langchain.schema import (\n ChatGeneration,\n ChatResult,\n)\nfrom langchain.schema.messages import (\n AIMessage,\n BaseMessage,\n ChatMessage,\n HumanMessage,\n SystemMessage,\n)\n[docs]class ChatAnthropic(BaseChatModel, _AnthropicCommon):\n r\"\"\"Wrapper around Anthropic's large language model.\n To use, you should have the ``anthropic`` python package installed, and the\n environment variable ``ANTHROPIC_API_KEY`` set with your API key, or pass\n it as a named parameter to the constructor.\n Example:\n .. code-block:: python\n import anthropic\n from langchain.llms import Anthropic\n model = ChatAnthropic(model=\"\", anthropic_api_key=\"my-api-key\")\n \"\"\"\n @property\n def lc_secrets(self) -> Dict[str, str]:\n return {\"anthropic_api_key\": \"ANTHROPIC_API_KEY\"}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of chat model.\"\"\"\n return \"anthropic-chat\"\n @property\n def lc_serializable(self) -> bool:\n return True\n def _convert_one_message_to_text(self, message: BaseMessage) -> str:\n if isinstance(message, ChatMessage):\n message_text = f\"\\n\\n{message.role.capitalize()}: {message.content}\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/anthropic.html"} {"id": "d2b089bd47ce-1", "text": "message_text = f\"\\n\\n{message.role.capitalize()}: {message.content}\"\n elif isinstance(message, HumanMessage):\n message_text = f\"{self.HUMAN_PROMPT} {message.content}\"\n elif isinstance(message, AIMessage):\n message_text = f\"{self.AI_PROMPT} {message.content}\"\n elif isinstance(message, SystemMessage):\n message_text = f\"{self.HUMAN_PROMPT} {message.content}\"\n else:\n raise ValueError(f\"Got unknown type {message}\")\n return message_text\n def _convert_messages_to_text(self, messages: List[BaseMessage]) -> str:\n \"\"\"Format a list of strings into a single string with necessary newlines.\n Args:\n messages (List[BaseMessage]): List of BaseMessage to combine.\n Returns:\n str: Combined string with necessary newlines.\n \"\"\"\n return \"\".join(\n self._convert_one_message_to_text(message) for message in messages\n )\n def _convert_messages_to_prompt(self, messages: List[BaseMessage]) -> str:\n \"\"\"Format a list of messages into a full prompt for the Anthropic model\n Args:\n messages (List[BaseMessage]): List of BaseMessage to combine.\n Returns:\n str: Combined string with necessary HUMAN_PROMPT and AI_PROMPT tags.\n \"\"\"\n messages = messages.copy() # don't mutate the original list\n if not self.AI_PROMPT:\n raise NameError(\"Please ensure the anthropic package is loaded\")\n if not isinstance(messages[-1], AIMessage):\n 
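The converters above flatten a message list into Anthropic's single-string format, alternating the `HUMAN_PROMPT` and `AI_PROMPT` tags (which the anthropic package defines as ``\n\nHuman:`` and ``\n\nAssistant:``). A standalone sketch of the resulting prompt:

.. code-block:: python

    HUMAN_PROMPT = "\n\nHuman:"      # as defined by the anthropic package
    AI_PROMPT = "\n\nAssistant:"

    turns = [("human", "What is 2 + 2?")]
    prompt = "".join(
        f"{HUMAN_PROMPT} {text}" if role == "human" else f"{AI_PROMPT} {text}"
        for role, text in turns
    )
    # Appending an empty assistant turn primes the model to answer,
    # mirroring the AIMessage(content="") appended above.
    prompt += AI_PROMPT
    print(prompt)  # "\n\nHuman: What is 2 + 2?\n\nAssistant:"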
messages.append(AIMessage(content=\"\"))\n text = self._convert_messages_to_text(messages)\n return (\n text.rstrip()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/anthropic.html"} {"id": "d2b089bd47ce-2", "text": "return (\n text.rstrip()\n ) # trim off the trailing ' ' that might come from the \"Assistant: \"\n def _generate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> ChatResult:\n prompt = self._convert_messages_to_prompt(messages)\n params: Dict[str, Any] = {\"prompt\": prompt, **self._default_params, **kwargs}\n if stop:\n params[\"stop_sequences\"] = stop\n if self.streaming:\n completion = \"\"\n stream_resp = self.client.completions.create(**params, stream=True)\n for data in stream_resp:\n delta = data.completion\n completion += delta\n if run_manager:\n run_manager.on_llm_new_token(\n delta,\n )\n else:\n response = self.client.completions.create(**params)\n completion = response.completion\n message = AIMessage(content=completion)\n return ChatResult(generations=[ChatGeneration(message=message)])\n async def _agenerate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> ChatResult:\n prompt = self._convert_messages_to_prompt(messages)\n params: Dict[str, Any] = {\"prompt\": prompt, **self._default_params, **kwargs}\n if stop:\n params[\"stop_sequences\"] = stop\n if self.streaming:\n completion = \"\"\n stream_resp = await self.async_client.completions.create(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/anthropic.html"} {"id": "d2b089bd47ce-3", "text": "completion = \"\"\n stream_resp = await self.async_client.completions.create(\n **params, stream=True\n )\n async for data in stream_resp:\n delta = data.completion\n completion += delta\n if run_manager:\n await run_manager.on_llm_new_token(\n delta,\n )\n else:\n response = await self.async_client.completions.create(**params)\n completion = response.completion\n message = AIMessage(content=completion)\n return ChatResult(generations=[ChatGeneration(message=message)])\n[docs] def get_num_tokens(self, text: str) -> int:\n \"\"\"Calculate number of tokens.\"\"\"\n if not self.count_tokens:\n raise NameError(\"Please ensure the anthropic package is loaded\")\n return self.count_tokens(text)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/anthropic.html"} {"id": "86ae9baaa50d-0", "text": "Source code for langchain.chat_models.base\nimport asyncio\nimport inspect\nimport warnings\nfrom abc import ABC, abstractmethod\nfrom functools import partial\nfrom typing import Any, Dict, List, Mapping, Optional, Sequence\nfrom pydantic import Field, root_validator\nimport langchain\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.callbacks.manager import (\n AsyncCallbackManager,\n AsyncCallbackManagerForLLMRun,\n CallbackManager,\n CallbackManagerForLLMRun,\n Callbacks,\n)\nfrom langchain.load.dump import dumpd, dumps\nfrom langchain.schema import (\n ChatGeneration,\n ChatResult,\n LLMResult,\n PromptValue,\n RunInfo,\n)\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.schema.messages import AIMessage, BaseMessage, HumanMessage\ndef _get_verbosity() -> bool:\n return langchain.verbose\n[docs]class BaseChatModel(BaseLanguageModel, ABC):\n cache: 
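The async streaming branch above is the same accumulation loop in `async for` form; a runnable miniature with a fake async delta stream:

.. code-block:: python

    import asyncio

    async def fake_stream():  # hypothetical stand-in for the API stream
        for delta in ["2 + 2", " = ", "4"]:
            yield delta

    async def main() -> str:
        completion = ""
        async for delta in fake_stream():
            completion += delta
        return completion

    print(asyncio.run(main()))  # "2 + 2 = 4"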
Optional[bool] = None\n verbose: bool = Field(default_factory=_get_verbosity)\n \"\"\"Whether to print out response text.\"\"\"\n callbacks: Callbacks = Field(default=None, exclude=True)\n callback_manager: Optional[BaseCallbackManager] = Field(default=None, exclude=True)\n tags: Optional[List[str]] = Field(default=None, exclude=True)\n \"\"\"Tags to add to the run trace.\"\"\"\n metadata: Optional[Dict[str, Any]] = Field(default=None, exclude=True)\n \"\"\"Metadata to add to the run trace.\"\"\"\n[docs] @root_validator()\n def raise_deprecation(cls, values: Dict) -> Dict:\n \"\"\"Raise deprecation warning if callback_manager is used.\"\"\"\n if values.get(\"callback_manager\") is not None:\n warnings.warn(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/base.html"} {"id": "86ae9baaa50d-1", "text": "if values.get(\"callback_manager\") is not None:\n warnings.warn(\n \"callback_manager is deprecated. Please use callbacks instead.\",\n DeprecationWarning,\n )\n values[\"callbacks\"] = values.pop(\"callback_manager\", None)\n return values\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n def _combine_llm_outputs(self, llm_outputs: List[Optional[dict]]) -> dict:\n return {}\n def _get_invocation_params(\n self,\n stop: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> dict:\n params = self.dict()\n params[\"stop\"] = stop\n return {**params, **kwargs}\n def _get_llm_string(self, stop: Optional[List[str]] = None, **kwargs: Any) -> str:\n if self.lc_serializable:\n params = {**kwargs, **{\"stop\": stop}}\n param_string = str(sorted([(k, v) for k, v in params.items()]))\n llm_string = dumps(self)\n return llm_string + \"---\" + param_string\n else:\n params = self._get_invocation_params(stop=stop, **kwargs)\n params = {**params, **kwargs}\n return str(sorted([(k, v) for k, v in params.items()]))\n[docs] def generate(\n self,\n messages: List[List[BaseMessage]],\n stop: Optional[List[str]] = None,\n callbacks: Callbacks = None,\n *,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/base.html"} {"id": "86ae9baaa50d-2", "text": "**kwargs: Any,\n ) -> LLMResult:\n \"\"\"Top Level call\"\"\"\n params = self._get_invocation_params(stop=stop, **kwargs)\n options = {\"stop\": stop}\n callback_manager = CallbackManager.configure(\n callbacks,\n self.callbacks,\n self.verbose,\n tags,\n self.tags,\n metadata,\n self.metadata,\n )\n run_managers = callback_manager.on_chat_model_start(\n dumpd(self), messages, invocation_params=params, options=options\n )\n results = []\n for i, m in enumerate(messages):\n try:\n results.append(\n self._generate_with_cache(\n m,\n stop=stop,\n run_manager=run_managers[i] if run_managers else None,\n **kwargs,\n )\n )\n except (KeyboardInterrupt, Exception) as e:\n if run_managers:\n run_managers[i].on_llm_error(e)\n raise e\n flattened_outputs = [\n LLMResult(generations=[res.generations], llm_output=res.llm_output)\n for res in results\n ]\n llm_output = self._combine_llm_outputs([res.llm_output for res in results])\n generations = [res.generations for res in results]\n output = LLMResult(generations=generations, llm_output=llm_output)\n if run_managers:\n run_infos = []\n for manager, flattened_output in zip(run_managers, flattened_outputs):\n manager.on_llm_end(flattened_output)\n run_infos.append(RunInfo(run_id=manager.run_id))\n 
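The `raise_deprecation` validator above is a general pydantic-v1 pattern for migrating a renamed field while warning the caller; a minimal standalone version:

.. code-block:: python

    import warnings
    from typing import Any, Dict, Optional

    from pydantic import BaseModel, root_validator  # pydantic v1 API

    class Model(BaseModel):
        callbacks: Optional[Any] = None
        callback_manager: Optional[Any] = None

        @root_validator(pre=True)
        def raise_deprecation(cls, values: Dict) -> Dict:
            if values.get("callback_manager") is not None:
                warnings.warn(
                    "callback_manager is deprecated. Please use callbacks instead.",
                    DeprecationWarning,
                )
                values["callbacks"] = values.pop("callback_manager")
            return values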
output.run = run_infos\n return output\n[docs] async def agenerate(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/base.html"} {"id": "86ae9baaa50d-3", "text": "return output\n[docs] async def agenerate(\n self,\n messages: List[List[BaseMessage]],\n stop: Optional[List[str]] = None,\n callbacks: Callbacks = None,\n *,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> LLMResult:\n \"\"\"Top Level call\"\"\"\n params = self._get_invocation_params(stop=stop, **kwargs)\n options = {\"stop\": stop}\n callback_manager = AsyncCallbackManager.configure(\n callbacks,\n self.callbacks,\n self.verbose,\n tags,\n self.tags,\n metadata,\n self.metadata,\n )\n run_managers = await callback_manager.on_chat_model_start(\n dumpd(self), messages, invocation_params=params, options=options\n )\n results = await asyncio.gather(\n *[\n self._agenerate_with_cache(\n m,\n stop=stop,\n run_manager=run_managers[i] if run_managers else None,\n **kwargs,\n )\n for i, m in enumerate(messages)\n ],\n return_exceptions=True,\n )\n exceptions = []\n for i, res in enumerate(results):\n if isinstance(res, Exception):\n if run_managers:\n await run_managers[i].on_llm_error(res)\n exceptions.append(res)\n if exceptions:\n if run_managers:\n await asyncio.gather(\n *[\n run_manager.on_llm_end(\n LLMResult(\n generations=[res.generations], llm_output=res.llm_output\n )\n )\n for run_manager, res in zip(run_managers, results)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/base.html"} {"id": "86ae9baaa50d-4", "text": ")\n for run_manager, res in zip(run_managers, results)\n if not isinstance(res, Exception)\n ]\n )\n raise exceptions[0]\n flattened_outputs = [\n LLMResult(generations=[res.generations], llm_output=res.llm_output)\n for res in results\n ]\n llm_output = self._combine_llm_outputs([res.llm_output for res in results])\n generations = [res.generations for res in results]\n output = LLMResult(generations=generations, llm_output=llm_output)\n await asyncio.gather(\n *[\n run_manager.on_llm_end(flattened_output)\n for run_manager, flattened_output in zip(\n run_managers, flattened_outputs\n )\n ]\n )\n if run_managers:\n output.run = [\n RunInfo(run_id=run_manager.run_id) for run_manager in run_managers\n ]\n return output\n[docs] def generate_prompt(\n self,\n prompts: List[PromptValue],\n stop: Optional[List[str]] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> LLMResult:\n prompt_messages = [p.to_messages() for p in prompts]\n return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)\n[docs] async def agenerate_prompt(\n self,\n prompts: List[PromptValue],\n stop: Optional[List[str]] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> LLMResult:\n prompt_messages = [p.to_messages() for p in prompts]\n return await self.agenerate(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/base.html"} {"id": "86ae9baaa50d-5", "text": "return await self.agenerate(\n prompt_messages, stop=stop, callbacks=callbacks, **kwargs\n )\n def _generate_with_cache(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> ChatResult:\n new_arg_supported = inspect.signature(self._generate).parameters.get(\n \"run_manager\"\n )\n disregard_cache = self.cache is not None and not self.cache\n if langchain.llm_cache is None or 
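`agenerate` above fans out one task per message list with `asyncio.gather(..., return_exceptions=True)`, then re-raises the first collected failure; the pattern in isolation:

.. code-block:: python

    import asyncio

    async def work(i: int) -> int:
        if i == 1:
            raise ValueError("boom")
        return i * 10

    async def main() -> list:
        results = await asyncio.gather(
            *(work(i) for i in range(3)), return_exceptions=True
        )
        exceptions = [r for r in results if isinstance(r, Exception)]
        if exceptions:
            raise exceptions[0]
        return results

    # asyncio.run(main()) raises ValueError("boom") from the failed task.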
disregard_cache:\n # This happens when langchain.cache is None, but self.cache is True\n if self.cache is not None and self.cache:\n raise ValueError(\n \"Asked to cache, but no cache found at `langchain.cache`.\"\n )\n if new_arg_supported:\n return self._generate(\n messages, stop=stop, run_manager=run_manager, **kwargs\n )\n else:\n return self._generate(messages, stop=stop, **kwargs)\n else:\n llm_string = self._get_llm_string(stop=stop, **kwargs)\n prompt = dumps(messages)\n cache_val = langchain.llm_cache.lookup(prompt, llm_string)\n if isinstance(cache_val, list):\n return ChatResult(generations=cache_val)\n else:\n if new_arg_supported:\n result = self._generate(\n messages, stop=stop, run_manager=run_manager, **kwargs\n )\n else:\n result = self._generate(messages, stop=stop, **kwargs)\n langchain.llm_cache.update(prompt, llm_string, result.generations)\n return result", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/base.html"} {"id": "86ae9baaa50d-6", "text": "return result\n async def _agenerate_with_cache(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> ChatResult:\n new_arg_supported = inspect.signature(self._agenerate).parameters.get(\n \"run_manager\"\n )\n disregard_cache = self.cache is not None and not self.cache\n if langchain.llm_cache is None or disregard_cache:\n # This happens when langchain.cache is None, but self.cache is True\n if self.cache is not None and self.cache:\n raise ValueError(\n \"Asked to cache, but no cache found at `langchain.cache`.\"\n )\n if new_arg_supported:\n return await self._agenerate(\n messages, stop=stop, run_manager=run_manager, **kwargs\n )\n else:\n return await self._agenerate(messages, stop=stop, **kwargs)\n else:\n llm_string = self._get_llm_string(stop=stop, **kwargs)\n prompt = dumps(messages)\n cache_val = langchain.llm_cache.lookup(prompt, llm_string)\n if isinstance(cache_val, list):\n return ChatResult(generations=cache_val)\n else:\n if new_arg_supported:\n result = await self._agenerate(\n messages, stop=stop, run_manager=run_manager, **kwargs\n )\n else:\n result = await self._agenerate(messages, stop=stop, **kwargs)\n langchain.llm_cache.update(prompt, llm_string, result.generations)\n return result\n @abstractmethod\n def _generate(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/base.html"} {"id": "86ae9baaa50d-7", "text": "return result\n @abstractmethod\n def _generate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> ChatResult:\n \"\"\"Top Level call\"\"\"\n @abstractmethod\n async def _agenerate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> ChatResult:\n \"\"\"Top Level call\"\"\"\n[docs] def __call__(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> BaseMessage:\n generation = self.generate(\n [messages], stop=stop, callbacks=callbacks, **kwargs\n ).generations[0][0]\n if isinstance(generation, ChatGeneration):\n return generation.message\n else:\n raise ValueError(\"Unexpected generation type\")\n async def _call_async(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n callbacks: Callbacks 
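`_generate_with_cache` above is a look-aside cache keyed on the serialized prompt plus an LLM identity string; the control flow with a plain dict standing in for `langchain.llm_cache`:

.. code-block:: python

    from typing import Dict, Tuple

    llm_cache: Dict[Tuple[str, str], str] = {}  # hypothetical in-memory cache

    def generate_with_cache(prompt: str, llm_string: str) -> str:
        key = (prompt, llm_string)
        if key in llm_cache:
            return llm_cache[key]              # cache hit
        result = f"completion for {prompt!r}"  # hypothetical model call
        llm_cache[key] = result                # cache update
        return result

    generate_with_cache("hi", "model=x")  # miss: computes and stores
    generate_with_cache("hi", "model=x")  # hit: returns the cached value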
= None,\n **kwargs: Any,\n ) -> BaseMessage:\n result = await self.agenerate(\n [messages], stop=stop, callbacks=callbacks, **kwargs\n )\n generation = result.generations[0][0]\n if isinstance(generation, ChatGeneration):\n return generation.message\n else:\n raise ValueError(\"Unexpected generation type\")\n[docs] def call_as_llm(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/base.html"} {"id": "86ae9baaa50d-8", "text": "raise ValueError(\"Unexpected generation type\")\n[docs] def call_as_llm(\n self, message: str, stop: Optional[List[str]] = None, **kwargs: Any\n ) -> str:\n return self.predict(message, stop=stop, **kwargs)\n[docs] def predict(\n self, text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any\n ) -> str:\n if stop is None:\n _stop = None\n else:\n _stop = list(stop)\n result = self([HumanMessage(content=text)], stop=_stop, **kwargs)\n return result.content\n[docs] def predict_messages(\n self,\n messages: List[BaseMessage],\n *,\n stop: Optional[Sequence[str]] = None,\n **kwargs: Any,\n ) -> BaseMessage:\n if stop is None:\n _stop = None\n else:\n _stop = list(stop)\n return self(messages, stop=_stop, **kwargs)\n[docs] async def apredict(\n self, text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any\n ) -> str:\n if stop is None:\n _stop = None\n else:\n _stop = list(stop)\n result = await self._call_async(\n [HumanMessage(content=text)], stop=_stop, **kwargs\n )\n return result.content\n[docs] async def apredict_messages(\n self,\n messages: List[BaseMessage],\n *,\n stop: Optional[Sequence[str]] = None,\n **kwargs: Any,\n ) -> BaseMessage:\n if stop is None:\n _stop = None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/base.html"} {"id": "86ae9baaa50d-9", "text": ") -> BaseMessage:\n if stop is None:\n _stop = None\n else:\n _stop = list(stop)\n return await self._call_async(messages, stop=_stop, **kwargs)\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {}\n @property\n @abstractmethod\n def _llm_type(self) -> str:\n \"\"\"Return type of chat model.\"\"\"\n[docs] def dict(self, **kwargs: Any) -> Dict:\n \"\"\"Return a dictionary of the LLM.\"\"\"\n starter_dict = dict(self._identifying_params)\n starter_dict[\"_type\"] = self._llm_type\n return starter_dict\n[docs]class SimpleChatModel(BaseChatModel):\n def _generate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> ChatResult:\n output_str = self._call(messages, stop=stop, run_manager=run_manager, **kwargs)\n message = AIMessage(content=output_str)\n generation = ChatGeneration(message=message)\n return ChatResult(generations=[generation])\n @abstractmethod\n def _call(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Simpler interface.\"\"\"\n async def _agenerate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/base.html"} {"id": "86ae9baaa50d-10", "text": "messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> ChatResult:\n func = partial(\n self._generate, messages, stop=stop, run_manager=run_manager, **kwargs\n 
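The `predict`/`__call__` layer above wraps a bare string in a `HumanMessage`, runs `generate`, and unwraps the first `ChatGeneration`. A usage sketch with the `FakeListChatModel` defined later in this reference, which needs no network access:

.. code-block:: python

    from langchain.chat_models.fake import FakeListChatModel

    chat = FakeListChatModel(responses=["it's 4"])
    print(chat.predict("What is 2 + 2?"))  # "it's 4"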
)\n return await asyncio.get_event_loop().run_in_executor(None, func)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/base.html"} {"id": "ffee9c50476b-0", "text": "Source code for langchain.chat_models.jinachat\n\"\"\"JinaChat wrapper.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import (\n Any,\n Callable,\n Dict,\n List,\n Mapping,\n Optional,\n Tuple,\n Union,\n)\nfrom pydantic import Field, root_validator\nfrom tenacity import (\n before_sleep_log,\n retry,\n retry_if_exception_type,\n stop_after_attempt,\n wait_exponential,\n)\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.chat_models.base import BaseChatModel\nfrom langchain.schema import (\n AIMessage,\n BaseMessage,\n ChatGeneration,\n ChatMessage,\n ChatResult,\n HumanMessage,\n SystemMessage,\n)\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\ndef _create_retry_decorator(llm: JinaChat) -> Callable[[Any], Any]:\n import openai\n min_seconds = 1\n max_seconds = 60\n # Wait 2^x * 1 second between each retry starting with\n # 4 seconds, then up to 10 seconds, then 10 seconds afterwards\n return retry(\n reraise=True,\n stop=stop_after_attempt(llm.max_retries),\n wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),\n retry=(\n retry_if_exception_type(openai.error.Timeout)\n | retry_if_exception_type(openai.error.APIError)\n | retry_if_exception_type(openai.error.APIConnectionError)\n | retry_if_exception_type(openai.error.RateLimitError)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/jinachat.html"} {"id": "ffee9c50476b-1", "text": "| retry_if_exception_type(openai.error.RateLimitError)\n | retry_if_exception_type(openai.error.ServiceUnavailableError)\n ),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )\nasync def acompletion_with_retry(llm: JinaChat, **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the async completion call.\"\"\"\n retry_decorator = _create_retry_decorator(llm)\n @retry_decorator\n async def _completion_with_retry(**kwargs: Any) -> Any:\n # Use OpenAI's async api https://github.com/openai/openai-python#async-api\n return await llm.client.acreate(**kwargs)\n return await _completion_with_retry(**kwargs)\ndef _convert_dict_to_message(_dict: Mapping[str, Any]) -> BaseMessage:\n role = _dict[\"role\"]\n if role == \"user\":\n return HumanMessage(content=_dict[\"content\"])\n elif role == \"assistant\":\n content = _dict[\"content\"] or \"\"\n return AIMessage(content=content)\n elif role == \"system\":\n return SystemMessage(content=_dict[\"content\"])\n else:\n return ChatMessage(content=_dict[\"content\"], role=role)\ndef _convert_message_to_dict(message: BaseMessage) -> dict:\n if isinstance(message, ChatMessage):\n message_dict = {\"role\": message.role, \"content\": message.content}\n elif isinstance(message, HumanMessage):\n message_dict = {\"role\": \"user\", \"content\": message.content}\n elif isinstance(message, AIMessage):\n message_dict = {\"role\": \"assistant\", \"content\": message.content}\n elif isinstance(message, SystemMessage):\n message_dict = {\"role\": \"system\", \"content\": message.content}\n else:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/jinachat.html"} {"id": "ffee9c50476b-2", "text": "else:\n raise ValueError(f\"Got unknown type {message}\")\n if \"name\" in message.additional_kwargs:\n 
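`SimpleChatModel._agenerate` above bridges sync to async by running the blocking `_generate` in the default executor; the same bridge in miniature:

.. code-block:: python

    import asyncio
    from functools import partial

    def blocking_work(x: int, *, factor: int) -> int:  # stand-in for _generate
        return x * factor

    async def main() -> int:
        func = partial(blocking_work, 21, factor=2)
        return await asyncio.get_event_loop().run_in_executor(None, func)

    print(asyncio.run(main()))  # 42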
message_dict[\"name\"] = message.additional_kwargs[\"name\"]\n return message_dict\n[docs]class JinaChat(BaseChatModel):\n \"\"\"JinaChat is a wrapper for Jina AI's LLM service, providing cost-effective\n image chat capabilities in comparison to other LLM APIs.\n To use, you should have the ``openai`` python package installed, and the\n environment variable ``JINACHAT_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the openai.create call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. code-block:: python\n from langchain.chat_models import JinaChat\n chat = JinaChat()\n \"\"\"\n @property\n def lc_secrets(self) -> Dict[str, str]:\n return {\"jinachat_api_key\": \"JINACHAT_API_KEY\"}\n @property\n def lc_serializable(self) -> bool:\n return True\n client: Any #: :meta private:\n temperature: float = 0.7\n \"\"\"What sampling temperature to use.\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not explicitly specified.\"\"\"\n jinachat_api_key: Optional[str] = None\n \"\"\"Base URL path for API requests, \n leave blank if not using a proxy or service emulator.\"\"\"\n request_timeout: Optional[Union[float, Tuple[float, float]]] = None\n \"\"\"Timeout for requests to JinaChat completion API. Default is 600 seconds.\"\"\"\n max_retries: int = 6", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/jinachat.html"} {"id": "ffee9c50476b-3", "text": "max_retries: int = 6\n \"\"\"Maximum number of retries to make when generating.\"\"\"\n streaming: bool = False\n \"\"\"Whether to stream the results or not.\"\"\"\n max_tokens: Optional[int] = None\n \"\"\"Maximum number of tokens to generate.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n allow_population_by_field_name = True\n[docs] @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = cls._all_required_field_names()\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n if field_name not in all_required_field_names:\n logger.warning(\n f\"\"\"WARNING! {field_name} is not default parameter.\n {field_name} was transferred to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n invalid_model_kwargs = all_required_field_names.intersection(extra.keys())\n if invalid_model_kwargs:\n raise ValueError(\n f\"Parameters {invalid_model_kwargs} should be specified explicitly. \"\n f\"Instead they were passed in as part of `model_kwargs` parameter.\"\n )\n values[\"model_kwargs\"] = extra\n return values\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n values[\"jinachat_api_key\"] = get_from_dict_or_env(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/jinachat.html"} {"id": "ffee9c50476b-4", "text": "values[\"jinachat_api_key\"] = get_from_dict_or_env(\n values, \"jinachat_api_key\", \"JINACHAT_API_KEY\"\n )\n try:\n import openai\n except ImportError:\n raise ValueError(\n \"Could not import openai python package. 
\"\n \"Please install it with `pip install openai`.\"\n )\n try:\n values[\"client\"] = openai.ChatCompletion\n except AttributeError:\n raise ValueError(\n \"`openai` has no `ChatCompletion` attribute, this is likely \"\n \"due to an old version of the openai package. Try upgrading it \"\n \"with `pip install --upgrade openai`.\"\n )\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling JinaChat API.\"\"\"\n return {\n \"request_timeout\": self.request_timeout,\n \"max_tokens\": self.max_tokens,\n \"stream\": self.streaming,\n \"temperature\": self.temperature,\n **self.model_kwargs,\n }\n def _create_retry_decorator(self) -> Callable[[Any], Any]:\n import openai\n min_seconds = 1\n max_seconds = 60\n # Wait 2^x * 1 second between each retry starting with\n # 4 seconds, then up to 10 seconds, then 10 seconds afterwards\n return retry(\n reraise=True,\n stop=stop_after_attempt(self.max_retries),\n wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),\n retry=(\n retry_if_exception_type(openai.error.Timeout)\n | retry_if_exception_type(openai.error.APIError)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/jinachat.html"} {"id": "ffee9c50476b-5", "text": "| retry_if_exception_type(openai.error.APIError)\n | retry_if_exception_type(openai.error.APIConnectionError)\n | retry_if_exception_type(openai.error.RateLimitError)\n | retry_if_exception_type(openai.error.ServiceUnavailableError)\n ),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )\n[docs] def completion_with_retry(self, **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the completion call.\"\"\"\n retry_decorator = self._create_retry_decorator()\n @retry_decorator\n def _completion_with_retry(**kwargs: Any) -> Any:\n return self.client.create(**kwargs)\n return _completion_with_retry(**kwargs)\n def _combine_llm_outputs(self, llm_outputs: List[Optional[dict]]) -> dict:\n overall_token_usage: dict = {}\n for output in llm_outputs:\n if output is None:\n # Happens in streaming\n continue\n token_usage = output[\"token_usage\"]\n for k, v in token_usage.items():\n if k in overall_token_usage:\n overall_token_usage[k] += v\n else:\n overall_token_usage[k] = v\n return {\"token_usage\": overall_token_usage}\n def _generate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> ChatResult:\n message_dicts, params = self._create_message_dicts(messages, stop)\n params = {**params, **kwargs}\n if self.streaming:\n inner_completion = \"\"\n role = \"assistant\"\n params[\"stream\"] = True", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/jinachat.html"} {"id": "ffee9c50476b-6", "text": "role = \"assistant\"\n params[\"stream\"] = True\n for stream_resp in self.completion_with_retry(\n messages=message_dicts, **params\n ):\n role = stream_resp[\"choices\"][0][\"delta\"].get(\"role\", role)\n token = stream_resp[\"choices\"][0][\"delta\"].get(\"content\") or \"\"\n inner_completion += token\n if run_manager:\n run_manager.on_llm_new_token(token)\n message = _convert_dict_to_message(\n {\n \"content\": inner_completion,\n \"role\": role,\n }\n )\n return ChatResult(generations=[ChatGeneration(message=message)])\n response = self.completion_with_retry(messages=message_dicts, **params)\n return self._create_chat_result(response)\n def _create_message_dicts(\n self, messages: 
List[BaseMessage], stop: Optional[List[str]]\n ) -> Tuple[List[Dict[str, Any]], Dict[str, Any]]:\n params = dict(self._invocation_params)\n if stop is not None:\n if \"stop\" in params:\n raise ValueError(\"`stop` found in both the input and default params.\")\n params[\"stop\"] = stop\n message_dicts = [_convert_message_to_dict(m) for m in messages]\n return message_dicts, params\n def _create_chat_result(self, response: Mapping[str, Any]) -> ChatResult:\n generations = []\n for res in response[\"choices\"]:\n message = _convert_dict_to_message(res[\"message\"])\n gen = ChatGeneration(message=message)\n generations.append(gen)\n llm_output = {\"token_usage\": response[\"usage\"]}\n return ChatResult(generations=generations, llm_output=llm_output)\n async def _agenerate(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/jinachat.html"} {"id": "ffee9c50476b-7", "text": "async def _agenerate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> ChatResult:\n message_dicts, params = self._create_message_dicts(messages, stop)\n params = {**params, **kwargs}\n if self.streaming:\n inner_completion = \"\"\n role = \"assistant\"\n params[\"stream\"] = True\n async for stream_resp in await acompletion_with_retry(\n self, messages=message_dicts, **params\n ):\n role = stream_resp[\"choices\"][0][\"delta\"].get(\"role\", role)\n token = stream_resp[\"choices\"][0][\"delta\"].get(\"content\", \"\")\n inner_completion += token or \"\"\n if run_manager:\n await run_manager.on_llm_new_token(token)\n message = _convert_dict_to_message(\n {\n \"content\": inner_completion,\n \"role\": role,\n }\n )\n return ChatResult(generations=[ChatGeneration(message=message)])\n else:\n response = await acompletion_with_retry(\n self, messages=message_dicts, **params\n )\n return self._create_chat_result(response)\n @property\n def _invocation_params(self) -> Mapping[str, Any]:\n \"\"\"Get the parameters used to invoke the model.\"\"\"\n jinachat_creds: Dict[str, Any] = {\n \"api_key\": self.jinachat_api_key,\n \"api_base\": \"https://api.chat.jina.ai/v1\",\n \"model\": \"jinachat\",\n }\n return {**jinachat_creds, **self._default_params}\n @property", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/jinachat.html"} {"id": "ffee9c50476b-8", "text": "return {**jinachat_creds, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of chat model.\"\"\"\n return \"jinachat\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/jinachat.html"} {"id": "71fb3fc36493-0", "text": "Source code for langchain.chat_models.fake\n\"\"\"Fake ChatModel for testing purposes.\"\"\"\nfrom typing import Any, List, Mapping, Optional\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.chat_models.base import SimpleChatModel\nfrom langchain.schema.messages import BaseMessage\n[docs]class FakeListChatModel(SimpleChatModel):\n \"\"\"Fake ChatModel for testing purposes.\"\"\"\n responses: List\n i: int = 0\n @property\n def _llm_type(self) -> str:\n return \"fake-list-chat-model\"\n def _call(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"First try to lookup in queries, else return 'foo' or 'bar'.\"\"\"\n response = self.responses[self.i]\n 
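`_create_message_dicts` above refuses a `stop` list when one is already baked into the default params rather than silently overwriting it; the guard on its own:

.. code-block:: python

    from typing import List, Optional

    def merge_stop(params: dict, stop: Optional[List[str]]) -> dict:
        if stop is not None:
            if "stop" in params:
                raise ValueError("`stop` found in both the input and default params.")
            params["stop"] = stop
        return params

    print(merge_stop({"temperature": 0.7}, ["\n"]))  # adds the stop list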
self.i += 1\n return response\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n return {\"responses\": self.responses}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/fake.html"} {"id": "dd45d9d71c1e-0", "text": "Source code for langchain.chat_models.human\n\"\"\"ChatModel wrapper which returns user input as the response.\"\"\"\nimport asyncio\nfrom functools import partial\nfrom io import StringIO\nfrom typing import Any, Callable, List, Mapping, Optional\nimport yaml\nfrom pydantic import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.chat_models.base import BaseChatModel\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.schema.messages import (\n BaseMessage,\n HumanMessage,\n _message_from_dict,\n messages_to_dict,\n)\nfrom langchain.schema.output import ChatGeneration, ChatResult\ndef _display_messages(messages: List[BaseMessage]) -> None:\n dict_messages = messages_to_dict(messages)\n for message in dict_messages:\n yaml_string = yaml.dump(\n message,\n default_flow_style=False,\n sort_keys=False,\n allow_unicode=True,\n width=10000,\n line_break=None,\n )\n print(\"\\n\", \"======= start of message =======\", \"\\n\\n\")\n print(yaml_string)\n print(\"======= end of message =======\", \"\\n\\n\")\ndef _collect_yaml_input(\n messages: List[BaseMessage], stop: Optional[List[str]] = None\n) -> BaseMessage:\n \"\"\"Collect lines of user input and parse them into a single message.\"\"\"\n lines = []\n while True:\n line = input()\n if not line.strip():\n break\n if stop and any(seq in line for seq in stop):\n break\n lines.append(line)\n yaml_string = \"\\n\".join(lines)\n # Try to parse the input string as YAML\n try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/human.html"} {"id": "dd45d9d71c1e-1", "text": "# Try to parse the input string as YAML\n try:\n message = _message_from_dict(yaml.safe_load(StringIO(yaml_string)))\n if message is None:\n return HumanMessage(content=\"\")\n if stop:\n message.content = enforce_stop_tokens(message.content, stop)\n return message\n except yaml.YAMLError:\n raise ValueError(\"Invalid YAML string entered.\")\n except ValueError:\n raise ValueError(\"Invalid message entered.\")\n[docs]class HumanInputChatModel(BaseChatModel):\n \"\"\"ChatModel wrapper which returns user input as the response.\"\"\"\n input_func: Callable = Field(default_factory=lambda: _collect_yaml_input)\n message_func: Callable = Field(default_factory=lambda: _display_messages)\n separator: str = \"\\n\"\n input_kwargs: Mapping[str, Any] = {}\n message_kwargs: Mapping[str, Any] = {}\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n return {\n \"input_func\": self.input_func.__name__,\n \"message_func\": self.message_func.__name__,\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Returns the type of LLM.\"\"\"\n return \"human-input-chat-model\"\n def _generate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> ChatResult:\n \"\"\"\n Displays the messages to the user and returns their input as a response.\n Args:\n messages (List[BaseMessage]): The messages to be displayed to the user.\n stop (Optional[List[str]]): A list of stop strings.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/human.html"} {"id": "dd45d9d71c1e-2", "text": 
"stop (Optional[List[str]]): A list of stop strings.\n run_manager (Optional[CallbackManagerForLLMRun]): Currently not used.\n Returns:\n ChatResult: The user's input as a response.\n \"\"\"\n self.message_func(messages, **self.message_kwargs)\n user_input = self.input_func(messages, stop=stop, **self.input_kwargs)\n return ChatResult(generations=[ChatGeneration(message=user_input)])\n async def _agenerate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> ChatResult:\n func = partial(\n self._generate, messages, stop=stop, run_manager=run_manager, **kwargs\n )\n return await asyncio.get_event_loop().run_in_executor(None, func)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/human.html"} {"id": "489143d1c726-0", "text": "Source code for langchain.chat_models.azure_openai\n\"\"\"Azure OpenAI chat wrapper.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, Dict, Mapping\nfrom pydantic import root_validator\nfrom langchain.chat_models.openai import ChatOpenAI\nfrom langchain.schema import ChatResult\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class AzureChatOpenAI(ChatOpenAI):\n \"\"\"Wrapper around Azure OpenAI Chat Completion API. To use this class you\n must have a deployed model on Azure OpenAI. Use `deployment_name` in the\n constructor to refer to the \"Model deployment name\" in the Azure portal.\n In addition, you should have the ``openai`` python package installed, and the\n following environment variables set or passed in constructor in lower case:\n - ``OPENAI_API_TYPE`` (default: ``azure``)\n - ``OPENAI_API_KEY``\n - ``OPENAI_API_BASE``\n - ``OPENAI_API_VERSION``\n - ``OPENAI_PROXY``\n For exmaple, if you have `gpt-35-turbo` deployed, with the deployment name\n `35-turbo-dev`, the constructor should look like:\n .. 
code-block:: python\n AzureChatOpenAI(\n deployment_name=\"35-turbo-dev\",\n openai_api_version=\"2023-03-15-preview\",\n )\n Be aware the API version may change.\n Any parameters that are valid to be passed to the openai.create call can be passed\n in, even if not explicitly saved on this class.\n \"\"\"\n deployment_name: str = \"\"\n openai_api_type: str = \"azure\"\n openai_api_base: str = \"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/azure_openai.html"} {"id": "489143d1c726-1", "text": "openai_api_base: str = \"\"\n openai_api_version: str = \"\"\n openai_api_key: str = \"\"\n openai_organization: str = \"\"\n openai_proxy: str = \"\"\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n values[\"openai_api_key\"] = get_from_dict_or_env(\n values,\n \"openai_api_key\",\n \"OPENAI_API_KEY\",\n )\n values[\"openai_api_base\"] = get_from_dict_or_env(\n values,\n \"openai_api_base\",\n \"OPENAI_API_BASE\",\n )\n values[\"openai_api_version\"] = get_from_dict_or_env(\n values,\n \"openai_api_version\",\n \"OPENAI_API_VERSION\",\n )\n values[\"openai_api_type\"] = get_from_dict_or_env(\n values,\n \"openai_api_type\",\n \"OPENAI_API_TYPE\",\n )\n values[\"openai_organization\"] = get_from_dict_or_env(\n values,\n \"openai_organization\",\n \"OPENAI_ORGANIZATION\",\n default=\"\",\n )\n values[\"openai_proxy\"] = get_from_dict_or_env(\n values,\n \"openai_proxy\",\n \"OPENAI_PROXY\",\n default=\"\",\n )\n try:\n import openai\n except ImportError:\n raise ImportError(\n \"Could not import openai python package. \"\n \"Please install it with `pip install openai`.\"\n )\n try:\n values[\"client\"] = openai.ChatCompletion\n except AttributeError:\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/azure_openai.html"} {"id": "489143d1c726-2", "text": "except AttributeError:\n raise ValueError(\n \"`openai` has no `ChatCompletion` attribute, this is likely \"\n \"due to an old version of the openai package. 
Try upgrading it \"\n \"with `pip install --upgrade openai`.\"\n )\n if values[\"n\"] < 1:\n raise ValueError(\"n must be at least 1.\")\n if values[\"n\"] > 1 and values[\"streaming\"]:\n raise ValueError(\"n must be 1 when streaming.\")\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling OpenAI API.\"\"\"\n return {\n **super()._default_params,\n \"engine\": self.deployment_name,\n }\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**self._default_params}\n @property\n def _client_params(self) -> Dict[str, Any]:\n \"\"\"Get the config params used for the openai client.\"\"\"\n openai_creds = {\n \"api_type\": self.openai_api_type,\n \"api_version\": self.openai_api_version,\n }\n return {**super()._client_params, **openai_creds}\n @property\n def _llm_type(self) -> str:\n return \"azure-openai-chat\"\n def _create_chat_result(self, response: Mapping[str, Any]) -> ChatResult:\n for res in response[\"choices\"]:\n if res.get(\"finish_reason\", None) == \"content_filter\":\n raise ValueError(\n \"Azure has not provided the response due to a content\"\n \" filter being triggered\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/azure_openai.html"} {"id": "489143d1c726-3", "text": "\" filter being triggered\"\n )\n return super()._create_chat_result(response)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/azure_openai.html"} {"id": "bd5d4448290a-0", "text": "Source code for langchain.vectorstores.cassandra\n\"\"\"Wrapper around Cassandra vector-store capabilities, based on cassIO.\"\"\"\nfrom __future__ import annotations\nimport typing\nimport uuid\nfrom typing import Any, Iterable, List, Optional, Tuple, Type, TypeVar\nimport numpy as np\nif typing.TYPE_CHECKING:\n from cassandra.cluster import Session\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nCVST = TypeVar(\"CVST\", bound=\"Cassandra\")\n[docs]class Cassandra(VectorStore):\n \"\"\"Wrapper around Cassandra embeddings platform.\n There is no notion of a default table name, since each embedding\n function implies its own vector dimension, which is part of the schema.\n Example:\n .. code-block:: python\n from langchain.vectorstores import Cassandra\n from langchain.embeddings.openai import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n session = ...\n keyspace = 'my_keyspace'\n vectorstore = Cassandra(embeddings, session, keyspace, 'my_doc_archive')\n \"\"\"\n _embedding_dimension: int | None\n def _get_embedding_dimension(self) -> int:\n if self._embedding_dimension is None:\n self._embedding_dimension = len(\n self.embedding.embed_query(\"This is a sample sentence.\")\n )\n return self._embedding_dimension\n def __init__(\n self,\n embedding: Embeddings,\n session: Session,\n keyspace: str,\n table_name: str,\n ttl_seconds: Optional[int] = None,\n ) -> None:\n try:\n from cassio.vector import VectorTable", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/cassandra.html"} {"id": "bd5d4448290a-1", "text": ") -> None:\n try:\n from cassio.vector import VectorTable\n except (ImportError, ModuleNotFoundError):\n raise ImportError(\n \"Could not import cassio python package. 
\"\n \"Please install it with `pip install cassio`.\"\n )\n \"\"\"Create a vector table.\"\"\"\n self.embedding = embedding\n self.session = session\n self.keyspace = keyspace\n self.table_name = table_name\n self.ttl_seconds = ttl_seconds\n #\n self._embedding_dimension = None\n #\n self.table = VectorTable(\n session=session,\n keyspace=keyspace,\n table=table_name,\n embedding_dimension=self._get_embedding_dimension(),\n primary_key_type=\"TEXT\",\n )\n[docs] def delete_collection(self) -> None:\n \"\"\"\n Just an alias for `clear`\n (to better align with other VectorStore implementations).\n \"\"\"\n self.clear()\n[docs] def clear(self) -> None:\n \"\"\"Empty the collection.\"\"\"\n self.table.clear()\n[docs] def delete_by_document_id(self, document_id: str) -> None:\n return self.table.delete(document_id)\n[docs] def delete(self, ids: Optional[List[str]] = None, **kwargs: Any) -> Optional[bool]:\n \"\"\"Delete by vector IDs.\n Args:\n ids: List of ids to delete.\n Returns:\n Optional[bool]: True if deletion is successful,\n False otherwise, None if not implemented.\n \"\"\"\n if ids is None:\n raise ValueError(\"No ids provided to delete.\")\n for document_id in ids:\n self.delete_by_document_id(document_id)\n return True\n[docs] def add_texts(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/cassandra.html"} {"id": "bd5d4448290a-2", "text": "return True\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n batch_size: int = 16,\n ttl_seconds: Optional[int] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts (Iterable[str]): Texts to add to the vectorstore.\n metadatas (Optional[List[dict]], optional): Optional list of metadatas.\n ids (Optional[List[str]], optional): Optional list of IDs.\n batch_size (int): Number of concurrent requests to send to the server.\n ttl_seconds (Optional[int], optional): Optional time-to-live\n for the added texts.\n Returns:\n List[str]: List of IDs of the added texts.\n \"\"\"\n _texts = list(texts) # lest it be a generator or something\n if ids is None:\n ids = [uuid.uuid4().hex for _ in _texts]\n if metadatas is None:\n metadatas = [{} for _ in _texts]\n #\n ttl_seconds = ttl_seconds or self.ttl_seconds\n #\n embedding_vectors = self.embedding.embed_documents(_texts)\n #\n for i in range(0, len(_texts), batch_size):\n batch_texts = _texts[i : i + batch_size]\n batch_embedding_vectors = embedding_vectors[i : i + batch_size]\n batch_ids = ids[i : i + batch_size]\n batch_metadatas = metadatas[i : i + batch_size]\n futures = [\n self.table.put_async(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/cassandra.html"} {"id": "bd5d4448290a-3", "text": "futures = [\n self.table.put_async(\n text, embedding_vector, text_id, metadata, ttl_seconds\n )\n for text, embedding_vector, text_id, metadata in zip(\n batch_texts, batch_embedding_vectors, batch_ids, batch_metadatas\n )\n ]\n for future in futures:\n future.result()\n return ids\n # id-returning search facilities\n[docs] def similarity_search_with_score_id_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n ) -> List[Tuple[Document, float, str]]:\n \"\"\"Return docs most similar to embedding vector.\n No support for `filter` query (on metadata) along with vector search.\n Args:\n embedding (str): Embedding to look up documents similar to.\n k (int): Number of 
Documents to return. Defaults to 4.\n Returns:\n List of (Document, score, id), the most similar to the query vector.\n \"\"\"\n hits = self.table.search(\n embedding_vector=embedding,\n top_k=k,\n metric=\"cos\",\n metric_threshold=None,\n )\n # We stick to 'cos' distance as it can be normalized on a 0-1 axis\n # (1=most relevant), as required by this class' contract.\n return [\n (\n Document(\n page_content=hit[\"document\"],\n metadata=hit[\"metadata\"],\n ),\n 0.5 + 0.5 * hit[\"distance\"],\n hit[\"document_id\"],\n )\n for hit in hits\n ]\n[docs] def similarity_search_with_score_id(\n self,\n query: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/cassandra.html"} {"id": "bd5d4448290a-4", "text": "self,\n query: str,\n k: int = 4,\n ) -> List[Tuple[Document, float, str]]:\n embedding_vector = self.embedding.embed_query(query)\n return self.similarity_search_with_score_id_by_vector(\n embedding=embedding_vector,\n k=k,\n )\n # id-unaware search facilities\n[docs] def similarity_search_with_score_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to embedding vector.\n No support for `filter` query (on metadata) along with vector search.\n Args:\n embedding (List[float]): Embedding to look up documents similar to.\n k (int): Number of Documents to return. Defaults to 4.\n Returns:\n List of (Document, score), the most similar to the query vector.\n \"\"\"\n return [\n (doc, score)\n for (doc, score, docId) in self.similarity_search_with_score_id_by_vector(\n embedding=embedding,\n k=k,\n )\n ]\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Document]:\n embedding_vector = self.embedding.embed_query(query)\n return self.similarity_search_by_vector(\n embedding_vector,\n k,\n )\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n **kwargs: Any,\n ) -> List[Document]:\n return [\n doc", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/cassandra.html"} {"id": "bd5d4448290a-5", "text": ") -> List[Document]:\n return [\n doc\n for doc, _ in self.similarity_search_with_score_by_vector(\n embedding,\n k,\n )\n ]\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 4,\n ) -> List[Tuple[Document, float]]:\n embedding_vector = self.embedding.embed_query(query)\n return self.similarity_search_with_score_by_vector(\n embedding_vector,\n k,\n )\n # Even though this is a `_`-method,\n # it is apparently used by VectorSearch parent class\n # in an exposed method (`similarity_search_with_relevance_scores`).\n # So we implement it here as well.\n def _similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n return self.similarity_search_with_score(\n query,\n k,\n )\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/cassandra.html"} {"id": "bd5d4448290a-6", "text": "fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n prefetchHits = self.table.search(\n embedding_vector=embedding,\n top_k=fetch_k,\n metric=\"cos\",\n metric_threshold=None,\n )\n # let the mmr utility pick the *indices* in the above array\n mmrChosenIndices = maximal_marginal_relevance(\n np.array(embedding, dtype=np.float32),\n [pfHit[\"embedding_vector\"] for pfHit in prefetchHits],\n k=k,\n lambda_mult=lambda_mult,\n )\n mmrHits = [\n pfHit\n for pfIndex, pfHit in enumerate(prefetchHits)\n if pfIndex in mmrChosenIndices\n ]\n return [\n Document(\n page_content=hit[\"document\"],\n metadata=hit[\"metadata\"],\n )\n for hit in mmrHits\n ]\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/cassandra.html"} {"id": "bd5d4448290a-7", "text": "k: Number of Documents to return.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Optional.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n embedding_vector = self.embedding.embed_query(query)\n return self.max_marginal_relevance_search_by_vector(\n embedding_vector,\n k,\n fetch_k,\n lambda_mult=lambda_mult,\n )\n[docs] @classmethod\n def from_texts(\n cls: Type[CVST],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n batch_size: int = 16,\n **kwargs: Any,\n ) -> CVST:\n \"\"\"Create a Cassandra vectorstore from raw texts.\n No support for specifying text IDs\n Returns:\n a Cassandra vectorstore.\n \"\"\"\n session: Session = kwargs[\"session\"]\n keyspace: str = kwargs[\"keyspace\"]\n table_name: str = kwargs[\"table_name\"]\n cassandraStore = cls(\n embedding=embedding,\n session=session,\n keyspace=keyspace,\n table_name=table_name,\n )\n cassandraStore.add_texts(texts=texts, metadatas=metadatas)\n return cassandraStore\n[docs] @classmethod\n def from_documents(\n cls: Type[CVST],\n documents: List[Document],\n embedding: Embeddings,\n batch_size: int = 16,\n **kwargs: Any,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/cassandra.html"} {"id": "bd5d4448290a-8", "text": "batch_size: int = 16,\n **kwargs: Any,\n ) -> CVST:\n \"\"\"Create a Cassandra vectorstore from a document list.\n No support for specifying text IDs\n Returns:\n a Cassandra vectorstore.\n \"\"\"\n texts = [doc.page_content for doc in documents]\n metadatas = [doc.metadata for doc in documents]\n session: Session = kwargs[\"session\"]\n keyspace: str = kwargs[\"keyspace\"]\n table_name: str = kwargs[\"table_name\"]\n return 
cls.from_texts(\n texts=texts,\n metadatas=metadatas,\n embedding=embedding,\n session=session,\n keyspace=keyspace,\n table_name=table_name,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/cassandra.html"} {"id": "38fd3af7cb05-0", "text": "Source code for langchain.vectorstores.matching_engine\n\"\"\"Vertex Matching Engine implementation of the vector store.\"\"\"\nfrom __future__ import annotations\nimport json\nimport logging\nimport time\nimport uuid\nfrom typing import TYPE_CHECKING, Any, Iterable, List, Optional, Type\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings import TensorflowHubEmbeddings\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nif TYPE_CHECKING:\n from google.cloud import storage\n from google.cloud.aiplatform import MatchingEngineIndex, MatchingEngineIndexEndpoint\n from google.oauth2.service_account import Credentials\nlogger = logging.getLogger()\n[docs]class MatchingEngine(VectorStore):\n \"\"\"Vertex Matching Engine implementation of the vector store.\n While the embeddings are stored in the Matching Engine, the embedded\n documents will be stored in GCS.\n An existing Index and corresponding Endpoint are preconditions for\n using this module.\n See usage in docs/modules/indexes/vectorstores/examples/matchingengine.ipynb\n Note that this implementation is mostly meant for reading if you are\n planning to do a real time implementation. While reading is a real time\n operation, updating the index takes close to one hour.\"\"\"\n def __init__(\n self,\n project_id: str,\n index: MatchingEngineIndex,\n endpoint: MatchingEngineIndexEndpoint,\n embedding: Embeddings,\n gcs_client: storage.Client,\n gcs_bucket_name: str,\n credentials: Optional[Credentials] = None,\n ):\n \"\"\"Vertex Matching Engine implementation of the vector store.\n While the embeddings are stored in the Matching Engine, the embedded\n documents will be stored in GCS.\n An existing Index and corresponding Endpoint are preconditions for\n using this module.\n See usage in", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/matching_engine.html"} {"id": "38fd3af7cb05-1", "text": "using this module.\n See usage in\n docs/modules/indexes/vectorstores/examples/matchingengine.ipynb.\n Note that this implementation is mostly meant for reading if you are\n planning to do a real time implementation. While reading is a real time\n operation, updating the index takes close to one hour.\n Attributes:\n project_id: The GCS project id.\n index: The created index class. See\n ~:func:`MatchingEngine.from_components`.\n endpoint: The created endpoint class. See\n ~:func:`MatchingEngine.from_components`.\n embedding: A :class:`Embeddings` that will be used for\n embedding the text sent. 
If none is sent, then the\n multilingual Tensorflow Universal Sentence Encoder will be used.\n gcs_client: The GCS client.\n gcs_bucket_name: The GCS bucket name.\n credentials (Optional): Created GCP credentials.\n \"\"\"\n super().__init__()\n self._validate_google_libraries_installation()\n self.project_id = project_id\n self.index = index\n self.endpoint = endpoint\n self.embedding = embedding\n self.gcs_client = gcs_client\n self.credentials = credentials\n self.gcs_bucket_name = gcs_bucket_name\n def _validate_google_libraries_installation(self) -> None:\n \"\"\"Validates that the required Google libraries are installed.\"\"\"\n try:\n from google.cloud import aiplatform, storage # noqa: F401\n from google.oauth2 import service_account # noqa: F401\n except ImportError:\n raise ImportError(\n \"You must run `pip install --upgrade \"\n \"google-cloud-aiplatform google-cloud-storage` \"\n \"to use the MatchingEngine Vectorstore.\"\n )
deployed_index_id=self._get_index_id(),\n queries=embedding_query,\n num_neighbors=k,\n )\n if len(response) == 0:\n return []\n logger.debug(f\"Found {len(response)} matches for the query {query}.\")\n results = []\n # Only the first element is used: `queries` accepts a list, but\n # similarity_search passes a single query, so `match` always returns\n # a list with exactly one element.\n for doc in response[0]:\n page_content = self._download_from_gcs(f\"documents/{doc.id}\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/matching_engine.html"} {"id": "38fd3af7cb05-4", "text": "page_content = self._download_from_gcs(f\"documents/{doc.id}\")\n results.append(Document(page_content=page_content))\n logger.debug(\"Downloaded documents for query.\")\n return results\n def _get_index_id(self) -> str:\n \"\"\"Gets the correct index id for the endpoint.\n Returns:\n The deployed index id if found; raises ValueError otherwise.\n \"\"\"\n for index in self.endpoint.deployed_indexes:\n if index.index == self.index.resource_name:\n return index.id\n raise ValueError(\n f\"No index with id {self.index.resource_name} \"\n f\"deployed on endpoint \"\n f\"{self.endpoint.display_name}.\"\n )\n def _download_from_gcs(self, gcs_location: str) -> str:\n \"\"\"Downloads from GCS in text format.\n Args:\n gcs_location: The location where the file is located.\n Returns:\n The string contents of the file.\n \"\"\"\n bucket = self.gcs_client.get_bucket(self.gcs_bucket_name)\n blob = bucket.blob(gcs_location)\n return blob.download_as_string()\n[docs] @classmethod\n def from_texts(\n cls: Type[\"MatchingEngine\"],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> \"MatchingEngine\":\n \"\"\"Use from components instead.\"\"\"\n raise NotImplementedError(\n \"This method is not implemented. Instead, you should initialize the class\"\n \" with `MatchingEngine.from_components(...)` and then call \"\n \"`add_texts`\"\n )\n[docs] @classmethod\n def from_components(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/matching_engine.html"} {"id": "38fd3af7cb05-5", "text": ")\n[docs] @classmethod\n def from_components(\n cls: Type[\"MatchingEngine\"],\n project_id: str,\n region: str,\n gcs_bucket_name: str,\n index_id: str,\n endpoint_id: str,\n credentials_path: Optional[str] = None,\n embedding: Optional[Embeddings] = None,\n ) -> \"MatchingEngine\":\n \"\"\"Takes the object creation out of the constructor.\n Args:\n project_id: The GCP project id.\n region: The default location making the API calls. 
It must have\n the same location as the GCS bucket and must be regional.\n gcs_bucket_name: The location where the vectors will be stored in\n order for the index to be created.\n index_id: The id of the created index.\n endpoint_id: The id of the created endpoint.\n credentials_path: (Optional) The path of the Google credentials on\n the local file system.\n embedding: The :class:`Embeddings` that will be used for\n embedding the texts.\n Returns:\n A configured MatchingEngine with the texts added to the index.\n \"\"\"\n gcs_bucket_name = cls._validate_gcs_bucket(gcs_bucket_name)\n credentials = cls._create_credentials_from_file(credentials_path)\n index = cls._create_index_by_id(index_id, project_id, region, credentials)\n endpoint = cls._create_endpoint_by_id(\n endpoint_id, project_id, region, credentials\n )\n gcs_client = cls._get_gcs_client(credentials, project_id)\n cls._init_aiplatform(project_id, region, gcs_bucket_name, credentials)\n return cls(\n project_id=project_id,\n index=index,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/matching_engine.html"} {"id": "38fd3af7cb05-6", "text": "return cls(\n project_id=project_id,\n index=index,\n endpoint=endpoint,\n embedding=embedding or cls._get_default_embeddings(),\n gcs_client=gcs_client,\n credentials=credentials,\n gcs_bucket_name=gcs_bucket_name,\n )\n @classmethod\n def _validate_gcs_bucket(cls, gcs_bucket_name: str) -> str:\n \"\"\"Validates the gcs_bucket_name as a bucket name.\n Args:\n gcs_bucket_name: The received bucket uri.\n Returns:\n A valid gcs_bucket_name or throws ValueError if full path is\n provided.\n \"\"\"\n gcs_bucket_name = gcs_bucket_name.replace(\"gs://\", \"\")\n if \"/\" in gcs_bucket_name:\n raise ValueError(\n f\"The argument gcs_bucket_name should only be \"\n f\"the bucket name. 
Received {gcs_bucket_name}\"\n )\n return gcs_bucket_name\n @classmethod\n def _create_credentials_from_file(\n cls, json_credentials_path: Optional[str]\n ) -> Optional[Credentials]:\n \"\"\"Creates credentials for GCP.\n Args:\n json_credentials_path: The path on the file system where the\n credentials are stored.\n Returns:\n An optional of Credentials or None, in which case the default\n will be used.\n \"\"\"\n from google.oauth2 import service_account\n credentials = None\n if json_credentials_path is not None:\n credentials = service_account.Credentials.from_service_account_file(\n json_credentials_path\n )\n return credentials\n @classmethod\n def _create_index_by_id(\n cls, index_id: str, project_id: str, region: str, credentials: \"Credentials\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/matching_engine.html"} {"id": "38fd3af7cb05-7", "text": ") -> MatchingEngineIndex:\n \"\"\"Creates a MatchingEngineIndex object by id.\n Args:\n index_id: The created index id.\n project_id: The project to retrieve index from.\n region: Location to retrieve index from.\n credentials: GCS credentials.\n Returns:\n A configured MatchingEngineIndex.\n \"\"\"\n from google.cloud import aiplatform\n logger.debug(f\"Creating matching engine index with id {index_id}.\")\n return aiplatform.MatchingEngineIndex(\n index_name=index_id,\n project=project_id,\n location=region,\n credentials=credentials,\n )\n @classmethod\n def _create_endpoint_by_id(\n cls, endpoint_id: str, project_id: str, region: str, credentials: \"Credentials\"\n ) -> MatchingEngineIndexEndpoint:\n \"\"\"Creates a MatchingEngineIndexEndpoint object by id.\n Args:\n endpoint_id: The created endpoint id.\n project_id: The project to retrieve index from.\n region: Location to retrieve index from.\n credentials: GCS credentials.\n Returns:\n A configured MatchingEngineIndexEndpoint.\n \"\"\"\n from google.cloud import aiplatform\n logger.debug(f\"Creating endpoint with id {endpoint_id}.\")\n return aiplatform.MatchingEngineIndexEndpoint(\n index_endpoint_name=endpoint_id,\n project=project_id,\n location=region,\n credentials=credentials,\n )\n @classmethod\n def _get_gcs_client(\n cls, credentials: \"Credentials\", project_id: str\n ) -> \"storage.Client\":\n \"\"\"Lazily creates a GCS client.\n Returns:\n A configured GCS client.\n \"\"\"\n from google.cloud import storage", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/matching_engine.html"} {"id": "38fd3af7cb05-8", "text": "A configured GCS client.\n \"\"\"\n from google.cloud import storage\n return storage.Client(credentials=credentials, project=project_id)\n @classmethod\n def _init_aiplatform(\n cls,\n project_id: str,\n region: str,\n gcs_bucket_name: str,\n credentials: \"Credentials\",\n ) -> None:\n \"\"\"Configures the aiplatform library.\n Args:\n project_id: The GCP project id.\n region: The default location making the API calls. 
It must have\n the same location as the GCS bucket and must be regional.\n gcs_bucket_name: GCS staging location.\n credentials: The GCS Credentials object.\n \"\"\"\n from google.cloud import aiplatform\n logger.debug(\n f\"Initializing AI Platform for project {project_id} on \"\n f\"{region} and for {gcs_bucket_name}.\"\n )\n aiplatform.init(\n project=project_id,\n location=region,\n staging_bucket=gcs_bucket_name,\n credentials=credentials,\n )\n @classmethod\n def _get_default_embeddings(cls) -> TensorflowHubEmbeddings:\n \"\"\"This function returns the default embedding.\n Returns:\n Default TensorflowHubEmbeddings to use.\n \"\"\"\n return TensorflowHubEmbeddings()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/matching_engine.html"} {"id": "2de7eaa5e26f-0", "text": "Source code for langchain.vectorstores.annoy\n\"\"\"Wrapper around Annoy vector database.\"\"\"\nfrom __future__ import annotations\nimport os\nimport pickle\nimport uuid\nfrom configparser import ConfigParser\nfrom pathlib import Path\nfrom typing import Any, Callable, Dict, Iterable, List, Optional, Tuple\nimport numpy as np\nfrom langchain.docstore.base import Docstore\nfrom langchain.docstore.document import Document\nfrom langchain.docstore.in_memory import InMemoryDocstore\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nINDEX_METRICS = frozenset([\"angular\", \"euclidean\", \"manhattan\", \"hamming\", \"dot\"])\nDEFAULT_METRIC = \"angular\"\n[docs]def dependable_annoy_import() -> Any:\n \"\"\"Import annoy if available, otherwise raise error.\"\"\"\n try:\n import annoy\n except ImportError:\n raise ValueError(\n \"Could not import annoy python package. \"\n \"Please install it with `pip install --user annoy` \"\n )\n return annoy\n[docs]class Annoy(VectorStore):\n \"\"\"Wrapper around Annoy vector database.\n To use, you should have the ``annoy`` python package installed.\n Example:\n .. 
code-block:: python\n from langchain import Annoy\n db = Annoy(embedding_function, index, docstore, index_to_docstore_id)\n \"\"\"\n def __init__(\n self,\n embedding_function: Callable,\n index: Any,\n metric: str,\n docstore: Docstore,\n index_to_docstore_id: Dict[int, str],\n ):\n \"\"\"Initialize with necessary components.\"\"\"\n self.embedding_function = embedding_function", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"} {"id": "2de7eaa5e26f-1", "text": "):\n \"\"\"Initialize with necessary components.\"\"\"\n self.embedding_function = embedding_function\n self.index = index\n self.metric = metric\n self.docstore = docstore\n self.index_to_docstore_id = index_to_docstore_id\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n raise NotImplementedError(\n \"Annoy does not allow adding new data once the index has been built.\"\n )\n[docs] def process_index_results(\n self, idxs: List[int], dists: List[float]\n ) -> List[Tuple[Document, float]]:\n \"\"\"Turns annoy results into a list of documents and scores.\n Args:\n idxs: List of indices of the documents in the index.\n dists: List of distances of the documents in the index.\n Returns:\n List of Documents and scores.\n \"\"\"\n docs = []\n for idx, dist in zip(idxs, dists):\n _id = self.index_to_docstore_id[idx]\n doc = self.docstore.search(_id)\n if not isinstance(doc, Document):\n raise ValueError(f\"Could not find document for id {_id}, got {doc}\")\n docs.append((doc, dist))\n return docs\n[docs] def similarity_search_with_score_by_vector(\n self, embedding: List[float], k: int = 4, search_k: int = -1\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"} {"id": "2de7eaa5e26f-2", "text": "Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n search_k: inspect up to search_k nodes which defaults\n to n_trees * n if not provided\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n idxs, dists = self.index.get_nns_by_vector(\n embedding, k, search_k=search_k, include_distances=True\n )\n return self.process_index_results(idxs, dists)\n[docs] def similarity_search_with_score_by_index(\n self, docstore_index: int, k: int = 4, search_k: int = -1\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n search_k: inspect up to search_k nodes which defaults\n to n_trees * n if not provided\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n idxs, dists = self.index.get_nns_by_item(\n docstore_index, k, search_k=search_k, include_distances=True\n )\n return self.process_index_results(idxs, dists)\n[docs] def similarity_search_with_score(\n self, query: str, k: int = 4, search_k: int = -1\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. 
Defaults to 4.\n search_k: inspect up to search_k nodes which defaults\n to n_trees * n if not provided\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n embedding = self.embedding_function(query)\n docs = self.similarity_search_with_score_by_vector(embedding, k, search_k)\n return docs\n[docs] def similarity_search_by_vector(\n self, embedding: List[float], k: int = 4, search_k: int = -1, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n search_k: inspect up to search_k nodes which defaults\n to n_trees * n if not provided\n Returns:\n List of Documents most similar to the embedding.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score_by_vector(\n embedding, k, search_k\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] def similarity_search_by_index(\n self, docstore_index: int, k: int = 4, search_k: int = -1, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to docstore_index.\n Args:\n docstore_index: Index of document in docstore\n k: Number of Documents to return. Defaults to 4.\n search_k: inspect up to search_k nodes which defaults\n to n_trees * n if not provided\n Returns:\n List of Documents most similar to the embedding.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"} {"id": "2de7eaa5e26f-4", "text": "Returns:\n List of Documents most similar to the embedding.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score_by_index(\n docstore_index, k, search_k\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] def similarity_search(\n self, query: str, k: int = 4, search_k: int = -1, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n search_k: inspect up to search_k nodes which defaults\n to n_trees * n if not provided\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(query, k, search_k)\n return [doc for doc, _ in docs_and_scores]\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n k: Number of Documents to return. 
Defaults to 4.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"} {"id": "2de7eaa5e26f-5", "text": "of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n idxs = self.index.get_nns_by_vector(\n embedding, fetch_k, search_k=-1, include_distances=False\n )\n embeddings = [self.index.get_item_vector(i) for i in idxs]\n mmr_selected = maximal_marginal_relevance(\n np.array([embedding], dtype=np.float32),\n embeddings,\n k=k,\n lambda_mult=lambda_mult,\n )\n # ignore the -1's if not enough docs are returned/indexed\n selected_indices = [idxs[i] for i in mmr_selected if i != -1]\n docs = []\n for i in selected_indices:\n _id = self.index_to_docstore_id[i]\n doc = self.docstore.search(_id)\n if not isinstance(doc, Document):\n raise ValueError(f\"Could not find document for id {_id}, got {doc}\")\n docs.append(doc)\n return docs\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"} {"id": "2de7eaa5e26f-6", "text": "k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n embedding = self.embedding_function(query)\n docs = self.max_marginal_relevance_search_by_vector(\n embedding, k, fetch_k, lambda_mult=lambda_mult\n )\n return docs\n @classmethod\n def __from(\n cls,\n texts: List[str],\n embeddings: List[List[float]],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n metric: str = DEFAULT_METRIC,\n trees: int = 100,\n n_jobs: int = -1,\n **kwargs: Any,\n ) -> Annoy:\n if metric not in INDEX_METRICS:\n raise ValueError(\n (\n f\"Unsupported distance metric: {metric}. 
\"\n f\"Expected one of {list(INDEX_METRICS)}\"\n )\n )\n annoy = dependable_annoy_import()\n if not embeddings:\n raise ValueError(\"embeddings must be provided to build AnnoyIndex\")\n f = len(embeddings[0])\n index = annoy.AnnoyIndex(f, metric=metric)\n for i, emb in enumerate(embeddings):\n index.add_item(i, emb)\n index.build(trees, n_jobs=n_jobs)\n documents = []\n for i, text in enumerate(texts):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"} {"id": "2de7eaa5e26f-7", "text": "documents = []\n for i, text in enumerate(texts):\n metadata = metadatas[i] if metadatas else {}\n documents.append(Document(page_content=text, metadata=metadata))\n index_to_id = {i: str(uuid.uuid4()) for i in range(len(documents))}\n docstore = InMemoryDocstore(\n {index_to_id[i]: doc for i, doc in enumerate(documents)}\n )\n return cls(embedding.embed_query, index, metric, docstore, index_to_id)\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n metric: str = DEFAULT_METRIC,\n trees: int = 100,\n n_jobs: int = -1,\n **kwargs: Any,\n ) -> Annoy:\n \"\"\"Construct Annoy wrapper from raw documents.\n Args:\n texts: List of documents to index.\n embedding: Embedding function to use.\n metadatas: List of metadata dictionaries to associate with documents.\n metric: Metric to use for indexing. Defaults to \"angular\".\n trees: Number of trees to use for indexing. Defaults to 100.\n n_jobs: Number of jobs to use for indexing. Defaults to -1.\n This is a user friendly interface that:\n 1. Embeds documents.\n 2. Creates an in memory docstore\n 3. Initializes the Annoy database\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python\n from langchain import Annoy\n from langchain.embeddings import OpenAIEmbeddings", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"} {"id": "2de7eaa5e26f-8", "text": "from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n index = Annoy.from_texts(texts, embeddings)\n \"\"\"\n embeddings = embedding.embed_documents(texts)\n return cls.__from(\n texts, embeddings, embedding, metadatas, metric, trees, n_jobs, **kwargs\n )\n[docs] @classmethod\n def from_embeddings(\n cls,\n text_embeddings: List[Tuple[str, List[float]]],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n metric: str = DEFAULT_METRIC,\n trees: int = 100,\n n_jobs: int = -1,\n **kwargs: Any,\n ) -> Annoy:\n \"\"\"Construct Annoy wrapper from embeddings.\n Args:\n text_embeddings: List of tuples of (text, embedding)\n embedding: Embedding function to use.\n metadatas: List of metadata dictionaries to associate with documents.\n metric: Metric to use for indexing. Defaults to \"angular\".\n trees: Number of trees to use for indexing. Defaults to 100.\n n_jobs: Number of jobs to use for indexing. Defaults to -1\n This is a user friendly interface that:\n 1. Creates an in memory docstore with provided embeddings\n 2. Initializes the Annoy database\n This is intended to be a quick way to get started.\n Example:\n .. 
code-block:: python\n from langchain import Annoy\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n text_embeddings = embeddings.embed_documents(texts)\n text_embedding_pairs = list(zip(texts, text_embeddings))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"} {"id": "2de7eaa5e26f-9", "text": "text_embedding_pairs = list(zip(texts, text_embeddings))\n db = Annoy.from_embeddings(text_embedding_pairs, embeddings)\n \"\"\"\n texts = [t[0] for t in text_embeddings]\n embeddings = [t[1] for t in text_embeddings]\n return cls.__from(\n texts, embeddings, embedding, metadatas, metric, trees, n_jobs, **kwargs\n )\n[docs] def save_local(self, folder_path: str, prefault: bool = False) -> None:\n \"\"\"Save Annoy index, docstore, and index_to_docstore_id to disk.\n Args:\n folder_path: folder path to save index, docstore,\n and index_to_docstore_id to.\n prefault: Whether to pre-load the index into memory.\n \"\"\"\n path = Path(folder_path)\n os.makedirs(path, exist_ok=True)\n # save index, index config, docstore and index_to_docstore_id\n config_object = ConfigParser()\n config_object[\"ANNOY\"] = {\n \"f\": self.index.f,\n \"metric\": self.metric,\n }\n self.index.save(str(path / \"index.annoy\"), prefault=prefault)\n with open(path / \"index.pkl\", \"wb\") as file:\n pickle.dump((self.docstore, self.index_to_docstore_id, config_object), file)\n[docs] @classmethod\n def load_local(\n cls,\n folder_path: str,\n embeddings: Embeddings,\n ) -> Annoy:\n \"\"\"Load Annoy index, docstore, and index_to_docstore_id from disk.\n Args:\n folder_path: folder path to load index, docstore,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"} {"id": "2de7eaa5e26f-10", "text": "Args:\n folder_path: folder path to load index, docstore,\n and index_to_docstore_id from.\n embeddings: Embeddings to use when generating queries.\n \"\"\"\n path = Path(folder_path)\n # load index separately since it is not picklable\n annoy = dependable_annoy_import()\n # load docstore and index_to_docstore_id\n with open(path / \"index.pkl\", \"rb\") as file:\n docstore, index_to_docstore_id, config_object = pickle.load(file)\n f = int(config_object[\"ANNOY\"][\"f\"])\n metric = config_object[\"ANNOY\"][\"metric\"]\n index = annoy.AnnoyIndex(f, metric=metric)\n index.load(str(path / \"index.annoy\"))\n return cls(\n embeddings.embed_query, index, metric, docstore, index_to_docstore_id\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"} {"id": "7ad591e60f99-0", "text": "Source code for langchain.vectorstores.supabase\nfrom __future__ import annotations\nimport uuid\nfrom itertools import repeat\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Iterable,\n List,\n Optional,\n Tuple,\n Type,\n Union,\n)\nimport numpy as np\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nif TYPE_CHECKING:\n import supabase\n[docs]class SupabaseVectorStore(VectorStore):\n \"\"\"VectorStore for a Supabase postgres database. Assumes you have the `pgvector`\n extension installed and a `match_documents` (or similar) function. 
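A minimal usage sketch (assuming an existing Supabase project whose `documents` table and `match_documents` function follow the LangChain setup guide; the environment variable names below are placeholders):
.. code-block:: python
    import os
    from supabase import create_client
    from langchain.embeddings.openai import OpenAIEmbeddings
    from langchain.vectorstores import SupabaseVectorStore
    # Connect with a service-role key so inserts are permitted.
    supabase_client = create_client(
        os.environ["SUPABASE_URL"], os.environ["SUPABASE_SERVICE_KEY"]
    )
    vector_store = SupabaseVectorStore(
        client=supabase_client,
        embedding=OpenAIEmbeddings(),
        table_name="documents",
        query_name="match_documents",
    )
    docs = vector_store.similarity_search("How is the weather today?")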
For more details:\n https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase\n You can implement your own `match_documents` function in order to limit the search\n space to a subset of documents based on your own authorization or business logic.\n Note that the Supabase Python client does not yet support async operations.\n If you'd like to use `max_marginal_relevance_search`, please review the instructions\n below on modifying the `match_documents` function to return matched embeddings.\n \"\"\"\n _client: supabase.client.Client\n # This is the embedding function. Don't confuse with the embedding vectors.\n # We should perhaps rename the underlying Embedding base class to EmbeddingFunction\n # or something\n _embedding: Embeddings\n table_name: str\n query_name: str\n def __init__(\n self,\n client: supabase.client.Client,\n embedding: Embeddings,\n table_name: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/supabase.html"} {"id": "7ad591e60f99-1", "text": "embedding: Embeddings,\n table_name: str,\n query_name: Union[str, None] = None,\n ) -> None:\n \"\"\"Initialize with supabase client.\"\"\"\n try:\n import supabase # noqa: F401\n except ImportError:\n raise ValueError(\n \"Could not import supabase python package. \"\n \"Please install it with `pip install supabase`.\"\n )\n self._client = client\n self._embedding: Embeddings = embedding\n self.table_name = table_name or \"documents\"\n self.query_name = query_name or \"match_documents\"\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict[Any, Any]]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n ids = ids or [str(uuid.uuid4()) for _ in texts]\n docs = self._texts_to_documents(texts, metadatas)\n vectors = self._embedding.embed_documents(list(texts))\n return self.add_vectors(vectors, docs, ids)\n[docs] @classmethod\n def from_texts(\n cls: Type[\"SupabaseVectorStore\"],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n client: Optional[supabase.client.Client] = None,\n table_name: Optional[str] = \"documents\",\n query_name: Union[str, None] = \"match_documents\",\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> \"SupabaseVectorStore\":\n \"\"\"Return VectorStore initialized from texts and embeddings.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/supabase.html"} {"id": "7ad591e60f99-2", "text": "\"\"\"Return VectorStore initialized from texts and embeddings.\"\"\"\n if not client:\n raise ValueError(\"Supabase client is required.\")\n if not table_name:\n raise ValueError(\"Supabase document table_name is required.\")\n embeddings = embedding.embed_documents(texts)\n ids = [str(uuid.uuid4()) for _ in texts]\n docs = cls._texts_to_documents(texts, metadatas)\n _ids = cls._add_vectors(client, table_name, embeddings, docs, ids)\n return cls(\n client=client,\n embedding=embedding,\n table_name=table_name,\n query_name=query_name,\n )\n[docs] def add_vectors(\n self,\n vectors: List[List[float]],\n documents: List[Document],\n ids: List[str],\n ) -> List[str]:\n return self._add_vectors(self._client, self.table_name, vectors, documents, ids)\n[docs] def similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n vectors = self._embedding.embed_documents([query])\n return self.similarity_search_by_vector(vectors[0], k)\n[docs] def similarity_search_by_vector(\n self, embedding: 
List[float], k: int = 4, **kwargs: Any\n ) -> List[Document]:\n result = self.similarity_search_by_vector_with_relevance_scores(embedding, k)\n documents = [doc for doc, _ in result]\n return documents\n[docs] def similarity_search_with_relevance_scores(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Tuple[Document, float]]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/supabase.html"} {"id": "7ad591e60f99-3", "text": ") -> List[Tuple[Document, float]]:\n vectors = self._embedding.embed_documents([query])\n return self.similarity_search_by_vector_with_relevance_scores(vectors[0], k)\n[docs] def similarity_search_by_vector_with_relevance_scores(\n self, query: List[float], k: int\n ) -> List[Tuple[Document, float]]:\n match_documents_params = dict(query_embedding=query, match_count=k)\n res = self._client.rpc(self.query_name, match_documents_params).execute()\n match_result = [\n (\n Document(\n metadata=search.get(\"metadata\", {}),  # type: ignore\n page_content=search.get(\"content\", \"\"),\n ),\n search.get(\"similarity\", 0.0),\n )\n for search in res.data\n if search.get(\"content\")\n ]\n return match_result\n[docs] def similarity_search_by_vector_returning_embeddings(\n self, query: List[float], k: int\n ) -> List[Tuple[Document, float, np.ndarray[np.float32, Any]]]:\n match_documents_params = dict(query_embedding=query, match_count=k)\n res = self._client.rpc(self.query_name, match_documents_params).execute()\n match_result = [\n (\n Document(\n metadata=search.get(\"metadata\", {}),  # type: ignore\n page_content=search.get(\"content\", \"\"),\n ),\n search.get(\"similarity\", 0.0),\n # Supabase returns a vector type as its string representation (!).\n # This is a hack to convert the string to a numpy array.\n np.fromstring(\n search.get(\"embedding\", \"\").strip(\"[]\"), np.float32, sep=\",\"\n ),\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/supabase.html"} {"id": "7ad591e60f99-4", "text": "),\n )\n for search in res.data\n if search.get(\"content\")\n ]\n return match_result\n @staticmethod\n def _texts_to_documents(\n texts: Iterable[str],\n metadatas: Optional[Iterable[dict[Any, Any]]] = None,\n ) -> List[Document]:\n \"\"\"Return list of Documents from list of texts and metadatas.\"\"\"\n if metadatas is None:\n metadatas = repeat({})\n docs = [\n Document(page_content=text, metadata=metadata)\n for text, metadata in zip(texts, metadatas)\n ]\n return docs\n @staticmethod\n def _add_vectors(\n client: supabase.client.Client,\n table_name: str,\n vectors: List[List[float]],\n documents: List[Document],\n ids: List[str],\n ) -> List[str]:\n \"\"\"Add vectors to Supabase table.\"\"\"\n rows: List[dict[str, Any]] = [\n {\n \"id\": ids[idx],\n \"content\": documents[idx].page_content,\n \"embedding\": embedding,\n \"metadata\": documents[idx].metadata,  # type: ignore\n }\n for idx, embedding in enumerate(vectors)\n ]\n # According to the SupabaseVectorStore JS implementation, the best chunk size\n # is 500\n chunk_size = 500\n id_list: List[str] = []\n for i in range(0, len(rows), chunk_size):\n chunk = rows[i : i + chunk_size]\n result = client.from_(table_name).upsert(chunk).execute()  # type: ignore\n if len(result.data) == 0:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/supabase.html"} {"id": "7ad591e60f99-5", "text": "if len(result.data) == 0:\n raise Exception(\"Error inserting: No rows added\")\n # VectorStore.add_vectors returns ids as 
strings\n ids = [str(i.get(\"id\")) for i in result.data if i.get(\"id\")]\n id_list.extend(ids)\n return id_list\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n result = self.similarity_search_by_vector_returning_embeddings(\n embedding, fetch_k\n )\n matched_documents = [doc_tuple[0] for doc_tuple in result]\n matched_embeddings = [doc_tuple[2] for doc_tuple in result]\n mmr_selected = maximal_marginal_relevance(\n np.array([embedding], dtype=np.float32),\n matched_embeddings,\n k=k,\n lambda_mult=lambda_mult,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/supabase.html"} {"id": "7ad591e60f99-6", "text": "matched_embeddings,\n k=k,\n lambda_mult=lambda_mult,\n )\n filtered_documents = [matched_documents[i] for i in mmr_selected]\n return filtered_documents\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n `max_marginal_relevance_search` requires that `query_name` returns matched\n embeddings alongside the match documents. 
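A minimal wiring sketch, under assumptions: the `match_documents_embeddings` function from the SQL shown next is already created in the database, and the Supabase URL/key environment variable names below are illustrative rather than required by this module.

```python
# Illustrative MMR setup; env var names, table name, and the
# match_documents_embeddings function (see the SQL below) are assumptions.
import os

import supabase
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SupabaseVectorStore

client = supabase.create_client(
    os.environ["SUPABASE_URL"], os.environ["SUPABASE_SERVICE_KEY"]
)
store = SupabaseVectorStore(
    client=client,
    embedding=OpenAIEmbeddings(),
    table_name="documents",
    query_name="match_documents_embeddings",  # must also return the embedding column
)
docs = store.max_marginal_relevance_search("renewable energy", k=4, fetch_k=20)
```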
The following function\n demonstrates how to do this:\n ```sql\n CREATE FUNCTION match_documents_embeddings(query_embedding vector(1536),\n match_count int)\n RETURNS TABLE(\n id uuid,\n content text,\n metadata jsonb,\n embedding vector(1536),\n similarity float)\n LANGUAGE plpgsql\n AS $$\n #variable_conflict use_column\n BEGIN\n RETURN query\n SELECT\n id,\n content,\n metadata,\n embedding,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/supabase.html"} {"id": "7ad591e60f99-7", "text": "SELECT\n id,\n content,\n metadata,\n embedding,\n 1 - (docstore.embedding <=> query_embedding) AS similarity\n FROM\n docstore\n ORDER BY\n docstore.embedding <=> query_embedding\n LIMIT match_count;\n END;\n $$;\n ```\n \"\"\"\n embedding = self._embedding.embed_documents([query])\n docs = self.max_marginal_relevance_search_by_vector(\n embedding[0], k, fetch_k, lambda_mult=lambda_mult\n )\n return docs\n[docs] def delete(self, ids: Optional[List[str]] = None, **kwargs: Any) -> None:\n \"\"\"Delete by vector IDs.\n Args:\n ids: List of ids to delete.\n \"\"\"\n if ids is None:\n raise ValueError(\"No ids provided to delete.\")\n rows: List[dict[str, Any]] = [\n {\n \"id\": id,\n }\n for id in ids\n ]\n # TODO: Check if this can be done in bulk\n for row in rows:\n self._client.from_(self.table_name).delete().eq(\"id\", row[\"id\"]).execute()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/supabase.html"} {"id": "5edc3a48e330-0", "text": "Source code for langchain.vectorstores.vectara\n\"\"\"Wrapper around Vectara vector database.\"\"\"\nfrom __future__ import annotations\nimport json\nimport logging\nimport os\nfrom hashlib import md5\nfrom typing import Any, Iterable, List, Optional, Tuple, Type\nimport requests\nfrom pydantic import Field\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import Document\nfrom langchain.vectorstores.base import VectorStore, VectorStoreRetriever\n[docs]class Vectara(VectorStore):\n \"\"\"Implementation of Vector Store using Vectara.\n See (https://vectara.com).\n Example:\n .. 
code-block:: python\n from langchain.vectorstores import Vectara\n vectorstore = Vectara(\n vectara_customer_id=vectara_customer_id,\n vectara_corpus_id=vectara_corpus_id,\n vectara_api_key=vectara_api_key\n )\n \"\"\"\n def __init__(\n self,\n vectara_customer_id: Optional[str] = None,\n vectara_corpus_id: Optional[str] = None,\n vectara_api_key: Optional[str] = None,\n ):\n \"\"\"Initialize with Vectara API.\"\"\"\n self._vectara_customer_id = vectara_customer_id or os.environ.get(\n \"VECTARA_CUSTOMER_ID\"\n )\n self._vectara_corpus_id = vectara_corpus_id or os.environ.get(\n \"VECTARA_CORPUS_ID\"\n )\n self._vectara_api_key = vectara_api_key or os.environ.get(\"VECTARA_API_KEY\")\n if (\n self._vectara_customer_id is None\n or self._vectara_corpus_id is None\n or self._vectara_api_key is None\n ):\n logging.warning(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"} {"id": "5edc3a48e330-1", "text": "or self._vectara_api_key is None\n ):\n logging.warning(\n \"Can't find Vectara credentials, customer_id or corpus_id in \"\n \"environment.\"\n )\n else:\n logging.debug(f\"Using corpus id {self._vectara_corpus_id}\")\n self._session = requests.Session()  # to reuse connections\n adapter = requests.adapters.HTTPAdapter(max_retries=3)\n self._session.mount(\"http://\", adapter)\n self._session.mount(\"https://\", adapter)\n def _get_post_headers(self) -> dict:\n \"\"\"Returns headers that should be attached to each post request.\"\"\"\n return {\n \"x-api-key\": self._vectara_api_key,\n \"customer-id\": self._vectara_customer_id,\n \"Content-Type\": \"application/json\",\n }\n def _delete_doc(self, doc_id: str) -> bool:\n \"\"\"\n Delete a document from the Vectara corpus.\n Args:\n doc_id (str): ID of the document to delete.\n Returns:\n bool: True if deletion was successful, False otherwise.\n \"\"\"\n body = {\n \"customer_id\": self._vectara_customer_id,\n \"corpus_id\": self._vectara_corpus_id,\n \"document_id\": doc_id,\n }\n response = self._session.post(\n \"https://api.vectara.io/v1/delete-doc\",\n data=json.dumps(body),\n verify=True,\n headers=self._get_post_headers(),\n )\n if response.status_code != 200:\n logging.error(\n f\"Delete request failed for doc_id = {doc_id} with status code \"\n f\"{response.status_code}, reason {response.reason}, text \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"} {"id": "5edc3a48e330-2", "text": "f\"{response.status_code}, reason {response.reason}, text \"\n f\"{response.text}\"\n )\n return False\n return True\n def _index_doc(self, doc: dict) -> str:\n request: dict[str, Any] = {}\n request[\"customer_id\"] = self._vectara_customer_id\n request[\"corpus_id\"] = self._vectara_corpus_id\n request[\"document\"] = doc\n response = self._session.post(\n headers=self._get_post_headers(),\n url=\"https://api.vectara.io/v1/core/index\",\n data=json.dumps(request),\n timeout=30,\n verify=True,\n )\n status_code = response.status_code\n result = response.json()\n status_str = result[\"status\"][\"code\"] if \"status\" in result else None\n if status_code == 409 or status_str and (status_str == \"ALREADY_EXISTS\"):\n return \"E_ALREADY_EXISTS\"\n elif status_str and (status_str == \"FORBIDDEN\"):\n return \"E_NO_PERMISSIONS\"\n else:\n return \"E_SUCCEEDED\"\n[docs] def add_files(\n self,\n files_list: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"\n Vectara provides a way to add documents directly 
via our API where\n pre-processing and chunking occurs internally in an optimal way\n This method provides a way to use that API in LangChain\n Args:\n files_list: Iterable of strings, each representing a local file path.\n Files could be text, HTML, PDF, markdown, doc/docx, ppt/pptx, etc.\n see API docs for full list", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"} {"id": "5edc3a48e330-3", "text": "see API docs for full list\n metadatas: Optional list of metadatas associated with each file\n Returns:\n List of ids associated with each of the files indexed\n \"\"\"\n doc_ids = []\n for inx, file in enumerate(files_list):\n if not os.path.exists(file):\n logging.error(f\"File {file} does not exist, skipping\")\n continue\n md = metadatas[inx] if metadatas else {}\n files: dict = {\n \"file\": (file, open(file, \"rb\")),\n \"doc_metadata\": json.dumps(md),\n }\n headers = self._get_post_headers()\n headers.pop(\"Content-Type\")\n response = self._session.post(\n f\"https://api.vectara.io/upload?c={self._vectara_customer_id}&o={self._vectara_corpus_id}&d=True\",\n files=files,\n verify=True,\n headers=headers,\n )\n if response.status_code == 409:\n doc_id = response.json()[\"document\"][\"documentId\"]\n logging.info(\n f\"File {file} already exists on Vectara (doc_id={doc_id}), skipping\"\n )\n elif response.status_code == 200:\n doc_id = response.json()[\"document\"][\"documentId\"]\n doc_ids.append(doc_id)\n else:\n logging.info(f\"Error indexing file {file}: {response.json()}\")\n return doc_ids\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n doc_metadata: Optional[dict] = None,\n **kwargs: Any,\n ) -> List[str]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"} {"id": "5edc3a48e330-4", "text": "**kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n doc_metadata: optional metadata for the document\n This function indexes all the input text strings in the Vectara corpus as a\n single Vectara document, where each input text is considered a \"part\" and the\n metadata are associated with each part.\n if 'doc_metadata' is provided, it is associated with the Vectara document.\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n doc_hash = md5()\n for t in texts:\n doc_hash.update(t.encode())\n doc_id = doc_hash.hexdigest()\n if metadatas is None:\n metadatas = [{} for _ in texts]\n if doc_metadata:\n doc_metadata[\"source\"] = \"langchain\"\n else:\n doc_metadata = {\"source\": \"langchain\"}\n doc = {\n \"document_id\": doc_id,\n \"metadataJson\": json.dumps(doc_metadata),\n \"parts\": [\n {\"text\": text, \"metadataJson\": json.dumps(md)}\n for text, md in zip(texts, metadatas)\n ],\n }\n success_str = self._index_doc(doc)\n if success_str == \"E_ALREADY_EXISTS\":\n self._delete_doc(doc_id)\n self._index_doc(doc)\n elif success_str == \"E_NO_PERMISSIONS\":\n print(\n \"\"\"No permissions to add document to Vectara. 
\n Check your corpus ID, customer ID and API key\"\"\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"} {"id": "5edc3a48e330-5", "text": "Check your corpus ID, customer ID and API key\"\"\"\n )\n return [doc_id]\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 5,\n lambda_val: float = 0.025,\n filter: Optional[str] = None,\n n_sentence_context: int = 0,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return Vectara documents most similar to query, along with scores.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 5.\n lambda_val: lexical match parameter for hybrid search.\n filter: Dictionary of argument(s) to filter on metadata. For example a\n filter can be \"doc.rating > 3.0 and part.lang = 'deu'\"} see\n https://docs.vectara.com/docs/search-apis/sql/filter-overview\n for more details.\n n_sentence_context: number of sentences before/after the matching segment\n to add\n Returns:\n List of Documents most similar to the query and score for each.\n \"\"\"\n data = json.dumps(\n {\n \"query\": [\n {\n \"query\": query,\n \"start\": 0,\n \"num_results\": k,\n \"context_config\": {\n \"sentences_before\": n_sentence_context,\n \"sentences_after\": n_sentence_context,\n },\n \"corpus_key\": [\n {\n \"customer_id\": self._vectara_customer_id,\n \"corpus_id\": self._vectara_corpus_id,\n \"metadataFilter\": filter,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"} {"id": "5edc3a48e330-6", "text": "\"metadataFilter\": filter,\n \"lexical_interpolation_config\": {\"lambda\": lambda_val},\n }\n ],\n }\n ]\n }\n )\n response = self._session.post(\n headers=self._get_post_headers(),\n url=\"https://api.vectara.io/v1/query\",\n data=data,\n timeout=10,\n )\n if response.status_code != 200:\n logging.error(\n \"Query failed %s\",\n f\"(code {response.status_code}, reason {response.reason}, details \"\n f\"{response.text})\",\n )\n return []\n result = response.json()\n responses = result[\"responseSet\"][0][\"response\"]\n vectara_default_metadata = [\"lang\", \"len\", \"offset\"]\n docs = [\n (\n Document(\n page_content=x[\"text\"],\n metadata={\n m[\"name\"]: m[\"value\"]\n for m in x[\"metadata\"]\n if m[\"name\"] not in vectara_default_metadata\n },\n ),\n x[\"score\"],\n )\n for x in responses\n ]\n return docs\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 5,\n lambda_val: float = 0.025,\n filter: Optional[str] = None,\n n_sentence_context: int = 0,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return Vectara documents most similar to query, along with scores.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 5.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"} {"id": "5edc3a48e330-7", "text": "k: Number of Documents to return. Defaults to 5.\n filter: Dictionary of argument(s) to filter on metadata. 
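A hedged usage sketch of these parameters; credentials are read from the VECTARA_CUSTOMER_ID, VECTARA_CORPUS_ID, and VECTARA_API_KEY environment variables as described above, and the query and filter values are illustrative only.

```python
# Illustrative hybrid query with a metadata filter and widened context.
from langchain.vectorstores import Vectara

vectara = Vectara()  # credentials taken from the environment
results = vectara.similarity_search_with_score(
    "what is our refund policy?",
    k=5,
    lambda_val=0.025,            # lexical interpolation weight for hybrid search
    filter="part.lang = 'eng'",  # assumed filter expression, syntax per the docs link
    n_sentence_context=2,        # include two sentences around each match
)
for doc, score in results:
    print(score, doc.page_content[:80])
```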
For example a\n filter can be \"doc.rating > 3.0 and part.lang = 'deu'\"} see\n https://docs.vectara.com/docs/search-apis/sql/filter-overview for more\n details.\n n_sentence_context: number of sentences before/after the matching segment\n to add\n Returns:\n List of Documents most similar to the query\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(\n query,\n k=k,\n lambda_val=lambda_val,\n filter=filter,\n n_sentence_context=n_sentence_context,\n **kwargs,\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] @classmethod\n def from_texts(\n cls: Type[Vectara],\n texts: List[str],\n embedding: Optional[Embeddings] = None,\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> Vectara:\n \"\"\"Construct Vectara wrapper from raw documents.\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python\n from langchain import Vectara\n vectara = Vectara.from_texts(\n texts,\n vectara_customer_id=customer_id,\n vectara_corpus_id=corpus_id,\n vectara_api_key=api_key,\n )\n \"\"\"\n # Note: Vectara generates its own embeddings, so we ignore the provided\n # embeddings (required by interface)\n doc_metadata = kwargs.pop(\"doc_metadata\", {})\n vectara = cls(**kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"} {"id": "5edc3a48e330-8", "text": "vectara = cls(**kwargs)\n vectara.add_texts(texts, metadatas, doc_metadata=doc_metadata, **kwargs)\n return vectara\n[docs] @classmethod\n def from_files(\n cls: Type[Vectara],\n files: List[str],\n embedding: Optional[Embeddings] = None,\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> Vectara:\n \"\"\"Construct Vectara wrapper from raw documents.\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python\n from langchain import Vectara\n vectara = Vectara.from_files(\n files_list,\n vectara_customer_id=customer_id,\n vectara_corpus_id=corpus_id,\n vectara_api_key=api_key,\n )\n \"\"\"\n # Note: Vectara generates its own embeddings, so we ignore the provided\n # embeddings (required by interface)\n vectara = cls(**kwargs)\n vectara.add_files(files, metadatas)\n return vectara\n[docs] def as_retriever(self, **kwargs: Any) -> VectaraRetriever:\n return VectaraRetriever(vectorstore=self, **kwargs)\n[docs]class VectaraRetriever(VectorStoreRetriever):\n vectorstore: Vectara\n search_kwargs: dict = Field(\n default_factory=lambda: {\n \"lambda_val\": 0.025,\n \"k\": 5,\n \"filter\": \"\",\n \"n_sentence_context\": \"0\",\n }\n )\n \"\"\"Search params.\n k: Number of Documents to return. Defaults to 5.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"} {"id": "5edc3a48e330-9", "text": "k: Number of Documents to return. Defaults to 5.\n lambda_val: lexical match parameter for hybrid search.\n filter: Dictionary of argument(s) to filter on metadata. 
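These defaults can be overridden when constructing the retriever. A small sketch, assuming the same environment-based credentials; the filter expression is an assumption following the syntax documented here.

```python
# Illustrative retriever configuration over Vectara's hybrid search.
from langchain.vectorstores import Vectara

vectara = Vectara()  # credentials taken from the environment
retriever = vectara.as_retriever(
    search_kwargs={
        "k": 8,
        "lambda_val": 0.1,             # lean further toward neural matching
        "filter": "doc.rating > 3.0",  # assumed metadata filter expression
        "n_sentence_context": 2,
    }
)
docs = retriever.get_relevant_documents("pricing tiers")
```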
For example a\n filter can be \"doc.rating > 3.0 and part.lang = 'deu'\"} see\n https://docs.vectara.com/docs/search-apis/sql/filter-overview\n for more details.\n n_sentence_context: number of sentences before/after the matching segment to add\n \"\"\"\n[docs] def add_texts(\n self,\n texts: List[str],\n metadatas: Optional[List[dict]] = None,\n doc_metadata: Optional[dict] = {},\n ) -> None:\n \"\"\"Add text to the Vectara vectorstore.\n Args:\n texts (List[str]): The text\n metadatas (List[dict]): Metadata dicts, must line up with existing store\n \"\"\"\n self.vectorstore.add_texts(texts, metadatas, doc_metadata)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"} {"id": "d9f6a9ba3a6d-0", "text": "Source code for langchain.vectorstores.elastic_vector_search\n\"\"\"Wrapper around Elasticsearch vector database.\"\"\"\nfrom __future__ import annotations\nimport uuid\nfrom abc import ABC\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Dict,\n Iterable,\n List,\n Mapping,\n Optional,\n Tuple,\n Union,\n)\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_env\nfrom langchain.vectorstores.base import VectorStore\nif TYPE_CHECKING:\n from elasticsearch import Elasticsearch\ndef _default_text_mapping(dim: int) -> Dict:\n return {\n \"properties\": {\n \"text\": {\"type\": \"text\"},\n \"vector\": {\"type\": \"dense_vector\", \"dims\": dim},\n }\n }\ndef _default_script_query(query_vector: List[float], filter: Optional[dict]) -> Dict:\n if filter:\n ((key, value),) = filter.items()\n filter = {\"match\": {f\"metadata.{key}.keyword\": f\"{value}\"}}\n else:\n filter = {\"match_all\": {}}\n return {\n \"script_score\": {\n \"query\": filter,\n \"script\": {\n \"source\": \"cosineSimilarity(params.query_vector, 'vector') + 1.0\",\n \"params\": {\"query_vector\": query_vector},\n },\n }\n }\n# ElasticVectorSearch is a concrete implementation of the abstract base class\n# VectorStore, which defines a common interface for all vector database\n# implementations. By inheriting from the ABC class, ElasticVectorSearch can be\n# defined as an abstract base class itself, allowing the creation of subclasses with", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} {"id": "d9f6a9ba3a6d-1", "text": "# defined as an abstract base class itself, allowing the creation of subclasses with\n# their own specific implementations. If you plan to subclass ElasticVectorSearch,\n# you can inherit from it and define your own implementation of the necessary methods\n# and attributes.\n[docs]class ElasticVectorSearch(VectorStore, ABC):\n \"\"\"Wrapper around Elasticsearch as a vector database.\n To connect to an Elasticsearch instance that does not require\n login credentials, pass the Elasticsearch URL and index name along with the\n embedding object to the constructor.\n Example:\n .. code-block:: python\n from langchain import ElasticVectorSearch\n from langchain.embeddings import OpenAIEmbeddings\n embedding = OpenAIEmbeddings()\n elastic_vector_search = ElasticVectorSearch(\n elasticsearch_url=\"http://localhost:9200\",\n index_name=\"test_index\",\n embedding=embedding\n )\n To connect to an Elasticsearch instance that requires login credentials,\n including Elastic Cloud, use the Elasticsearch URL format\n https://username:password@es_host:9243. 
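A hedged sketch of the metadata filter consumed by `_default_script_query` above: the dict must hold exactly one key/value pair, which is rewritten as a `match` on `metadata.<key>.keyword`. The URL, index name, and field values are placeholders, and the index is assumed to already contain documents.

```python
# Illustrative filtered similarity search against a local Elasticsearch.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import ElasticVectorSearch

store = ElasticVectorSearch(
    elasticsearch_url="http://localhost:9200",
    index_name="test_index",
    embedding=OpenAIEmbeddings(),
)
# The single-pair filter becomes {"match": {"metadata.source.keyword": "finance"}}.
docs = store.similarity_search("tax documents", k=4, filter={"source": "finance"})
```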
For example, to connect to Elastic\n Cloud, create the Elasticsearch URL with the required authentication details and\n pass it to the ElasticVectorSearch constructor as the named parameter\n elasticsearch_url.\n You can obtain your Elastic Cloud URL and login credentials by logging in to the\n Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and\n navigating to the \"Deployments\" page.\n To obtain your Elastic Cloud password for the default \"elastic\" user:\n 1. Log in to the Elastic Cloud console at https://cloud.elastic.co\n 2. Go to \"Security\" > \"Users\"\n 3. Locate the \"elastic\" user and click \"Edit\"\n 4. Click \"Reset password\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} {"id": "d9f6a9ba3a6d-2", "text": "4. Click \"Reset password\"\n 5. Follow the prompts to reset the password\n The format for Elastic Cloud URLs is\n https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.\n Example:\n .. code-block:: python\n from langchain import ElasticVectorSearch\n from langchain.embeddings import OpenAIEmbeddings\n embedding = OpenAIEmbeddings()\n elastic_host = \"cluster_id.region_id.gcp.cloud.es.io\"\n elasticsearch_url = f\"https://username:password@{elastic_host}:9243\"\n elastic_vector_search = ElasticVectorSearch(\n elasticsearch_url=elasticsearch_url,\n index_name=\"test_index\",\n embedding=embedding\n )\n Args:\n elasticsearch_url (str): The URL for the Elasticsearch instance.\n index_name (str): The name of the Elasticsearch index for the embeddings.\n embedding (Embeddings): An object that provides the ability to embed text.\n It should be an instance of a class that subclasses the Embeddings\n abstract base class, such as OpenAIEmbeddings()\n Raises:\n ValueError: If the elasticsearch python package is not installed.\n \"\"\"\n def __init__(\n self,\n elasticsearch_url: str,\n index_name: str,\n embedding: Embeddings,\n *,\n ssl_verify: Optional[Dict[str, Any]] = None,\n ):\n \"\"\"Initialize with necessary components.\"\"\"\n try:\n import elasticsearch\n except ImportError:\n raise ImportError(\n \"Could not import elasticsearch python package. \"\n \"Please install it with `pip install elasticsearch`.\"\n )\n self.embedding = embedding\n self.index_name = index_name\n _ssl_verify = ssl_verify or {}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} {"id": "d9f6a9ba3a6d-3", "text": "self.index_name = index_name\n _ssl_verify = ssl_verify or {}\n try:\n self.client = elasticsearch.Elasticsearch(elasticsearch_url, **_ssl_verify)\n except ValueError as e:\n raise ValueError(\n f\"Your elasticsearch client string is mis-formatted. Got error: {e} \"\n )\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n refresh_indices: bool = True,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n refresh_indices: bool to refresh ElasticSearch indices\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n try:\n from elasticsearch.exceptions import NotFoundError\n from elasticsearch.helpers import bulk\n except ImportError:\n raise ImportError(\n \"Could not import elasticsearch python package. 
\"\n \"Please install it with `pip install elasticsearch`.\"\n )\n requests = []\n ids = ids or [str(uuid.uuid4()) for _ in texts]\n embeddings = self.embedding.embed_documents(list(texts))\n dim = len(embeddings[0])\n mapping = _default_text_mapping(dim)\n # check to see if the index already exists\n try:\n self.client.indices.get(index=self.index_name)\n except NotFoundError:\n # TODO would be nice to create index before embedding,\n # just to save expensive steps for last", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} {"id": "d9f6a9ba3a6d-4", "text": "# just to save expensive steps for last\n self.create_index(self.client, self.index_name, mapping)\n for i, text in enumerate(texts):\n metadata = metadatas[i] if metadatas else {}\n request = {\n \"_op_type\": \"index\",\n \"_index\": self.index_name,\n \"vector\": embeddings[i],\n \"text\": text,\n \"metadata\": metadata,\n \"_id\": ids[i],\n }\n requests.append(request)\n bulk(self.client, requests)\n if refresh_indices:\n self.client.indices.refresh(index=self.index_name)\n return ids\n[docs] def similarity_search(\n self, query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(query, k, filter=filter)\n documents = [d[0] for d in docs_and_scores]\n return documents\n[docs] def similarity_search_with_score(\n self, query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} {"id": "d9f6a9ba3a6d-5", "text": "Returns:\n List of Documents most similar to the query.\n \"\"\"\n embedding = self.embedding.embed_query(query)\n script_query = _default_script_query(embedding, filter)\n response = self.client_search(\n self.client, self.index_name, script_query, size=k\n )\n hits = [hit for hit in response[\"hits\"][\"hits\"]]\n docs_and_scores = [\n (\n Document(\n page_content=hit[\"_source\"][\"text\"],\n metadata=hit[\"_source\"][\"metadata\"],\n ),\n hit[\"_score\"],\n )\n for hit in hits\n ]\n return docs_and_scores\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n elasticsearch_url: Optional[str] = None,\n index_name: Optional[str] = None,\n refresh_indices: bool = True,\n **kwargs: Any,\n ) -> ElasticVectorSearch:\n \"\"\"Construct ElasticVectorSearch wrapper from raw documents.\n This is a user-friendly interface that:\n 1. Embeds documents.\n 2. Creates a new index for the embeddings in the Elasticsearch instance.\n 3. Adds the documents to the newly created Elasticsearch index.\n This is intended to be a quick way to get started.\n Example:\n .. 
code-block:: python\n from langchain import ElasticVectorSearch\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n elastic_vector_search = ElasticVectorSearch.from_texts(\n texts,\n embeddings,\n elasticsearch_url=\"http://localhost:9200\"\n )\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} {"id": "d9f6a9ba3a6d-6", "text": "elasticsearch_url=\"http://localhost:9200\"\n )\n \"\"\"\n elasticsearch_url = elasticsearch_url or get_from_env(\n \"elasticsearch_url\", \"ELASTICSEARCH_URL\"\n )\n index_name = index_name or uuid.uuid4().hex\n vectorsearch = cls(elasticsearch_url, index_name, embedding, **kwargs)\n vectorsearch.add_texts(\n texts, metadatas=metadatas, refresh_indices=refresh_indices\n )\n return vectorsearch\n[docs] def create_index(self, client: Any, index_name: str, mapping: Dict) -> None:\n version_num = client.info()[\"version\"][\"number\"][0]\n version_num = int(version_num)\n if version_num >= 8:\n client.indices.create(index=index_name, mappings=mapping)\n else:\n client.indices.create(index=index_name, body={\"mappings\": mapping})\n[docs] def client_search(\n self, client: Any, index_name: str, script_query: Dict, size: int\n ) -> Any:\n version_num = client.info()[\"version\"][\"number\"][0]\n version_num = int(version_num)\n if version_num >= 8:\n response = client.search(index=index_name, query=script_query, size=size)\n else:\n response = client.search(\n index=index_name, body={\"query\": script_query, \"size\": size}\n )\n return response\n[docs] def delete(self, ids: Optional[List[str]] = None, **kwargs: Any) -> None:\n \"\"\"Delete by vector IDs.\n Args:\n ids: List of ids to delete.\n \"\"\"\n if ids is None:\n raise ValueError(\"No ids provided to delete.\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} {"id": "d9f6a9ba3a6d-7", "text": "if ids is None:\n raise ValueError(\"No ids provided to delete.\")\n # TODO: Check if this can be done in bulk\n for id in ids:\n self.client.delete(index=self.index_name, id=id)\n[docs]class ElasticKnnSearch(ElasticVectorSearch):\n \"\"\"\n A class for performing k-Nearest Neighbors (k-NN) search on an Elasticsearch index.\n The class is designed for a text search scenario where documents are text strings\n and their embeddings are vector representations of those strings.\n \"\"\"\n def __init__(\n self,\n index_name: str,\n embedding: Embeddings,\n es_connection: Optional[\"Elasticsearch\"] = None,\n es_cloud_id: Optional[str] = None,\n es_user: Optional[str] = None,\n es_password: Optional[str] = None,\n vector_query_field: Optional[str] = \"vector\",\n query_field: Optional[str] = \"text\",\n ):\n \"\"\"\n Initializes an instance of the ElasticKnnSearch class and sets up the\n Elasticsearch client.\n Args:\n index_name: The name of the Elasticsearch index.\n embedding: An instance of the Embeddings class, used to generate vector\n representations of text strings.\n es_connection: An existing Elasticsearch connection.\n es_cloud_id: The Cloud ID of the Elasticsearch instance. Required if\n creating a new connection.\n es_user: The username for the Elasticsearch instance. Required if\n creating a new connection.\n es_password: The password for the Elasticsearch instance. Required if\n creating a new connection.\n \"\"\"\n try:\n import elasticsearch\n except ImportError:\n raise ImportError(\n \"Could not import elasticsearch python package. 
\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} {"id": "d9f6a9ba3a6d-8", "text": "raise ImportError(\n \"Could not import elasticsearch python package. \"\n \"Please install it with `pip install elasticsearch`.\"\n )\n self.embedding = embedding\n self.index_name = index_name\n self.query_field = query_field\n self.vector_query_field = vector_query_field\n # If a pre-existing Elasticsearch connection is provided, use it.\n if es_connection is not None:\n self.client = es_connection\n else:\n # If credentials for a new Elasticsearch connection are provided,\n # create a new connection.\n if es_cloud_id and es_user and es_password:\n self.client = elasticsearch.Elasticsearch(\n cloud_id=es_cloud_id, basic_auth=(es_user, es_password)\n )\n else:\n raise ValueError(\n \"\"\"Either provide a pre-existing Elasticsearch connection, \\\n or valid credentials for creating a new connection.\"\"\"\n )\n @staticmethod\n def _default_knn_mapping(dims: int) -> Dict:\n \"\"\"Generates a default index mapping for kNN search.\"\"\"\n return {\n \"properties\": {\n \"text\": {\"type\": \"text\"},\n \"vector\": {\n \"type\": \"dense_vector\",\n \"dims\": dims,\n \"index\": True,\n \"similarity\": \"dot_product\",\n },\n }\n }\n def _default_knn_query(\n self,\n query_vector: Optional[List[float]] = None,\n query: Optional[str] = None,\n model_id: Optional[str] = None,\n k: Optional[int] = 10,\n num_candidates: Optional[int] = 10,\n ) -> Dict:\n knn: Dict = {\n \"field\": self.vector_query_field,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} {"id": "d9f6a9ba3a6d-9", "text": "knn: Dict = {\n \"field\": self.vector_query_field,\n \"k\": k,\n \"num_candidates\": num_candidates,\n }\n # Case 1: `query_vector` is provided, but not `model_id` -> use query_vector\n if query_vector and not model_id:\n knn[\"query_vector\"] = query_vector\n # Case 2: `query` and `model_id` are provided, -> use query_vector_builder\n elif query and model_id:\n knn[\"query_vector_builder\"] = {\n \"text_embedding\": {\n \"model_id\": model_id, # use 'model_id' argument\n \"model_text\": query, # use 'query' argument\n }\n }\n else:\n raise ValueError(\n \"Either `query_vector` or `model_id` must be provided, but not both.\"\n )\n return knn\n[docs] def knn_search(\n self,\n query: Optional[str] = None,\n k: Optional[int] = 10,\n query_vector: Optional[List[float]] = None,\n model_id: Optional[str] = None,\n size: Optional[int] = 10,\n source: Optional[bool] = True,\n fields: Optional[\n Union[List[Mapping[str, Any]], Tuple[Mapping[str, Any], ...], None]\n ] = None,\n ) -> Dict:\n \"\"\"\n Performs a k-nearest neighbor (k-NN) search on the Elasticsearch index.\n The search can be conducted using either a raw query vector or a model ID.\n The method first generates\n the body of the search query, which can be interpreted by Elasticsearch.\n It then performs the k-NN", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} {"id": "d9f6a9ba3a6d-10", "text": "It then performs the k-NN\n search on the Elasticsearch index and returns the results.\n Args:\n query: The query or queries to be used for the search. Required if\n `query_vector` is not provided.\n k: The number of nearest neighbors to return. Defaults to 10.\n query_vector: The query vector to be used for the search. 
Required if\n `query` is not provided.\n model_id: The ID of the model to use for generating the query vector, if\n `query` is provided.\n size: The number of search hits to return. Defaults to 10.\n source: Whether to include the source of each hit in the results.\n fields: The fields to include in the source of each hit. If None, all\n fields are included.\n vector_query_field: Field name to use in knn search if not default 'vector'\n Returns:\n The search results.\n Raises:\n ValueError: If neither `query_vector` nor `model_id` is provided, or if\n both are provided.\n \"\"\"\n knn_query_body = self._default_knn_query(\n query_vector=query_vector, query=query, model_id=model_id, k=k\n )\n # Perform the kNN search on the Elasticsearch index and return the results.\n res = self.client.search(\n index=self.index_name,\n knn=knn_query_body,\n size=size,\n source=source,\n fields=fields,\n )\n return dict(res)\n[docs] def knn_hybrid_search(\n self,\n query: Optional[str] = None,\n k: Optional[int] = 10,\n query_vector: Optional[List[float]] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} {"id": "d9f6a9ba3a6d-11", "text": "query_vector: Optional[List[float]] = None,\n model_id: Optional[str] = None,\n size: Optional[int] = 10,\n source: Optional[bool] = True,\n knn_boost: Optional[float] = 0.9,\n query_boost: Optional[float] = 0.1,\n fields: Optional[\n Union[List[Mapping[str, Any]], Tuple[Mapping[str, Any], ...], None]\n ] = None,\n ) -> Dict[Any, Any]:\n \"\"\"Performs a hybrid k-nearest neighbor (k-NN) and text-based search on the\n Elasticsearch index.\n The search can be conducted using either a raw query vector or a model ID.\n The method first generates\n the body of the k-NN search query and the text-based query, which can be\n interpreted by Elasticsearch.\n It then performs the hybrid search on the Elasticsearch index and returns the\n results.\n Args:\n query: The query or queries to be used for the search. Required if\n `query_vector` is not provided.\n k: The number of nearest neighbors to return. Defaults to 10.\n query_vector: The query vector to be used for the search. Required if\n `query` is not provided.\n model_id: The ID of the model to use for generating the query vector, if\n `query` is provided.\n size: The number of search hits to return. Defaults to 10.\n source: Whether to include the source of each hit in the results.\n knn_boost: The boost factor for the k-NN part of the search.\n query_boost: The boost factor for the text-based part of the search.\n fields", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} {"id": "d9f6a9ba3a6d-12", "text": "query_boost: The boost factor for the text-based part of the search.\n fields\n The fields to include in the source of each hit. If None, all fields are\n included. 
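A sketch of a hybrid call under stated assumptions: the cloud id, credentials, and `model_id` below are placeholders, and the embedding model is assumed to be deployed in the cluster so Elasticsearch can build the query vector server-side.

```python
# Illustrative hybrid kNN + text search; all identifiers are placeholders.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.elastic_vector_search import ElasticKnnSearch

knn_store = ElasticKnnSearch(
    index_name="test_index",
    embedding=OpenAIEmbeddings(),
    es_cloud_id="my-deployment:abc123",  # placeholder cloud id
    es_user="elastic",
    es_password="changeme",
)
res = knn_store.knn_hybrid_search(
    query="how do I rotate api keys?",
    model_id="my-text-embedding-model",  # placeholder deployed model id
    k=10,
    knn_boost=0.9,    # weight of the vector score
    query_boost=0.1,  # weight of the text match score
)
hits = res["hits"]["hits"]
```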
Defaults to None.\n vector_query_field: Field name to use in knn search if not default 'vector'\n query_field: Field name to use in search if not default 'text'\n Returns:\n The search results.\n Raises:\n ValueError: If neither `query_vector` nor `model_id` is provided, or if\n both are provided.\n \"\"\"\n knn_query_body = self._default_knn_query(\n query_vector=query_vector, query=query, model_id=model_id, k=k\n )\n # Modify the knn_query_body to add a \"boost\" parameter\n knn_query_body[\"boost\"] = knn_boost\n # Generate the body of the standard Elasticsearch query\n match_query_body = {\n \"match\": {self.query_field: {\"query\": query, \"boost\": query_boost}}\n }\n # Perform the hybrid search on the Elasticsearch index and return the results.\n res = self.client.search(\n index=self.index_name,\n query=match_query_body,\n knn=knn_query_body,\n fields=fields,\n size=size,\n source=source,\n )\n return dict(res)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} {"id": "792d66d73364-0", "text": "Source code for langchain.vectorstores.redis\n\"\"\"Wrapper around Redis vector database.\"\"\"\nfrom __future__ import annotations\nimport json\nimport logging\nimport uuid\nfrom typing import (\n    TYPE_CHECKING,\n    Any,\n    Callable,\n    Dict,\n    Iterable,\n    List,\n    Literal,\n    Mapping,\n    Optional,\n    Tuple,\n    Type,\n)\nimport numpy as np\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n    AsyncCallbackManagerForRetrieverRun,\n    CallbackManagerForRetrieverRun,\n)\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nfrom langchain.vectorstores.base import VectorStore, VectorStoreRetriever\nlogger = logging.getLogger(__name__)\nif TYPE_CHECKING:\n    from redis.client import Redis as RedisType\n    from redis.commands.search.query import Query\n# required modules\nREDIS_REQUIRED_MODULES = [\n    {\"name\": \"search\", \"ver\": 20400},\n    {\"name\": \"searchlight\", \"ver\": 20400},\n]\n# distance metrics\nREDIS_DISTANCE_METRICS = Literal[\"COSINE\", \"IP\", \"L2\"]\ndef _check_redis_module_exist(client: RedisType, required_modules: List[dict]) -> None:\n    \"\"\"Check if the correct Redis modules are installed.\"\"\"\n    installed_modules = client.module_list()\n    installed_modules = {\n        module[b\"name\"].decode(\"utf-8\"): module for module in installed_modules\n    }\n    for module in required_modules:\n        if module[\"name\"] in installed_modules and int(\n            installed_modules[module[\"name\"]][b\"ver\"]\n        ) >= int(module[\"ver\"]):\n            return\n    # otherwise raise error", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} {"id": "792d66d73364-1", "text": ") >= int(module[\"ver\"]):\n            return\n    # otherwise raise error\n    error_message = (\n        \"Redis cannot be used as a vector database without RediSearch >=2.4. \"\n        \"Please head to https://redis.io/docs/stack/search/quick_start/ \"\n        \"to know more about installing the RediSearch module within Redis Stack.\"\n    )\n    logging.error(error_message)\n    raise ValueError(error_message)\ndef _check_index_exists(client: RedisType, index_name: str) -> bool:\n    \"\"\"Check if Redis index exists.\"\"\"\n    try:\n        client.ft(index_name).info()\n    except:  # noqa: E722\n        logger.info(\"Index does not exist\")\n        return False\n    logger.info(\"Index already exists\")\n    return True\ndef _redis_key(prefix: str) -> str:\n    \"\"\"Redis key schema for a given prefix.\"\"\"\n    
return f\"{prefix}:{uuid.uuid4().hex}\"\ndef _redis_prefix(index_name: str) -> str:\n \"\"\"Redis key prefix for a given index.\"\"\"\n return f\"doc:{index_name}\"\ndef _default_relevance_score(val: float) -> float:\n return 1 - val\n[docs]class Redis(VectorStore):\n \"\"\"Wrapper around Redis vector database.\n To use, you should have the ``redis`` python package installed.\n Example:\n .. code-block:: python\n from langchain.vectorstores import Redis\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n vectorstore = Redis(\n redis_url=\"redis://username:password@localhost:6379\"\n index_name=\"my-index\",\n embedding_function=embeddings.embed_query,\n )\n \"\"\"\n def __init__(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} {"id": "792d66d73364-2", "text": ")\n \"\"\"\n def __init__(\n self,\n redis_url: str,\n index_name: str,\n embedding_function: Callable,\n content_key: str = \"content\",\n metadata_key: str = \"metadata\",\n vector_key: str = \"content_vector\",\n relevance_score_fn: Optional[\n Callable[[float], float]\n ] = _default_relevance_score,\n **kwargs: Any,\n ):\n \"\"\"Initialize with necessary components.\"\"\"\n try:\n import redis\n except ImportError:\n raise ValueError(\n \"Could not import redis python package. \"\n \"Please install it with `pip install redis>=4.1.0`.\"\n )\n self.embedding_function = embedding_function\n self.index_name = index_name\n try:\n # connect to redis from url\n redis_client = redis.from_url(redis_url, **kwargs)\n # check if redis has redisearch module installed\n _check_redis_module_exist(redis_client, REDIS_REQUIRED_MODULES)\n except ValueError as e:\n raise ValueError(f\"Redis failed to connect: {e}\")\n self.client = redis_client\n self.content_key = content_key\n self.metadata_key = metadata_key\n self.vector_key = vector_key\n self.relevance_score_fn = relevance_score_fn\n def _create_index(\n self, dim: int = 1536, distance_metric: REDIS_DISTANCE_METRICS = \"COSINE\"\n ) -> None:\n try:\n from redis.commands.search.field import TextField, VectorField\n from redis.commands.search.indexDefinition import IndexDefinition, IndexType\n except ImportError:\n raise ValueError(\n \"Could not import redis python package. 
\"\n \"Please install it with `pip install redis`.\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} {"id": "792d66d73364-3", "text": "\"Please install it with `pip install redis`.\"\n )\n # Check if index exists\n if not _check_index_exists(self.client, self.index_name):\n # Define schema\n schema = (\n TextField(name=self.content_key),\n TextField(name=self.metadata_key),\n VectorField(\n self.vector_key,\n \"FLAT\",\n {\n \"TYPE\": \"FLOAT32\",\n \"DIM\": dim,\n \"DISTANCE_METRIC\": distance_metric,\n },\n ),\n )\n prefix = _redis_prefix(self.index_name)\n # Create Redis Index\n self.client.ft(self.index_name).create_index(\n fields=schema,\n definition=IndexDefinition(prefix=[prefix], index_type=IndexType.HASH),\n )\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n embeddings: Optional[List[List[float]]] = None,\n batch_size: int = 1000,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Add more texts to the vectorstore.\n Args:\n texts (Iterable[str]): Iterable of strings/text to add to the vectorstore.\n metadatas (Optional[List[dict]], optional): Optional list of metadatas.\n Defaults to None.\n embeddings (Optional[List[List[float]]], optional): Optional pre-generated\n embeddings. Defaults to None.\n keys (List[str]) or ids (List[str]): Identifiers of entries.\n Defaults to None.\n batch_size (int, optional): Batch size to use for writes. Defaults to 1000.\n Returns:\n List[str]: List of ids added to the vectorstore\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} {"id": "792d66d73364-4", "text": "Returns:\n List[str]: List of ids added to the vectorstore\n \"\"\"\n ids = []\n prefix = _redis_prefix(self.index_name)\n # Get keys or ids from kwargs\n # Other vectorstores use ids\n keys_or_ids = kwargs.get(\"keys\", kwargs.get(\"ids\"))\n # Write data to redis\n pipeline = self.client.pipeline(transaction=False)\n for i, text in enumerate(texts):\n # Use provided values by default or fallback\n key = keys_or_ids[i] if keys_or_ids else _redis_key(prefix)\n metadata = metadatas[i] if metadatas else {}\n embedding = embeddings[i] if embeddings else self.embedding_function(text)\n pipeline.hset(\n key,\n mapping={\n self.content_key: text,\n self.vector_key: np.array(embedding, dtype=np.float32).tobytes(),\n self.metadata_key: json.dumps(metadata),\n },\n )\n ids.append(key)\n # Write batch\n if i % batch_size == 0:\n pipeline.execute()\n # Cleanup final batch\n pipeline.execute()\n return ids\n[docs] def similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"\n Returns the most similar indexed documents to the query text.\n Args:\n query (str): The query text for which to find similar documents.\n k (int): The number of documents to return. 
Default is 4.\n Returns:\n List[Document]: A list of documents that are most similar to the query text.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(query, k=k)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} {"id": "792d66d73364-5", "text": "\"\"\"\n docs_and_scores = self.similarity_search_with_score(query, k=k)\n return [doc for doc, _ in docs_and_scores]\n[docs] def similarity_search_limit_score(\n self, query: str, k: int = 4, score_threshold: float = 0.2, **kwargs: Any\n ) -> List[Document]:\n \"\"\"\n Returns the most similar indexed documents to the query text within the\n score_threshold range.\n Args:\n query (str): The query text for which to find similar documents.\n k (int): The number of documents to return. Default is 4.\n score_threshold (float): The minimum matching score required for a document\n to be considered a match. Defaults to 0.2.\n Because the similarity calculation algorithm is based on cosine similarity,\n the smaller the angle, the higher the similarity.\n Returns:\n List[Document]: A list of documents that are most similar to the query text,\n including the match score for each document.\n Note:\n If there are no documents that satisfy the score_threshold value,\n an empty list is returned.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(query, k=k)\n return [doc for doc, score in docs_and_scores if score < score_threshold]\n def _prepare_query(self, k: int) -> Query:\n try:\n from redis.commands.search.query import Query\n except ImportError:\n raise ValueError(\n \"Could not import redis python package. \"\n \"Please install it with `pip install redis`.\"\n )\n # Prepare the Query\n hybrid_fields = \"*\"\n base_query = (", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} {"id": "792d66d73364-6", "text": "# Prepare the Query\n hybrid_fields = \"*\"\n base_query = (\n f\"{hybrid_fields}=>[KNN {k} @{self.vector_key} $vector AS vector_score]\"\n )\n return_fields = [self.metadata_key, self.content_key, \"vector_score\"]\n return (\n Query(base_query)\n .return_fields(*return_fields)\n .sort_by(\"vector_score\")\n .paging(0, k)\n .dialect(2)\n )\n[docs] def similarity_search_with_score(\n self, query: str, k: int = 4\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. 
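For reference, with the defaults above (`k=4` and `vector_key="content_vector"`), the base query assembled by `_prepare_query` renders as the following string (reconstructed from the f-string above, not captured output):

```
*=>[KNN 4 @content_vector $vector AS vector_score]
```

The prepared query then returns the metadata, content, and vector_score fields, pages the first `k` hits sorted by `vector_score`, and uses RediSearch query dialect 2.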
Defaults to 4.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n # Creates embedding vector from user query\n embedding = self.embedding_function(query)\n # Creates Redis query\n redis_query = self._prepare_query(k)\n params_dict: Mapping[str, str] = {\n \"vector\": np.array(embedding) # type: ignore\n .astype(dtype=np.float32)\n .tobytes()\n }\n # Perform vector search\n results = self.client.ft(self.index_name).search(redis_query, params_dict)\n # Prepare document results\n docs = [\n (\n Document(\n page_content=result.content, metadata=json.loads(result.metadata)\n ),\n float(result.vector_score),\n )\n for result in results.docs\n ]\n return docs\n def _similarity_search_with_relevance_scores(\n self,\n query: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} {"id": "792d66d73364-7", "text": "self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and relevance scores, normalized on a scale from 0 to 1.\n 0 is dissimilar, 1 is most similar.\n \"\"\"\n if self.relevance_score_fn is None:\n raise ValueError(\n \"relevance_score_fn must be provided to\"\n \" Redis constructor to normalize scores\"\n )\n docs_and_scores = self.similarity_search_with_score(query, k=k)\n return [(doc, self.relevance_score_fn(score)) for doc, score in docs_and_scores]\n[docs] @classmethod\n def from_texts_return_keys(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n index_name: Optional[str] = None,\n content_key: str = \"content\",\n metadata_key: str = \"metadata\",\n vector_key: str = \"content_vector\",\n distance_metric: REDIS_DISTANCE_METRICS = \"COSINE\",\n **kwargs: Any,\n ) -> Tuple[Redis, List[str]]:\n \"\"\"Create a Redis vectorstore from raw documents.\n This is a user-friendly interface that:\n 1. Embeds documents.\n 2. Creates a new index for the embeddings in Redis.\n 3. Adds the documents to the newly created Redis index.\n 4. Returns the keys of the newly created documents.\n This is intended to be a quick way to get started.\n Example:\n .. 
code-block:: python\n from langchain.vectorstores import Redis\n from langchain.embeddings import OpenAIEmbeddings", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} {"id": "792d66d73364-8", "text": "from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n redisearch, keys = Redis.from_texts_return_keys(\n texts,\n embeddings,\n redis_url=\"redis://username:password@localhost:6379\"\n )\n \"\"\"\n redis_url = get_from_dict_or_env(kwargs, \"redis_url\", \"REDIS_URL\")\n if \"redis_url\" in kwargs:\n kwargs.pop(\"redis_url\")\n # Name of the search index if not given\n if not index_name:\n index_name = uuid.uuid4().hex\n # Create instance\n instance = cls(\n redis_url,\n index_name,\n embedding.embed_query,\n content_key=content_key,\n metadata_key=metadata_key,\n vector_key=vector_key,\n **kwargs,\n )\n # Create embeddings over documents\n embeddings = embedding.embed_documents(texts)\n # Create the search index\n instance._create_index(dim=len(embeddings[0]), distance_metric=distance_metric)\n # Add data to Redis\n keys = instance.add_texts(texts, metadatas, embeddings)\n return instance, keys\n[docs] @classmethod\n def from_texts(\n cls: Type[Redis],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n index_name: Optional[str] = None,\n content_key: str = \"content\",\n metadata_key: str = \"metadata\",\n vector_key: str = \"content_vector\",\n **kwargs: Any,\n ) -> Redis:\n \"\"\"Create a Redis vectorstore from raw documents.\n This is a user-friendly interface that:\n 1. Embeds documents.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} {"id": "792d66d73364-9", "text": "This is a user-friendly interface that:\n 1. Embeds documents.\n 2. Creates a new index for the embeddings in Redis.\n 3. Adds the documents to the newly created Redis index.\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python\n from langchain.vectorstores import Redis\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n redisearch = Redis.from_texts(\n texts,\n embeddings,\n redis_url=\"redis://username:password@localhost:6379\"\n )\n \"\"\"\n instance, _ = cls.from_texts_return_keys(\n texts,\n embedding,\n metadatas=metadatas,\n index_name=index_name,\n content_key=content_key,\n metadata_key=metadata_key,\n vector_key=vector_key,\n **kwargs,\n )\n return instance\n[docs] @staticmethod\n def delete(\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> bool:\n \"\"\"\n Delete a Redis entry.\n Args:\n ids: List of ids (keys) to delete.\n Returns:\n bool: Whether or not the deletions were successful.\n \"\"\"\n redis_url = get_from_dict_or_env(kwargs, \"redis_url\", \"REDIS_URL\")\n if ids is None:\n raise ValueError(\"'ids' (keys) were not provided.\")\n try:\n import redis\n except ImportError:\n raise ValueError(\n \"Could not import redis python package. 
\"\n \"Please install it with `pip install redis`.\"\n )\n try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} {"id": "792d66d73364-10", "text": "\"Please install it with `pip install redis`.\"\n )\n try:\n # We need to first remove redis_url from kwargs,\n # otherwise passing it to Redis will result in an error.\n if \"redis_url\" in kwargs:\n kwargs.pop(\"redis_url\")\n client = redis.from_url(url=redis_url, **kwargs)\n except ValueError as e:\n raise ValueError(f\"Your redis connected error: {e}\")\n # Check if index exists\n try:\n client.delete(*ids)\n logger.info(\"Entries deleted\")\n return True\n except: # noqa: E722\n # ids does not exist\n return False\n[docs] @staticmethod\n def drop_index(\n index_name: str,\n delete_documents: bool,\n **kwargs: Any,\n ) -> bool:\n \"\"\"\n Drop a Redis search index.\n Args:\n index_name (str): Name of the index to drop.\n delete_documents (bool): Whether to drop the associated documents.\n Returns:\n bool: Whether or not the drop was successful.\n \"\"\"\n redis_url = get_from_dict_or_env(kwargs, \"redis_url\", \"REDIS_URL\")\n try:\n import redis\n except ImportError:\n raise ValueError(\n \"Could not import redis python package. \"\n \"Please install it with `pip install redis`.\"\n )\n try:\n # We need to first remove redis_url from kwargs,\n # otherwise passing it to Redis will result in an error.\n if \"redis_url\" in kwargs:\n kwargs.pop(\"redis_url\")\n client = redis.from_url(url=redis_url, **kwargs)\n except ValueError as e:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} {"id": "792d66d73364-11", "text": "except ValueError as e:\n raise ValueError(f\"Your redis connected error: {e}\")\n # Check if index exists\n try:\n client.ft(index_name).dropindex(delete_documents)\n logger.info(\"Drop index\")\n return True\n except: # noqa: E722\n # Index not exist\n return False\n[docs] @classmethod\n def from_existing_index(\n cls,\n embedding: Embeddings,\n index_name: str,\n content_key: str = \"content\",\n metadata_key: str = \"metadata\",\n vector_key: str = \"content_vector\",\n **kwargs: Any,\n ) -> Redis:\n \"\"\"Connect to an existing Redis index.\"\"\"\n redis_url = get_from_dict_or_env(kwargs, \"redis_url\", \"REDIS_URL\")\n try:\n import redis\n except ImportError:\n raise ValueError(\n \"Could not import redis python package. 
\"\n \"Please install it with `pip install redis`.\"\n )\n try:\n # We need to first remove redis_url from kwargs,\n # otherwise passing it to Redis will result in an error.\n if \"redis_url\" in kwargs:\n kwargs.pop(\"redis_url\")\n client = redis.from_url(url=redis_url, **kwargs)\n # check if redis has redisearch module installed\n _check_redis_module_exist(client, REDIS_REQUIRED_MODULES)\n # ensure that the index already exists\n assert _check_index_exists(\n client, index_name\n ), f\"Index {index_name} does not exist\"\n except Exception as e:\n raise ValueError(f\"Redis failed to connect: {e}\")\n return cls(\n redis_url,\n index_name,\n embedding.embed_query,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} {"id": "792d66d73364-12", "text": "redis_url,\n index_name,\n embedding.embed_query,\n content_key=content_key,\n metadata_key=metadata_key,\n vector_key=vector_key,\n **kwargs,\n )\n[docs] def as_retriever(self, **kwargs: Any) -> RedisVectorStoreRetriever:\n return RedisVectorStoreRetriever(vectorstore=self, **kwargs)\n[docs]class RedisVectorStoreRetriever(VectorStoreRetriever):\n vectorstore: Redis\n search_type: str = \"similarity\"\n k: int = 4\n score_threshold: float = 0.4\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] @root_validator()\n def validate_search_type(cls, values: Dict) -> Dict:\n \"\"\"Validate search type.\"\"\"\n if \"search_type\" in values:\n search_type = values[\"search_type\"]\n if search_type not in (\"similarity\", \"similarity_limit\"):\n raise ValueError(f\"search_type of {search_type} not allowed.\")\n return values\n def _get_relevant_documents(\n self, query: str, *, run_manager: CallbackManagerForRetrieverRun\n ) -> List[Document]:\n if self.search_type == \"similarity\":\n docs = self.vectorstore.similarity_search(query, k=self.k)\n elif self.search_type == \"similarity_limit\":\n docs = self.vectorstore.similarity_search_limit_score(\n query, k=self.k, score_threshold=self.score_threshold\n )\n else:\n raise ValueError(f\"search_type of {self.search_type} not allowed.\")\n return docs\n async def _aget_relevant_documents(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} {"id": "792d66d73364-13", "text": "return docs\n async def _aget_relevant_documents(\n self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun\n ) -> List[Document]:\n raise NotImplementedError(\"RedisVectorStoreRetriever does not support async\")\n[docs] def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:\n \"\"\"Add documents to vectorstore.\"\"\"\n return self.vectorstore.add_documents(documents, **kwargs)\n[docs] async def aadd_documents(\n self, documents: List[Document], **kwargs: Any\n ) -> List[str]:\n \"\"\"Add documents to vectorstore.\"\"\"\n return await self.vectorstore.aadd_documents(documents, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} {"id": "afc897e84bba-0", "text": "Source code for langchain.vectorstores.faiss\n\"\"\"Wrapper around FAISS vector database.\"\"\"\nfrom __future__ import annotations\nimport math\nimport os\nimport pickle\nimport uuid\nfrom pathlib import Path\nfrom typing import Any, Callable, Dict, Iterable, List, Optional, Tuple\nimport numpy as np\nfrom langchain.docstore.base import AddableMixin, Docstore\nfrom langchain.docstore.document import Document\nfrom 
langchain.docstore.in_memory import InMemoryDocstore\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\n[docs]def dependable_faiss_import(no_avx2: Optional[bool] = None) -> Any:\n \"\"\"\n Import faiss if available, otherwise raise error.\n If FAISS_NO_AVX2 environment variable is set, it will be considered\n to load FAISS with no AVX2 optimization.\n Args:\n no_avx2: Load FAISS strictly with no AVX2 optimization\n so that the vectorstore is portable and compatible with other devices.\n \"\"\"\n if no_avx2 is None and \"FAISS_NO_AVX2\" in os.environ:\n no_avx2 = bool(os.getenv(\"FAISS_NO_AVX2\"))\n try:\n if no_avx2:\n from faiss import swigfaiss as faiss\n else:\n import faiss\n except ImportError:\n raise ImportError(\n \"Could not import faiss python package. \"\n \"Please install it with `pip install faiss` \"\n \"or `pip install faiss-cpu` (depending on Python version).\"\n )\n return faiss\ndef _default_relevance_score_fn(score: float) -> float:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} {"id": "afc897e84bba-1", "text": "return faiss\ndef _default_relevance_score_fn(score: float) -> float:\n \"\"\"Return a similarity score on a scale [0, 1].\"\"\"\n # The 'correct' relevance function\n # may differ depending on a few things, including:\n # - the distance / similarity metric used by the VectorStore\n # - the scale of your embeddings (OpenAI's are unit normed. Many others are not!)\n # - embedding dimensionality\n # - etc.\n # This function converts the euclidean norm of normalized embeddings\n # (0 is most similar, sqrt(2) most dissimilar)\n # to a similarity function (0 to 1)\n return 1.0 - score / math.sqrt(2)\n[docs]class FAISS(VectorStore):\n \"\"\"Wrapper around FAISS vector database.\n To use, you should have the ``faiss`` python package installed.\n Example:\n .. 
code-block:: python\n from langchain import FAISS\n faiss = FAISS(embedding_function, index, docstore, index_to_docstore_id)\n \"\"\"\n def __init__(\n self,\n embedding_function: Callable,\n index: Any,\n docstore: Docstore,\n index_to_docstore_id: Dict[int, str],\n relevance_score_fn: Callable[[float], float] = _default_relevance_score_fn,\n normalize_L2: bool = False,\n ):\n \"\"\"Initialize with necessary components.\"\"\"\n self.embedding_function = embedding_function\n self.index = index\n self.docstore = docstore\n self.index_to_docstore_id = index_to_docstore_id\n self.relevance_score_fn = relevance_score_fn\n self._normalize_L2 = normalize_L2\n def __add(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} {"id": "afc897e84bba-2", "text": "self._normalize_L2 = normalize_L2\n def __add(\n self,\n texts: Iterable[str],\n embeddings: Iterable[List[float]],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n if not isinstance(self.docstore, AddableMixin):\n raise ValueError(\n \"If trying to add texts, the underlying docstore should support \"\n f\"adding items, which {self.docstore} does not\"\n )\n documents = []\n for i, text in enumerate(texts):\n metadata = metadatas[i] if metadatas else {}\n documents.append(Document(page_content=text, metadata=metadata))\n if ids is None:\n ids = [str(uuid.uuid4()) for _ in texts]\n # Add to the index, the index_to_id mapping, and the docstore.\n starting_len = len(self.index_to_docstore_id)\n faiss = dependable_faiss_import()\n vector = np.array(embeddings, dtype=np.float32)\n if self._normalize_L2:\n faiss.normalize_L2(vector)\n self.index.add(vector)\n # Get list of index, id, and docs.\n full_info = [(starting_len + i, ids[i], doc) for i, doc in enumerate(documents)]\n # Add information to docstore and index.\n self.docstore.add({_id: doc for _, _id, doc in full_info})\n index_to_id = {index: _id for index, _id, _ in full_info}\n self.index_to_docstore_id.update(index_to_id)\n return [_id for _, _id, _ in full_info]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} {"id": "afc897e84bba-3", "text": "return [_id for _, _id, _ in full_info]\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n ids: Optional list of unique IDs.\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n if not isinstance(self.docstore, AddableMixin):\n raise ValueError(\n \"If trying to add texts, the underlying docstore should support \"\n f\"adding items, which {self.docstore} does not\"\n )\n # Embed and create the documents.\n embeddings = [self.embedding_function(text) for text in texts]\n return self.__add(texts, embeddings, metadatas=metadatas, ids=ids, **kwargs)\n[docs] def add_embeddings(\n self,\n text_embeddings: Iterable[Tuple[str, List[float]]],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n text_embeddings: Iterable pairs of string and embedding to\n add to the vectorstore.\n metadatas: Optional list of 
metadatas associated with the texts.\n ids: Optional list of unique IDs.\n Returns:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} {"id": "afc897e84bba-4", "text": "ids: Optional list of unique IDs.\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n if not isinstance(self.docstore, AddableMixin):\n raise ValueError(\n \"If trying to add texts, the underlying docstore should support \"\n f\"adding items, which {self.docstore} does not\"\n )\n # Embed and create the documents.\n texts, embeddings = zip(*text_embeddings)\n return self.__add(texts, embeddings, metadatas=metadatas, ids=ids, **kwargs)\n[docs] def similarity_search_with_score_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n filter: Optional[Dict[str, Any]] = None,\n fetch_k: int = 20,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n embedding: Embedding vector to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter (Optional[Dict[str, Any]]): Filter by metadata. Defaults to None.\n fetch_k: (Optional[int]) Number of Documents to fetch before filtering.\n Defaults to 20.\n **kwargs: kwargs to be passed to similarity search. Can include:\n score_threshold: Optional, a floating point value between 0 to 1 to\n filter the resulting set of retrieved docs\n Returns:\n List of documents most similar to the query text and L2 distance\n in float for each. Lower score represents more similarity.\n \"\"\"\n faiss = dependable_faiss_import()\n vector = np.array([embedding], dtype=np.float32)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} {"id": "afc897e84bba-5", "text": "vector = np.array([embedding], dtype=np.float32)\n if self._normalize_L2:\n faiss.normalize_L2(vector)\n scores, indices = self.index.search(vector, k if filter is None else fetch_k)\n docs = []\n for j, i in enumerate(indices[0]):\n if i == -1:\n # This happens when not enough docs are returned.\n continue\n _id = self.index_to_docstore_id[i]\n doc = self.docstore.search(_id)\n if not isinstance(doc, Document):\n raise ValueError(f\"Could not find document for id {_id}, got {doc}\")\n if filter is not None:\n filter = {\n key: [value] if not isinstance(value, list) else value\n for key, value in filter.items()\n }\n if all(doc.metadata.get(key) in value for key, value in filter.items()):\n docs.append((doc, scores[0][j]))\n else:\n docs.append((doc, scores[0][j]))\n score_threshold = kwargs.get(\"score_threshold\")\n if score_threshold is not None:\n docs = [\n (doc, similarity)\n for doc, similarity in docs\n if similarity >= score_threshold\n ]\n return docs[:k]\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 4,\n filter: Optional[Dict[str, Any]] = None,\n fetch_k: int = 20,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} {"id": "afc897e84bba-6", "text": "Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. 
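A brief sketch of this precomputed-embeddings path; ``faiss_store`` and ``embeddings`` are assumed to be an existing ``FAISS`` instance and an ``Embeddings`` implementation:

.. code-block:: python

    texts = ["alpha", "beta"]
    vectors = embeddings.embed_documents(texts)
    # Passing (text, vector) pairs skips the embedding step done by add_texts.
    ids = faiss_store.add_embeddings(
        text_embeddings=list(zip(texts, vectors)),
        metadatas=[{"tag": 1}, {"tag": 2}],
    )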
Defaults to None.\n fetch_k: (Optional[int]) Number of Documents to fetch before filtering.\n Defaults to 20.\n Returns:\n List of documents most similar to the query text with\n L2 distance in float. Lower score represents more similarity.\n \"\"\"\n embedding = self.embedding_function(query)\n docs = self.similarity_search_with_score_by_vector(\n embedding,\n k,\n filter=filter,\n fetch_k=fetch_k,\n **kwargs,\n )\n return docs\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n filter: Optional[Dict[str, Any]] = None,\n fetch_k: int = 20,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n fetch_k: (Optional[int]) Number of Documents to fetch before filtering.\n Defaults to 20.\n Returns:\n List of Documents most similar to the embedding.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score_by_vector(\n embedding,\n k,\n filter=filter,\n fetch_k=fetch_k,\n **kwargs,\n )\n return [doc for doc, _ in docs_and_scores]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} {"id": "afc897e84bba-7", "text": ")\n return [doc for doc, _ in docs_and_scores]\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n filter: Optional[Dict[str, Any]] = None,\n fetch_k: int = 20,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter: (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n fetch_k: (Optional[int]) Number of Documents to fetch before filtering.\n Defaults to 20.\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(\n query, k, filter=filter, fetch_k=fetch_k, **kwargs\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] def max_marginal_relevance_search_with_score_by_vector(\n self,\n embedding: List[float],\n *,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n filter: Optional[Dict[str, Any]] = None,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and their similarity scores selected using the maximal marginal\n relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} {"id": "afc897e84bba-8", "text": "k: Number of Documents to return. 
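One hedged example combining the ``filter``, ``fetch_k``, and ``score_threshold`` arguments documented above (the metadata field is an assumption):

.. code-block:: python

    # fetch_k candidates are retrieved, filtered on metadata, trimmed to k;
    # per the code above, hits whose raw score falls below 0.8 are then dropped.
    docs_and_scores = faiss_store.similarity_search_with_score(
        "query text",
        k=4,
        filter={"source": "notes.txt"},
        fetch_k=50,
        score_threshold=0.8,
    )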
Defaults to 4.\n fetch_k: Number of Documents to fetch before filtering to\n pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents and similarity scores selected by maximal marginal\n relevance and score for each.\n \"\"\"\n scores, indices = self.index.search(\n np.array([embedding], dtype=np.float32),\n fetch_k if filter is None else fetch_k * 2,\n )\n if filter is not None:\n filtered_indices = []\n for i in indices[0]:\n if i == -1:\n # This happens when not enough docs are returned.\n continue\n _id = self.index_to_docstore_id[i]\n doc = self.docstore.search(_id)\n if not isinstance(doc, Document):\n raise ValueError(f\"Could not find document for id {_id}, got {doc}\")\n if all(doc.metadata.get(key) == value for key, value in filter.items()):\n filtered_indices.append(i)\n indices = np.array([filtered_indices])\n # -1 happens when not enough docs are returned.\n embeddings = [self.index.reconstruct(int(i)) for i in indices[0] if i != -1]\n mmr_selected = maximal_marginal_relevance(\n np.array([embedding], dtype=np.float32),\n embeddings,\n k=k,\n lambda_mult=lambda_mult,\n )\n selected_indices = [indices[0][i] for i in mmr_selected]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} {"id": "afc897e84bba-9", "text": "selected_indices = [indices[0][i] for i in mmr_selected]\n selected_scores = [scores[0][i] for i in mmr_selected]\n docs_and_scores = []\n for i, score in zip(selected_indices, selected_scores):\n if i == -1:\n # This happens when not enough docs are returned.\n continue\n _id = self.index_to_docstore_id[i]\n doc = self.docstore.search(_id)\n if not isinstance(doc, Document):\n raise ValueError(f\"Could not find document for id {_id}, got {doc}\")\n docs_and_scores.append((doc, score))\n return docs_and_scores\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n filter: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. 
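A minimal sketch of the diversity trade-off, assuming ``query_vector`` is a precomputed embedding:

.. code-block:: python

    # lambda_mult=1.0 reduces to plain similarity ranking;
    # lambda_mult=0.0 maximizes diversity among the k results.
    docs_and_scores = faiss_store.max_marginal_relevance_search_with_score_by_vector(
        query_vector, k=4, fetch_k=20, lambda_mult=0.25
    )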
Defaults to 4.\n fetch_k: Number of Documents to fetch before filtering to\n pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} {"id": "afc897e84bba-10", "text": "Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n docs_and_scores = self.max_marginal_relevance_search_with_score_by_vector(\n embedding, k=k, fetch_k=fetch_k, lambda_mult=lambda_mult, filter=filter\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n filter: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch before filtering (if needed) to\n pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n embedding = self.embedding_function(query)\n docs = self.max_marginal_relevance_search_by_vector(\n embedding,\n k=k,\n fetch_k=fetch_k,\n lambda_mult=lambda_mult,\n filter=filter,\n **kwargs,\n )\n return docs\n[docs] def merge_from(self, target: FAISS) -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} {"id": "afc897e84bba-11", "text": "[docs] def merge_from(self, target: FAISS) -> None:\n \"\"\"Merge another FAISS object with the current one.\n Add the target FAISS to the current one.\n Args:\n target: FAISS object you wish to merge into the current one\n Returns:\n None.\n \"\"\"\n if not isinstance(self.docstore, AddableMixin):\n raise ValueError(\"Cannot merge with this type of docstore\")\n # Numerical index for target docs are incremental on existing ones\n starting_len = len(self.index_to_docstore_id)\n # Merge two IndexFlatL2\n self.index.merge_from(target.index)\n # Get id and docs from target FAISS object\n full_info = []\n for i, target_id in target.index_to_docstore_id.items():\n doc = target.docstore.search(target_id)\n if not isinstance(doc, Document):\n raise ValueError(\"Document should be returned\")\n full_info.append((starting_len + i, target_id, doc))\n # Add information to docstore and index_to_docstore_id.\n self.docstore.add({_id: doc for _, _id, doc in full_info})\n index_to_id = {index: _id for index, _id, _ in full_info}\n self.index_to_docstore_id.update(index_to_id)\n @classmethod\n def __from(\n cls,\n texts: List[str],\n embeddings: List[List[float]],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n normalize_L2: bool = False,\n **kwargs: Any,\n ) -> FAISS:\n faiss = dependable_faiss_import()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} {"id": "afc897e84bba-12", "text": ") -> 
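A small sketch of ``merge_from``, assuming both stores were built with the same embedding model and an addable in-memory docstore:

.. code-block:: python

    store_a = FAISS.from_texts(["a1", "a2"], embeddings)
    store_b = FAISS.from_texts(["b1"], embeddings)
    store_a.merge_from(store_b)  # store_a now holds all three documents
    assert len(store_a.index_to_docstore_id) == 3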
FAISS:\n faiss = dependable_faiss_import()\n index = faiss.IndexFlatL2(len(embeddings[0]))\n vector = np.array(embeddings, dtype=np.float32)\n if normalize_L2:\n faiss.normalize_L2(vector)\n index.add(vector)\n documents = []\n if ids is None:\n ids = [str(uuid.uuid4()) for _ in texts]\n for i, text in enumerate(texts):\n metadata = metadatas[i] if metadatas else {}\n documents.append(Document(page_content=text, metadata=metadata))\n index_to_id = dict(enumerate(ids))\n docstore = InMemoryDocstore(dict(zip(index_to_id.values(), documents)))\n return cls(\n embedding.embed_query,\n index,\n docstore,\n index_to_id,\n normalize_L2=normalize_L2,\n **kwargs,\n )\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> FAISS:\n \"\"\"Construct FAISS wrapper from raw documents.\n This is a user friendly interface that:\n 1. Embeds documents.\n 2. Creates an in memory docstore\n 3. Initializes the FAISS database\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python\n from langchain import FAISS\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n faiss = FAISS.from_texts(texts, embeddings)\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} {"id": "afc897e84bba-13", "text": "faiss = FAISS.from_texts(texts, embeddings)\n \"\"\"\n embeddings = embedding.embed_documents(texts)\n return cls.__from(\n texts,\n embeddings,\n embedding,\n metadatas=metadatas,\n ids=ids,\n **kwargs,\n )\n[docs] @classmethod\n def from_embeddings(\n cls,\n text_embeddings: List[Tuple[str, List[float]]],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> FAISS:\n \"\"\"Construct FAISS wrapper from raw documents.\n This is a user friendly interface that:\n 1. Embeds documents.\n 2. Creates an in memory docstore\n 3. Initializes the FAISS database\n This is intended to be a quick way to get started.\n Example:\n .. 
code-block:: python\n from langchain import FAISS\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n text_embeddings = embeddings.embed_documents(texts)\n text_embedding_pairs = list(zip(texts, text_embeddings))\n faiss = FAISS.from_embeddings(text_embedding_pairs, embeddings)\n \"\"\"\n texts = [t[0] for t in text_embeddings]\n embeddings = [t[1] for t in text_embeddings]\n return cls.__from(\n texts,\n embeddings,\n embedding,\n metadatas=metadatas,\n ids=ids,\n **kwargs,\n )\n[docs] def save_local(self, folder_path: str, index_name: str = \"index\") -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} {"id": "afc897e84bba-14", "text": "\"\"\"Save FAISS index, docstore, and index_to_docstore_id to disk.\n Args:\n folder_path: folder path to save index, docstore,\n and index_to_docstore_id to.\n index_name: for saving with a specific index file name\n \"\"\"\n path = Path(folder_path)\n path.mkdir(exist_ok=True, parents=True)\n # save index separately since it is not picklable\n faiss = dependable_faiss_import()\n faiss.write_index(\n self.index, str(path / \"{index_name}.faiss\".format(index_name=index_name))\n )\n # save docstore and index_to_docstore_id\n with open(path / \"{index_name}.pkl\".format(index_name=index_name), \"wb\") as f:\n pickle.dump((self.docstore, self.index_to_docstore_id), f)\n[docs] @classmethod\n def load_local(\n cls,\n folder_path: str,\n embeddings: Embeddings,\n index_name: str = \"index\",\n **kwargs: Any,\n ) -> FAISS:\n \"\"\"Load FAISS index, docstore, and index_to_docstore_id from disk.\n Args:\n folder_path: folder path to load index, docstore,\n and index_to_docstore_id from.\n embeddings: Embeddings to use when generating queries\n index_name: for saving with a specific index file name\n \"\"\"\n path = Path(folder_path)\n # load index separately since it is not picklable\n faiss = dependable_faiss_import()\n index = faiss.read_index(\n str(path / \"{index_name}.faiss\".format(index_name=index_name))\n )\n # load docstore and index_to_docstore_id", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} {"id": "afc897e84bba-15", "text": ")\n # load docstore and index_to_docstore_id\n with open(path / \"{index_name}.pkl\".format(index_name=index_name), \"rb\") as f:\n docstore, index_to_docstore_id = pickle.load(f)\n return cls(\n embeddings.embed_query, index, docstore, index_to_docstore_id, **kwargs\n )\n def _similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n filter: Optional[Dict[str, Any]] = None,\n fetch_k: int = 20,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and their similarity scores on a scale from 0 to 1.\"\"\"\n docs_and_scores = self.similarity_search_with_score(\n query,\n k=k,\n filter=filter,\n fetch_k=fetch_k,\n **kwargs,\n )\n return [(doc, self.relevance_score_fn(score)) for doc, score in docs_and_scores]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} {"id": "f114cfb7f7b7-0", "text": "Source code for langchain.vectorstores.qdrant\n\"\"\"Wrapper around Qdrant vector database.\"\"\"\nfrom __future__ import annotations\nimport uuid\nimport warnings\nfrom itertools import islice\nfrom operator import itemgetter\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Dict,\n Iterable,\n List,\n Optional,\n Sequence,\n Tuple,\n Type,\n Union,\n)\nimport numpy as np\nfrom 
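A round-trip sketch for ``save_local``/``load_local``; the folder and index names are placeholders:

.. code-block:: python

    faiss_store.save_local("faiss_index", index_name="my_index")
    # The docstore is unpickled on load, so only load files you created,
    # and pass the same Embeddings object used to build the index.
    restored = FAISS.load_local("faiss_index", embeddings, index_name="my_index")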
langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nif TYPE_CHECKING:\n from qdrant_client.conversions import common_types\n from qdrant_client.http import models as rest\n DictFilter = Dict[str, Union[str, int, bool, dict, list]]\n MetadataFilter = Union[DictFilter, common_types.Filter]\n[docs]class Qdrant(VectorStore):\n \"\"\"Wrapper around Qdrant vector database.\n To use you should have the ``qdrant-client`` package installed.\n Example:\n .. code-block:: python\n from qdrant_client import QdrantClient\n from langchain import Qdrant\n client = QdrantClient()\n collection_name = \"MyCollection\"\n qdrant = Qdrant(client, collection_name, embedding_function)\n \"\"\"\n CONTENT_KEY = \"page_content\"\n METADATA_KEY = \"metadata\"\n VECTOR_NAME = None\n def __init__(\n self,\n client: Any,\n collection_name: str,\n embeddings: Optional[Embeddings] = None,\n content_payload_key: str = CONTENT_KEY,\n metadata_payload_key: str = METADATA_KEY,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} {"id": "f114cfb7f7b7-1", "text": "metadata_payload_key: str = METADATA_KEY,\n vector_name: Optional[str] = VECTOR_NAME,\n embedding_function: Optional[Callable] = None, # deprecated\n ):\n \"\"\"Initialize with necessary components.\"\"\"\n try:\n import qdrant_client\n except ImportError:\n raise ValueError(\n \"Could not import qdrant-client python package. \"\n \"Please install it with `pip install qdrant-client`.\"\n )\n if not isinstance(client, qdrant_client.QdrantClient):\n raise ValueError(\n f\"client should be an instance of qdrant_client.QdrantClient, \"\n f\"got {type(client)}\"\n )\n if embeddings is None and embedding_function is None:\n raise ValueError(\n \"`embeddings` value can't be None. Pass `Embeddings` instance.\"\n )\n if embeddings is not None and embedding_function is not None:\n raise ValueError(\n \"Both `embeddings` and `embedding_function` are passed. \"\n \"Use `embeddings` only.\"\n )\n self.embeddings = embeddings\n self._embeddings_function = embedding_function\n self.client: qdrant_client.QdrantClient = client\n self.collection_name = collection_name\n self.content_payload_key = content_payload_key or self.CONTENT_KEY\n self.metadata_payload_key = metadata_payload_key or self.METADATA_KEY\n self.vector_name = vector_name or self.VECTOR_NAME\n if embedding_function is not None:\n warnings.warn(\n \"Using `embedding_function` is deprecated. 
\"\n \"Pass `Embeddings` instance to `embeddings` instead.\"\n )\n if not isinstance(embeddings, Embeddings):\n warnings.warn(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} {"id": "f114cfb7f7b7-2", "text": ")\n if not isinstance(embeddings, Embeddings):\n warnings.warn(\n \"`embeddings` should be an instance of `Embeddings`.\"\n \"Using `embeddings` as `embedding_function` which is deprecated\"\n )\n self._embeddings_function = embeddings\n self.embeddings = None\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[Sequence[str]] = None,\n batch_size: int = 64,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n ids:\n Optional list of ids to associate with the texts. Ids have to be\n uuid-like strings.\n batch_size:\n How many vectors upload per-request.\n Default: 64\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n from qdrant_client.http import models as rest\n added_ids = []\n texts_iterator = iter(texts)\n metadatas_iterator = iter(metadatas or [])\n ids_iterator = iter(ids or [uuid.uuid4().hex for _ in iter(texts)])\n while batch_texts := list(islice(texts_iterator, batch_size)):\n # Take the corresponding metadata and id for each text in a batch\n batch_metadatas = list(islice(metadatas_iterator, batch_size)) or None\n batch_ids = list(islice(ids_iterator, batch_size))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} {"id": "f114cfb7f7b7-3", "text": "batch_ids = list(islice(ids_iterator, batch_size))\n # Generate the embeddings for all the texts in a batch\n batch_embeddings = self._embed_texts(batch_texts)\n if self.vector_name is not None:\n batch_embeddings = { # type: ignore[assignment]\n self.vector_name: batch_embeddings\n }\n points = rest.Batch.construct(\n ids=batch_ids,\n vectors=batch_embeddings,\n payloads=self._build_payloads(\n batch_texts,\n batch_metadatas,\n self.content_payload_key,\n self.metadata_payload_key,\n ),\n )\n self.client.upsert(collection_name=self.collection_name, points=points)\n added_ids.extend(batch_ids)\n return added_ids\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n filter: Optional[MetadataFilter] = None,\n search_params: Optional[common_types.SearchParams] = None,\n offset: int = 0,\n score_threshold: Optional[float] = None,\n consistency: Optional[common_types.ReadConsistency] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter: Filter by metadata. Defaults to None.\n search_params: Additional search params\n offset:\n Offset of the first result to return.\n May be used to paginate results.\n Note: large offset values may cause performance issues.\n score_threshold:\n Define a minimal score threshold for the result.\n If defined, less similar results will not be returned.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} {"id": "f114cfb7f7b7-4", "text": "If defined, less similar results will not be returned.\n Score of the returned result might be higher or smaller than the\n threshold depending on the Distance function used.\n E.g. 
for cosine similarity only higher scores will be returned.\n consistency:\n Read consistency of the search. Defines how many replicas should be\n queried before returning the result.\n Values:\n - int - number of replicas to query, values should present in all\n queried replicas\n - 'majority' - query all replicas, but return values present in the\n majority of replicas\n - 'quorum' - query the majority of replicas, return values present in\n all of them\n - 'all' - query all replicas, and return values present in all replicas\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n results = self.similarity_search_with_score(\n query,\n k,\n filter=filter,\n search_params=search_params,\n offset=offset,\n score_threshold=score_threshold,\n consistency=consistency,\n **kwargs,\n )\n return list(map(itemgetter(0), results))\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 4,\n filter: Optional[MetadataFilter] = None,\n search_params: Optional[common_types.SearchParams] = None,\n offset: int = 0,\n score_threshold: Optional[float] = None,\n consistency: Optional[common_types.ReadConsistency] = None,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} {"id": "f114cfb7f7b7-5", "text": "Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter: Filter by metadata. Defaults to None.\n search_params: Additional search params\n offset:\n Offset of the first result to return.\n May be used to paginate results.\n Note: large offset values may cause performance issues.\n score_threshold:\n Define a minimal score threshold for the result.\n If defined, less similar results will not be returned.\n Score of the returned result might be higher or smaller than the\n threshold depending on the Distance function used.\n E.g. for cosine similarity only higher scores will be returned.\n consistency:\n Read consistency of the search. 
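A hedged example combining the pagination, threshold, and consistency options described here; ``qdrant`` is assumed to be a connected ``Qdrant`` instance:

.. code-block:: python

    docs = qdrant.similarity_search(
        "query text",
        k=4,
        offset=4,                # skip the first page of results
        score_threshold=0.75,    # with cosine similarity, higher is better
        consistency="majority",  # read from a majority of replicas
    )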
Defines how many replicas should be\n queried before returning the result.\n Values:\n - int - number of replicas to query, values should present in all\n queried replicas\n - 'majority' - query all replicas, but return values present in the\n majority of replicas\n - 'quorum' - query the majority of replicas, return values present in\n all of them\n - 'all' - query all replicas, and return values present in all replicas\n Returns:\n List of documents most similar to the query text and cosine\n distance in float for each.\n Lower score represents more similarity.\n \"\"\"\n return self.similarity_search_with_score_by_vector(\n self._embed_query(query),\n k,\n filter=filter,\n search_params=search_params,\n offset=offset,\n score_threshold=score_threshold,\n consistency=consistency,\n **kwargs,\n )\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} {"id": "f114cfb7f7b7-6", "text": "self,\n embedding: List[float],\n k: int = 4,\n filter: Optional[MetadataFilter] = None,\n search_params: Optional[common_types.SearchParams] = None,\n offset: int = 0,\n score_threshold: Optional[float] = None,\n consistency: Optional[common_types.ReadConsistency] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\n Args:\n embedding: Embedding vector to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter: Filter by metadata. Defaults to None.\n search_params: Additional search params\n offset:\n Offset of the first result to return.\n May be used to paginate results.\n Note: large offset values may cause performance issues.\n score_threshold:\n Define a minimal score threshold for the result.\n If defined, less similar results will not be returned.\n Score of the returned result might be higher or smaller than the\n threshold depending on the Distance function used.\n E.g. for cosine similarity only higher scores will be returned.\n consistency:\n Read consistency of the search. 
Defines how many replicas should be\n queried before returning the result.\n Values:\n - int - number of replicas to query, values should present in all\n queried replicas\n - 'majority' - query all replicas, but return values present in the\n majority of replicas\n - 'quorum' - query the majority of replicas, return values present in\n all of them\n - 'all' - query all replicas, and return values present in all replicas\n Returns:\n List of Documents most similar to the query.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} {"id": "f114cfb7f7b7-7", "text": "Returns:\n List of Documents most similar to the query.\n \"\"\"\n results = self.similarity_search_with_score_by_vector(\n embedding,\n k,\n filter=filter,\n search_params=search_params,\n offset=offset,\n score_threshold=score_threshold,\n consistency=consistency,\n **kwargs,\n )\n return list(map(itemgetter(0), results))\n[docs] def similarity_search_with_score_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n filter: Optional[MetadataFilter] = None,\n search_params: Optional[common_types.SearchParams] = None,\n offset: int = 0,\n score_threshold: Optional[float] = None,\n consistency: Optional[common_types.ReadConsistency] = None,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to embedding vector.\n Args:\n embedding: Embedding vector to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter: Filter by metadata. Defaults to None.\n search_params: Additional search params\n offset:\n Offset of the first result to return.\n May be used to paginate results.\n Note: large offset values may cause performance issues.\n score_threshold:\n Define a minimal score threshold for the result.\n If defined, less similar results will not be returned.\n Score of the returned result might be higher or smaller than the\n threshold depending on the Distance function used.\n E.g. for cosine similarity only higher scores will be returned.\n consistency:\n Read consistency of the search. Defines how many replicas should be", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} {"id": "f114cfb7f7b7-8", "text": "consistency:\n Read consistency of the search. Defines how many replicas should be\n queried before returning the result.\n Values:\n - int - number of replicas to query, values should present in all\n queried replicas\n - 'majority' - query all replicas, but return values present in the\n majority of replicas\n - 'quorum' - query the majority of replicas, return values present in\n all of them\n - 'all' - query all replicas, and return values present in all replicas\n Returns:\n List of documents most similar to the query text and cosine\n distance in float for each.\n Lower score represents more similarity.\n \"\"\"\n if filter is not None and isinstance(filter, dict):\n warnings.warn(\n \"Using dict as a `filter` is deprecated. 
Please use qdrant-client \"\n \"filters directly: \"\n \"https://qdrant.tech/documentation/concepts/filtering/\",\n DeprecationWarning,\n )\n qdrant_filter = self._qdrant_filter_from_dict(filter)\n else:\n qdrant_filter = filter\n query_vector = embedding\n if self.vector_name is not None:\n query_vector = (self.vector_name, embedding) # type: ignore[assignment]\n results = self.client.search(\n collection_name=self.collection_name,\n query_vector=query_vector,\n query_filter=qdrant_filter,\n search_params=search_params,\n limit=k,\n offset=offset,\n with_payload=True,\n with_vectors=False, # Langchain does not expect vectors to be returned\n score_threshold=score_threshold,\n consistency=consistency,\n **kwargs,\n )\n return [\n (\n self._document_from_scored_point(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} {"id": "f114cfb7f7b7-9", "text": ")\n return [\n (\n self._document_from_scored_point(\n result, self.content_payload_key, self.metadata_payload_key\n ),\n result.score,\n )\n for result in results\n ]\n def _similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and relevance scores in the range [0, 1].\n 0 is dissimilar, 1 is most similar.\n Args:\n query: input text\n k: Number of Documents to return. Defaults to 4.\n **kwargs: kwargs to be passed to similarity search. Should include:\n score_threshold: Optional, a floating point value between 0 to 1 to\n filter the resulting set of retrieved docs\n Returns:\n List of Tuples of (doc, similarity_score)\n \"\"\"\n return self.similarity_search_with_score(query, k, **kwargs)\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. 
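Since dict filters are deprecated here in favor of qdrant-client models, a sketch of the native form (the payload field name is an assumption):

.. code-block:: python

    from qdrant_client.http import models as rest

    native_filter = rest.Filter(
        must=[
            rest.FieldCondition(
                key="metadata.color",  # nested under the metadata payload key
                match=rest.MatchValue(value="red"),
            )
        ]
    )
    docs = qdrant.similarity_search("query text", k=4, filter=native_filter)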
Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n Defaults to 20.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} {"id": "f114cfb7f7b7-10", "text": "Defaults to 20.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n query_embedding = self._embed_query(query)\n query_vector = query_embedding\n if self.vector_name is not None:\n query_vector = (self.vector_name, query_vector) # type: ignore[assignment]\n results = self.client.search(\n collection_name=self.collection_name,\n query_vector=query_vector,\n with_payload=True,\n with_vectors=True,\n limit=fetch_k,\n )\n embeddings = [\n result.vector.get(self.vector_name) # type: ignore[index, union-attr]\n if self.vector_name is not None\n else result.vector\n for result in results\n ]\n mmr_selected = maximal_marginal_relevance(\n np.array(query_embedding), embeddings, k=k, lambda_mult=lambda_mult\n )\n return [\n self._document_from_scored_point(\n results[i], self.content_payload_key, self.metadata_payload_key\n )\n for i in mmr_selected\n ]\n[docs] @classmethod\n def from_texts(\n cls: Type[Qdrant],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[Sequence[str]] = None,\n location: Optional[str] = None,\n url: Optional[str] = None,\n port: Optional[int] = 6333,\n grpc_port: int = 6334,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} {"id": "f114cfb7f7b7-11", "text": "grpc_port: int = 6334,\n prefer_grpc: bool = False,\n https: Optional[bool] = None,\n api_key: Optional[str] = None,\n prefix: Optional[str] = None,\n timeout: Optional[float] = None,\n host: Optional[str] = None,\n path: Optional[str] = None,\n collection_name: Optional[str] = None,\n distance_func: str = \"Cosine\",\n content_payload_key: str = CONTENT_KEY,\n metadata_payload_key: str = METADATA_KEY,\n vector_name: Optional[str] = VECTOR_NAME,\n batch_size: int = 64,\n shard_number: Optional[int] = None,\n replication_factor: Optional[int] = None,\n write_consistency_factor: Optional[int] = None,\n on_disk_payload: Optional[bool] = None,\n hnsw_config: Optional[common_types.HnswConfigDiff] = None,\n optimizers_config: Optional[common_types.OptimizersConfigDiff] = None,\n wal_config: Optional[common_types.WalConfigDiff] = None,\n quantization_config: Optional[common_types.QuantizationConfig] = None,\n init_from: Optional[common_types.InitFrom] = None,\n **kwargs: Any,\n ) -> Qdrant:\n \"\"\"Construct Qdrant wrapper from a list of texts.\n Args:\n texts: A list of texts to be indexed in Qdrant.\n embedding: A subclass of `Embeddings`, responsible for text vectorization.\n metadatas:\n An optional list of metadata. If provided it has to be of the same\n length as a list of texts.\n ids:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} {"id": "f114cfb7f7b7-12", "text": "length as a list of texts.\n ids:\n Optional list of ids to associate with the texts. 
Ids have to be\n uuid-like strings.\n location:\n If `:memory:` - use in-memory Qdrant instance.\n If `str` - use it as a `url` parameter.\n If `None` - fallback to relying on `host` and `port` parameters.\n url: either host or str of \"Optional[scheme], host, Optional[port],\n Optional[prefix]\". Default: `None`\n port: Port of the REST API interface. Default: 6333\n grpc_port: Port of the gRPC interface. Default: 6334\n prefer_grpc:\n If true - use gRPC interface whenever possible in custom methods.\n Default: False\n https: If true - use HTTPS(SSL) protocol. Default: None\n api_key: API key for authentication in Qdrant Cloud. Default: None\n prefix:\n If not None - add prefix to the REST URL path.\n Example: service/v1 will result in\n http://localhost:6333/service/v1/{qdrant-endpoint} for REST API.\n Default: None\n timeout:\n Timeout for REST and gRPC API requests.\n Default: 5.0 seconds for REST and unlimited for gRPC\n host:\n Host name of Qdrant service. If url and host are None, set to\n 'localhost'. Default: None\n path:\n Path in which the vectors will be stored while using local mode.\n Default: None\n collection_name:\n Name of the Qdrant collection to be used. If not provided,\n it will be created randomly. Default: None\n distance_func:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} {"id": "f114cfb7f7b7-13", "text": "it will be created randomly. Default: None\n distance_func:\n Distance function. One of: \"Cosine\" / \"Euclid\" / \"Dot\".\n Default: \"Cosine\"\n content_payload_key:\n A payload key used to store the content of the document.\n Default: \"page_content\"\n metadata_payload_key:\n A payload key used to store the metadata of the document.\n Default: \"metadata\"\n vector_name:\n Name of the vector to be used internally in Qdrant.\n Default: None\n batch_size:\n How many vectors to upload per request.\n Default: 64\n shard_number: Number of shards in collection. Default is 1, minimum is 1.\n replication_factor:\n Replication factor for collection. Default is 1, minimum is 1.\n Defines how many copies of each shard will be created.\n Has effect only in distributed mode.\n write_consistency_factor:\n Write consistency factor for collection. Default is 1, minimum is 1.\n Defines how many replicas should apply the operation for us to consider\n it successful. Increasing this number will make the collection more\n resilient to inconsistencies, but will also make it fail if not enough\n replicas are available.\n Does not have any performance impact.\n Has effect only in distributed mode.\n on_disk_payload:\n If true - point's payload will not be stored in memory.\n It will be read from the disk every time it is requested.\n This setting saves RAM by (slightly) increasing the response time.\n Note: those payload values that are involved in filtering and are\n indexed - remain in RAM.\n hnsw_config: Params for HNSW index\n optimizers_config: Params for optimizer", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} {"id": "f114cfb7f7b7-14", "text": "optimizers_config: Params for optimizer\n wal_config: Params for Write-Ahead-Log\n quantization_config:\n Params for quantization, if None - quantization will be disabled\n init_from:\n Use data stored in another collection to initialize this collection\n **kwargs:\n Additional arguments passed directly into REST client initialization\n This is a user-friendly interface that:\n 1. Creates embeddings, one for each text\n 2. 
Initializes the Qdrant database as an in-memory docstore by default\n (and overridable to a remote docstore)\n 3. Adds the text embeddings to the Qdrant database\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python\n from langchain import Qdrant\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n qdrant = Qdrant.from_texts(texts, embeddings, \"localhost\")\n \"\"\"\n try:\n import qdrant_client\n except ImportError:\n raise ValueError(\n \"Could not import qdrant-client python package. \"\n \"Please install it with `pip install qdrant-client`.\"\n )\n from qdrant_client.http import models as rest\n # Just do a single quick embedding to get vector size\n partial_embeddings = embedding.embed_documents(texts[:1])\n vector_size = len(partial_embeddings[0])\n collection_name = collection_name or uuid.uuid4().hex\n distance_func = distance_func.upper()\n client = qdrant_client.QdrantClient(\n location=location,\n url=url,\n port=port,\n grpc_port=grpc_port,\n prefer_grpc=prefer_grpc,\n https=https,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} {"id": "f114cfb7f7b7-15", "text": "prefer_grpc=prefer_grpc,\n https=https,\n api_key=api_key,\n prefix=prefix,\n timeout=timeout,\n host=host,\n path=path,\n **kwargs,\n )\n vectors_config = rest.VectorParams(\n size=vector_size,\n distance=rest.Distance[distance_func],\n )\n # If vector name was provided, we're going to use the named vectors feature\n # with just a single vector.\n if vector_name is not None:\n vectors_config = { # type: ignore[assignment]\n vector_name: vectors_config,\n }\n client.recreate_collection(\n collection_name=collection_name,\n vectors_config=vectors_config,\n shard_number=shard_number,\n replication_factor=replication_factor,\n write_consistency_factor=write_consistency_factor,\n on_disk_payload=on_disk_payload,\n hnsw_config=hnsw_config,\n optimizers_config=optimizers_config,\n wal_config=wal_config,\n quantization_config=quantization_config,\n init_from=init_from,\n timeout=timeout, # type: ignore[arg-type]\n )\n texts_iterator = iter(texts)\n metadatas_iterator = iter(metadatas or [])\n ids_iterator = iter(ids or [uuid.uuid4().hex for _ in iter(texts)])\n while batch_texts := list(islice(texts_iterator, batch_size)):\n # Take the corresponding metadata and id for each text in a batch\n batch_metadatas = list(islice(metadatas_iterator, batch_size)) or None\n batch_ids = list(islice(ids_iterator, batch_size))\n # Generate the embeddings for all the texts in a batch", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} {"id": "f114cfb7f7b7-16", "text": "# Generate the embeddings for all the texts in a batch\n batch_embeddings = embedding.embed_documents(batch_texts)\n if vector_name is not None:\n batch_embeddings = { # type: ignore[assignment]\n vector_name: batch_embeddings\n }\n points = rest.Batch.construct(\n ids=batch_ids,\n vectors=batch_embeddings,\n payloads=cls._build_payloads(\n batch_texts,\n batch_metadatas,\n content_payload_key,\n metadata_payload_key,\n ),\n )\n client.upsert(collection_name=collection_name, points=points)\n return cls(\n client=client,\n collection_name=collection_name,\n embeddings=embedding,\n content_payload_key=content_payload_key,\n metadata_payload_key=metadata_payload_key,\n vector_name=vector_name,\n )\n @classmethod\n def _build_payloads(\n cls,\n texts: Iterable[str],\n metadatas: Optional[List[dict]],\n 
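Expanding the quickstart with a few of the documented options; all values below are illustrative:

.. code-block:: python

    qdrant = Qdrant.from_texts(
        texts,
        embeddings,
        location=":memory:",        # ephemeral in-process instance
        collection_name="my_docs",  # otherwise a random hex name is used
        distance_func="Dot",        # one of "Cosine" / "Euclid" / "Dot"
        batch_size=64,
    )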
content_payload_key: str,\n metadata_payload_key: str,\n ) -> List[dict]:\n payloads = []\n for i, text in enumerate(texts):\n if text is None:\n raise ValueError(\n \"At least one of the texts is None. Please remove it before \"\n \"calling .from_texts or .add_texts on Qdrant instance.\"\n )\n metadata = metadatas[i] if metadatas is not None else None\n payloads.append(\n {\n content_payload_key: text,\n metadata_payload_key: metadata,\n }\n )\n return payloads\n @classmethod\n def _document_from_scored_point(\n cls,\n scored_point: Any,\n content_payload_key: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} {"id": "f114cfb7f7b7-17", "text": "cls,\n scored_point: Any,\n content_payload_key: str,\n metadata_payload_key: str,\n ) -> Document:\n return Document(\n page_content=scored_point.payload.get(content_payload_key),\n metadata=scored_point.payload.get(metadata_payload_key) or {},\n )\n def _build_condition(self, key: str, value: Any) -> List[rest.FieldCondition]:\n from qdrant_client.http import models as rest\n out = []\n if isinstance(value, dict):\n for _key, value in value.items():\n out.extend(self._build_condition(f\"{key}.{_key}\", value))\n elif isinstance(value, list):\n for _value in value:\n if isinstance(_value, dict):\n out.extend(self._build_condition(f\"{key}[]\", _value))\n else:\n out.extend(self._build_condition(f\"{key}\", _value))\n else:\n out.append(\n rest.FieldCondition(\n key=f\"{self.metadata_payload_key}.{key}\",\n match=rest.MatchValue(value=value),\n )\n )\n return out\n def _qdrant_filter_from_dict(\n self, filter: Optional[DictFilter]\n ) -> Optional[rest.Filter]:\n from qdrant_client.http import models as rest\n if not filter:\n return None\n return rest.Filter(\n must=[\n condition\n for key, value in filter.items()\n for condition in self._build_condition(key, value)\n ]\n )\n def _embed_query(self, query: str) -> List[float]:\n \"\"\"Embed query text.\n Used to provide backward compatibility with `embedding_function` argument.\n Args:\n query: Query text.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} {"id": "f114cfb7f7b7-18", "text": "Args:\n query: Query text.\n Returns:\n List of floats representing the query embedding.\n \"\"\"\n if self.embeddings is not None:\n embedding = self.embeddings.embed_query(query)\n else:\n if self._embeddings_function is not None:\n embedding = self._embeddings_function(query)\n else:\n raise ValueError(\"Neither of embeddings or embedding_function is set\")\n return embedding.tolist() if hasattr(embedding, \"tolist\") else embedding\n def _embed_texts(self, texts: Iterable[str]) -> List[List[float]]:\n \"\"\"Embed search texts.\n Used to provide backward compatibility with `embedding_function` argument.\n Args:\n texts: Iterable of texts to embed.\n Returns:\n List of floats representing the texts embedding.\n \"\"\"\n if self.embeddings is not None:\n embeddings = self.embeddings.embed_documents(list(texts))\n if hasattr(embeddings, \"tolist\"):\n embeddings = embeddings.tolist()\n elif self._embeddings_function is not None:\n embeddings = []\n for text in texts:\n embedding = self._embeddings_function(text)\n if hasattr(embeddings, \"tolist\"):\n embedding = embedding.tolist()\n embeddings.append(embedding)\n else:\n raise ValueError(\"Neither of embeddings or embedding_function is set\")\n return embeddings", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} {"id": "9812de8c34d1-0", "text": "Source code for langchain.vectorstores.tair\n\"\"\"Wrapper around Tair Vector.\"\"\"\nfrom __future__ import annotations\nimport json\nimport logging\nimport uuid\nfrom typing import Any, Iterable, List, Optional, Type\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nfrom langchain.vectorstores.base import VectorStore\nlogger = logging.getLogger(__name__)\ndef _uuid_key() -> str:\n return uuid.uuid4().hex\n[docs]class Tair(VectorStore):\n \"\"\"Wrapper around Tair Vector store.\"\"\"\n def __init__(\n self,\n embedding_function: Embeddings,\n url: str,\n index_name: str,\n content_key: str = \"content\",\n metadata_key: str = \"metadata\",\n search_params: Optional[dict] = None,\n **kwargs: Any,\n ):\n self.embedding_function = embedding_function\n self.index_name = index_name\n try:\n from tair import Tair as TairClient\n except ImportError:\n raise ImportError(\n \"Could not import tair python package. \"\n \"Please install it with `pip install tair`.\"\n )\n try:\n # connect to tair from url\n client = TairClient.from_url(url, **kwargs)\n except ValueError as e:\n raise ValueError(f\"Tair failed to connect: {e}\")\n self.client = client\n self.content_key = content_key\n self.metadata_key = metadata_key\n self.search_params = search_params\n[docs] def create_index_if_not_exist(\n self,\n dim: int,\n distance_type: str,\n index_type: str,\n data_type: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/tair.html"} {"id": "9812de8c34d1-1", "text": "index_type: str,\n data_type: str,\n **kwargs: Any,\n ) -> bool:\n index = self.client.tvs_get_index(self.index_name)\n if index is not None:\n logger.info(\"Index already exists\")\n return False\n self.client.tvs_create_index(\n self.index_name,\n dim,\n distance_type,\n index_type,\n data_type,\n **kwargs,\n )\n return True\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Add texts data to an existing index.\"\"\"\n ids = []\n keys = kwargs.get(\"keys\", None)\n # Write data to tair\n pipeline = self.client.pipeline(transaction=False)\n embeddings = self.embedding_function.embed_documents(list(texts))\n for i, text in enumerate(texts):\n # Use provided key otherwise use default key\n key = keys[i] if keys else _uuid_key()\n metadata = metadatas[i] if metadatas else {}\n pipeline.tvs_hset(\n self.index_name,\n key,\n embeddings[i],\n False,\n **{\n self.content_key: text,\n self.metadata_key: json.dumps(metadata),\n },\n )\n ids.append(key)\n pipeline.execute()\n return ids\n[docs] def similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"\n Returns the most similar indexed documents to the query text.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/tair.html"} {"id": "9812de8c34d1-2", "text": "\"\"\"\n Returns the most similar indexed documents to the query text.\n Args:\n query (str): The query text for which to find similar documents.\n k (int): The number of documents to return. 
Default is 4.\n Returns:\n List[Document]: A list of documents that are most similar to the query text.\n \"\"\"\n # Create the embedding vector from the user query\n embedding = self.embedding_function.embed_query(query)\n keys_and_scores = self.client.tvs_knnsearch(\n self.index_name, k, embedding, False, None, **kwargs\n )\n pipeline = self.client.pipeline(transaction=False)\n for key, _ in keys_and_scores:\n pipeline.tvs_hmget(\n self.index_name, key, self.metadata_key, self.content_key\n )\n docs = pipeline.execute()\n return [\n Document(\n page_content=d[1],\n metadata=json.loads(d[0]),\n )\n for d in docs\n ]\n[docs] @classmethod\n def from_texts(\n cls: Type[Tair],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n index_name: str = \"langchain\",\n content_key: str = \"content\",\n metadata_key: str = \"metadata\",\n **kwargs: Any,\n ) -> Tair:\n try:\n from tair import tairvector\n except ImportError:\n raise ValueError(\n \"Could not import tair python package. \"\n \"Please install it with `pip install tair`.\"\n )\n url = get_from_dict_or_env(kwargs, \"tair_url\", \"TAIR_URL\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/tair.html"} {"id": "9812de8c34d1-3", "text": "if \"tair_url\" in kwargs:\n kwargs.pop(\"tair_url\")\n distance_type = tairvector.DistanceMetric.InnerProduct\n if \"distance_type\" in kwargs:\n distance_type = kwargs.pop(\"distance_type\")\n index_type = tairvector.IndexType.HNSW\n if \"index_type\" in kwargs:\n index_type = kwargs.pop(\"index_type\")\n data_type = tairvector.DataType.Float32\n if \"data_type\" in kwargs:\n data_type = kwargs.pop(\"data_type\")\n index_params = {}\n if \"index_params\" in kwargs:\n index_params = kwargs.pop(\"index_params\")\n search_params = {}\n if \"search_params\" in kwargs:\n search_params = kwargs.pop(\"search_params\")\n keys = None\n if \"keys\" in kwargs:\n keys = kwargs.pop(\"keys\")\n try:\n tair_vector_store = cls(\n embedding,\n url,\n index_name,\n content_key=content_key,\n metadata_key=metadata_key,\n search_params=search_params,\n **kwargs,\n )\n except ValueError as e:\n raise ValueError(f\"Tair failed to connect: {e}\")\n # Create embeddings for documents\n embeddings = embedding.embed_documents(texts)\n tair_vector_store.create_index_if_not_exist(\n len(embeddings[0]),\n distance_type,\n index_type,\n data_type,\n **index_params,\n )\n tair_vector_store.add_texts(texts, metadatas, keys=keys)\n return tair_vector_store\n[docs] @classmethod\n def from_documents(\n cls,\n documents: List[Document],\n embedding: Embeddings,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/tair.html"} {"id": "9812de8c34d1-4", "text": "cls,\n documents: List[Document],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n index_name: str = \"langchain\",\n content_key: str = \"content\",\n metadata_key: str = \"metadata\",\n **kwargs: Any,\n ) -> Tair:\n texts = [d.page_content for d in documents]\n metadatas = [d.metadata for d in documents]\n return cls.from_texts(\n texts, embedding, metadatas, index_name, content_key, metadata_key, **kwargs\n )\n[docs] @staticmethod\n def drop_index(\n index_name: str = \"langchain\",\n **kwargs: Any,\n ) -> bool:\n \"\"\"\n Drop an existing index.\n Args:\n index_name (str): Name of the index to drop.\n Returns:\n bool: True if the index is dropped successfully.\n \"\"\"\n try:\n from tair import Tair as TairClient\n except ImportError:\n raise ValueError(\n \"Could 
not import tair python package. \"\n \"Please install it with `pip install tair`.\"\n )\n url = get_from_dict_or_env(kwargs, \"tair_url\", \"TAIR_URL\")\n try:\n if \"tair_url\" in kwargs:\n kwargs.pop(\"tair_url\")\n client = TairClient.from_url(url=url, **kwargs)\n except ValueError as e:\n raise ValueError(f\"Tair connection error: {e}\")\n # delete index\n ret = client.tvs_del_index(index_name)\n if ret == 0:\n # index does not exist\n logger.info(\"Index does not exist\")\n return False", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/tair.html"} {"id": "9812de8c34d1-5", "text": "# index does not exist\n logger.info(\"Index does not exist\")\n return False\n return True\n[docs] @classmethod\n def from_existing_index(\n cls,\n embedding: Embeddings,\n index_name: str = \"langchain\",\n content_key: str = \"content\",\n metadata_key: str = \"metadata\",\n **kwargs: Any,\n ) -> Tair:\n \"\"\"Connect to an existing Tair index.\"\"\"\n url = get_from_dict_or_env(kwargs, \"tair_url\", \"TAIR_URL\")\n search_params = {}\n if \"search_params\" in kwargs:\n search_params = kwargs.pop(\"search_params\")\n return cls(\n embedding,\n url,\n index_name,\n content_key=content_key,\n metadata_key=metadata_key,\n search_params=search_params,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/tair.html"} {"id": "3a0cae07932b-0", "text": "Source code for langchain.vectorstores.clarifai\nfrom __future__ import annotations\nimport logging\nimport os\nimport traceback\nfrom typing import Any, Iterable, List, Optional, Tuple\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nlogger = logging.getLogger(__name__)\n[docs]class Clarifai(VectorStore):\n \"\"\"Wrapper around Clarifai AI platform's vector store.\n To use, you should have the ``clarifai`` python package installed.\n Example:\n .. code-block:: python\n from langchain.vectorstores import Clarifai\n vectorstore = Clarifai(\n user_id=\"USER_ID\",\n app_id=\"APP_ID\",\n pat=\"CLARIFAI_PAT\",\n )\n \"\"\"\n def __init__(\n self,\n user_id: Optional[str] = None,\n app_id: Optional[str] = None,\n pat: Optional[str] = None,\n number_of_docs: Optional[int] = None,\n api_base: Optional[str] = None,\n ) -> None:\n \"\"\"Initialize with Clarifai client.\n Args:\n user_id (Optional[str], optional): User ID. Defaults to None.\n app_id (Optional[str], optional): App ID. Defaults to None.\n pat (Optional[str], optional): Personal access token. Defaults to None.\n number_of_docs (Optional[int], optional): Number of documents to return\n during vector search. Defaults to None.\n api_base (Optional[str], optional): API base. Defaults to None.\n Raises:\n ValueError: If user ID, app ID or personal access token is not provided.\n \"\"\"\n try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clarifai.html"} {"id": "3a0cae07932b-1", "text": "\"\"\"\n try:\n from clarifai.auth.helper import DEFAULT_BASE, ClarifaiAuthHelper\n from clarifai.client import create_stub\n except ImportError:\n raise ValueError(\n \"Could not import clarifai python package. 
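Stepping back to the Tair wrapper completed above, a hedged usage sketch of its `from_texts` / `drop_index` flow; the endpoint URL and texts are placeholder assumptions.

.. code-block:: python

    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import Tair

    TAIR_URL = "redis://user:password@localhost:6379"  # assumed endpoint

    # `tair_url` may also be supplied via the TAIR_URL environment variable.
    vector_store = Tair.from_texts(
        ["doc one", "doc two"],  # placeholder texts
        OpenAIEmbeddings(),
        tair_url=TAIR_URL,
        index_name="langchain",
    )
    docs = vector_store.similarity_search("doc", k=1)
    Tair.drop_index(index_name="langchain", tair_url=TAIR_URL)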
\"\n \"Please install it with `pip install clarifai`.\"\n )\n if api_base is None:\n self._api_base = DEFAULT_BASE\n self._user_id = user_id or os.environ.get(\"CLARIFAI_USER_ID\")\n self._app_id = app_id or os.environ.get(\"CLARIFAI_APP_ID\")\n self._pat = pat or os.environ.get(\"CLARIFAI_PAT\")\n if self._user_id is None or self._app_id is None or self._pat is None:\n raise ValueError(\n \"Could not find CLARIFAI_USER_ID, CLARIFAI_APP_ID or\\\n CLARIFAI_PAT in your environment. \"\n \"Please set those env variables with a valid user ID, \\\n app ID and personal access token \\\n from https://clarifai.com/settings/security.\"\n )\n self._auth = ClarifaiAuthHelper(\n user_id=self._user_id,\n app_id=self._app_id,\n pat=self._pat,\n base=self._api_base,\n )\n self._stub = create_stub(self._auth)\n self._userDataObject = self._auth.get_user_app_id_proto()\n self._number_of_docs = number_of_docs\n def _post_text_input(self, text: str, metadata: dict) -> str:\n \"\"\"Post text to Clarifai and return the ID of the input.\n Args:\n text (str): Text to post.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clarifai.html"} {"id": "3a0cae07932b-2", "text": "Args:\n text (str): Text to post.\n metadata (dict): Metadata to post.\n Returns:\n str: ID of the input.\n \"\"\"\n try:\n from clarifai_grpc.grpc.api import resources_pb2, service_pb2\n from clarifai_grpc.grpc.api.status import status_code_pb2\n from google.protobuf.struct_pb2 import Struct # type: ignore\n except ImportError as e:\n raise ImportError(\n \"Could not import clarifai python package. \"\n \"Please install it with `pip install clarifai`.\"\n ) from e\n input_metadata = Struct()\n input_metadata.update(metadata)\n post_inputs_response = self._stub.PostInputs(\n service_pb2.PostInputsRequest(\n user_app_id=self._userDataObject,\n inputs=[\n resources_pb2.Input(\n data=resources_pb2.Data(\n text=resources_pb2.Text(raw=text),\n metadata=input_metadata,\n )\n )\n ],\n )\n )\n if post_inputs_response.status.code != status_code_pb2.SUCCESS:\n logger.error(post_inputs_response.status)\n raise Exception(\n \"Post inputs failed, status: \" + post_inputs_response.status.description\n )\n input_id = post_inputs_response.inputs[0].id\n return input_id\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Add texts to the Clarifai vectorstore. 
This will push the text\n to a Clarifai application.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clarifai.html"} {"id": "3a0cae07932b-3", "text": "to a Clarifai application.\n The application uses its base workflow to create and store an embedding for each text.\n Make sure you are using a base workflow that is compatible with text\n (such as Language Understanding).\n Args:\n texts (Iterable[str]): Texts to add to the vectorstore.\n metadatas (Optional[List[dict]], optional): Optional list of metadatas.\n ids (Optional[List[str]], optional): Optional list of IDs.\n Returns:\n List[str]: List of IDs of the added texts.\n \"\"\"\n assert len(list(texts)) > 0, \"No texts provided to add to the vectorstore.\"\n if metadatas is not None:\n assert len(list(texts)) == len(\n metadatas\n ), \"Number of texts and metadatas should be the same.\"\n input_ids = []\n for idx, text in enumerate(texts):\n try:\n metadata = metadatas[idx] if metadatas else {}\n input_id = self._post_text_input(text, metadata)\n input_ids.append(input_id)\n logger.debug(f\"Input {input_id} posted successfully.\")\n except Exception as error:\n logger.warning(f\"Post inputs failed: {error}\")\n traceback.print_exc()\n return input_ids\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 4,\n filter: Optional[dict] = None,\n namespace: Optional[str] = None,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Run similarity search with score using Clarifai.\n Args:\n query (str): Query text to search for.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clarifai.html"} {"id": "3a0cae07932b-4", "text": "Args:\n query (str): Query text to search for.\n k (int): Number of results to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata.\n Defaults to None.\n Returns:\n List[Tuple[Document, float]]: List of (document, score) tuples most similar to the query text.\n \"\"\"\n try:\n from clarifai_grpc.grpc.api import resources_pb2, service_pb2\n from clarifai_grpc.grpc.api.status import status_code_pb2\n from google.protobuf import json_format # type: ignore\n except ImportError as e:\n raise ImportError(\n \"Could not import clarifai python package. 
\"\n \"Please install it with `pip install clarifai`.\"\n ) from e\n # Get number of docs to return\n if self._number_of_docs is not None:\n k = self._number_of_docs\n post_annotations_searches_response = self._stub.PostAnnotationsSearches(\n service_pb2.PostAnnotationsSearchesRequest(\n user_app_id=self._userDataObject,\n searches=[\n resources_pb2.Search(\n query=resources_pb2.Query(\n ranks=[\n resources_pb2.Rank(\n annotation=resources_pb2.Annotation(\n data=resources_pb2.Data(\n text=resources_pb2.Text(raw=query),\n )\n )\n )\n ]\n )\n )\n ],\n pagination=service_pb2.Pagination(page=1, per_page=k),\n )\n )\n # Check if search was successful\n if post_annotations_searches_response.status.code != status_code_pb2.SUCCESS:\n raise Exception(\n \"Post searches failed, status: \"\n + post_annotations_searches_response.status.description", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clarifai.html"} {"id": "3a0cae07932b-5", "text": "\"Post searches failed, status: \"\n + post_annotations_searches_response.status.description\n )\n # Retrieve hits\n hits = post_annotations_searches_response.hits\n docs_and_scores = []\n # Iterate over hits and retrieve metadata and text\n for hit in hits:\n metadata = json_format.MessageToDict(hit.input.data.metadata)\n request = requests.get(hit.input.data.text.url)\n # override encoding by real educated guess as provided by chardet\n request.encoding = request.apparent_encoding\n requested_text = request.text\n logger.debug(\n f\"\\tScore {hit.score:.2f} for annotation: {hit.annotation.id}\\\n off input: {hit.input.id}, text: {requested_text[:125]}\"\n )\n docs_and_scores.append(\n (Document(page_content=requested_text, metadata=metadata), hit.score)\n )\n return docs_and_scores\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Run similarity search using Clarifai.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(query, **kwargs)\n return [doc for doc, _ in docs_and_scores]\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Optional[Embeddings] = None,\n metadatas: Optional[List[dict]] = None,\n user_id: Optional[str] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clarifai.html"} {"id": "3a0cae07932b-6", "text": "user_id: Optional[str] = None,\n app_id: Optional[str] = None,\n pat: Optional[str] = None,\n number_of_docs: Optional[int] = None,\n api_base: Optional[str] = None,\n **kwargs: Any,\n ) -> Clarifai:\n \"\"\"Create a Clarifai vectorstore from a list of texts.\n Args:\n user_id (str): User ID.\n app_id (str): App ID.\n texts (List[str]): List of texts to add.\n pat (Optional[str]): Personal access token. Defaults to None.\n number_of_docs (Optional[int]): Number of documents to return\n during vector search. Defaults to None.\n api_base (Optional[str]): API base. 
Defaults to None.\n metadatas (Optional[List[dict]]): Optional list of metadatas.\n Defaults to None.\n Returns:\n Clarifai: Clarifai vectorstore.\n \"\"\"\n clarifai_vector_db = cls(\n user_id=user_id,\n app_id=app_id,\n pat=pat,\n number_of_docs=number_of_docs,\n api_base=api_base,\n )\n clarifai_vector_db.add_texts(texts=texts, metadatas=metadatas)\n return clarifai_vector_db\n[docs] @classmethod\n def from_documents(\n cls,\n documents: List[Document],\n embedding: Optional[Embeddings] = None,\n user_id: Optional[str] = None,\n app_id: Optional[str] = None,\n pat: Optional[str] = None,\n number_of_docs: Optional[int] = None,\n api_base: Optional[str] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clarifai.html"} {"id": "3a0cae07932b-7", "text": "api_base: Optional[str] = None,\n **kwargs: Any,\n ) -> Clarifai:\n \"\"\"Create a Clarifai vectorstore from a list of documents.\n Args:\n user_id (str): User ID.\n app_id (str): App ID.\n documents (List[Document]): List of documents to add.\n pat (Optional[str]): Personal access token. Defaults to None.\n number_of_docs (Optional[int]): Number of documents to return\n during vector search. Defaults to None.\n api_base (Optional[str]): API base. Defaults to None.\n Returns:\n Clarifai: Clarifai vectorstore.\n \"\"\"\n texts = [doc.page_content for doc in documents]\n metadatas = [doc.metadata for doc in documents]\n return cls.from_texts(\n user_id=user_id,\n app_id=app_id,\n texts=texts,\n pat=pat,\n number_of_docs=number_of_docs,\n api_base=api_base,\n metadatas=metadatas,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clarifai.html"} {"id": "8ed80657b2a4-0", "text": "Source code for langchain.vectorstores.milvus\n\"\"\"Wrapper around the Milvus vector database.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, Iterable, List, Optional, Tuple, Union\nfrom uuid import uuid4\nimport numpy as np\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nlogger = logging.getLogger(__name__)\nDEFAULT_MILVUS_CONNECTION = {\n \"host\": \"localhost\",\n \"port\": \"19530\",\n \"user\": \"\",\n \"password\": \"\",\n \"secure\": False,\n}\n[docs]class Milvus(VectorStore):\n \"\"\"Initialize wrapper around the milvus vector database.\n In order to use this you need to have `pymilvus` installed and a\n running Milvus instance.\n See the following documentation for how to run a Milvus instance:\n https://milvus.io/docs/install_standalone-docker.md\n If looking for a hosted Milvus, take a look at this documentation:\n https://zilliz.com/cloud and make use of the Zilliz vectorstore found in\n this project.\n IF USING L2/IP metric IT IS HIGHLY SUGGESTED TO NORMALIZE YOUR DATA.\n Args:\n embedding_function (Embeddings): Function used to embed the text.\n collection_name (str): Which Milvus collection to use. Defaults to\n \"LangChainCollection\".\n connection_args (Optional[dict[str, any]]): The connection args used for\n this class come in the form of a dict.\n consistency_level (str): The consistency level to use for a collection.\n Defaults to \"Session\".", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} {"id": "8ed80657b2a4-1", "text": "Defaults to \"Session\".\n index_params (Optional[dict]): Which index params to use. 
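To tie the Clarifai methods above together, a hedged end-to-end sketch; the credentials are placeholders read from the environment, matching the fallbacks in `__init__`.

.. code-block:: python

    import os
    from langchain.vectorstores import Clarifai

    # Embeddings are computed server-side by the app's base workflow,
    # so no local embedding function is required.
    clarifai_db = Clarifai.from_texts(
        texts=["a short note", "another note"],  # placeholder texts
        user_id=os.environ.get("CLARIFAI_USER_ID"),
        app_id=os.environ.get("CLARIFAI_APP_ID"),
        pat=os.environ.get("CLARIFAI_PAT"),
        number_of_docs=2,  # when set, overrides k at query time
    )
    results = clarifai_db.similarity_search("note", k=2)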
Defaults to\n HNSW/AUTOINDEX depending on service.\n search_params (Optional[dict]): Which search params to use. Defaults to\n default of index.\n drop_old (Optional[bool]): Whether to drop the current collection. Defaults\n to False.\n The connection args used for this class come in the form of a dict;\n here are a few of the options:\n address (str): The actual address of Milvus\n instance. Example address: \"localhost:19530\"\n uri (str): The uri of Milvus instance. Example uri:\n \"http://randomwebsite:19530\",\n \"tcp:foobarsite:19530\",\n \"https://ok.s3.south.com:19530\".\n host (str): The host of Milvus instance. Default at \"localhost\",\n PyMilvus will fill in the default host if only port is provided.\n port (str/int): The port of Milvus instance. Default at 19530, PyMilvus\n will fill in the default port if only host is provided.\n user (str): Which user to connect to the Milvus instance as. If user and\n password are provided, the related header is added to every RPC call.\n password (str): Required when user is provided. The password\n corresponding to the user.\n secure (bool): Default is false. If set to true, tls will be enabled.\n client_key_path (str): If using tls two-way authentication, you need to\n write the client.key path.\n client_pem_path (str): If using tls two-way authentication, you need to\n write the client.pem path.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} {"id": "8ed80657b2a4-2", "text": "write the client.pem path.\n ca_pem_path (str): If using tls two-way authentication, you need to write\n the ca.pem path.\n server_pem_path (str): If using tls one-way authentication, you need to\n write the server.pem path.\n server_name (str): If using tls, you need to write the common name.\n Example:\n .. code-block:: python\n from langchain import Milvus\n from langchain.embeddings import OpenAIEmbeddings\n embedding = OpenAIEmbeddings()\n # Connect to a milvus instance on localhost\n milvus_store = Milvus(\n embedding_function=embedding,\n collection_name=\"LangChainCollection\",\n drop_old=True,\n )\n Raises:\n ValueError: If the pymilvus python package is not installed.\n \"\"\"\n def __init__(\n self,\n embedding_function: Embeddings,\n collection_name: str = \"LangChainCollection\",\n connection_args: Optional[dict[str, Any]] = None,\n consistency_level: str = \"Session\",\n index_params: Optional[dict] = None,\n search_params: Optional[dict] = None,\n drop_old: Optional[bool] = False,\n ):\n \"\"\"Initialize the Milvus vector store.\"\"\"\n try:\n from pymilvus import Collection, utility\n except ImportError:\n raise ValueError(\n \"Could not import pymilvus python package. 
\"\n \"Please install it with `pip install pymilvus`.\"\n )\n # Default search params when one is not provided.\n self.default_search_params = {", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} {"id": "8ed80657b2a4-3", "text": "# Default search params when one is not provided.\n self.default_search_params = {\n \"IVF_FLAT\": {\"metric_type\": \"L2\", \"params\": {\"nprobe\": 10}},\n \"IVF_SQ8\": {\"metric_type\": \"L2\", \"params\": {\"nprobe\": 10}},\n \"IVF_PQ\": {\"metric_type\": \"L2\", \"params\": {\"nprobe\": 10}},\n \"HNSW\": {\"metric_type\": \"L2\", \"params\": {\"ef\": 10}},\n \"RHNSW_FLAT\": {\"metric_type\": \"L2\", \"params\": {\"ef\": 10}},\n \"RHNSW_SQ\": {\"metric_type\": \"L2\", \"params\": {\"ef\": 10}},\n \"RHNSW_PQ\": {\"metric_type\": \"L2\", \"params\": {\"ef\": 10}},\n \"IVF_HNSW\": {\"metric_type\": \"L2\", \"params\": {\"nprobe\": 10, \"ef\": 10}},\n \"ANNOY\": {\"metric_type\": \"L2\", \"params\": {\"search_k\": 10}},\n \"AUTOINDEX\": {\"metric_type\": \"L2\", \"params\": {}},\n }\n self.embedding_func = embedding_function\n self.collection_name = collection_name\n self.index_params = index_params\n self.search_params = search_params\n self.consistency_level = consistency_level\n # In order for a collection to be compatible, pk needs to be auto'id and int\n self._primary_field = \"pk\"\n # In order for compatiblility, the text field will need to be called \"text\"\n self._text_field = \"text\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} {"id": "8ed80657b2a4-4", "text": "self._text_field = \"text\"\n # In order for compatbility, the vector field needs to be called \"vector\"\n self._vector_field = \"vector\"\n self.fields: list[str] = []\n # Create the connection to the server\n if connection_args is None:\n connection_args = DEFAULT_MILVUS_CONNECTION\n self.alias = self._create_connection_alias(connection_args)\n self.col: Optional[Collection] = None\n # Grab the existing colection if it exists\n if utility.has_collection(self.collection_name, using=self.alias):\n self.col = Collection(\n self.collection_name,\n using=self.alias,\n )\n # If need to drop old, drop it\n if drop_old and isinstance(self.col, Collection):\n self.col.drop()\n self.col = None\n # Initialize the vector store\n self._init()\n def _create_connection_alias(self, connection_args: dict) -> str:\n \"\"\"Create the connection to the Milvus server.\"\"\"\n from pymilvus import MilvusException, connections\n # Grab the connection arguments that are used for checking existing connection\n host: str = connection_args.get(\"host\", None)\n port: Union[str, int] = connection_args.get(\"port\", None)\n address: str = connection_args.get(\"address\", None)\n uri: str = connection_args.get(\"uri\", None)\n user = connection_args.get(\"user\", None)\n # Order of use is host/port, uri, address\n if host is not None and port is not None:\n given_address = str(host) + \":\" + str(port)\n elif uri is not None:\n given_address = uri.split(\"https://\")[1]\n elif address is not None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} {"id": "8ed80657b2a4-5", "text": "elif address is not None:\n given_address = address\n else:\n given_address = None\n logger.debug(\"Missing standard address type for reuse atttempt\")\n # User defaults to empty string when getting connection info\n if user is not None:\n tmp_user = user\n else:\n tmp_user = \"\"\n # If a 
valid address was given, then check if a connection exists\n if given_address is not None:\n for con in connections.list_connections():\n addr = connections.get_connection_addr(con[0])\n if (\n con[1]\n and (\"address\" in addr)\n and (addr[\"address\"] == given_address)\n and (\"user\" in addr)\n and (addr[\"user\"] == tmp_user)\n ):\n logger.debug(\"Using previous connection: %s\", con[0])\n return con[0]\n # Generate a new connection if one doesn't exist\n alias = uuid4().hex\n try:\n connections.connect(alias=alias, **connection_args)\n logger.debug(\"Created new connection using: %s\", alias)\n return alias\n except MilvusException as e:\n logger.error(\"Failed to create new connection using: %s\", alias)\n raise e\n def _init(\n self, embeddings: Optional[list] = None, metadatas: Optional[list[dict]] = None\n ) -> None:\n if embeddings is not None:\n self._create_collection(embeddings, metadatas)\n self._extract_fields()\n self._create_index()\n self._create_search_params()\n self._load()\n def _create_collection(\n self, embeddings: list, metadatas: Optional[list[dict]] = None\n ) -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} {"id": "8ed80657b2a4-6", "text": ") -> None:\n from pymilvus import (\n Collection,\n CollectionSchema,\n DataType,\n FieldSchema,\n MilvusException,\n )\n from pymilvus.orm.types import infer_dtype_bydata\n # Determine embedding dim\n dim = len(embeddings[0])\n fields = []\n # Determine metadata schema\n if metadatas:\n # Create FieldSchema for each entry in metadata.\n for key, value in metadatas[0].items():\n # Infer the corresponding datatype of the metadata\n dtype = infer_dtype_bydata(value)\n # Datatype isn't compatible\n if dtype == DataType.UNKNOWN or dtype == DataType.NONE:\n logger.error(\n \"Failure to create collection, unrecognized dtype for key: %s\",\n key,\n )\n raise ValueError(f\"Unrecognized datatype for {key}.\")\n # Datatype is a string/varchar equivalent\n elif dtype == DataType.VARCHAR:\n fields.append(FieldSchema(key, DataType.VARCHAR, max_length=65_535))\n else:\n fields.append(FieldSchema(key, dtype))\n # Create the text field\n fields.append(\n FieldSchema(self._text_field, DataType.VARCHAR, max_length=65_535)\n )\n # Create the primary key field\n fields.append(\n FieldSchema(\n self._primary_field, DataType.INT64, is_primary=True, auto_id=True\n )\n )\n # Create the vector field, supports binary or float vectors\n fields.append(\n FieldSchema(self._vector_field, infer_dtype_bydata(embeddings[0]), dim=dim)\n )\n # Create the schema for the collection\n schema = CollectionSchema(fields)\n # Create the collection", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} {"id": "8ed80657b2a4-7", "text": "schema = CollectionSchema(fields)\n # Create the collection\n try:\n self.col = Collection(\n name=self.collection_name,\n schema=schema,\n consistency_level=self.consistency_level,\n using=self.alias,\n )\n except MilvusException as e:\n logger.error(\n \"Failed to create collection: %s error: %s\", self.collection_name, e\n )\n raise e\n def _extract_fields(self) -> None:\n \"\"\"Grab the existing fields from the Collection\"\"\"\n from pymilvus import Collection\n if isinstance(self.col, Collection):\n schema = self.col.schema\n for x in schema.fields:\n self.fields.append(x.name)\n # Since primary field is auto-id, no need to track it\n self.fields.remove(self._primary_field)\n def _get_index(self) -> Optional[dict[str, 
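The connection-alias logic above reuses an already-open connection when the resolved address and user match. A hedged sketch, assuming a local Milvus instance:

.. code-block:: python

    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import Milvus

    embeddings = OpenAIEmbeddings()
    connection_args = {"host": "localhost", "port": "19530"}  # assumed local instance

    # Both stores resolve to the same connection alias, since the
    # address and user derived from `connection_args` are identical.
    store_a = Milvus(embeddings, collection_name="collection_a", connection_args=connection_args)
    store_b = Milvus(embeddings, collection_name="collection_b", connection_args=connection_args)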
Any]]:\n \"\"\"Return the vector index information if it exists\"\"\"\n from pymilvus import Collection\n if isinstance(self.col, Collection):\n for x in self.col.indexes:\n if x.field_name == self._vector_field:\n return x.to_dict()\n return None\n def _create_index(self) -> None:\n \"\"\"Create an index on the collection\"\"\"\n from pymilvus import Collection, MilvusException\n if isinstance(self.col, Collection) and self._get_index() is None:\n try:\n # If no index params, use a default HNSW based one\n if self.index_params is None:\n self.index_params = {\n \"metric_type\": \"L2\",\n \"index_type\": \"HNSW\",\n \"params\": {\"M\": 8, \"efConstruction\": 64},\n }", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} {"id": "8ed80657b2a4-8", "text": "}\n try:\n self.col.create_index(\n self._vector_field,\n index_params=self.index_params,\n using=self.alias,\n )\n # If default did not work, most likely on Zilliz Cloud\n except MilvusException:\n # Use AUTOINDEX based index\n self.index_params = {\n \"metric_type\": \"L2\",\n \"index_type\": \"AUTOINDEX\",\n \"params\": {},\n }\n self.col.create_index(\n self._vector_field,\n index_params=self.index_params,\n using=self.alias,\n )\n logger.debug(\n \"Successfully created an index on collection: %s\",\n self.collection_name,\n )\n except MilvusException as e:\n logger.error(\n \"Failed to create an index on collection: %s\", self.collection_name\n )\n raise e\n def _create_search_params(self) -> None:\n \"\"\"Generate search params based on the current index type\"\"\"\n from pymilvus import Collection\n if isinstance(self.col, Collection) and self.search_params is None:\n index = self._get_index()\n if index is not None:\n index_type: str = index[\"index_param\"][\"index_type\"]\n metric_type: str = index[\"index_param\"][\"metric_type\"]\n self.search_params = self.default_search_params[index_type]\n self.search_params[\"metric_type\"] = metric_type\n def _load(self) -> None:\n \"\"\"Load the collection if available.\"\"\"\n from pymilvus import Collection\n if isinstance(self.col, Collection) and self._get_index() is not None:\n self.col.load()\n[docs] def add_texts(\n self,\n texts: Iterable[str],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} {"id": "8ed80657b2a4-9", "text": "[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n timeout: Optional[int] = None,\n batch_size: int = 1000,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Insert text data into Milvus.\n Inserting data when the collection has not been made yet will result\n in creating a new Collection. The data of the first entity decides\n the schema of the new collection, the dim is extracted from the first\n embedding and the columns are decided by the first metadata dict.\n Metadata keys will need to be present for all inserted values. At\n the moment there is no None equivalent in Milvus.\n Args:\n texts (Iterable[str]): The texts to embed, it is assumed\n that they all fit in memory.\n metadatas (Optional[List[dict]]): Metadata dicts attached to each of\n the texts. Defaults to None.\n timeout (Optional[int]): Timeout for each batch insert. 
Defaults\n to None.\n batch_size (int, optional): Batch size to use for insertion.\n Defaults to 1000.\n Raises:\n MilvusException: Failure to add texts\n Returns:\n List[str]: The resulting keys for each inserted element.\n \"\"\"\n from pymilvus import Collection, MilvusException\n texts = list(texts)\n try:\n embeddings = self.embedding_func.embed_documents(texts)\n except NotImplementedError:\n embeddings = [self.embedding_func.embed_query(x) for x in texts]\n if len(embeddings) == 0:\n logger.debug(\"Nothing to insert, skipping.\")\n return []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} {"id": "8ed80657b2a4-10", "text": "logger.debug(\"Nothing to insert, skipping.\")\n return []\n # If the collection hasn't been initialized yet, perform all steps to do so\n if not isinstance(self.col, Collection):\n self._init(embeddings, metadatas)\n # Dict to hold all insert columns\n insert_dict: dict[str, list] = {\n self._text_field: texts,\n self._vector_field: embeddings,\n }\n # Collect the metadata into the insert dict.\n if metadatas is not None:\n for d in metadatas:\n for key, value in d.items():\n if key in self.fields:\n insert_dict.setdefault(key, []).append(value)\n # Total insert count\n vectors: list = insert_dict[self._vector_field]\n total_count = len(vectors)\n pks: list[str] = []\n assert isinstance(self.col, Collection)\n for i in range(0, total_count, batch_size):\n # Grab end index\n end = min(i + batch_size, total_count)\n # Convert dict to list of lists batch for insertion\n insert_list = [insert_dict[x][i:end] for x in self.fields]\n # Insert into the collection.\n try:\n res: Collection\n res = self.col.insert(insert_list, timeout=timeout, **kwargs)\n pks.extend(res.primary_keys)\n except MilvusException as e:\n logger.error(\n \"Failed to insert batch starting at entity: %s/%s\", i, total_count\n )\n raise e\n return pks\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n param: Optional[dict] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} {"id": "8ed80657b2a4-11", "text": "k: int = 4,\n param: Optional[dict] = None,\n expr: Optional[str] = None,\n timeout: Optional[int] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Perform a similarity search against the query string.\n Args:\n query (str): The text to search.\n k (int, optional): How many results to return. Defaults to 4.\n param (dict, optional): The search params for the index type.\n Defaults to None.\n expr (str, optional): Filtering expression. Defaults to None.\n timeout (int, optional): How long to wait before timeout error.\n Defaults to None.\n kwargs: Collection.search() keyword arguments.\n Returns:\n List[Document]: Document results for search.\n \"\"\"\n if self.col is None:\n logger.debug(\"No existing collection to search.\")\n return []\n res = self.similarity_search_with_score(\n query=query, k=k, param=param, expr=expr, timeout=timeout, **kwargs\n )\n return [doc for doc, _ in res]\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n param: Optional[dict] = None,\n expr: Optional[str] = None,\n timeout: Optional[int] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Perform a similarity search against an embedding vector.\n Args:\n embedding (List[float]): The embedding vector to search.\n k (int, optional): How many results to return. 
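Continuing the sketch above, the docstring for `add_texts` means the first inserted row fixes the collection schema, so every later row must carry the same metadata keys; a hypothetical illustration:

.. code-block:: python

    texts = ["alpha", "beta"]
    metadatas = [
        {"source": "a.txt", "page": 1},
        {"source": "b.txt", "page": 2},  # same keys as the first row; Milvus has no None
    ]
    pks = store_a.add_texts(texts, metadatas=metadatas, batch_size=1000)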
Defaults to 4.\n param (dict, optional): The search params for the index type.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} {"id": "8ed80657b2a4-12", "text": "param (dict, optional): The search params for the index type.\n Defaults to None.\n expr (str, optional): Filtering expression. Defaults to None.\n timeout (int, optional): How long to wait before timeout error.\n Defaults to None.\n kwargs: Collection.search() keyword arguments.\n Returns:\n List[Document]: Document results for search.\n \"\"\"\n if self.col is None:\n logger.debug(\"No existing collection to search.\")\n return []\n res = self.similarity_search_with_score_by_vector(\n embedding=embedding, k=k, param=param, expr=expr, timeout=timeout, **kwargs\n )\n return [doc for doc, _ in res]\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 4,\n param: Optional[dict] = None,\n expr: Optional[str] = None,\n timeout: Optional[int] = None,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Perform a search on a query string and return results with score.\n For more information about the search parameters, take a look at the pymilvus\n documentation found here:\n https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md\n Args:\n query (str): The text being searched.\n k (int, optional): The number of results to return. Defaults to 4.\n param (dict): The search params for the specified index.\n Defaults to None.\n expr (str, optional): Filtering expression. Defaults to None.\n timeout (int, optional): How long to wait before timeout error.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} {"id": "8ed80657b2a4-13", "text": "timeout (int, optional): How long to wait before timeout error.\n Defaults to None.\n kwargs: Collection.search() keyword arguments.\n Returns:\n List[Tuple[Document, float]]: Result doc and score.\n \"\"\"\n if self.col is None:\n logger.debug(\"No existing collection to search.\")\n return []\n # Embed the query text.\n embedding = self.embedding_func.embed_query(query)\n res = self.similarity_search_with_score_by_vector(\n embedding=embedding, k=k, param=param, expr=expr, timeout=timeout, **kwargs\n )\n return res\n[docs] def similarity_search_with_score_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n param: Optional[dict] = None,\n expr: Optional[str] = None,\n timeout: Optional[int] = None,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Perform a search with an embedding vector and return results with score.\n For more information about the search parameters, take a look at the pymilvus\n documentation found here:\n https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md\n Args:\n embedding (List[float]): The embedding vector being searched.\n k (int, optional): The number of results to return. Defaults to 4.\n param (dict): The search params for the specified index.\n Defaults to None.\n expr (str, optional): Filtering expression. 
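Continuing the same sketch, a hedged example of a filtered, scored search; the field name "source" assumes it is part of the collection's metadata schema, and `expr` uses Milvus boolean expression syntax:

.. code-block:: python

    hits = store_a.similarity_search_with_score(
        "alpha",
        k=2,
        expr='source == "a.txt"',                           # metadata filter
        param={"metric_type": "L2", "params": {"ef": 64}},  # optional index-specific params
    )
    for doc, score in hits:
        print(score, doc.page_content)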
Defaults to None.\n timeout (int, optional): How long to wait before timeout error.\n Defaults to None.\n kwargs: Collection.search() keyword arguments.\n Returns:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} {"id": "8ed80657b2a4-14", "text": "Defaults to None.\n kwargs: Collection.search() keyword arguments.\n Returns:\n List[Tuple[Document, float]]: Result doc and score.\n \"\"\"\n if self.col is None:\n logger.debug(\"No existing collection to search.\")\n return []\n if param is None:\n param = self.search_params\n # Determine result metadata fields.\n output_fields = self.fields[:]\n output_fields.remove(self._vector_field)\n # Perform the search.\n res = self.col.search(\n data=[embedding],\n anns_field=self._vector_field,\n param=param,\n limit=k,\n expr=expr,\n output_fields=output_fields,\n timeout=timeout,\n **kwargs,\n )\n # Organize results.\n ret = []\n for result in res[0]:\n meta = {x: result.entity.get(x) for x in output_fields}\n doc = Document(page_content=meta.pop(self._text_field), metadata=meta)\n pair = (doc, result.score)\n ret.append(pair)\n return ret\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n param: Optional[dict] = None,\n expr: Optional[str] = None,\n timeout: Optional[int] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Perform a search and return results that are reordered by MMR.\n Args:\n query (str): The text being searched.\n k (int, optional): How many results to give. Defaults to 4.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} {"id": "8ed80657b2a4-15", "text": "k (int, optional): How many results to give. Defaults to 4.\n fetch_k (int, optional): Total results to select k from.\n Defaults to 20.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n param (dict, optional): The search params for the specified index.\n Defaults to None.\n expr (str, optional): Filtering expression. Defaults to None.\n timeout (int, optional): How long to wait before timeout error.\n Defaults to None.\n kwargs: Collection.search() keyword arguments.\n Returns:\n List[Document]: Document results for search.\n \"\"\"\n if self.col is None:\n logger.debug(\"No existing collection to search.\")\n return []\n embedding = self.embedding_func.embed_query(query)\n return self.max_marginal_relevance_search_by_vector(\n embedding=embedding,\n k=k,\n fetch_k=fetch_k,\n lambda_mult=lambda_mult,\n param=param,\n expr=expr,\n timeout=timeout,\n **kwargs,\n )\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: list[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n param: Optional[dict] = None,\n expr: Optional[str] = None,\n timeout: Optional[int] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Perform a search and return results that are reordered by MMR.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} {"id": "8ed80657b2a4-16", "text": "\"\"\"Perform a search and return results that are reordered by MMR.\n Args:\n embedding (List[float]): The embedding vector being searched.\n k (int, optional): How many results to give. 
Defaults to 4.\n fetch_k (int, optional): Total results to select k from.\n Defaults to 20.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5\n param (dict, optional): The search params for the specified index.\n Defaults to None.\n expr (str, optional): Filtering expression. Defaults to None.\n timeout (int, optional): How long to wait before timeout error.\n Defaults to None.\n kwargs: Collection.search() keyword arguments.\n Returns:\n List[Document]: Document results for search.\n \"\"\"\n if self.col is None:\n logger.debug(\"No existing collection to search.\")\n return []\n if param is None:\n param = self.search_params\n # Determine result metadata fields.\n output_fields = self.fields[:]\n output_fields.remove(self._vector_field)\n # Perform the search.\n res = self.col.search(\n data=[embedding],\n anns_field=self._vector_field,\n param=param,\n limit=fetch_k,\n expr=expr,\n output_fields=output_fields,\n timeout=timeout,\n **kwargs,\n )\n # Organize results.\n ids = []\n documents = []\n scores = []\n for result in res[0]:\n meta = {x: result.entity.get(x) for x in output_fields}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} {"id": "8ed80657b2a4-17", "text": "meta = {x: result.entity.get(x) for x in output_fields}\n doc = Document(page_content=meta.pop(self._text_field), metadata=meta)\n documents.append(doc)\n scores.append(result.score)\n ids.append(result.id)\n vectors = self.col.query(\n expr=f\"{self._primary_field} in {ids}\",\n output_fields=[self._primary_field, self._vector_field],\n timeout=timeout,\n )\n # Reorganize the results from query to match search order.\n vectors = {x[self._primary_field]: x[self._vector_field] for x in vectors}\n ordered_result_embeddings = [vectors[x] for x in ids]\n # Get the new order of results.\n new_ordering = maximal_marginal_relevance(\n np.array(embedding), ordered_result_embeddings, k=k, lambda_mult=lambda_mult\n )\n # Reorder the values and return.\n ret = []\n for x in new_ordering:\n # Function can return -1 index\n if x == -1:\n break\n else:\n ret.append(documents[x])\n return ret\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n collection_name: str = \"LangChainCollection\",\n connection_args: dict[str, Any] = DEFAULT_MILVUS_CONNECTION,\n consistency_level: str = \"Session\",\n index_params: Optional[dict] = None,\n search_params: Optional[dict] = None,\n drop_old: bool = False,\n **kwargs: Any,\n ) -> Milvus:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} {"id": "8ed80657b2a4-18", "text": "**kwargs: Any,\n ) -> Milvus:\n \"\"\"Create a Milvus collection, indexes it with HNSW, and insert data.\n Args:\n texts (List[str]): Text data.\n embedding (Embeddings): Embedding function.\n metadatas (Optional[List[dict]]): Metadata for each text if it exists.\n Defaults to None.\n collection_name (str, optional): Collection name to use. Defaults to\n \"LangChainCollection\".\n connection_args (dict[str, Any], optional): Connection args to use. Defaults\n to DEFAULT_MILVUS_CONNECTION.\n consistency_level (str, optional): Which consistency level to use. Defaults\n to \"Session\".\n index_params (Optional[dict], optional): Which index_params to use. 
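For the MMR methods above, a hedged sketch continuing the same setup: `fetch_k` nearest hits are retrieved first, their vectors are queried back by primary key, and `maximal_marginal_relevance` re-ranks them down to `k` diverse documents.

.. code-block:: python

    diverse_docs = store_a.max_marginal_relevance_search(
        "alpha",
        k=4,
        fetch_k=20,
        lambda_mult=0.5,  # 0 = maximum diversity, 1 = minimum diversity
    )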
Defaults\n to None.\n search_params (Optional[dict], optional): Which search params to use.\n Defaults to None.\n drop_old (Optional[bool], optional): Whether to drop the collection with\n that name if it exists. Defaults to False.\n Returns:\n Milvus: Milvus Vector Store\n \"\"\"\n vector_db = cls(\n embedding_function=embedding,\n collection_name=collection_name,\n connection_args=connection_args,\n consistency_level=consistency_level,\n index_params=index_params,\n search_params=search_params,\n drop_old=drop_old,\n **kwargs,\n )\n vector_db.add_texts(texts=texts, metadatas=metadatas)\n return vector_db", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} {"id": "1a33f1530265-0", "text": "Source code for langchain.vectorstores.pgembedding\n\"\"\"VectorStore wrapper around a Postgres database.\"\"\"\nfrom __future__ import annotations\nimport logging\nimport uuid\nfrom typing import Any, Dict, Iterable, List, Optional, Tuple, Type\nimport sqlalchemy\nfrom sqlalchemy import func\nfrom sqlalchemy.dialects.postgresql import JSON, UUID\nfrom sqlalchemy.orm import Session, declarative_base, relationship\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nfrom langchain.vectorstores.base import VectorStore\nBase = declarative_base() # type: Any\nADA_TOKEN_COUNT = 1536\n_LANGCHAIN_DEFAULT_COLLECTION_NAME = \"langchain\"\n[docs]class BaseModel(Base):\n __abstract__ = True\n uuid = sqlalchemy.Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)\n[docs]class CollectionStore(BaseModel):\n __tablename__ = \"langchain_pg_collection\"\n name = sqlalchemy.Column(sqlalchemy.String)\n cmetadata = sqlalchemy.Column(JSON)\n embeddings = relationship(\n \"EmbeddingStore\",\n back_populates=\"collection\",\n passive_deletes=True,\n )\n[docs] @classmethod\n def get_by_name(cls, session: Session, name: str) -> Optional[\"CollectionStore\"]:\n return session.query(cls).filter(cls.name == name).first() # type: ignore\n[docs] @classmethod\n def get_or_create(\n cls,\n session: Session,\n name: str,\n cmetadata: Optional[dict] = None,\n ) -> Tuple[\"CollectionStore\", bool]:\n \"\"\"\n Get or create a collection.\n Returns [Collection, bool] where the bool is True if the collection was created.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pgembedding.html"} {"id": "1a33f1530265-1", "text": "\"\"\"\n created = False\n collection = cls.get_by_name(session, name)\n if collection:\n return collection, created\n collection = cls(name=name, cmetadata=cmetadata)\n session.add(collection)\n session.commit()\n created = True\n return collection, created\n[docs]class EmbeddingStore(BaseModel):\n __tablename__ = \"langchain_pg_embedding\"\n collection_id = sqlalchemy.Column(\n UUID(as_uuid=True),\n sqlalchemy.ForeignKey(\n f\"{CollectionStore.__tablename__}.uuid\",\n ondelete=\"CASCADE\",\n ),\n )\n collection = relationship(CollectionStore, back_populates=\"embeddings\")\n embedding = sqlalchemy.Column(sqlalchemy.ARRAY(sqlalchemy.REAL)) # type: ignore\n document = sqlalchemy.Column(sqlalchemy.String, nullable=True)\n cmetadata = sqlalchemy.Column(JSON, nullable=True)\n # custom_id : any user defined id\n custom_id = sqlalchemy.Column(sqlalchemy.String, nullable=True)\nclass QueryResult:\n EmbeddingStore: EmbeddingStore\n distance: float\n[docs]class PGEmbedding(VectorStore):\n \"\"\"\n VectorStore implementation 
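Finally, `from_texts` wraps Milvus construction and insertion in one call; a hedged sketch with placeholder data:

.. code-block:: python

    vector_store = Milvus.from_texts(
        texts=["alpha", "beta"],
        embedding=embeddings,
        metadatas=[{"source": "a.txt"}, {"source": "b.txt"}],
        connection_args={"host": "localhost", "port": "19530"},
        drop_old=True,  # recreate the collection instead of appending
    )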
using Postgres and the pg_embedding extension.\n pg_embedding uses sequential scan by default, but you can create an HNSW index\n using the create_hnsw_index method.\n - `connection_string` is a postgres connection string.\n - `embedding_function` is any embedding function implementing\n `langchain.embeddings.base.Embeddings` interface.\n - `collection_name` is the name of the collection to use. (default: langchain)\n - NOTE: This is not the name of the table, but the name of the collection.\n The tables will be created when initializing the store (if not exists)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pgembedding.html"} {"id": "1a33f1530265-2", "text": "The tables will be created when initializing the store (if not exists)\n So, make sure the user has the right permissions to create tables.\n - `distance_strategy` is the distance strategy to use. (default: EUCLIDEAN)\n - `EUCLIDEAN` is the euclidean distance.\n - `pre_delete_collection` if True, will delete the collection if it exists.\n (default: False)\n - Useful for testing.\n \"\"\"\n def __init__(\n self,\n connection_string: str,\n embedding_function: Embeddings,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n collection_metadata: Optional[dict] = None,\n pre_delete_collection: bool = False,\n logger: Optional[logging.Logger] = None,\n ) -> None:\n self.connection_string = connection_string\n self.embedding_function = embedding_function\n self.collection_name = collection_name\n self.collection_metadata = collection_metadata\n self.pre_delete_collection = pre_delete_collection\n self.logger = logger or logging.getLogger(__name__)\n self.__post_init__()\n def __post_init__(\n self,\n ) -> None:\n self._conn = self.connect()\n self.create_hnsw_extension()\n self.create_tables_if_not_exists()\n self.create_collection()\n[docs] def connect(self) -> sqlalchemy.engine.Connection:\n engine = sqlalchemy.create_engine(self.connection_string)\n conn = engine.connect()\n return conn\n[docs] def create_hnsw_extension(self) -> None:\n try:\n with Session(self._conn) as session:\n statement = sqlalchemy.text(\"CREATE EXTENSION IF NOT EXISTS embedding\")\n session.execute(statement)\n session.commit()\n except Exception as e:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pgembedding.html"} {"id": "1a33f1530265-3", "text": "session.execute(statement)\n session.commit()\n except Exception as e:\n self.logger.exception(e)\n[docs] def create_tables_if_not_exists(self) -> None:\n with self._conn.begin():\n Base.metadata.create_all(self._conn)\n[docs] def drop_tables(self) -> None:\n with self._conn.begin():\n Base.metadata.drop_all(self._conn)\n[docs] def create_collection(self) -> None:\n if self.pre_delete_collection:\n self.delete_collection()\n with Session(self._conn) as session:\n CollectionStore.get_or_create(\n session, self.collection_name, cmetadata=self.collection_metadata\n )\n[docs] def create_hnsw_index(\n self,\n max_elements: int = 10000,\n dims: int = ADA_TOKEN_COUNT,\n m: int = 8,\n ef_construction: int = 16,\n ef_search: int = 16,\n ) -> None:\n create_index_query = sqlalchemy.text(\n \"CREATE INDEX IF NOT EXISTS langchain_pg_embedding_idx \"\n \"ON langchain_pg_embedding USING hnsw (embedding) \"\n \"WITH (\"\n \"maxelements = {}, \"\n \"dims = {}, \"\n \"m = {}, \"\n \"efconstruction = {}, \"\n \"efsearch = {}\"\n \");\".format(max_elements, dims, m, ef_construction, ef_search)\n )\n # Execute the queries\n try:\n with 
Session(self._conn) as session:\n # Create the HNSW index\n session.execute(create_index_query)\n session.commit()\n print(\"HNSW extension and index created successfully.\")\n except Exception as e:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pgembedding.html"} {"id": "1a33f1530265-4", "text": "print(\"HNSW extension and index created successfully.\")\n except Exception as e:\n print(f\"Failed to create HNSW extension or index: {e}\")\n[docs] def delete_collection(self) -> None:\n self.logger.debug(\"Trying to delete collection\")\n with Session(self._conn) as session:\n collection = self.get_collection(session)\n if not collection:\n self.logger.warning(\"Collection not found\")\n return\n session.delete(collection)\n session.commit()\n[docs] def get_collection(self, session: Session) -> Optional[\"CollectionStore\"]:\n return CollectionStore.get_by_name(session, self.collection_name)\n @classmethod\n def _initialize_from_embeddings(\n cls,\n texts: List[str],\n embeddings: List[List[float]],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n pre_delete_collection: bool = False,\n **kwargs: Any,\n ) -> PGEmbedding:\n if ids is None:\n ids = [str(uuid.uuid1()) for _ in texts]\n if not metadatas:\n metadatas = [{} for _ in texts]\n connection_string = cls.get_connection_string(kwargs)\n store = cls(\n connection_string=connection_string,\n collection_name=collection_name,\n embedding_function=embedding,\n pre_delete_collection=pre_delete_collection,\n )\n store.add_embeddings(\n texts=texts, embeddings=embeddings, metadatas=metadatas, ids=ids, **kwargs\n )\n return store\n[docs] def add_embeddings(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pgembedding.html"} {"id": "1a33f1530265-5", "text": ")\n return store\n[docs] def add_embeddings(\n self,\n texts: List[str],\n embeddings: List[List[float]],\n metadatas: List[dict],\n ids: List[str],\n **kwargs: Any,\n ) -> None:\n with Session(self._conn) as session:\n collection = self.get_collection(session)\n if not collection:\n raise ValueError(\"Collection not found\")\n for text, metadata, embedding, id in zip(texts, metadatas, embeddings, ids):\n embedding_store = EmbeddingStore(\n embedding=embedding,\n document=text,\n cmetadata=metadata,\n custom_id=id,\n )\n collection.embeddings.append(embedding_store)\n session.add(embedding_store)\n session.commit()\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n if ids is None:\n ids = [str(uuid.uuid1()) for _ in texts]\n embeddings = self.embedding_function.embed_documents(list(texts))\n if not metadatas:\n metadatas = [{} for _ in texts]\n with Session(self._conn) as session:\n collection = self.get_collection(session)\n if not collection:\n raise ValueError(\"Collection not found\")\n for text, metadata, embedding, id in zip(texts, metadatas, embeddings, ids):\n embedding_store = EmbeddingStore(\n embedding=embedding,\n document=text,\n cmetadata=metadata,\n custom_id=id,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pgembedding.html"} {"id": "1a33f1530265-6", "text": "cmetadata=metadata,\n custom_id=id,\n )\n collection.embeddings.append(embedding_store)\n session.add(embedding_store)\n session.commit()\n return ids\n[docs] 
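A hedged setup sketch for the store above; the connection string and import path are placeholder assumptions (the connection string can also come from POSTGRES_CONNECTION_STRING), and the index parameters mirror the defaults of `create_hnsw_index`:

.. code-block:: python

    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import PGEmbedding

    store = PGEmbedding(
        connection_string="postgresql+psycopg2://user:pass@localhost:5432/db",  # placeholder
        embedding_function=OpenAIEmbeddings(),
        collection_name="langchain",
    )
    ids = store.add_texts(
        ["first note", "second note"],
        metadatas=[{"topic": "a"}, {"topic": "b"}],
    )
    # Without an HNSW index, pg_embedding falls back to sequential scan.
    store.create_hnsw_index(max_elements=10000, dims=1536, m=8, ef_construction=16, ef_search=16)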
def similarity_search(\n self,\n query: str,\n k: int = 4,\n filter: Optional[dict] = None,\n **kwargs: Any,\n ) -> List[Document]:\n embedding = self.embedding_function.embed_query(text=query)\n return self.similarity_search_by_vector(\n embedding=embedding,\n k=k,\n filter=filter,\n )\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 4,\n filter: Optional[dict] = None,\n ) -> List[Tuple[Document, float]]:\n embedding = self.embedding_function.embed_query(query)\n docs = self.similarity_search_with_score_by_vector(\n embedding=embedding, k=k, filter=filter\n )\n return docs\n[docs] def similarity_search_with_score_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n filter: Optional[dict] = None,\n ) -> List[Tuple[Document, float]]:\n with Session(self._conn) as session:\n collection = self.get_collection(session)\n set_enable_seqscan_stmt = sqlalchemy.text(\"SET enable_seqscan = off\")\n session.execute(set_enable_seqscan_stmt)\n if not collection:\n raise ValueError(\"Collection not found\")\n filter_by = EmbeddingStore.collection_id == collection.uuid\n if filter is not None:\n filter_clauses = []\n for key, value in filter.items():", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pgembedding.html"} {"id": "1a33f1530265-7", "text": "filter_clauses = []\n for key, value in filter.items():\n IN = \"in\"\n if isinstance(value, dict) and IN in map(str.lower, value):\n value_case_insensitive = {\n k.lower(): v for k, v in value.items()\n }\n filter_by_metadata = EmbeddingStore.cmetadata[key].astext.in_(\n value_case_insensitive[IN]\n )\n filter_clauses.append(filter_by_metadata)\n else:\n filter_by_metadata = EmbeddingStore.cmetadata[\n key\n ].astext == str(value)\n filter_clauses.append(filter_by_metadata)\n filter_by = sqlalchemy.and_(filter_by, *filter_clauses)\n results: List[QueryResult] = (\n session.query(\n EmbeddingStore,\n func.abs(EmbeddingStore.embedding.op(\"<->\")(embedding)).label(\n \"distance\"\n ),\n ) # Specify the columns you need here, e.g., EmbeddingStore.embedding\n .filter(filter_by)\n .order_by(\n func.abs(EmbeddingStore.embedding.op(\"<->\")(embedding)).asc()\n ) # Using PostgreSQL specific operator with the correct column name\n .limit(k)\n .all()\n )\n docs = [\n (\n Document(\n page_content=result.EmbeddingStore.document,\n metadata=result.EmbeddingStore.cmetadata,\n ),\n result.distance if self.embedding_function is not None else None,\n )\n for result in results\n ]\n return docs\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n filter: Optional[dict] = None,\n **kwargs: Any,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pgembedding.html"} {"id": "1a33f1530265-8", "text": "filter: Optional[dict] = None,\n **kwargs: Any,\n ) -> List[Document]:\n docs_and_scores = self.similarity_search_with_score_by_vector(\n embedding=embedding, k=k, filter=filter\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] @classmethod\n def from_texts(\n cls: Type[PGEmbedding],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n ids: Optional[List[str]] = None,\n pre_delete_collection: bool = False,\n **kwargs: Any,\n ) -> PGEmbedding:\n embeddings = embedding.embed_documents(list(texts))\n return cls._initialize_from_embeddings(\n texts,\n embeddings,\n embedding,\n metadatas=metadatas,\n ids=ids,\n 
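The filter handling in ``similarity_search_with_score_by_vector`` supports exact-match values as well as a case-insensitive ``in`` operator over metadata keys. A hedged sketch continuing the store above; the metadata keys and values are invented for illustration, and comparisons go through ``astext``, i.e. as strings:

.. code-block:: python

    # Exact match: cmetadata["topic"] must equal "physics".
    docs = store.similarity_search(
        "quantum entanglement", k=4, filter={"topic": "physics"}
    )

    # Membership: cmetadata["year"] must be one of the listed (text) values.
    docs_and_scores = store.similarity_search_with_score(
        "quantum entanglement",
        k=4,
        filter={"year": {"in": ["2021", "2022"]}},
    )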
collection_name=collection_name,\n pre_delete_collection=pre_delete_collection,\n **kwargs,\n )\n[docs] @classmethod\n def from_embeddings(\n cls,\n text_embeddings: List[Tuple[str, List[float]]],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n ids: Optional[List[str]] = None,\n pre_delete_collection: bool = False,\n **kwargs: Any,\n ) -> PGEmbedding:\n texts = [t[0] for t in text_embeddings]\n embeddings = [t[1] for t in text_embeddings]\n return cls._initialize_from_embeddings(\n texts,\n embeddings,\n embedding,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pgembedding.html"} {"id": "1a33f1530265-9", "text": "texts,\n embeddings,\n embedding,\n metadatas=metadatas,\n ids=ids,\n collection_name=collection_name,\n pre_delete_collection=pre_delete_collection,\n **kwargs,\n )\n[docs] @classmethod\n def from_existing_index(\n cls: Type[PGEmbedding],\n embedding: Embeddings,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n pre_delete_collection: bool = False,\n **kwargs: Any,\n ) -> PGEmbedding:\n connection_string = cls.get_connection_string(kwargs)\n store = cls(\n connection_string=connection_string,\n collection_name=collection_name,\n embedding_function=embedding,\n pre_delete_collection=pre_delete_collection,\n )\n return store\n[docs] @classmethod\n def get_connection_string(cls, kwargs: Dict[str, Any]) -> str:\n connection_string: str = get_from_dict_or_env(\n data=kwargs,\n key=\"connection_string\",\n env_key=\"POSTGRES_CONNECTION_STRING\",\n )\n if not connection_string:\n raise ValueError(\n \"Postgres connection string is required. \"\n \"Either pass it as a parameter \"\n \"or set the POSTGRES_CONNECTION_STRING environment variable.\"\n )\n return connection_string\n[docs] @classmethod\n def from_documents(\n cls: Type[PGEmbedding],\n documents: List[Document],\n embedding: Embeddings,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n ids: Optional[List[str]] = None,\n pre_delete_collection: bool = False,\n **kwargs: Any,\n ) -> PGEmbedding:\n texts = [d.page_content for d in documents]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pgembedding.html"} {"id": "1a33f1530265-10", "text": "texts = [d.page_content for d in documents]\n metadatas = [d.metadata for d in documents]\n connection_string = cls.get_connection_string(kwargs)\n kwargs[\"connection_string\"] = connection_string\n return cls.from_texts(\n texts=texts,\n pre_delete_collection=pre_delete_collection,\n embedding=embedding,\n metadatas=metadatas,\n ids=ids,\n collection_name=collection_name,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pgembedding.html"} {"id": "818bea0305a3-0", "text": "Source code for langchain.vectorstores.lancedb\n\"\"\"Wrapper around LanceDB vector database\"\"\"\nfrom __future__ import annotations\nimport uuid\nfrom typing import Any, Iterable, List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\n[docs]class LanceDB(VectorStore):\n \"\"\"Wrapper around LanceDB vector database.\n To use, you should have ``lancedb`` python package installed.\n Example:\n .. 
code-block:: python\n db = lancedb.connect('./lancedb')\n table = db.open_table('my_table')\n vectorstore = LanceDB(table, embedding_function)\n vectorstore.add_texts(['text1', 'text2'])\n result = vectorstore.similarity_search('text1')\n \"\"\"\n def __init__(\n self,\n connection: Any,\n embedding: Embeddings,\n vector_key: Optional[str] = \"vector\",\n id_key: Optional[str] = \"id\",\n text_key: Optional[str] = \"text\",\n ):\n \"\"\"Initialize with Lance DB connection\"\"\"\n try:\n import lancedb\n except ImportError:\n raise ValueError(\n \"Could not import lancedb python package. \"\n \"Please install it with `pip install lancedb`.\"\n )\n if not isinstance(connection, lancedb.db.LanceTable):\n raise ValueError(\n \"connection should be an instance of lancedb.db.LanceTable, \",\n f\"got {type(connection)}\",\n )\n self._connection = connection\n self._embedding = embedding\n self._vector_key = vector_key\n self._id_key = id_key\n self._text_key = text_key", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/lancedb.html"} {"id": "818bea0305a3-1", "text": "self._id_key = id_key\n self._text_key = text_key\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Turn texts into embedding and add it to the database\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n ids: Optional list of ids to associate with the texts.\n Returns:\n List of ids of the added texts.\n \"\"\"\n # Embed texts and create documents\n docs = []\n ids = ids or [str(uuid.uuid4()) for _ in texts]\n embeddings = self._embedding.embed_documents(list(texts))\n for idx, text in enumerate(texts):\n embedding = embeddings[idx]\n metadata = metadatas[idx] if metadatas else {}\n docs.append(\n {\n self._vector_key: embedding,\n self._id_key: ids[idx],\n self._text_key: text,\n **metadata,\n }\n )\n self._connection.add(docs)\n return ids\n[docs] def similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return documents most similar to the query\n Args:\n query: String to query the vectorstore with.\n k: Number of documents to return.\n Returns:\n List of documents most similar to the query.\n \"\"\"\n embedding = self._embedding.embed_query(query)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/lancedb.html"} {"id": "818bea0305a3-2", "text": "\"\"\"\n embedding = self._embedding.embed_query(query)\n docs = self._connection.search(embedding).limit(k).to_df()\n return [\n Document(\n page_content=row[self._text_key],\n metadata=row[docs.columns != self._text_key],\n )\n for _, row in docs.iterrows()\n ]\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n connection: Any = None,\n vector_key: Optional[str] = \"vector\",\n id_key: Optional[str] = \"id\",\n text_key: Optional[str] = \"text\",\n **kwargs: Any,\n ) -> LanceDB:\n instance = LanceDB(\n connection,\n embedding,\n vector_key,\n id_key,\n text_key,\n )\n instance.add_texts(texts, metadatas=metadatas, **kwargs)\n return instance", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/lancedb.html"} {"id": "fee98655b8e7-0", "text": "Source code for langchain.vectorstores.base\n\"\"\"Interface for vector stores.\"\"\"\nfrom __future__ import 
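Beyond the constructor example in the LanceDB docstring above, the ``from_texts`` classmethod can populate an existing table in one call. A sketch under the assumption that a LanceTable whose schema matches ``vector_key``/``id_key``/``text_key`` already exists; the table name and texts are illustrative:

.. code-block:: python

    import lancedb
    from langchain.embeddings.openai import OpenAIEmbeddings
    from langchain.vectorstores import LanceDB

    embeddings = OpenAIEmbeddings()
    db = lancedb.connect("./lancedb")
    # Assumed pre-existing table with "vector", "id", and "text" columns.
    table = db.open_table("my_table")

    store = LanceDB.from_texts(
        ["first document", "second document"],
        embeddings,
        connection=table,
    )
    hits = store.similarity_search("first", k=1)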
annotations\nimport asyncio\nimport warnings\nfrom abc import ABC, abstractmethod\nfrom functools import partial\nfrom typing import (\n Any,\n ClassVar,\n Collection,\n Dict,\n Iterable,\n List,\n Optional,\n Tuple,\n Type,\n TypeVar,\n)\nfrom pydantic import Field, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import BaseRetriever\nVST = TypeVar(\"VST\", bound=\"VectorStore\")\n[docs]class VectorStore(ABC):\n \"\"\"Interface for vector stores.\"\"\"\n[docs] @abstractmethod\n def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n kwargs: vectorstore specific parameters\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n[docs] def delete(self, ids: Optional[List[str]] = None, **kwargs: Any) -> Optional[bool]:\n \"\"\"Delete by vector ID or other criteria.\n Args:\n ids: List of ids to delete.\n **kwargs: Other keyword arguments that subclasses might use.\n Returns:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"} {"id": "fee98655b8e7-1", "text": "**kwargs: Other keyword arguments that subclasses might use.\n Returns:\n Optional[bool]: True if deletion is successful,\n False otherwise, None if not implemented.\n \"\"\"\n raise NotImplementedError(\"delete method must be implemented by subclass.\")\n[docs] async def aadd_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\"\"\"\n raise NotImplementedError\n[docs] def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:\n \"\"\"Run more documents through the embeddings and add to the vectorstore.\n Args:\n documents (List[Document]): Documents to add to the vectorstore.\n Returns:\n List[str]: List of IDs of the added texts.\n \"\"\"\n # TODO: Handle the case where the user doesn't provide ids on the Collection\n texts = [doc.page_content for doc in documents]\n metadatas = [doc.metadata for doc in documents]\n return self.add_texts(texts, metadatas, **kwargs)\n[docs] async def aadd_documents(\n self, documents: List[Document], **kwargs: Any\n ) -> List[str]:\n \"\"\"Run more documents through the embeddings and add to the vectorstore.\n Args:\n documents (List[Document]): Documents to add to the vectorstore.\n Returns:\n List[str]: List of IDs of the added texts.\n \"\"\"\n texts = [doc.page_content for doc in documents]\n metadatas = [doc.metadata for doc in documents]\n return await self.aadd_texts(texts, metadatas, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"} {"id": "fee98655b8e7-2", "text": "return await self.aadd_texts(texts, metadatas, **kwargs)\n[docs] def search(self, query: str, search_type: str, **kwargs: Any) -> List[Document]:\n \"\"\"Return docs most similar to query using specified search type.\"\"\"\n if search_type == \"similarity\":\n return self.similarity_search(query, **kwargs)\n elif search_type == \"mmr\":\n return 
self.max_marginal_relevance_search(query, **kwargs)\n else:\n raise ValueError(\n f\"search_type of {search_type} not allowed. Expected \"\n \"search_type to be 'similarity' or 'mmr'.\"\n )\n[docs] async def asearch(\n self, query: str, search_type: str, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query using specified search type.\"\"\"\n if search_type == \"similarity\":\n return await self.asimilarity_search(query, **kwargs)\n elif search_type == \"mmr\":\n return await self.amax_marginal_relevance_search(query, **kwargs)\n else:\n raise ValueError(\n f\"search_type of {search_type} not allowed. Expected \"\n \"search_type to be 'similarity' or 'mmr'.\"\n )\n[docs] @abstractmethod\n def similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\"\"\"\n[docs] def similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"} {"id": "fee98655b8e7-3", "text": "k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and relevance scores in the range [0, 1].\n 0 is dissimilar, 1 is most similar.\n Args:\n query: input text\n k: Number of Documents to return. Defaults to 4.\n **kwargs: kwargs to be passed to similarity search. Should include:\n score_threshold: Optional, a floating point value between 0 to 1 to\n filter the resulting set of retrieved docs\n Returns:\n List of Tuples of (doc, similarity_score)\n \"\"\"\n docs_and_similarities = self._similarity_search_with_relevance_scores(\n query, k=k, **kwargs\n )\n if any(\n similarity < 0.0 or similarity > 1.0\n for _, similarity in docs_and_similarities\n ):\n warnings.warn(\n \"Relevance scores must be between\"\n f\" 0 and 1, got {docs_and_similarities}\"\n )\n score_threshold = kwargs.get(\"score_threshold\")\n if score_threshold is not None:\n docs_and_similarities = [\n (doc, similarity)\n for doc, similarity in docs_and_similarities\n if similarity >= score_threshold\n ]\n if len(docs_and_similarities) == 0:\n warnings.warn(\n \"No relevant docs were retrieved using the relevance score\"\n f\" threshold {score_threshold}\"\n )\n return docs_and_similarities\n def _similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"} {"id": "fee98655b8e7-4", "text": "k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and relevance scores, normalized on a scale from 0 to 1.\n 0 is dissimilar, 1 is most similar.\n \"\"\"\n raise NotImplementedError\n[docs] async def asimilarity_search_with_relevance_scores(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\"\"\"\n # This is a temporary workaround to make the similarity search\n # asynchronous. The proper solution is to make the similarity search\n # asynchronous in the vector store implementations.\n func = partial(self.similarity_search_with_relevance_scores, query, k, **kwargs)\n return await asyncio.get_event_loop().run_in_executor(None, func)\n[docs] async def asimilarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\"\"\"\n # This is a temporary workaround to make the similarity search\n # asynchronous. 
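Given any concrete store, the threshold logic above filters on the normalized score and warns when nothing survives. A minimal sketch, where ``store`` stands in for any ``VectorStore`` whose ``_similarity_search_with_relevance_scores`` is implemented:

.. code-block:: python

    # Keep only documents scoring at least 0.8 on the [0, 1] scale;
    # out-of-range scores trigger the warning shown above.
    results = store.similarity_search_with_relevance_scores(
        "what did the president say?", k=4, score_threshold=0.8
    )
    for doc, score in results:
        print(f"{score:.3f}  {doc.page_content[:60]}")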
The proper solution is to make the similarity search\n # asynchronous in the vector store implementations.\n func = partial(self.similarity_search, query, k, **kwargs)\n return await asyncio.get_event_loop().run_in_executor(None, func)\n[docs] def similarity_search_by_vector(\n self, embedding: List[float], k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\n Args:\n embedding: Embedding to look up documents similar to.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"} {"id": "fee98655b8e7-5", "text": "Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query vector.\n \"\"\"\n raise NotImplementedError\n[docs] async def asimilarity_search_by_vector(\n self, embedding: List[float], k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\"\"\"\n # This is a temporary workaround to make the similarity search\n # asynchronous. The proper solution is to make the similarity search\n # asynchronous in the vector store implementations.\n func = partial(self.similarity_search_by_vector, embedding, k, **kwargs)\n return await asyncio.get_event_loop().run_in_executor(None, func)\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"} {"id": "fee98655b8e7-6", "text": "List of Documents selected by maximal marginal relevance.\n \"\"\"\n raise NotImplementedError\n[docs] async def amax_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\"\"\"\n # This is a temporary workaround to make the similarity search\n # asynchronous. The proper solution is to make the similarity search\n # asynchronous in the vector store implementations.\n func = partial(\n self.max_marginal_relevance_search, query, k, fetch_k, lambda_mult, **kwargs\n )\n return await asyncio.get_event_loop().run_in_executor(None, func)\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. 
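Because these async variants just offload the synchronous call to the event loop's default executor, they can already be awaited today and will transparently benefit once stores gain native async implementations. A sketch, again assuming a concrete ``store``:

.. code-block:: python

    import asyncio

    async def main() -> None:
        # Internally runs the sync similarity_search in an executor.
        docs = await store.asimilarity_search("query text", k=4)
        print(len(docs))

    asyncio.run(main())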
Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"} {"id": "fee98655b8e7-7", "text": "Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n raise NotImplementedError\n[docs] async def amax_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\"\"\"\n raise NotImplementedError\n[docs] @classmethod\n def from_documents(\n cls: Type[VST],\n documents: List[Document],\n embedding: Embeddings,\n **kwargs: Any,\n ) -> VST:\n \"\"\"Return VectorStore initialized from documents and embeddings.\"\"\"\n texts = [d.page_content for d in documents]\n metadatas = [d.metadata for d in documents]\n return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)\n[docs] @classmethod\n async def afrom_documents(\n cls: Type[VST],\n documents: List[Document],\n embedding: Embeddings,\n **kwargs: Any,\n ) -> VST:\n \"\"\"Return VectorStore initialized from documents and embeddings.\"\"\"\n texts = [d.page_content for d in documents]\n metadatas = [d.metadata for d in documents]\n return await cls.afrom_texts(texts, embedding, metadatas=metadatas, **kwargs)\n[docs] @classmethod\n @abstractmethod\n def from_texts(\n cls: Type[VST],\n texts: List[str],\n embedding: Embeddings,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"} {"id": "fee98655b8e7-8", "text": "texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> VST:\n \"\"\"Return VectorStore initialized from texts and embeddings.\"\"\"\n[docs] @classmethod\n async def afrom_texts(\n cls: Type[VST],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> VST:\n \"\"\"Return VectorStore initialized from texts and embeddings.\"\"\"\n raise NotImplementedError\n[docs] def as_retriever(self, **kwargs: Any) -> VectorStoreRetriever:\n return VectorStoreRetriever(vectorstore=self, **kwargs)\n[docs]class VectorStoreRetriever(BaseRetriever):\n vectorstore: VectorStore\n search_type: str = \"similarity\"\n search_kwargs: dict = Field(default_factory=dict)\n allowed_search_types: ClassVar[Collection[str]] = (\n \"similarity\",\n \"similarity_score_threshold\",\n \"mmr\",\n )\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] @root_validator()\n def validate_search_type(cls, values: Dict) -> Dict:\n \"\"\"Validate search type.\"\"\"\n search_type = values[\"search_type\"]\n if search_type not in cls.allowed_search_types:\n raise ValueError(\n f\"search_type of {search_type} not allowed. 
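``as_retriever`` wraps a store in the ``VectorStoreRetriever`` defined next; note that ``validate_search_type`` (continued below) insists on a float ``score_threshold`` whenever ``search_type="similarity_score_threshold"``. A hedged sketch with a stand-in ``store``:

.. code-block:: python

    # MMR retrieval over a wider candidate pool.
    retriever = store.as_retriever(
        search_type="mmr",
        search_kwargs={"k": 4, "fetch_k": 20, "lambda_mult": 0.5},
    )

    # Score-thresholded retrieval; omitting the float score_threshold
    # would raise a ValueError during validation.
    retriever = store.as_retriever(
        search_type="similarity_score_threshold",
        search_kwargs={"score_threshold": 0.5},
    )
    docs = retriever.get_relevant_documents("query text")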
Valid values are: \"\n f\"{cls.allowed_search_types}\"\n )\n if search_type == \"similarity_score_threshold\":\n score_threshold = values[\"search_kwargs\"].get(\"score_threshold\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"} {"id": "fee98655b8e7-9", "text": "score_threshold = values[\"search_kwargs\"].get(\"score_threshold\")\n if (score_threshold is None) or (not isinstance(score_threshold, float)):\n raise ValueError(\n \"`score_threshold` is not specified with a float value(0~1) \"\n \"in `search_kwargs`.\"\n )\n return values\n def _get_relevant_documents(\n self, query: str, *, run_manager: CallbackManagerForRetrieverRun\n ) -> List[Document]:\n if self.search_type == \"similarity\":\n docs = self.vectorstore.similarity_search(query, **self.search_kwargs)\n elif self.search_type == \"similarity_score_threshold\":\n docs_and_similarities = (\n self.vectorstore.similarity_search_with_relevance_scores(\n query, **self.search_kwargs\n )\n )\n docs = [doc for doc, _ in docs_and_similarities]\n elif self.search_type == \"mmr\":\n docs = self.vectorstore.max_marginal_relevance_search(\n query, **self.search_kwargs\n )\n else:\n raise ValueError(f\"search_type of {self.search_type} not allowed.\")\n return docs\n async def _aget_relevant_documents(\n self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun\n ) -> List[Document]:\n if self.search_type == \"similarity\":\n docs = await self.vectorstore.asimilarity_search(\n query, **self.search_kwargs\n )\n elif self.search_type == \"similarity_score_threshold\":\n docs_and_similarities = (\n await self.vectorstore.asimilarity_search_with_relevance_scores(\n query, **self.search_kwargs\n )\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"} {"id": "fee98655b8e7-10", "text": "query, **self.search_kwargs\n )\n )\n docs = [doc for doc, _ in docs_and_similarities]\n elif self.search_type == \"mmr\":\n docs = await self.vectorstore.amax_marginal_relevance_search(\n query, **self.search_kwargs\n )\n else:\n raise ValueError(f\"search_type of {self.search_type} not allowed.\")\n return docs\n[docs] def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:\n \"\"\"Add documents to vectorstore.\"\"\"\n return self.vectorstore.add_documents(documents, **kwargs)\n[docs] async def aadd_documents(\n self, documents: List[Document], **kwargs: Any\n ) -> List[str]:\n \"\"\"Add documents to vectorstore.\"\"\"\n return await self.vectorstore.aadd_documents(documents, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"} {"id": "b1e3639cde6a-0", "text": "Source code for langchain.vectorstores.utils\n\"\"\"Utility functions for working with vectors and vectorstores.\"\"\"\nfrom typing import List\nimport numpy as np\nfrom langchain.math_utils import cosine_similarity\n[docs]def maximal_marginal_relevance(\n query_embedding: np.ndarray,\n embedding_list: list,\n lambda_mult: float = 0.5,\n k: int = 4,\n) -> List[int]:\n \"\"\"Calculate maximal marginal relevance.\"\"\"\n if min(k, len(embedding_list)) <= 0:\n return []\n if query_embedding.ndim == 1:\n query_embedding = np.expand_dims(query_embedding, axis=0)\n similarity_to_query = cosine_similarity(query_embedding, embedding_list)[0]\n most_similar = int(np.argmax(similarity_to_query))\n idxs = [most_similar]\n selected = np.array([embedding_list[most_similar]])\n while len(idxs) < min(k, len(embedding_list)):\n 
best_score = -np.inf\n idx_to_add = -1\n similarity_to_selected = cosine_similarity(embedding_list, selected)\n for i, query_score in enumerate(similarity_to_query):\n if i in idxs:\n continue\n redundant_score = max(similarity_to_selected[i])\n equation_score = (\n lambda_mult * query_score - (1 - lambda_mult) * redundant_score\n )\n if equation_score > best_score:\n best_score = equation_score\n idx_to_add = i\n idxs.append(idx_to_add)\n selected = np.append(selected, [embedding_list[idx_to_add]], axis=0)\n return idxs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/utils.html"} {"id": "88e818ffa403-0", "text": "Source code for langchain.vectorstores.tigris\nfrom __future__ import annotations\nimport itertools\nfrom typing import TYPE_CHECKING, Any, Iterable, List, Optional, Tuple\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import Document\nfrom langchain.vectorstores import VectorStore\nif TYPE_CHECKING:\n from tigrisdb import TigrisClient\n from tigrisdb import VectorStore as TigrisVectorStore\n from tigrisdb.types.filters import Filter as TigrisFilter\n from tigrisdb.types.vector import Document as TigrisDocument\n[docs]class Tigris(VectorStore):\n def __init__(self, client: TigrisClient, embeddings: Embeddings, index_name: str):\n \"\"\"Initialize Tigris vector store\"\"\"\n try:\n import tigrisdb # noqa: F401\n except ImportError:\n raise ValueError(\n \"Could not import tigrisdb python package. \"\n \"Please install it with `pip install tigrisdb`\"\n )\n self._embed_fn = embeddings\n self._vector_store = TigrisVectorStore(client.get_search(), index_name)\n @property\n def search_index(self) -> TigrisVectorStore:\n return self._vector_store\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/tigris.html"} {"id": "88e818ffa403-1", "text": "metadatas: Optional list of metadatas associated with the texts.\n ids: Optional list of ids for documents.\n Ids will be autogenerated if not provided.\n kwargs: vectorstore specific parameters\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n docs = self._prep_docs(texts, metadatas, ids)\n result = self.search_index.add_documents(docs)\n return [r.id for r in result]\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n filter: Optional[TigrisFilter] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\"\"\"\n docs_with_scores = self.similarity_search_with_score(query, k, filter)\n return [doc for doc, _ in docs_with_scores]\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 4,\n filter: Optional[TigrisFilter] = None,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Run similarity search with Tigris with distance.\n Args:\n query (str): Query text to search for.\n k (int): Number of results to return. Defaults to 4.\n filter (Optional[TigrisFilter]): Filter by metadata. 
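The ``maximal_marginal_relevance`` helper completed above can be exercised directly with numpy. In this small worked example (values invented), index 2 is nearly a duplicate of the first pick, so with ``lambda_mult=0.5`` MMR prefers the more diverse index 1 even though index 2 scores higher against the query:

.. code-block:: python

    import numpy as np
    from langchain.vectorstores.utils import maximal_marginal_relevance

    query = np.array([1.0, 0.0])
    candidates = [[0.9, 0.5], [0.8, -0.5], [0.95, 0.55]]
    picks = maximal_marginal_relevance(query, candidates, lambda_mult=0.5, k=2)
    print(picks)  # [0, 1]: the near-duplicate [0.95, 0.55] is skipped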
Defaults to None.\n Returns:\n List[Tuple[Document, float]]: List of documents most similar to the query\n text with distance in float.\n \"\"\"\n vector = self._embed_fn.embed_query(query)\n result = self.search_index.similarity_search(\n vector=vector, k=k, filter_by=filter\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/tigris.html"} {"id": "88e818ffa403-2", "text": "vector=vector, k=k, filter_by=filter\n )\n docs: List[Tuple[Document, float]] = []\n for r in result:\n docs.append(\n (\n Document(\n page_content=r.doc[\"text\"], metadata=r.doc.get(\"metadata\")\n ),\n r.score,\n )\n )\n return docs\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n client: Optional[TigrisClient] = None,\n index_name: Optional[str] = None,\n **kwargs: Any,\n ) -> Tigris:\n \"\"\"Return VectorStore initialized from texts and embeddings.\"\"\"\n if not index_name:\n raise ValueError(\"`index_name` is required\")\n if not client:\n client = TigrisClient()\n store = cls(client, embedding, index_name)\n store.add_texts(texts=texts, metadatas=metadatas, ids=ids)\n return store\n def _prep_docs(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]],\n ids: Optional[List[str]],\n ) -> List[TigrisDocument]:\n embeddings: List[List[float]] = self._embed_fn.embed_documents(list(texts))\n docs: List[TigrisDocument] = []\n for t, m, e, _id in itertools.zip_longest(\n texts, metadatas or [], embeddings or [], ids or []\n ):\n doc: TigrisDocument = {\n \"text\": t,\n \"embeddings\": e or [],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/tigris.html"} {"id": "88e818ffa403-3", "text": "\"text\": t,\n \"embeddings\": e or [],\n \"metadata\": m or {},\n }\n if _id:\n doc[\"id\"] = _id\n docs.append(doc)\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/tigris.html"} {"id": "7e6cbb89484b-0", "text": "Source code for langchain.vectorstores.deeplake\n\"\"\"Wrapper around Activeloop Deep Lake.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Union\nimport numpy as np\ntry:\n import deeplake\n from deeplake.core.fast_forwarding import version_compare\n from deeplake.core.vectorstore import DeepLakeVectorStore\n _DEEPLAKE_INSTALLED = True\nexcept ImportError:\n _DEEPLAKE_INSTALLED = False\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nlogger = logging.getLogger(__name__)\n[docs]class DeepLake(VectorStore):\n \"\"\"Wrapper around Deep Lake, a data lake for deep learning applications.\n We integrated deeplake's similarity search and filtering for fast prototyping.\n Now, it supports Tensor Query Language (TQL) for production use cases\n over billions of rows.\n Why Deep Lake?\n - Not only stores embeddings, but also the original data with version control.\n - Serverless, doesn't require another service and can be used with major\n cloud providers (S3, GCS, etc.)\n - More than just a multi-modal vector store. You can use the dataset\n to fine-tune your own LLM models.\n To use, you should have the ``deeplake`` python package installed.\n Example:\n .. 
code-block:: python\n from langchain.vectorstores import DeepLake\n from langchain.embeddings.openai import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n vectorstore = DeepLake(\"langchain_store\", embeddings.embed_query)\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} {"id": "7e6cbb89484b-1", "text": "vectorstore = DeepLake(\"langchain_store\", embeddings.embed_query)\n \"\"\"\n _LANGCHAIN_DEFAULT_DEEPLAKE_PATH = \"./deeplake/\"\n def __init__(\n self,\n dataset_path: str = _LANGCHAIN_DEFAULT_DEEPLAKE_PATH,\n token: Optional[str] = None,\n embedding_function: Optional[Embeddings] = None,\n read_only: bool = False,\n ingestion_batch_size: int = 1000,\n num_workers: int = 0,\n verbose: bool = True,\n exec_option: str = \"python\",\n **kwargs: Any,\n ) -> None:\n \"\"\"Creates an empty DeepLakeVectorStore or loads an existing one.\n The DeepLakeVectorStore is located at the specified ``path``.\n Examples:\n >>> # Create a vector store with default tensors\n >>> deeplake_vectorstore = DeepLake(\n ... path = ,\n ... )\n >>>\n >>> # Create a vector store in the Deep Lake Managed Tensor Database\n >>> data = DeepLake(\n ... path = \"hub://org_id/dataset_name\",\n ... exec_option = \"tensor_db\",\n ... )\n Args:\n dataset_path (str): Path to existing dataset or where to create\n a new one. Defaults to _LANGCHAIN_DEFAULT_DEEPLAKE_PATH.\n token (str, optional): Activeloop token, for fetching credentials\n to the dataset at path if it is a Deep Lake dataset.\n Tokens are normally autogenerated. Optional.\n embedding_function (str, optional): Function to convert\n either documents or query. Optional.\n read_only (bool): Open dataset in read-only mode. Default is False.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} {"id": "7e6cbb89484b-2", "text": "read_only (bool): Open dataset in read-only mode. Default is False.\n ingestion_batch_size (int): During data ingestion, data is divided\n into batches. Batch size is the size of each batch.\n Default is 1000.\n num_workers (int): Number of workers to use during data ingestion.\n Default is 0.\n verbose (bool): Print dataset summary after each operation.\n Default is True.\n exec_option (str): DeepLakeVectorStore supports 3 ways to perform\n searching - \"python\", \"compute_engine\", \"tensor_db\".\n Default is \"python\".\n - ``python`` - Pure-python implementation that runs on the client.\n WARNING: using this with big datasets can lead to memory\n issues. Data can be stored anywhere.\n - ``compute_engine`` - C++ implementation of the Deep Lake Compute\n Engine that runs on the client. Can be used for any data stored in\n or connected to Deep Lake. Not for in-memory or local datasets.\n - ``tensor_db`` - Hosted Managed Tensor Database that is\n responsible for storage and query execution. Only for data stored in\n the Deep Lake Managed Database. Use runtime = {\"db_engine\": True} during\n dataset creation.\n **kwargs: Other optional keyword arguments.\n Raises:\n ValueError: If some condition is not met.\n \"\"\"\n self.ingestion_batch_size = ingestion_batch_size\n self.num_workers = num_workers\n self.verbose = verbose\n if _DEEPLAKE_INSTALLED is False:\n raise ValueError(\n \"Could not import deeplake python package. 
\"\n \"Please install it with `pip install deeplake`.\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} {"id": "7e6cbb89484b-3", "text": "\"Please install it with `pip install deeplake`.\"\n )\n if version_compare(deeplake.__version__, \"3.6.2\") == -1:\n raise ValueError(\n \"deeplake version should be >= 3.6.3, but you've installed\"\n f\" {deeplake.__version__}. Consider upgrading deeplake version \\\n pip install --upgrade deeplake.\"\n )\n self.dataset_path = dataset_path\n self.vectorstore = DeepLakeVectorStore(\n path=self.dataset_path,\n embedding_function=embedding_function,\n read_only=read_only,\n token=token,\n exec_option=exec_option,\n verbose=verbose,\n **kwargs,\n )\n self._embedding_function = embedding_function\n self._id_tensor_name = \"ids\" if \"ids\" in self.vectorstore.tensors() else \"id\"\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Examples:\n >>> ids = deeplake_vectorstore.add_texts(\n ... texts = ,\n ... metadatas = ,\n ... ids = ,\n ... )\n Args:\n texts (Iterable[str]): Texts to add to the vectorstore.\n metadatas (Optional[List[dict]], optional): Optional list of metadatas.\n ids (Optional[List[str]], optional): Optional list of IDs.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} {"id": "7e6cbb89484b-4", "text": "ids (Optional[List[str]], optional): Optional list of IDs.\n **kwargs: other optional keyword arguments.\n Returns:\n List[str]: List of IDs of the added texts.\n \"\"\"\n kwargs = {}\n if ids:\n if self._id_tensor_name == \"ids\": # for backwards compatibility\n kwargs[\"ids\"] = ids\n else:\n kwargs[\"id\"] = ids\n if metadatas is None:\n metadatas = [{}] * len(list(texts))\n return self.vectorstore.add(\n text=texts,\n metadata=metadatas,\n embedding_data=texts,\n embedding_tensor=\"embedding\",\n embedding_function=kwargs.get(\"embedding_function\")\n or self._embedding_function.embed_documents, # type: ignore\n return_ids=True,\n **kwargs,\n )\n def _search_tql(\n self,\n tql_query: Optional[str],\n exec_option: Optional[str] = None,\n return_score: bool = False,\n ) -> Any[List[Document], List[Tuple[Document, float]]]:\n \"\"\"Function for performing tql_search.\n Args:\n tql_query (str): TQL Query string for direct evaluation.\n Available only for `compute_engine` and `tensor_db`.\n exec_option (str, optional): Supports 3 ways to search.\n Could be \"python\", \"compute_engine\" or \"tensor_db\". Default is \"python\".\n - ``python`` - Pure-python implementation for the client.\n WARNING: not recommended for big datasets due to potential memory\n issues.\n - ``compute_engine`` - C++ implementation of Deep Lake Compute\n Engine for the client. Not for in-memory or local datasets.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} {"id": "7e6cbb89484b-5", "text": "Engine for the client. Not for in-memory or local datasets.\n - ``tensor_db`` - Hosted Managed Tensor Database for storage\n and query execution. Only for data in Deep Lake Managed Database.\n Use runtime = {\"db_engine\": True} during dataset creation.\n return_score (bool): Return score with document. 
Default is False.\n Returns:\n List[Document] - A list of documents\n Raises:\n ValueError: If return_score is True but some condition is not met.\n \"\"\"\n result = self.vectorstore.search(\n query=tql_query,\n exec_option=exec_option,\n )\n metadatas = result[\"metadata\"]\n texts = result[\"text\"]\n docs = [\n Document(\n page_content=text,\n metadata=metadata,\n )\n for text, metadata in zip(texts, metadatas)\n ]\n if return_score:\n raise ValueError(\"scores can't be returned with tql search\")\n return docs\n def _search(\n self,\n query: Optional[str] = None,\n embedding: Optional[Union[List[float], np.ndarray]] = None,\n embedding_function: Optional[Callable] = None,\n k: int = 4,\n distance_metric: str = \"L2\",\n use_maximal_marginal_relevance: bool = False,\n fetch_k: Optional[int] = 20,\n filter: Optional[Union[Dict, Callable]] = None,\n return_score: bool = False,\n exec_option: Optional[str] = None,\n **kwargs: Any,\n ) -> Any[List[Document], List[Tuple[Document, float]]]:\n \"\"\"\n Return docs similar to query.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} {"id": "7e6cbb89484b-6", "text": "\"\"\"\n Return docs similar to query.\n Args:\n query (str, optional): Text to look up similar docs.\n embedding (Union[List[float], np.ndarray], optional): Query's embedding.\n embedding_function (Callable, optional): Function to convert `query`\n into embedding.\n k (int): Number of Documents to return.\n distance_metric (str): `L2` for Euclidean, `L1` for Nuclear, `max`\n for L-infinity distance, `cos` for cosine similarity, 'dot' for dot\n product.\n filter (Union[Dict, Callable], optional): Additional filter prior\n to the embedding search.\n - ``Dict`` - Key-value search on tensors of htype json, on an\n AND basis (a sample must satisfy all key-value filters to be True)\n Dict = {\"tensor_name_1\": {\"key\": value},\n \"tensor_name_2\": {\"key\": value}}\n - ``Function`` - Any function compatible with `deeplake.filter`.\n use_maximal_marginal_relevance (bool): Use maximal marginal relevance.\n fetch_k (int): Number of Documents for MMR algorithm.\n return_score (bool): Return the score.\n exec_option (str, optional): Supports 3 ways to perform searching.\n Could be \"python\", \"compute_engine\" or \"tensor_db\".\n - ``python`` - Pure-python implementation for the client.\n WARNING: not recommended for big datasets.\n - ``compute_engine`` - C++ implementation of Deep Lake Compute\n Engine for the client. Not for in-memory or local datasets.\n - ``tensor_db`` - Hosted Managed Tensor Database for storage\n and query execution. Only for data in Deep Lake Managed Database.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} {"id": "7e6cbb89484b-7", "text": "and query execution. 
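Since ``_search`` (continued below) short-circuits to ``_search_tql`` whenever a ``tql_query`` kwarg is present, raw TQL can be issued through the public search API. A sketch; the TQL query text is illustrative only, and TQL requires the ``compute_engine`` or ``tensor_db`` option:

.. code-block:: python

    # Scores cannot be returned for TQL searches (see the ValueError above).
    docs = vector_store.similarity_search(
        query="",  # ignored when tql_query is given
        tql_query="select * where contains(text, 'deep learning') limit 4",
        exec_option="compute_engine",
    )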
Only for data in Deep Lake Managed Database.\n Use runtime = {\"db_engine\": True} during dataset creation.\n **kwargs: Additional keyword arguments.\n Returns:\n List of Documents by the specified distance metric,\n if return_score True, return a tuple of (Document, score)\n Raises:\n ValueError: if both `embedding` and `embedding_function` are not specified.\n \"\"\"\n if kwargs.get(\"tql_query\"):\n return self._search_tql(\n tql_query=kwargs[\"tql_query\"],\n exec_option=exec_option,\n return_score=return_score,\n )\n if embedding_function:\n if isinstance(embedding_function, Embeddings):\n _embedding_function = embedding_function.embed_query\n else:\n _embedding_function = embedding_function\n elif self._embedding_function:\n _embedding_function = self._embedding_function.embed_query\n else:\n _embedding_function = None\n if embedding is None:\n if _embedding_function is None:\n raise ValueError(\n \"Either `embedding` or `embedding_function` needs to be\"\n \" specified.\"\n )\n embedding = _embedding_function(query) if query else None\n if isinstance(embedding, list):\n embedding = np.array(embedding, dtype=np.float32)\n if len(embedding.shape) > 1:\n embedding = embedding[0]\n result = self.vectorstore.search(\n embedding=embedding,\n k=fetch_k if use_maximal_marginal_relevance else k,\n distance_metric=distance_metric,\n filter=filter,\n exec_option=exec_option,\n return_tensors=[\"embedding\", \"metadata\", \"text\"],\n )\n scores = result[\"score\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} {"id": "7e6cbb89484b-8", "text": ")\n scores = result[\"score\"]\n embeddings = result[\"embedding\"]\n metadatas = result[\"metadata\"]\n texts = result[\"text\"]\n if use_maximal_marginal_relevance:\n lambda_mult = kwargs.get(\"lambda_mult\", 0.5)\n indices = maximal_marginal_relevance( # type: ignore\n embedding, # type: ignore\n embeddings,\n k=min(k, len(texts)),\n lambda_mult=lambda_mult,\n )\n scores = [scores[i] for i in indices]\n texts = [texts[i] for i in indices]\n metadatas = [metadatas[i] for i in indices]\n docs = [\n Document(\n page_content=text,\n metadata=metadata,\n )\n for text, metadata in zip(texts, metadatas)\n ]\n if return_score:\n return [(doc, score) for doc, score in zip(docs, scores)]\n return docs\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"\n Return docs most similar to query.\n Examples:\n >>> # Search using an embedding\n >>> data = vector_store.similarity_search(\n ... query=,\n ... k=,\n ... exec_option=,\n ... )\n >>> # Run tql search:\n >>> data = vector_store.tql_search(\n ... tql_query=\"SELECT * WHERE id == \",\n ... exec_option=\"compute_engine\",\n ... )\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} {"id": "7e6cbb89484b-9", "text": "... exec_option=\"compute_engine\",\n ... )\n Args:\n k (int): Number of Documents to return. Defaults to 4.\n query (str): Text to look up similar documents.\n **kwargs: Additional keyword arguments include:\n embedding (Callable): Embedding function to use. 
Defaults to None.\n distance_metric (str): 'L2' for Euclidean, 'L1' for Nuclear, 'max'\n for L-infinity, 'cos' for cosine, 'dot' for dot product.\n Defaults to 'L2'.\n filter (Union[Dict, Callable], optional): Additional filter\n before embedding search.\n - Dict: Key-value search on tensors of htype json,\n (sample must satisfy all key-value filters)\n Dict = {\"tensor_1\": {\"key\": value}, \"tensor_2\": {\"key\": value}}\n - Function: Compatible with `deeplake.filter`.\n Defaults to None.\n exec_option (str): Supports 3 ways to perform searching.\n 'python', 'compute_engine', or 'tensor_db'. Defaults to 'python'.\n - 'python': Pure-python implementation for the client.\n WARNING: not recommended for big datasets.\n - 'compute_engine': C++ implementation of the Compute Engine for\n the client. Not for in-memory or local datasets.\n - 'tensor_db': Managed Tensor Database for storage and query.\n Only for data in Deep Lake Managed Database.\n Use `runtime = {\"db_engine\": True}` during dataset creation.\n Returns:\n List[Document]: List of Documents most similar to the query vector.\n \"\"\"\n return self._search(\n query=query,\n k=k,\n use_maximal_marginal_relevance=False,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} {"id": "7e6cbb89484b-10", "text": "k=k,\n use_maximal_marginal_relevance=False,\n return_score=False,\n **kwargs,\n )\n[docs] def similarity_search_by_vector(\n self,\n embedding: Union[List[float], np.ndarray],\n k: int = 4,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"\n Return docs most similar to embedding vector.\n Examples:\n >>> # Search using an embedding\n >>> data = vector_store.similarity_search_by_vector(\n ... embedding=,\n ... k=,\n ... exec_option=,\n ... )\n Args:\n embedding (Union[List[float], np.ndarray]):\n Embedding to find similar docs.\n k (int): Number of Documents to return. Defaults to 4.\n **kwargs: Additional keyword arguments including:\n filter (Union[Dict, Callable], optional):\n Additional filter before embedding search.\n - ``Dict`` - Key-value search on tensors of htype json. True\n if all key-value filters are satisfied.\n Dict = {\"tensor_name_1\": {\"key\": value},\n \"tensor_name_2\": {\"key\": value}}\n - ``Function`` - Any function compatible with\n `deeplake.filter`.\n Defaults to None.\n exec_option (str): Options for search execution include\n \"python\", \"compute_engine\", or \"tensor_db\". Defaults to\n \"python\".\n - \"python\" - Pure-python implementation running on the client.\n Can be used for data stored anywhere. WARNING: using this\n option with big datasets is discouraged due to potential\n memory issues.\n - \"compute_engine\" - Performant C++ implementation of the Deep", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} {"id": "7e6cbb89484b-11", "text": "- \"compute_engine\" - Performant C++ implementation of the Deep\n Lake Compute Engine. Runs on the client and can be used for\n any data stored in or connected to Deep Lake. It cannot be\n used with in-memory or local datasets.\n - \"tensor_db\" - Performant, fully-hosted Managed Tensor Database.\n Responsible for storage and query execution. 
Only available\n for data stored in the Deep Lake Managed Database.\n To store datasets in this database, specify\n `runtime = {\"db_engine\": True}` during dataset creation.\n distance_metric (str): `L2` for Euclidean, `L1` for Nuclear,\n `max` for L-infinity distance, `cos` for cosine similarity,\n 'dot' for dot product. Defaults to `L2`.\n Returns:\n List[Document]: List of Documents most similar to the query vector.\n \"\"\"\n return self._search(\n embedding=embedding,\n k=k,\n use_maximal_marginal_relevance=False,\n return_score=False,\n **kwargs,\n )\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"\n Run similarity search with Deep Lake with distance returned.\n Examples:\n >>> data = vector_store.similarity_search_with_score(\n ... query=,\n ... embedding=\n ... k=,\n ... exec_option=,\n ... )\n Args:\n query (str): Query text to search for.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} {"id": "7e6cbb89484b-12", "text": "... )\n Args:\n query (str): Query text to search for.\n k (int): Number of results to return. Defaults to 4.\n **kwargs: Additional keyword arguments. Some of these arguments are:\n distance_metric: `L2` for Euclidean, `L1` for Nuclear, `max` L-infinity\n distance, `cos` for cosine similarity, 'dot' for dot product.\n Defaults to `L2`.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n embedding_function (Callable): Embedding function to use. Defaults\n to None.\n exec_option (str): DeepLakeVectorStore supports 3 ways to perform\n searching. It could be either \"python\", \"compute_engine\" or\n \"tensor_db\". Defaults to \"python\".\n - \"python\" - Pure-python implementation running on the client.\n Can be used for data stored anywhere. WARNING: using this\n option with big datasets is discouraged due to potential\n memory issues.\n - \"compute_engine\" - Performant C++ implementation of the Deep\n Lake Compute Engine. Runs on the client and can be used for\n any data stored in or connected to Deep Lake. It cannot be used\n with in-memory or local datasets.\n - \"tensor_db\" - Performant, fully-hosted Managed Tensor Database.\n Responsible for storage and query execution. Only available for\n data stored in the Deep Lake Managed Database. To store datasets\n in this database, specify `runtime = {\"db_engine\": True}`\n during dataset creation.\n Returns:\n List[Tuple[Document, float]]: List of documents most similar to the query\n text with distance in float.\"\"\"\n return self._search(\n query=query,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} {"id": "7e6cbb89484b-13", "text": "text with distance in float.\"\"\"\n return self._search(\n query=query,\n k=k,\n return_score=True,\n **kwargs,\n )\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n exec_option: Optional[str] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"\n Return docs selected using the maximal marginal relevance. Maximal marginal\n relevance optimizes for similarity to query AND diversity among selected docs.\n Examples:\n >>> data = vector_store.max_marginal_relevance_search_by_vector(\n ... embedding=,\n ... fetch_k=,\n ... k=,\n ... exec_option=,\n ... )\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. 
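A hedged sketch of the scored search described above; the dataset contents, filter, and metric choice are invented for illustration:

.. code-block:: python

    docs_and_scores = vector_store.similarity_search_with_score(
        "what is a data lake?",
        k=4,
        distance_metric="cos",
        # Dict filter: the metadata tensor must contain source == "docs".
        filter={"metadata": {"source": "docs"}},
    )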
Defaults to 4.\n fetch_k: Number of Documents to fetch for MMR algorithm.\n lambda_mult: Number between 0 and 1 determining the degree of diversity.\n 0 corresponds to max diversity and 1 to min diversity. Defaults to 0.5.\n exec_option (str): DeepLakeVectorStore supports 3 ways for searching.\n Could be \"python\", \"compute_engine\" or \"tensor_db\". Defaults to\n \"python\".\n - \"python\" - Pure-python implementation running on the client.\n Can be used for data stored anywhere. WARNING: using this\n option with big datasets is discouraged due to potential\n memory issues.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} {"id": "7e6cbb89484b-14", "text": "option with big datasets is discouraged due to potential\n memory issues.\n - \"compute_engine\" - Performant C++ implementation of the Deep\n Lake Compute Engine. Runs on the client and can be used for\n any data stored in or connected to Deep Lake. It cannot be used\n with in-memory or local datasets.\n - \"tensor_db\" - Performant, fully-hosted Managed Tensor Database.\n Responsible for storage and query execution. Only available for\n data stored in the Deep Lake Managed Database. To store datasets\n in this database, specify `runtime = {\"db_engine\": True}`\n during dataset creation.\n **kwargs: Additional keyword arguments.\n Returns:\n List[Documents] - A list of documents.\n \"\"\"\n return self._search(\n embedding=embedding,\n k=k,\n fetch_k=fetch_k,\n use_maximal_marginal_relevance=True,\n lambda_mult=lambda_mult,\n exec_option=exec_option,\n **kwargs,\n )\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n exec_option: Optional[str] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Examples:\n >>> # Search using an embedding\n >>> data = vector_store.max_marginal_relevance_search(\n ... query = ,\n ... embedding_function = ,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} {"id": "7e6cbb89484b-15", "text": "... embedding_function = ,\n ... k = ,\n ... exec_option = ,\n ... )\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents for MMR algorithm.\n lambda_mult: Value between 0 and 1. 0 corresponds\n to maximum diversity and 1 to minimum.\n Defaults to 0.5.\n exec_option (str): Supports 3 ways to perform searching.\n - \"python\" - Pure-python implementation running on the client.\n Can be used for data stored anywhere. WARNING: using this\n option with big datasets is discouraged due to potential\n memory issues.\n - \"compute_engine\" - Performant C++ implementation of the Deep\n Lake Compute Engine. Runs on the client and can be used for\n any data stored in or connected to Deep Lake. It cannot be\n used with in-memory or local datasets.\n - \"tensor_db\" - Performant, fully-hosted Managed Tensor Database.\n Responsible for storage and query execution. Only available\n for data stored in the Deep Lake Managed Database. 
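The query-based MMR variant (continued below) requires an embedding function supplied at construction or call time; a sketch with illustrative values:

.. code-block:: python

    docs = vector_store.max_marginal_relevance_search(
        "transformer architectures",
        k=4,
        fetch_k=20,       # candidates handed to the MMR re-ranker
        lambda_mult=0.5,  # balances relevance (1) against diversity (0)
    )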
To store\n datasets in this database, specify\n `runtime = {\"db_engine\": True}` during dataset creation.\n **kwargs: Additional keyword arguments\n Returns:\n List of Documents selected by maximal marginal relevance.\n Raises:\n ValueError: when MRR search is on but embedding function is\n not specified.\n \"\"\"\n embedding_function = kwargs.get(\"embedding\") or self._embedding_function\n if embedding_function is None:\n raise ValueError(\n \"For MMR search, you must specify an embedding function on\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} {"id": "7e6cbb89484b-16", "text": "\"For MMR search, you must specify an embedding function on\"\n \" `creation` or during add call.\"\n )\n return self._search(\n query=query,\n k=k,\n fetch_k=fetch_k,\n use_maximal_marginal_relevance=True,\n lambda_mult=lambda_mult,\n exec_option=exec_option,\n embedding_function=embedding_function, # type: ignore\n **kwargs,\n )\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Optional[Embeddings] = None,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n dataset_path: str = _LANGCHAIN_DEFAULT_DEEPLAKE_PATH,\n **kwargs: Any,\n ) -> DeepLake:\n \"\"\"Create a Deep Lake dataset from a raw documents.\n If a dataset_path is specified, the dataset will be persisted in that location,\n otherwise by default at `./deeplake`\n Examples:\n >>> # Search using an embedding\n >>> vector_store = DeepLake.from_texts(\n ... texts = ,\n ... embedding_function = ,\n ... k = ,\n ... exec_option = ,\n ... )\n Args:\n dataset_path (str): - The full path to the dataset. Can be:\n - Deep Lake cloud path of the form ``hub://username/dataset_name``.\n To write to Deep Lake cloud datasets,\n ensure that you are logged in to Deep Lake\n (use 'activeloop login' from command line)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} {"id": "7e6cbb89484b-17", "text": "(use 'activeloop login' from command line)\n - AWS S3 path of the form ``s3://bucketname/path/to/dataset``.\n Credentials are required in either the environment\n - Google Cloud Storage path of the form\n ``gcs://bucketname/path/to/dataset`` Credentials are required\n in either the environment\n - Local file system path of the form ``./path/to/dataset`` or\n ``~/path/to/dataset`` or ``path/to/dataset``.\n - In-memory path of the form ``mem://path/to/dataset`` which doesn't\n save the dataset, but keeps it in memory instead.\n Should be used only for testing as it does not persist.\n texts (List[Document]): List of documents to add.\n embedding (Optional[Embeddings]): Embedding function. Defaults to None.\n Note, in other places, it is called embedding_function.\n metadatas (Optional[List[dict]]): List of metadatas. Defaults to None.\n ids (Optional[List[str]]): List of document IDs. Defaults to None.\n **kwargs: Additional keyword arguments.\n Returns:\n DeepLake: Deep Lake dataset.\n Raises:\n ValueError: If 'embedding' is provided in kwargs. This is deprecated,\n please use `embedding_function` instead.\n \"\"\"\n if kwargs.get(\"embedding\"):\n raise ValueError(\n \"using embedding as embedidng_functions is deprecated. 
\"\n \"Please use `embedding_function` instead.\"\n )\n deeplake_dataset = cls(\n dataset_path=dataset_path, embedding_function=embedding, **kwargs\n )\n deeplake_dataset.add_texts(\n texts=texts,\n metadatas=metadatas,\n ids=ids,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} {"id": "7e6cbb89484b-18", "text": "metadatas=metadatas,\n ids=ids,\n embedding_function=embedding.embed_documents, # type: ignore\n )\n return deeplake_dataset\n[docs] def delete(self, ids: Optional[List[str]] = None, **kwargs: Any) -> bool:\n \"\"\"Delete the entities in the dataset.\n Args:\n ids (Optional[List[str]], optional): The document_ids to delete.\n Defaults to None.\n **kwargs: Other keyword arguments that subclasses might use.\n - filter (Optional[Dict[str, str]], optional): The filter to delete by.\n - delete_all (Optional[bool], optional): Whether to drop the dataset.\n Returns:\n bool: Whether the delete operation was successful.\n \"\"\"\n filter = kwargs.get(\"filter\")\n delete_all = kwargs.get(\"delete_all\")\n self.vectorstore.delete(ids=ids, filter=filter, delete_all=delete_all)\n return True\n[docs] @classmethod\n def force_delete_by_path(cls, path: str) -> None:\n \"\"\"Force delete dataset by path.\n Args:\n path (str): path of the dataset to delete.\n Raises:\n ValueError: if deeplake is not installed.\n \"\"\"\n try:\n import deeplake\n except ImportError:\n raise ValueError(\n \"Could not import deeplake python package. \"\n \"Please install it with `pip install deeplake`.\"\n )\n deeplake.delete(path, large_ok=True, force=True)\n[docs] def delete_dataset(self) -> None:\n \"\"\"Delete the collection.\"\"\"\n self.delete(delete_all=True)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} {"id": "1a1e7c6f5e60-0", "text": "Source code for langchain.vectorstores.awadb\n\"\"\"Wrapper around AwaDB for embedding vectors\"\"\"\nfrom __future__ import annotations\nimport logging\nimport uuid\nfrom typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Set, Tuple, Type\nimport numpy as np\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\n# from pydantic import BaseModel, Field, root_validator\nif TYPE_CHECKING:\n import awadb\nlogger = logging.getLogger()\nDEFAULT_TOPN = 4\n[docs]class AwaDB(VectorStore):\n \"\"\"Interface implemented by AwaDB vector stores.\"\"\"\n _DEFAULT_TABLE_NAME = \"langchain_awadb\"\n def __init__(\n self,\n table_name: str = _DEFAULT_TABLE_NAME,\n embedding: Optional[Embeddings] = None,\n log_and_data_dir: Optional[str] = None,\n client: Optional[awadb.Client] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Initialize with AwaDB client.\n Args:\n table_name: Iterable of strings to add to the vectorstore.\n embedding: Optional list of metadatas associated with the texts.\n log_and_data_dir: Optional whether to duplicate texts.\n client: Optional AwaDB client.\n kwargs: any possible extend parameters in the future.\n Returns:\n None.\n \"\"\"\n try:\n import awadb\n except ImportError:\n raise ValueError(\n \"Could not import awadb python package. 
\"\n \"Please install it with `pip install awadb`.\"\n )\n if client is not None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"} {"id": "1a1e7c6f5e60-1", "text": ")\n if client is not None:\n self.awadb_client = client\n else:\n if log_and_data_dir is not None:\n self.awadb_client = awadb.Client(log_and_data_dir)\n else:\n self.awadb_client = awadb.Client()\n if table_name == self._DEFAULT_TABLE_NAME:\n table_name += \"_\"\n table_name += str(uuid.uuid4()).split(\"-\")[-1]\n self.awadb_client.Create(table_name)\n self.table2embeddings: dict[str, Embeddings] = {}\n if embedding is not None:\n self.table2embeddings[table_name] = embedding\n self.using_table_name = table_name\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n is_duplicate_texts: Optional[bool] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n is_duplicate_texts: Optional whether to duplicate texts.\n kwargs: any possible extend parameters in the future.\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n if self.awadb_client is None:\n raise ValueError(\"AwaDB client is None!!!\")\n embeddings = None\n if self.using_table_name in self.table2embeddings:\n embeddings = self.table2embeddings[self.using_table_name].embed_documents(\n list(texts)\n )\n return self.awadb_client.AddTexts(\n \"embedding_text\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"} {"id": "1a1e7c6f5e60-2", "text": ")\n return self.awadb_client.AddTexts(\n \"embedding_text\",\n \"text_embedding\",\n texts,\n embeddings,\n metadatas,\n is_duplicate_texts,\n )\n[docs] def load_local(\n self,\n table_name: str,\n **kwargs: Any,\n ) -> bool:\n \"\"\"Load the local specified table.\n Args:\n table_name: Table name\n kwargs: Any possible extend parameters in the future.\n Returns:\n Success or failure of loading the local specified table\n \"\"\"\n if self.awadb_client is None:\n raise ValueError(\"AwaDB client is None!!!\")\n return self.awadb_client.Load(table_name)\n[docs] def similarity_search(\n self,\n query: str,\n k: int = DEFAULT_TOPN,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text query.\n k: The maximum number of documents to return.\n kwargs: Any possible extend parameters in the future.\n Returns:\n Returns the k most similar documents to the specified text query.\n \"\"\"\n if self.awadb_client is None:\n raise ValueError(\"AwaDB client is None!!!\")\n embedding = None\n if self.using_table_name in self.table2embeddings:\n embedding = self.table2embeddings[self.using_table_name].embed_query(query)\n else:\n from awadb import llm_embedding\n llm = llm_embedding.LLMEmbedding()\n embedding = llm.Embedding(query)\n not_include_fields: Set[str] = {\"text_embedding\", \"_id\", \"score\"}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"} {"id": "1a1e7c6f5e60-3", "text": "not_include_fields: Set[str] = {\"text_embedding\", \"_id\", \"score\"}\n return self.similarity_search_by_vector(\n embedding, k, not_include_fields_in_metadata=not_include_fields\n )\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = DEFAULT_TOPN,\n **kwargs: Any,\n ) -> 
List[Tuple[Document, float]]:\n \"\"\"The most k similar documents and scores of the specified query.\n Args:\n query: Text query.\n k: The k most similar documents to the text query.\n kwargs: Any possible extend parameters in the future.\n Returns:\n The k most similar documents to the specified text query.\n 0 is dissimilar, 1 is the most similar.\n \"\"\"\n if self.awadb_client is None:\n raise ValueError(\"AwaDB client is None!!!\")\n embedding = None\n if self.using_table_name in self.table2embeddings:\n embedding = self.table2embeddings[self.using_table_name].embed_query(query)\n else:\n from awadb import llm_embedding\n llm = llm_embedding.LLMEmbedding()\n embedding = llm.Embedding(query)\n results: List[Tuple[Document, float]] = []\n dists: List[float] = []\n not_include_fields: Set[str] = {\"text_embedding\", \"_id\", \"score\"}\n retrieval_docs = self.similarity_search_by_vector(\n embedding,\n k,\n scores=dists,\n not_include_fields_in_metadata=not_include_fields,\n )\n doc_no = 0\n for doc in retrieval_docs:\n doc_tuple = (doc, dists[doc_no])", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"} {"id": "1a1e7c6f5e60-4", "text": "doc_tuple = (doc, dists[doc_no])\n results.append(doc_tuple)\n doc_no = doc_no + 1\n return results\n[docs] def similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = DEFAULT_TOPN,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and relevance scores\n which denote the InnerProduct distance, range from 0 to 1.\n Args:\n query: Text query.\n k: Number of the most similar documents to return. Defaults to 4.\n Returns:\n List of (Document, relevance_score) tuples similar to the text query.\n Note that relevance_score ranged from 0 to 1.\n 0 is dissimilar, 1 is the most similar.\n \"\"\"\n if self.awadb_client is None:\n raise ValueError(\"AwaDB client is None!!!\")\n embedding = None\n if self.using_table_name in self.table2embeddings:\n embedding = self.table2embeddings[self.using_table_name].embed_query(query)\n show_results = self.awadb_client.Search(embedding, k)\n results: List[Tuple[Document, float]] = []\n if show_results.__len__() == 0:\n return results\n dists: List[float] = []\n not_include_fields: Set[str] = {\"text_embedding\", \"_id\", \"score\"}\n retrieval_docs = self.similarity_search_by_vector(\n embedding,\n k,\n scores=dists,\n not_include_fields_in_metadata=not_include_fields,\n )\n doc_no = 0\n for doc in retrieval_docs:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"} {"id": "1a1e7c6f5e60-5", "text": ")\n doc_no = 0\n for doc in retrieval_docs:\n doc_tuple = (doc, dists[doc_no])\n results.append(doc_tuple)\n doc_no = doc_no + 1\n return results\n[docs] def similarity_search_by_vector(\n self,\n embedding: Optional[List[float]] = None,\n k: int = DEFAULT_TOPN,\n scores: Optional[list] = None,\n not_include_fields_in_metadata: Optional[Set[str]] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. 
Defaults to 4.\n scores: Scores for retrieved docs.\n not_incude_fields_in_metadata: Not include meta fields of each document.\n Returns:\n List of Documents which are the most similar to the query vector.\n \"\"\"\n if self.awadb_client is None:\n raise ValueError(\"AwaDB client is None!!!\")\n results: List[Document] = []\n if embedding is None:\n return results\n show_results = self.awadb_client.Search(\n embedding, k, not_include_fields=not_include_fields_in_metadata\n )\n if show_results.__len__() == 0:\n return results\n for item_detail in show_results[0][\"ResultItems\"]:\n content = \"\"\n meta_data = {}\n for item_key in item_detail:\n if item_key == \"embedding_text\":\n content = item_detail[item_key]\n continue\n elif item_key == \"score\":\n if scores is not None:\n scores.append(item_detail[item_key])\n continue", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"} {"id": "1a1e7c6f5e60-6", "text": "if scores is not None:\n scores.append(item_detail[item_key])\n continue\n elif not_include_fields_in_metadata is not None:\n if item_key in not_include_fields_in_metadata:\n continue\n meta_data[item_key] = item_detail[item_key]\n results.append(Document(page_content=content, metadata=meta_data))\n return results\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n if self.awadb_client is None:\n raise ValueError(\"AwaDB client is None!!!\")\n embedding: List[float] = []\n if self.using_table_name in self.table2embeddings:\n embedding = self.table2embeddings[self.using_table_name].embed_query(query)\n else:\n from awadb import llm_embedding\n llm = llm_embedding.LLMEmbedding()\n embedding = llm.Embedding(query)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"} {"id": "1a1e7c6f5e60-7", "text": "embedding = llm.Embedding(query)\n if embedding.__len__() == 0:\n return []\n results = self.max_marginal_relevance_search_by_vector(\n embedding, k, fetch_k, lambda_mult=lambda_mult\n )\n return results\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. 
Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n if self.awadb_client is None:\n raise ValueError(\"AwaDB client is None!!!\")\n results: List[Document] = []\n if embedding is None:\n return results\n not_include_fields: set = {\"_id\", \"score\"}\n retrieved_docs = self.similarity_search_by_vector(\n embedding, fetch_k, not_include_fields_in_metadata=not_include_fields\n )\n top_embeddings = []\n for doc in retrieved_docs:\n top_embeddings.append(doc.metadata[\"text_embedding\"])", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"} {"id": "1a1e7c6f5e60-8", "text": "for doc in retrieved_docs:\n top_embeddings.append(doc.metadata[\"text_embedding\"])\n selected_docs = maximal_marginal_relevance(\n np.array(embedding, dtype=np.float32), embedding_list=top_embeddings\n )\n for s_id in selected_docs:\n if \"text_embedding\" in retrieved_docs[s_id].metadata:\n del retrieved_docs[s_id].metadata[\"text_embedding\"]\n results.append(retrieved_docs[s_id])\n return results\n[docs] def get(\n self,\n ids: List[str],\n not_include_fields: Optional[Set[str]] = None,\n **kwargs: Any,\n ) -> Dict[str, Document]:\n \"\"\"Return docs according ids.\n Args:\n ids: The ids of the embedding vectors.\n Returns:\n Documents which have the ids.\n \"\"\"\n if self.awadb_client is None:\n raise ValueError(\"AwaDB client is None!!!\")\n docs_detail = self.awadb_client.Get(ids, not_include_fields=not_include_fields)\n results: Dict[str, Document] = {}\n for doc_detail in docs_detail:\n content = \"\"\n meta_info = {}\n for field in doc_detail:\n if field == \"embeddint_text\":\n content = doc_detail[field]\n continue\n elif field == \"text_embedding\" or field == \"_id\":\n continue\n meta_info[field] = doc_detail[field]\n doc = Document(page_content=content, metadata=meta_info)\n results[doc_detail[\"_id\"]] = doc\n return results\n[docs] def delete(\n self,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> Optional[bool]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"} {"id": "1a1e7c6f5e60-9", "text": "**kwargs: Any,\n ) -> Optional[bool]:\n \"\"\"Delete the documents which have the specified ids.\n Args:\n ids: The ids of the embedding vectors.\n **kwargs: Other keyword arguments that subclasses might use.\n Returns:\n Optional[bool]: True if deletion is successful.\n False otherwise, None if not implemented.\n \"\"\"\n if self.awadb_client is None:\n raise ValueError(\"AwaDB client is None!!!\")\n ret: Optional[bool] = None\n if ids is None or ids.__len__() == 0:\n return ret\n ret = self.awadb_client.Delete(ids)\n return ret\n[docs] def update(\n self,\n ids: List[str],\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Update the documents which have the specified ids.\n Args:\n ids: The id list of the updating embedding vector.\n texts: The texts of the updating documents.\n metadatas: The metadatas of the updating documents.\n Returns:\n the ids of the updated documents.\n \"\"\"\n if self.awadb_client is None:\n raise ValueError(\"AwaDB client is None!!!\")\n return self.awadb_client.UpdateTexts(\n ids=ids, text_field_name=\"embedding_text\", 
texts=texts, metadatas=metadatas\n )\n[docs] def create_table(\n self,\n table_name: str,\n **kwargs: Any,\n ) -> bool:\n \"\"\"Create a new table.\"\"\"\n if self.awadb_client is None:\n return False", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"} {"id": "1a1e7c6f5e60-10", "text": "if self.awadb_client is None:\n return False\n ret = self.awadb_client.Create(table_name)\n if ret:\n self.using_table_name = table_name\n return ret\n[docs] def use(\n self,\n table_name: str,\n **kwargs: Any,\n ) -> bool:\n \"\"\"Use the specified table. Don't know the tables, please invoke list_tables.\"\"\"\n if self.awadb_client is None:\n return False\n ret = self.awadb_client.Use(table_name)\n if ret:\n self.using_table_name = table_name\n return ret\n[docs] def list_tables(\n self,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"List all the tables created by the client.\"\"\"\n if self.awadb_client is None:\n return []\n return self.awadb_client.ListAllTables()\n[docs] def get_current_table(\n self,\n **kwargs: Any,\n ) -> str:\n \"\"\"Get the current table.\"\"\"\n return self.using_table_name\n[docs] @classmethod\n def from_texts(\n cls: Type[AwaDB],\n texts: List[str],\n embedding: Optional[Embeddings] = None,\n metadatas: Optional[List[dict]] = None,\n table_name: str = _DEFAULT_TABLE_NAME,\n log_and_data_dir: Optional[str] = None,\n client: Optional[awadb.Client] = None,\n **kwargs: Any,\n ) -> AwaDB:\n \"\"\"Create an AwaDB vectorstore from a raw documents.\n Args:\n texts (List[str]): List of texts to add to the table.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"} {"id": "1a1e7c6f5e60-11", "text": "Args:\n texts (List[str]): List of texts to add to the table.\n embedding (Optional[Embeddings]): Embedding function. Defaults to None.\n metadatas (Optional[List[dict]]): List of metadatas. Defaults to None.\n table_name (str): Name of the table to create.\n log_and_data_dir (Optional[str]): Directory of logging and persistence.\n client (Optional[awadb.Client]): AwaDB client\n Returns:\n AwaDB: AwaDB vectorstore.\n \"\"\"\n awadb_client = cls(\n table_name=table_name,\n embedding=embedding,\n log_and_data_dir=log_and_data_dir,\n client=client,\n )\n awadb_client.add_texts(texts=texts, metadatas=metadatas)\n return awadb_client\n[docs] @classmethod\n def from_documents(\n cls: Type[AwaDB],\n documents: List[Document],\n embedding: Optional[Embeddings] = None,\n table_name: str = _DEFAULT_TABLE_NAME,\n log_and_data_dir: Optional[str] = None,\n client: Optional[awadb.Client] = None,\n **kwargs: Any,\n ) -> AwaDB:\n \"\"\"Create an AwaDB vectorstore from a list of documents.\n If a log_and_data_dir specified, the table will be persisted there.\n Args:\n documents (List[Document]): List of documents to add to the vectorstore.\n embedding (Optional[Embeddings]): Embedding function. 
Defaults to None.\n table_name (str): Name of the table to create.\n log_and_data_dir (Optional[str]): Directory to persist the table.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"} {"id": "1a1e7c6f5e60-12", "text": "log_and_data_dir (Optional[str]): Directory to persist the table.\n client (Optional[awadb.Client]): AwaDB client.\n Any: Any possible parameters in the future\n Returns:\n AwaDB: AwaDB vectorstore.\n \"\"\"\n texts = [doc.page_content for doc in documents]\n metadatas = [doc.metadata for doc in documents]\n return cls.from_texts(\n texts=texts,\n embedding=embedding,\n metadatas=metadatas,\n table_name=table_name,\n log_and_data_dir=log_and_data_dir,\n client=client,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"} {"id": "a3f59a549826-0", "text": "Source code for langchain.vectorstores.starrocks\n\"\"\"Wrapper around open source StarRocks VectorSearch capability.\"\"\"\nfrom __future__ import annotations\nimport json\nimport logging\nfrom hashlib import sha1\nfrom threading import Thread\nfrom typing import Any, Dict, Iterable, List, Optional, Tuple\nfrom pydantic import BaseSettings\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nlogger = logging.getLogger()\nDEBUG = False\n[docs]def has_mul_sub_str(s: str, *args: Any) -> bool:\n \"\"\"\n Check if a string has multiple substrings.\n Args:\n s: The string to check\n *args: The substrings to check for in the string\n Returns:\n bool: True if all substrings are present in the string, False otherwise\n \"\"\"\n for a in args:\n if a not in s:\n return False\n return True\n[docs]def debug_output(s: Any) -> None:\n \"\"\"\n Print a debug message if DEBUG is True.\n Args:\n s: The message to print\n \"\"\"\n if DEBUG:\n print(s)\n[docs]def get_named_result(connection: Any, query: str) -> List[dict[str, Any]]:\n \"\"\"\n Get a named result from a query.\n Args:\n connection: The connection to the database\n query: The query to execute\n Returns:\n List[dict[str, Any]]: The result of the query\n \"\"\"\n cursor = connection.cursor()\n cursor.execute(query)\n columns = cursor.description\n result = []\n for value in cursor.fetchall():\n r = {}\n for idx, datum in enumerate(value):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/starrocks.html"} {"id": "a3f59a549826-1", "text": "r = {}\n for idx, datum in enumerate(value):\n k = columns[idx][0]\n r[k] = datum\n result.append(r)\n debug_output(result)\n cursor.close()\n return result\n[docs]class StarRocksSettings(BaseSettings):\n \"\"\"StarRocks Client Configuration\n Attribute:\n StarRocks_host (str) : An URL to connect to MyScale backend.\n Defaults to 'localhost'.\n StarRocks_port (int) : URL port to connect with HTTP. Defaults to 8443.\n username (str) : Username to login. Defaults to None.\n password (str) : Password to login. Defaults to None.\n database (str) : Database name to find the table. Defaults to 'default'.\n table (str) : Table name to operate on.\n Defaults to 'vector_table'.\n column_map (Dict) : Column type map to project column name onto langchain\n semantics. Must have keys: `text`, `id`, `vector`,\n must be same size to number of columns. For example:\n .. 
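As a rough usage sketch of the AwaDB wrapper above (assumptions: the `awadb` package is installed; the table name, data directory, texts, and query are illustrative):

# Minimal usage sketch (assumptions: `awadb` is installed; table name,
# data directory, texts, and query are illustrative).
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.awadb import AwaDB

store = AwaDB.from_texts(
    texts=["AwaDB is an embedded vector database", "it also supports MMR search"],
    embedding=OpenAIEmbeddings(),  # any Embeddings implementation works here
    table_name="langchain_demo",
    log_and_data_dir="./awadb_data",
)
docs = store.similarity_search("what is AwaDB?", k=2)
# The scored variant additionally returns a score per document,
# where 1 is most similar and 0 is dissimilar.
docs_and_scores = store.similarity_search_with_score("what is AwaDB?", k=2)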
Source code for langchain.vectorstores.starrocks
"""Wrapper around open source StarRocks VectorSearch capability."""
from __future__ import annotations

import json
import logging
from hashlib import sha1
from threading import Thread
from typing import Any, Dict, Iterable, List, Optional, Tuple

from pydantic import BaseSettings

from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore

logger = logging.getLogger()
DEBUG = False

[docs]def has_mul_sub_str(s: str, *args: Any) -> bool:
    """
    Check if a string contains multiple substrings.
    Args:
        s: The string to check
        *args: The substrings to check for in the string
    Returns:
        bool: True if all substrings are present in the string, False otherwise
    """
    for a in args:
        if a not in s:
            return False
    return True

[docs]def debug_output(s: Any) -> None:
    """
    Print a debug message if DEBUG is True.
    Args:
        s: The message to print
    """
    if DEBUG:
        print(s)

[docs]def get_named_result(connection: Any, query: str) -> List[dict[str, Any]]:
    """
    Get a named result from a query.
    Args:
        connection: The connection to the database
        query: The query to execute
    Returns:
        List[dict[str, Any]]: The result of the query
    """
    cursor = connection.cursor()
    cursor.execute(query)
    columns = cursor.description
    result = []
    for value in cursor.fetchall():
        r = {}
        for idx, datum in enumerate(value):
            k = columns[idx][0]
            r[k] = datum
        result.append(r)
    debug_output(result)
    cursor.close()
    return result

[docs]class StarRocksSettings(BaseSettings):
    """StarRocks Client Configuration
    Attributes:
        host (str) : A URL to connect to the StarRocks backend.
            Defaults to 'localhost'.
        port (int) : Port to connect to via the MySQL protocol. Defaults to 9030.
        username (str) : Username to login. Defaults to 'root'.
        password (str) : Password to login. Defaults to ''.
        database (str) : Database name to find the table. Defaults to 'default'.
        table (str) : Table name to operate on.
            Defaults to 'langchain'.
        column_map (Dict) : Column type map to project column name onto langchain
            semantics. Must have keys: `id`, `document`, `embedding`, `metadata`,
            and must be the same size as the number of columns. For example:
            .. code-block:: python
                {
                    'id': 'text_id',
                    'embedding': 'text_embedding',
                    'document': 'text_plain',
                    'metadata': 'metadata_dictionary_in_json',
                }
            Defaults to the identity map.
    """

    host: str = "localhost"
    port: int = 9030
    username: str = "root"
    password: str = ""
    column_map: Dict[str, str] = {
        "id": "id",
        "document": "document",
        "embedding": "embedding",
        "metadata": "metadata",
    }
    database: str = "default"
    table: str = "langchain"

    def __getitem__(self, item: str) -> Any:
        return getattr(self, item)

[docs]    class Config:
        env_file = ".env"
        env_prefix = "starrocks_"
        env_file_encoding = "utf-8"

[docs]class StarRocks(VectorStore):
    """Wrapper around the StarRocks vector database.
    You need the `pymysql` python package and a valid account
    to connect to StarRocks.
    Right now StarRocks has only implemented the `cosine_similarity` function to
    compute the distance between two vectors, and there is no vector index yet,
    so every query iterates over all vectors and computes the spatial distance.
    For more information, please visit
        [StarRocks official site](https://www.starrocks.io/)
        [StarRocks github](https://github.com/StarRocks/starrocks)
    """

    def __init__(
        self,
        embedding: Embeddings,
        config: Optional[StarRocksSettings] = None,
        **kwargs: Any,
    ) -> None:
        """StarRocks Wrapper to LangChain
        embedding (Embeddings): embedding function
        config (StarRocksSettings): Configuration to StarRocks Client
        """
        try:
            import pymysql  # type: ignore[import]
        except ImportError:
            raise ImportError(
                "Could not import pymysql python package. "
                "Please install it with `pip install pymysql`."
            )
        try:
            from tqdm import tqdm

            self.pgbar = tqdm
        except ImportError:
            # Just in case tqdm is not installed
            self.pgbar = lambda x, **kwargs: x
        super().__init__()
        if config is not None:
            self.config = config
        else:
            self.config = StarRocksSettings()
        assert self.config
        assert self.config.host and self.config.port
        assert self.config.column_map and self.config.database and self.config.table
        for k in ["id", "embedding", "document", "metadata"]:
            assert k in self.config.column_map
        # initialize the schema
        dim = len(embedding.embed_query("test"))
        self.schema = f"""\
CREATE TABLE IF NOT EXISTS {self.config.database}.{self.config.table}(
    {self.config.column_map['id']} string,
    {self.config.column_map['document']} string,
    {self.config.column_map['embedding']} array<float>,
    {self.config.column_map['metadata']} string
) ENGINE = OLAP PRIMARY KEY(id) DISTRIBUTED BY HASH(id) \
PROPERTIES ("replication_num" = "1")\
"""
        self.dim = dim
        self.BS = "\\"
        self.must_escape = ("\\", "'")
        self.embedding_function = embedding
        self.dist_order = "DESC"
        debug_output(self.config)
        # Create a connection to StarRocks
        self.connection = pymysql.connect(
            host=self.config.host,
            port=self.config.port,
            user=self.config.username,
            password=self.config.password,
            database=self.config.database,
            **kwargs,
        )
        debug_output(self.schema)
        get_named_result(self.connection, self.schema)

[docs]    def escape_str(self, value: str) -> str:
        return "".join(f"{self.BS}{c}" if c in self.must_escape else c for c in value)

    def _build_insert_sql(self, transac: Iterable, column_names: Iterable[str]) -> str:
        ks = ",".join(column_names)
        embed_tuple_index = tuple(column_names).index(
            self.config.column_map["embedding"]
        )
        _data = []
        for n in transac:
            n = ",".join(
                [
                    f"'{self.escape_str(str(_n))}'"
                    if idx != embed_tuple_index
                    else f"array{str(_n)}"
                    for (idx, _n) in enumerate(n)
                ]
            )
            _data.append(f"({n})")
        i_str = f"""
                INSERT INTO
                    {self.config.database}.{self.config.table}({ks})
                VALUES
                {','.join(_data)}
                """
        return i_str

    def _insert(self, transac: Iterable, column_names: Iterable[str]) -> None:
        _insert_query = self._build_insert_sql(transac, column_names)
        debug_output(_insert_query)
        get_named_result(self.connection, _insert_query)

[docs]    def add_texts(
        self,
        texts: Iterable[str],
        metadatas: Optional[List[dict]] = None,
        batch_size: int = 32,
        ids: Optional[Iterable[str]] = None,
        **kwargs: Any,
    ) -> List[str]:
        """Insert more texts through the embeddings and add to the VectorStore.
        Args:
            texts: Iterable of strings to add to the VectorStore.
            ids: Optional list of ids to associate with the texts.
            batch_size: Batch size of insertion.
            metadatas: Optional column data to be inserted.
        Returns:
            List of ids from adding the texts into the VectorStore.
        """
        # Embed and create the documents
        ids = ids or [sha1(t.encode("utf-8")).hexdigest() for t in texts]
        colmap_ = self.config.column_map
        transac = []
        column_names = {
            colmap_["id"]: ids,
            colmap_["document"]: texts,
            colmap_["embedding"]: self.embedding_function.embed_documents(list(texts)),
        }
        metadatas = metadatas or [{} for _ in texts]
        column_names[colmap_["metadata"]] = map(json.dumps, metadatas)
        assert len(set(colmap_) - set(column_names)) >= 0
        keys, values = zip(*column_names.items())
        try:
            t = None
            for v in self.pgbar(
                zip(*values), desc="Inserting data...", total=len(metadatas)
            ):
                assert (
                    len(v[keys.index(self.config.column_map["embedding"])]) == self.dim
                )
                transac.append(v)
                if len(transac) == batch_size:
                    if t:
                        t.join()
                    t = Thread(target=self._insert, args=[transac, keys])
                    t.start()
                    transac = []
            if len(transac) > 0:
                if t:
                    t.join()
                self._insert(transac, keys)
            return [i for i in ids]
        except Exception as e:
            logger.error(f"\033[91m\033[1m{type(e)}\033[0m \033[95m{str(e)}\033[0m")
            return []

[docs]    @classmethod
    def from_texts(
        cls,
        texts: List[str],
        embedding: Embeddings,
        metadatas: Optional[List[Dict[Any, Any]]] = None,
        config: Optional[StarRocksSettings] = None,
        text_ids: Optional[Iterable[str]] = None,
        batch_size: int = 32,
        **kwargs: Any,
    ) -> StarRocks:
        """Create a StarRocks wrapper with existing texts.
        Args:
            embedding (Embeddings): Function to extract text embedding.
            texts (Iterable[str]): List or tuple of strings to be added.
            config (StarRocksSettings, Optional): StarRocks configuration.
            text_ids (Optional[Iterable], optional): IDs for the texts.
                Defaults to None.
            batch_size (int, optional): Batch size when transmitting data to
                StarRocks. Defaults to 32.
            metadatas (List[dict], optional): Metadata for the texts.
                Defaults to None.
        Returns:
            StarRocks Index
        """
        ctx = cls(embedding, config, **kwargs)
        ctx.add_texts(texts, ids=text_ids, batch_size=batch_size, metadatas=metadatas)
        return ctx

    def __repr__(self) -> str:
        """Text representation for the StarRocks vector store: prints backend,
        username, and schema. Easy to use with `str(StarRocks())`.
        Returns:
            repr: string to show connection info and data schema
        """
        _repr = f"\033[92m\033[1m{self.config.database}.{self.config.table} @ "
        _repr += f"{self.config.host}:{self.config.port}\033[0m\n\n"
        _repr += f"\033[1musername: {self.config.username}\033[0m\n\nTable Schema:\n"
        width = 25
        fields = 3
        _repr += "-" * (width * fields + 1) + "\n"
        columns = ["name", "type", "key"]
        _repr += f"|\033[94m{columns[0]:24s}\033[0m|\033[96m{columns[1]:24s}"
        _repr += f"\033[0m|\033[96m{columns[2]:24s}\033[0m|\n"
        _repr += "-" * (width * fields + 1) + "\n"
        q_str = f"DESC {self.config.database}.{self.config.table}"
        debug_output(q_str)
        rs = get_named_result(self.connection, q_str)
        for r in rs:
            _repr += f"|\033[94m{r['Field']:24s}\033[0m|\033[96m{r['Type']:24s}"
            _repr += f"\033[0m|\033[96m{r['Key']:24s}\033[0m|\n"
        _repr += "-" * (width * fields + 1) + "\n"
        return _repr

    def _build_query_sql(
        self, q_emb: List[float], topk: int, where_str: Optional[str] = None
    ) -> str:
        q_emb_str = ",".join(map(str, q_emb))
        if where_str:
            where_str = f"WHERE {where_str}"
        else:
            where_str = ""
        q_str = f"""
            SELECT {self.config.column_map['document']},
                {self.config.column_map['metadata']},
                cosine_similarity_norm(array[{q_emb_str}],
                  {self.config.column_map['embedding']}) as dist
            FROM {self.config.database}.{self.config.table}
            {where_str}
            ORDER BY dist {self.dist_order}
            LIMIT {topk}
            """
        debug_output(q_str)
        return q_str

[docs]    def similarity_search(
        self, query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any
    ) -> List[Document]:
        """Perform a similarity search with StarRocks.
        Args:
            query (str): query string.
            k (int, optional): Top K neighbors to retrieve. Defaults to 4.
            where_str (Optional[str], optional): where condition string.
                Defaults to None.
                NOTE: Please do not let end users fill this, and always be aware
                of SQL injection. When dealing with metadatas, remember to
                use `{self.metadata_column}.attribute` instead of `attribute`
                alone. The default name for it is `metadata`.
        Returns:
            List[Document]: List of Documents.
        """
        return self.similarity_search_by_vector(
            self.embedding_function.embed_query(query), k, where_str, **kwargs
        )

[docs]    def similarity_search_by_vector(
        self,
        embedding: List[float],
        k: int = 4,
        where_str: Optional[str] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Perform a similarity search with StarRocks by vectors.
        Args:
            embedding (List[float]): query embedding vector.
            k (int, optional): Top K neighbors to retrieve. Defaults to 4.
            where_str (Optional[str], optional): where condition string.
                Defaults to None.
                NOTE: Please do not let end users fill this, and always be aware
                of SQL injection. When dealing with metadatas, remember to
                use `{self.metadata_column}.attribute` instead of `attribute`
                alone. The default name for it is `metadata`.
        Returns:
            List[Document]: List of Documents.
        """
        q_str = self._build_query_sql(embedding, k, where_str)
        try:
            return [
                Document(
                    page_content=r[self.config.column_map["document"]],
                    metadata=json.loads(r[self.config.column_map["metadata"]]),
                )
                for r in get_named_result(self.connection, q_str)
            ]
        except Exception as e:
            logger.error(f"\033[91m\033[1m{type(e)}\033[0m \033[95m{str(e)}\033[0m")
            return []

[docs]    def similarity_search_with_relevance_scores(
        self, query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any
    ) -> List[Tuple[Document, float]]:
        """Perform a similarity search with StarRocks.
        Args:
            query (str): query string.
            k (int, optional): Top K neighbors to retrieve. Defaults to 4.
            where_str (Optional[str], optional): where condition string.
                Defaults to None.
                NOTE: Please do not let end users fill this, and always be aware
                of SQL injection. When dealing with metadatas, remember to
                use `{self.metadata_column}.attribute` instead of `attribute`
                alone. The default name for it is `metadata`.
        Returns:
            List[Tuple[Document, float]]: List of (document, relevance score) tuples.
        """
        q_str = self._build_query_sql(
            self.embedding_function.embed_query(query), k, where_str
        )
        try:
            return [
                (
                    Document(
                        page_content=r[self.config.column_map["document"]],
                        metadata=json.loads(r[self.config.column_map["metadata"]]),
                    ),
                    r["dist"],
                )
                for r in get_named_result(self.connection, q_str)
            ]
        except Exception as e:
            logger.error(f"\033[91m\033[1m{type(e)}\033[0m \033[95m{str(e)}\033[0m")
            return []

[docs]    def drop(self) -> None:
        """
        Helper function: Drop data.
        """
        get_named_result(
            self.connection,
            f"DROP TABLE IF EXISTS {self.config.database}.{self.config.table}",
        )

    @property
    def metadata_column(self) -> str:
        return self.config.column_map["metadata"]
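A minimal sketch of how the StarRocks wrapper above might be used, assuming `pymysql` is installed and a StarRocks instance is reachable with the settings below; the host, credentials, table name, texts, and query are all illustrative:

# Minimal usage sketch (assumptions: `pymysql` is installed and a StarRocks
# instance is reachable with these settings; all values are illustrative).
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.starrocks import StarRocks, StarRocksSettings

settings = StarRocksSettings(
    host="127.0.0.1",
    port=9030,
    username="root",
    password="",
    database="default",
    table="langchain_demo",
)
store = StarRocks.from_texts(
    texts=["StarRocks ranks rows by cosine similarity", "over stored embeddings"],
    embedding=OpenAIEmbeddings(),  # any Embeddings implementation works here
    config=settings,
)
# where_str is interpolated into the SQL; never let end users supply it.
docs = store.similarity_search("how is similarity computed?", k=2)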
\"\n \"Please install it with `pip install weaviate-client`\"\n )\n auth = (\n weaviate.auth.AuthApiKey(api_key=weaviate_api_key)\n if weaviate_api_key is not None\n else None\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"} {"id": "56f4c73f0a2e-1", "text": "if weaviate_api_key is not None\n else None\n )\n client = weaviate.Client(weaviate_url, auth_client_secret=auth)\n return client\ndef _default_score_normalizer(val: float) -> float:\n return 1 - 1 / (1 + np.exp(val))\ndef _json_serializable(value: Any) -> Any:\n if isinstance(value, datetime.datetime):\n return value.isoformat()\n return value\n[docs]class Weaviate(VectorStore):\n \"\"\"Wrapper around Weaviate vector database.\n To use, you should have the ``weaviate-client`` python package installed.\n Example:\n .. code-block:: python\n import weaviate\n from langchain.vectorstores import Weaviate\n client = weaviate.Client(url=os.environ[\"WEAVIATE_URL\"], ...)\n weaviate = Weaviate(client, index_name, text_key)\n \"\"\"\n def __init__(\n self,\n client: Any,\n index_name: str,\n text_key: str,\n embedding: Optional[Embeddings] = None,\n attributes: Optional[List[str]] = None,\n relevance_score_fn: Optional[\n Callable[[float], float]\n ] = _default_score_normalizer,\n by_text: bool = True,\n ):\n \"\"\"Initialize with Weaviate client.\"\"\"\n try:\n import weaviate\n except ImportError:\n raise ValueError(\n \"Could not import weaviate python package. \"\n \"Please install it with `pip install weaviate-client`.\"\n )\n if not isinstance(client, weaviate.Client):\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"} {"id": "56f4c73f0a2e-2", "text": ")\n if not isinstance(client, weaviate.Client):\n raise ValueError(\n f\"client should be an instance of weaviate.Client, got {type(client)}\"\n )\n self._client = client\n self._index_name = index_name\n self._embedding = embedding\n self._text_key = text_key\n self._query_attrs = [self._text_key]\n self._relevance_score_fn = relevance_score_fn\n self._by_text = by_text\n if attributes is not None:\n self._query_attrs.extend(attributes)\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Upload texts with metadata (properties) to Weaviate.\"\"\"\n from weaviate.util import get_valid_uuid\n ids = []\n with self._client.batch as batch:\n for i, text in enumerate(texts):\n data_properties = {self._text_key: text}\n if metadatas is not None:\n for key, val in metadatas[i].items():\n data_properties[key] = _json_serializable(val)\n # Allow for ids (consistent w/ other methods)\n # # Or uuids (backwards compatble w/ existing arg)\n # If the UUID of one of the objects already exists\n # then the existing object will be replaced by the new object.\n _id = get_valid_uuid(uuid4())\n if \"uuids\" in kwargs:\n _id = kwargs[\"uuids\"][i]\n elif \"ids\" in kwargs:\n _id = kwargs[\"ids\"][i]\n if self._embedding is not None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"} {"id": "56f4c73f0a2e-3", "text": "if self._embedding is not None:\n vector = self._embedding.embed_documents([text])[0]\n else:\n vector = None\n batch.add_data_object(\n data_object=data_properties,\n class_name=self._index_name,\n uuid=_id,\n vector=vector,\n )\n ids.append(_id)\n return ids\n[docs] def similarity_search(\n self, query: str, k: int = 4, **kwargs: 
Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n if self._by_text:\n return self.similarity_search_by_text(query, k, **kwargs)\n else:\n if self._embedding is None:\n raise ValueError(\n \"_embedding cannot be None for similarity_search when \"\n \"_by_text=False\"\n )\n embedding = self._embedding.embed_query(query)\n return self.similarity_search_by_vector(embedding, k, **kwargs)\n[docs] def similarity_search_by_text(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n content: Dict[str, Any] = {\"concepts\": [query]}\n if kwargs.get(\"search_distance\"):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"} {"id": "56f4c73f0a2e-4", "text": "if kwargs.get(\"search_distance\"):\n content[\"certainty\"] = kwargs.get(\"search_distance\")\n query_obj = self._client.query.get(self._index_name, self._query_attrs)\n if kwargs.get(\"where_filter\"):\n query_obj = query_obj.with_where(kwargs.get(\"where_filter\"))\n if kwargs.get(\"additional\"):\n query_obj = query_obj.with_additional(kwargs.get(\"additional\"))\n result = query_obj.with_near_text(content).with_limit(k).do()\n if \"errors\" in result:\n raise ValueError(f\"Error during query: {result['errors']}\")\n docs = []\n for res in result[\"data\"][\"Get\"][self._index_name]:\n text = res.pop(self._text_key)\n docs.append(Document(page_content=text, metadata=res))\n return docs\n[docs] def similarity_search_by_vector(\n self, embedding: List[float], k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Look up similar documents by embedding vector in Weaviate.\"\"\"\n vector = {\"vector\": embedding}\n query_obj = self._client.query.get(self._index_name, self._query_attrs)\n if kwargs.get(\"where_filter\"):\n query_obj = query_obj.with_where(kwargs.get(\"where_filter\"))\n if kwargs.get(\"additional\"):\n query_obj = query_obj.with_additional(kwargs.get(\"additional\"))\n result = query_obj.with_near_vector(vector).with_limit(k).do()\n if \"errors\" in result:\n raise ValueError(f\"Error during query: {result['errors']}\")\n docs = []\n for res in result[\"data\"][\"Get\"][self._index_name]:\n text = res.pop(self._text_key)\n docs.append(Document(page_content=text, metadata=res))\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"} {"id": "56f4c73f0a2e-5", "text": "docs.append(Document(page_content=text, metadata=res))\n return docs\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. 
Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n if self._embedding is not None:\n embedding = self._embedding.embed_query(query)\n else:\n raise ValueError(\n \"max_marginal_relevance_search requires a suitable Embeddings object\"\n )\n return self.max_marginal_relevance_search_by_vector(\n embedding, k=k, fetch_k=fetch_k, lambda_mult=lambda_mult, **kwargs\n )\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"} {"id": "56f4c73f0a2e-6", "text": "**kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n vector = {\"vector\": embedding}\n query_obj = self._client.query.get(self._index_name, self._query_attrs)\n if kwargs.get(\"where_filter\"):\n query_obj = query_obj.with_where(kwargs.get(\"where_filter\"))\n results = (\n query_obj.with_additional(\"vector\")\n .with_near_vector(vector)\n .with_limit(fetch_k)\n .do()\n )\n payload = results[\"data\"][\"Get\"][self._index_name]\n embeddings = [result[\"_additional\"][\"vector\"] for result in payload]\n mmr_selected = maximal_marginal_relevance(\n np.array(embedding), embeddings, k=k, lambda_mult=lambda_mult\n )\n docs = []\n for idx in mmr_selected:\n text = payload[idx].pop(self._text_key)\n payload[idx].pop(\"_additional\")\n meta = payload[idx]\n docs.append(Document(page_content=text, metadata=meta))\n return docs\n[docs] def similarity_search_with_score(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"} {"id": "56f4c73f0a2e-7", "text": "return docs\n[docs] def similarity_search_with_score(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n \"\"\"\n Return list of documents most similar to the query\n text and cosine distance in float for each.\n Lower score represents more similarity.\n \"\"\"\n if self._embedding is None:\n raise ValueError(\n \"_embedding cannot be None for similarity_search_with_score\"\n )\n content: Dict[str, Any] = {\"concepts\": [query]}\n if kwargs.get(\"search_distance\"):\n content[\"certainty\"] = kwargs.get(\"search_distance\")\n query_obj = self._client.query.get(self._index_name, self._query_attrs)\n if not self._by_text:\n embedding = self._embedding.embed_query(query)\n vector = {\"vector\": embedding}\n result = (\n query_obj.with_near_vector(vector)\n .with_limit(k)\n .with_additional(\"vector\")\n .do()\n )\n else:\n result = (\n query_obj.with_near_text(content)\n .with_limit(k)\n 
.with_additional(\"vector\")\n .do()\n )\n if \"errors\" in result:\n raise ValueError(f\"Error during query: {result['errors']}\")\n docs_and_scores = []\n for res in result[\"data\"][\"Get\"][self._index_name]:\n text = res.pop(self._text_key)\n score = np.dot(\n res[\"_additional\"][\"vector\"], self._embedding.embed_query(query)\n )\n docs_and_scores.append((Document(page_content=text, metadata=res), score))\n return docs_and_scores\n def _similarity_search_with_relevance_scores(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"} {"id": "56f4c73f0a2e-8", "text": "return docs_and_scores\n def _similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and relevance scores, normalized on a scale from 0 to 1.\n 0 is dissimilar, 1 is most similar.\n \"\"\"\n if self._relevance_score_fn is None:\n raise ValueError(\n \"relevance_score_fn must be provided to\"\n \" Weaviate constructor to normalize scores\"\n )\n docs_and_scores = self.similarity_search_with_score(query, k=k, **kwargs)\n return [\n (doc, self._relevance_score_fn(score)) for doc, score in docs_and_scores\n ]\n[docs] @classmethod\n def from_texts(\n cls: Type[Weaviate],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> Weaviate:\n \"\"\"Construct Weaviate wrapper from raw documents.\n This is a user-friendly interface that:\n 1. Embeds documents.\n 2. Creates a new index for the embeddings in the Weaviate instance.\n 3. Adds the documents to the newly created Weaviate index.\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python\n from langchain.vectorstores.weaviate import Weaviate\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n weaviate = Weaviate.from_texts(\n texts,\n embeddings,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"} {"id": "56f4c73f0a2e-9", "text": "weaviate = Weaviate.from_texts(\n texts,\n embeddings,\n weaviate_url=\"http://localhost:8080\"\n )\n \"\"\"\n client = _create_weaviate_client(**kwargs)\n from weaviate.util import get_valid_uuid\n index_name = kwargs.get(\"index_name\", f\"LangChain_{uuid4().hex}\")\n embeddings = embedding.embed_documents(texts) if embedding else None\n text_key = \"text\"\n schema = _default_schema(index_name)\n attributes = list(metadatas[0].keys()) if metadatas else None\n # check whether the index already exists\n if not client.schema.contains(schema):\n client.schema.create_class(schema)\n with client.batch as batch:\n for i, text in enumerate(texts):\n data_properties = {\n text_key: text,\n }\n if metadatas is not None:\n for key in metadatas[i].keys():\n data_properties[key] = metadatas[i][key]\n # If the UUID of one of the objects already exists\n # then the existing objectwill be replaced by the new object.\n if \"uuids\" in kwargs:\n _id = kwargs[\"uuids\"][i]\n else:\n _id = get_valid_uuid(uuid4())\n # if an embedding strategy is not provided, we let\n # weaviate create the embedding. 
Note that this will only\n # work if weaviate has been installed with a vectorizer module\n # like text2vec-contextionary for example\n params = {\n \"uuid\": _id,\n \"data_object\": data_properties,\n \"class_name\": index_name,\n }\n if embeddings is not None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"} {"id": "56f4c73f0a2e-10", "text": "\"class_name\": index_name,\n }\n if embeddings is not None:\n params[\"vector\"] = embeddings[i]\n batch.add_data_object(**params)\n batch.flush()\n relevance_score_fn = kwargs.get(\"relevance_score_fn\")\n by_text: bool = kwargs.get(\"by_text\", False)\n return cls(\n client,\n index_name,\n text_key,\n embedding=embedding,\n attributes=attributes,\n relevance_score_fn=relevance_score_fn,\n by_text=by_text,\n )\n[docs] def delete(self, ids: Optional[List[str]] = None, **kwargs: Any) -> None:\n \"\"\"Delete by vector IDs.\n Args:\n ids: List of ids to delete.\n \"\"\"\n if ids is None:\n raise ValueError(\"No ids provided to delete.\")\n # TODO: Check if this can be done in bulk\n for id in ids:\n self._client.data_object.delete(uuid=id)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"} {"id": "33dbd7f8fd45-0", "text": "Source code for langchain.vectorstores.hologres\n\"\"\"VectorStore wrapper around a Hologres database.\"\"\"\nfrom __future__ import annotations\nimport json\nimport logging\nimport uuid\nfrom typing import Any, Dict, Iterable, List, Optional, Tuple, Type\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nfrom langchain.vectorstores.base import VectorStore\nADA_TOKEN_COUNT = 1536\n_LANGCHAIN_DEFAULT_TABLE_NAME = \"langchain_pg_embedding\"\nclass HologresWrapper:\n def __init__(self, connection_string: str, ndims: int, table_name: str) -> None:\n import psycopg2\n self.table_name = table_name\n self.conn = psycopg2.connect(connection_string)\n self.cursor = self.conn.cursor()\n self.conn.autocommit = False\n self.ndims = ndims\n def create_vector_extension(self) -> None:\n self.cursor.execute(\"create extension if not exists proxima\")\n self.conn.commit()\n def create_table(self, drop_if_exist: bool = True) -> None:\n if drop_if_exist:\n self.cursor.execute(f\"drop table if exists {self.table_name}\")\n self.conn.commit()\n self.cursor.execute(\n f\"\"\"create table if not exists {self.table_name} (\nid text,\nembedding float4[] check(array_ndims(embedding) = 1 and \\\narray_length(embedding, 1) = {self.ndims}),\nmetadata json,\ndocument text);\"\"\"\n )\n self.cursor.execute(\n f\"call set_table_property('{self.table_name}'\"\n + \"\"\", 'proxima_vectors', \n'{\"embedding\":{\"algorithm\":\"Graph\",\n\"distance_method\":\"SquaredEuclidean\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/hologres.html"} {"id": "33dbd7f8fd45-1", "text": "'{\"embedding\":{\"algorithm\":\"Graph\",\n\"distance_method\":\"SquaredEuclidean\",\n\"build_params\":{\"min_flush_proxima_row_count\" : 1,\n\"min_compaction_proxima_row_count\" : 1, \n\"max_total_size_to_merge_mb\" : 2000}}}');\"\"\"\n )\n self.conn.commit()\n def get_by_id(self, id: str) -> List[Tuple]:\n statement = (\n f\"select id, embedding, metadata, \"\n f\"document from {self.table_name} where id = %s;\"\n )\n self.cursor.execute(\n statement,\n (id),\n )\n self.conn.commit()\n return self.cursor.fetchall()\n def insert(\n self,\n embedding: 
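A minimal sketch of the Weaviate wrapper above, assuming `weaviate-client` is installed and a Weaviate instance is running locally; the URL, index name, and texts are illustrative:

# Minimal usage sketch (assumptions: `weaviate-client` is installed and a
# Weaviate instance runs at the URL below; URL/index name are illustrative).
import weaviate
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.weaviate import Weaviate

client = weaviate.Client(url="http://localhost:8080")
store = Weaviate(
    client,
    index_name="LangChain_demo",
    text_key="text",
    embedding=OpenAIEmbeddings(),  # any Embeddings implementation works here
    by_text=False,  # search by vector, which requires an embedding function
)
store.add_texts(["Weaviate stores both objects and their vectors"])
docs = store.max_marginal_relevance_search("what does Weaviate store?", k=1)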
Source code for langchain.vectorstores.hologres

"""VectorStore wrapper around a Hologres database."""
from __future__ import annotations

import json
import logging
import uuid
from typing import Any, Dict, Iterable, List, Optional, Tuple, Type

from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.utils import get_from_dict_or_env
from langchain.vectorstores.base import VectorStore

ADA_TOKEN_COUNT = 1536
_LANGCHAIN_DEFAULT_TABLE_NAME = "langchain_pg_embedding"


class HologresWrapper:
    def __init__(self, connection_string: str, ndims: int, table_name: str) -> None:
        import psycopg2

        self.table_name = table_name
        self.conn = psycopg2.connect(connection_string)
        self.cursor = self.conn.cursor()
        self.conn.autocommit = False
        self.ndims = ndims

    def create_vector_extension(self) -> None:
        self.cursor.execute("create extension if not exists proxima")
        self.conn.commit()

    def create_table(self, drop_if_exist: bool = True) -> None:
        if drop_if_exist:
            self.cursor.execute(f"drop table if exists {self.table_name}")
            self.conn.commit()
        self.cursor.execute(
            f"""create table if not exists {self.table_name} (
id text,
embedding float4[] check(array_ndims(embedding) = 1 and \
array_length(embedding, 1) = {self.ndims}),
metadata json,
document text);"""
        )
        self.cursor.execute(
            f"call set_table_property('{self.table_name}'"
            + """, 'proxima_vectors',
'{"embedding":{"algorithm":"Graph",
"distance_method":"SquaredEuclidean",
"build_params":{"min_flush_proxima_row_count" : 1,
"min_compaction_proxima_row_count" : 1,
"max_total_size_to_merge_mb" : 2000}}}');"""
        )
        self.conn.commit()

    def get_by_id(self, id: str) -> List[Tuple]:
        statement = (
            f"select id, embedding, metadata, "
            f"document from {self.table_name} where id = %s;"
        )
        # psycopg2 expects a parameter sequence, hence the one-element tuple.
        self.cursor.execute(
            statement,
            (id,),
        )
        self.conn.commit()
        return self.cursor.fetchall()

    def insert(
        self,
        embedding: List[float],
        metadata: dict,
        document: str,
        id: Optional[str] = None,
    ) -> None:
        self.cursor.execute(
            f'insert into "{self.table_name}" '
            f"values (%s, array{json.dumps(embedding)}::float4[], %s, %s)",
            (id if id is not None else "null", json.dumps(metadata), document),
        )
        self.conn.commit()

    def query_nearest_neighbours(
        self, embedding: List[float], k: int, filter: Optional[Dict[str, str]] = None
    ) -> List[Tuple[str, str, float]]:
        params = []
        filter_clause = ""
        if filter is not None:
            conjuncts = []
            for key, val in filter.items():
                conjuncts.append("metadata->>%s=%s")
                params.append(key)
                params.append(val)
            filter_clause = "where " + " and ".join(conjuncts)
        sql = (
            f"select document, metadata::text, "
            f"pm_approx_squared_euclidean_distance(array{json.dumps(embedding)}"
            f"::float4[], embedding) as distance from"
            f" {self.table_name} {filter_clause} order by distance asc limit {k};"
        )
        self.cursor.execute(sql, tuple(params))
        self.conn.commit()
        return self.cursor.fetchall()
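To make the string-building in ``query_nearest_neighbours`` concrete, the fragment below reproduces the filter-clause logic on its own, with a hypothetical two-key filter and no database connection; the ``%s`` placeholders are later bound by ``cursor.execute(sql, tuple(params))`` while the embedding itself is inlined via ``json.dumps``:

.. code-block:: python

    filter = {"source": "faq", "lang": "en"}   # hypothetical metadata filter
    params: list = []
    conjuncts = []
    for key, val in filter.items():
        conjuncts.append("metadata->>%s=%s")
        params.append(key)
        params.append(val)
    filter_clause = "where " + " and ".join(conjuncts)

    assert filter_clause == "where metadata->>%s=%s and metadata->>%s=%s"
    assert params == ["source", "faq", "lang", "en"]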
class Hologres(VectorStore):
    """VectorStore implementation using Hologres.

    - `connection_string` is a Hologres connection string.
    - `embedding_function`: any embedding function implementing the
      `langchain.embeddings.base.Embeddings` interface.
    - `ndims` is the number of dimensions of the embedding output.
    - `table_name` is the name of the table to store embeddings and data.
      (default: langchain_pg_embedding)
      NOTE: The table will be created when initializing the store (if it does
      not exist), so make sure the user has the right permissions to create tables.
    - `pre_delete_table`: if True, will delete the table if it exists.
      (default: False). Useful for testing.
    """

    def __init__(
        self,
        connection_string: str,
        embedding_function: Embeddings,
        ndims: int = ADA_TOKEN_COUNT,
        table_name: str = _LANGCHAIN_DEFAULT_TABLE_NAME,
        pre_delete_table: bool = False,
        logger: Optional[logging.Logger] = None,
    ) -> None:
        self.connection_string = connection_string
        self.ndims = ndims
        self.table_name = table_name
        self.embedding_function = embedding_function
        self.pre_delete_table = pre_delete_table
        self.logger = logger or logging.getLogger(__name__)
        self.__post_init__()

    def __post_init__(
        self,
    ) -> None:
        """Initialize the store."""
        self.storage = HologresWrapper(
            self.connection_string, self.ndims, self.table_name
        )
        self.create_vector_extension()
        self.create_table()

    def create_vector_extension(self) -> None:
        try:
            self.storage.create_vector_extension()
        except Exception as e:
            self.logger.exception(e)
            raise e

    def create_table(self) -> None:
        self.storage.create_table(self.pre_delete_table)

    @classmethod
    def __from(
        cls,
        texts: List[str],
        embeddings: List[List[float]],
        embedding_function: Embeddings,
        metadatas: Optional[List[dict]] = None,
        ids: Optional[List[str]] = None,
        ndims: int = ADA_TOKEN_COUNT,
        table_name: str = _LANGCHAIN_DEFAULT_TABLE_NAME,
        pre_delete_table: bool = False,
        **kwargs: Any,
    ) -> Hologres:
        if ids is None:
            ids = [str(uuid.uuid1()) for _ in texts]
        if not metadatas:
            metadatas = [{} for _ in texts]
        connection_string = cls.get_connection_string(kwargs)

        store = cls(
            connection_string=connection_string,
            embedding_function=embedding_function,
            ndims=ndims,
            table_name=table_name,
            pre_delete_table=pre_delete_table,
        )

        store.add_embeddings(
            texts=texts, embeddings=embeddings, metadatas=metadatas, ids=ids, **kwargs
        )

        return store

    def add_embeddings(
        self,
        texts: Iterable[str],
        embeddings: List[List[float]],
        metadatas: List[dict],
        ids: List[str],
        **kwargs: Any,
    ) -> None:
        """Add embeddings to the vectorstore.

        Args:
            texts: Iterable of strings to add to the vectorstore.
            embeddings: List of list of embedding vectors.
            metadatas: List of metadatas associated with the texts.
            kwargs: vectorstore specific parameters
        """
        try:
            for text, metadata, embedding, id in zip(texts, metadatas, embeddings, ids):
                self.storage.insert(embedding, metadata, text, id)
        except Exception as e:
            self.logger.exception(e)
        self.storage.conn.commit()

    def add_texts(
        self,
        texts: Iterable[str],
        metadatas: Optional[List[dict]] = None,
        ids: Optional[List[str]] = None,
        **kwargs: Any,
    ) -> List[str]:
        """Run more texts through the embeddings and add to the vectorstore.

        Args:
            texts: Iterable of strings to add to the vectorstore.
            metadatas: Optional list of metadatas associated with the texts.
            kwargs: vectorstore specific parameters

        Returns:
            List of ids from adding the texts into the vectorstore.
        """
        if ids is None:
            ids = [str(uuid.uuid1()) for _ in texts]
        embeddings = self.embedding_function.embed_documents(list(texts))
        if not metadatas:
            metadatas = [{} for _ in texts]
        self.add_embeddings(texts, embeddings, metadatas, ids, **kwargs)
        return ids

    def similarity_search(
        self,
        query: str,
        k: int = 4,
        filter: Optional[dict] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Run similarity search with Hologres with distance.

        Args:
            query (str): Query text to search for.
            k (int): Number of results to return. Defaults to 4.
            filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.

        Returns:
            List of Documents most similar to the query.
        """
        embedding = self.embedding_function.embed_query(text=query)
        return self.similarity_search_by_vector(
            embedding=embedding,
            k=k,
            filter=filter,
        )

    def similarity_search_by_vector(
        self,
        embedding: List[float],
        k: int = 4,
        filter: Optional[dict] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Return docs most similar to embedding vector.

        Args:
            embedding: Embedding to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.

        Returns:
            List of Documents most similar to the query vector.
        """
        docs_and_scores = self.similarity_search_with_score_by_vector(
            embedding=embedding, k=k, filter=filter
        )
        return [doc for doc, _ in docs_and_scores]

    def similarity_search_with_score(
        self,
        query: str,
        k: int = 4,
        filter: Optional[dict] = None,
    ) -> List[Tuple[Document, float]]:
        """Return docs most similar to query.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.

        Returns:
            List of Documents most similar to the query and score for each
        """
        embedding = self.embedding_function.embed_query(query)
        docs = self.similarity_search_with_score_by_vector(
            embedding=embedding, k=k, filter=filter
        )
        return docs

    def similarity_search_with_score_by_vector(
        self,
        embedding: List[float],
        k: int = 4,
        filter: Optional[dict] = None,
    ) -> List[Tuple[Document, float]]:
        results: List[Tuple[str, str, float]] = self.storage.query_nearest_neighbours(
            embedding, k, filter
        )
        docs = [
            (
                Document(
                    page_content=result[0],
                    metadata=json.loads(result[1]),
                ),
                result[2],
            )
            for result in results
        ]
        return docs

    @classmethod
    def from_texts(
        cls: Type[Hologres],
        texts: List[str],
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
        ndims: int = ADA_TOKEN_COUNT,
        table_name: str = _LANGCHAIN_DEFAULT_TABLE_NAME,
        ids: Optional[List[str]] = None,
        pre_delete_table: bool = False,
        **kwargs: Any,
    ) -> Hologres:
        """Return VectorStore initialized from texts and embeddings.

        A Postgres connection string is required: either pass it as a parameter
        or set the HOLOGRES_CONNECTION_STRING environment variable.
        """
        embeddings = embedding.embed_documents(list(texts))
        return cls.__from(
            texts,
            embeddings,
            embedding,
            metadatas=metadatas,
            ids=ids,
            ndims=ndims,
            table_name=table_name,
            pre_delete_table=pre_delete_table,
            **kwargs,
        )

    @classmethod
    def from_embeddings(
        cls,
        text_embeddings: List[Tuple[str, List[float]]],
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
        ndims: int = ADA_TOKEN_COUNT,
        table_name: str = _LANGCHAIN_DEFAULT_TABLE_NAME,
        ids: Optional[List[str]] = None,
        pre_delete_table: bool = False,
        **kwargs: Any,
    ) -> Hologres:
        """Construct Hologres wrapper from raw documents and pre-generated embeddings.

        Return VectorStore initialized from documents and embeddings.
        A Postgres connection string is required: either pass it as a parameter
        or set the HOLOGRES_CONNECTION_STRING environment variable.

        Example:
            .. code-block:: python

                from langchain import Hologres
                from langchain.embeddings import OpenAIEmbeddings

                embeddings = OpenAIEmbeddings()
                text_embeddings = embeddings.embed_documents(texts)
                text_embedding_pairs = list(zip(texts, text_embeddings))
                hologres = Hologres.from_embeddings(text_embedding_pairs, embeddings)
        """
        texts = [t[0] for t in text_embeddings]
        embeddings = [t[1] for t in text_embeddings]
        return cls.__from(
            texts,
            embeddings,
            embedding,
            metadatas=metadatas,
            ids=ids,
            ndims=ndims,
            table_name=table_name,
            pre_delete_table=pre_delete_table,
            **kwargs,
        )

    @classmethod
    def from_existing_index(
        cls: Type[Hologres],
        embedding: Embeddings,
        ndims: int = ADA_TOKEN_COUNT,
        table_name: str = _LANGCHAIN_DEFAULT_TABLE_NAME,
        pre_delete_table: bool = False,
        **kwargs: Any,
    ) -> Hologres:
        """Get an instance of an existing Hologres store.

        This method will return the instance of the store without inserting
        any new embeddings.
        """
        connection_string = cls.get_connection_string(kwargs)

        store = cls(
            connection_string=connection_string,
            ndims=ndims,
            table_name=table_name,
            embedding_function=embedding,
            pre_delete_table=pre_delete_table,
        )

        return store

    @classmethod
    def get_connection_string(cls, kwargs: Dict[str, Any]) -> str:
        connection_string: str = get_from_dict_or_env(
            data=kwargs,
            key="connection_string",
            env_key="HOLOGRES_CONNECTION_STRING",
        )

        if not connection_string:
            raise ValueError(
                "Postgres connection string is required. "
                "Either pass it as a parameter "
                "or set the HOLOGRES_CONNECTION_STRING environment variable."
            )

        return connection_string

    @classmethod
    def from_documents(
        cls: Type[Hologres],
        documents: List[Document],
        embedding: Embeddings,
        ndims: int = ADA_TOKEN_COUNT,
        table_name: str = _LANGCHAIN_DEFAULT_TABLE_NAME,
        ids: Optional[List[str]] = None,
        pre_delete_collection: bool = False,
        **kwargs: Any,
    ) -> Hologres:
        """Return VectorStore initialized from documents and embeddings.

        A Postgres connection string is required: either pass it as a parameter
        or set the HOLOGRES_CONNECTION_STRING environment variable.
        """
        texts = [d.page_content for d in documents]
        metadatas = [d.metadata for d in documents]
        connection_string = cls.get_connection_string(kwargs)

        kwargs["connection_string"] = connection_string

        return cls.from_texts(
            texts=texts,
            pre_delete_collection=pre_delete_collection,
            embedding=embedding,
            metadatas=metadatas,
            ids=ids,
            ndims=ndims,
            table_name=table_name,
            **kwargs,
        )

    @classmethod
    def connection_string_from_db_params(
        cls,
        host: str,
        port: int,
        database: str,
        user: str,
        password: str,
    ) -> str:
        """Return connection string from database parameters."""
        return (
            f"dbname={database} user={user} password={password} host={host} port={port}"
        )
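Putting the pieces together, a hedged end-to-end sketch: ``connection_string_from_db_params`` only formats the DSN, ``from_documents`` routes through ``from_texts`` and ``__from``, and the ``filter`` argument maps onto the metadata clause shown earlier. The host, credentials, and document below are hypothetical:

.. code-block:: python

    from langchain.docstore.document import Document
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores.hologres import Hologres

    conn_str = Hologres.connection_string_from_db_params(
        host="localhost", port=80, database="test", user="user", password="password"
    )
    store = Hologres.from_documents(
        documents=[Document(page_content="hello hologres", metadata={"topic": "demo"})],
        embedding=OpenAIEmbeddings(),
        connection_string=conn_str,
    )
    docs = store.similarity_search("hello", k=1, filter={"topic": "demo"})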
search\"\"\"\nfrom __future__ import annotations\nimport uuid\nfrom typing import TYPE_CHECKING, Any, Iterable, List, Optional, Tuple, Union\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_env\nfrom langchain.vectorstores.base import VectorStore\nif TYPE_CHECKING:\n from typesense.client import Client\n from typesense.collection import Collection\n[docs]class Typesense(VectorStore):\n \"\"\"Wrapper around Typesense vector search.\n To use, you should have the ``typesense`` python package installed.\n Example:\n .. code-block:: python\n from langchain.embedding.openai import OpenAIEmbeddings\n from langchain.vectorstores import Typesense\n import typesense\n node = {\n \"host\": \"localhost\", # For Typesense Cloud use xxx.a1.typesense.net\n \"port\": \"8108\", # For Typesense Cloud use 443\n \"protocol\": \"http\" # For Typesense Cloud use https\n }\n typesense_client = typesense.Client(\n {\n \"nodes\": [node],\n \"api_key\": \"\",\n \"connection_timeout_seconds\": 2\n }\n )\n typesense_collection_name = \"langchain-memory\"\n embedding = OpenAIEmbeddings()\n vectorstore = Typesense(\n typesense_client=typesense_client,\n embedding=embedding,\n typesense_collection_name=typesense_collection_name,\n text_key=\"text\",\n )\n \"\"\"\n def __init__(\n self,\n typesense_client: Client,\n embedding: Embeddings,\n *,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/typesense.html"} {"id": "f511801286ec-1", "text": "typesense_client: Client,\n embedding: Embeddings,\n *,\n typesense_collection_name: Optional[str] = None,\n text_key: str = \"text\",\n ):\n \"\"\"Initialize with Typesense client.\"\"\"\n try:\n from typesense import Client\n except ImportError:\n raise ValueError(\n \"Could not import typesense python package. 
\"\n \"Please install it with `pip install typesense`.\"\n )\n if not isinstance(typesense_client, Client):\n raise ValueError(\n f\"typesense_client should be an instance of typesense.Client, \"\n f\"got {type(typesense_client)}\"\n )\n self._typesense_client = typesense_client\n self._embedding = embedding\n self._typesense_collection_name = (\n typesense_collection_name or f\"langchain-{str(uuid.uuid4())}\"\n )\n self._text_key = text_key\n @property\n def _collection(self) -> Collection:\n return self._typesense_client.collections[self._typesense_collection_name]\n def _prep_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]],\n ids: Optional[List[str]],\n ) -> List[dict]:\n \"\"\"Embed and create the documents\"\"\"\n _ids = ids or (str(uuid.uuid4()) for _ in texts)\n _metadatas: Iterable[dict] = metadatas or ({} for _ in texts)\n embedded_texts = self._embedding.embed_documents(list(texts))\n return [\n {\"id\": _id, \"vec\": vec, f\"{self._text_key}\": text, \"metadata\": metadata}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/typesense.html"} {"id": "f511801286ec-2", "text": "for _id, vec, text, metadata in zip(_ids, embedded_texts, texts, _metadatas)\n ]\n def _create_collection(self, num_dim: int) -> None:\n fields = [\n {\"name\": \"vec\", \"type\": \"float[]\", \"num_dim\": num_dim},\n {\"name\": f\"{self._text_key}\", \"type\": \"string\"},\n {\"name\": \".*\", \"type\": \"auto\"},\n ]\n self._typesense_client.collections.create(\n {\"name\": self._typesense_collection_name, \"fields\": fields}\n )\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embedding and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n ids: Optional list of ids to associate with the texts.\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n from typesense.exceptions import ObjectNotFound\n docs = self._prep_texts(texts, metadatas, ids)\n try:\n self._collection.documents.import_(docs, {\"action\": \"upsert\"})\n except ObjectNotFound:\n # Create the collection if it doesn't already exist\n self._create_collection(len(docs[0][\"vec\"]))\n self._collection.documents.import_(docs, {\"action\": \"upsert\"})\n return [doc[\"id\"] for doc in docs]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/typesense.html"} {"id": "f511801286ec-3", "text": "return [doc[\"id\"] for doc in docs]\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 10,\n filter: Optional[str] = \"\",\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return typesense documents most similar to query, along with scores.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. 
Defaults to 10.\n Minimum 10 results would be returned.\n filter: typesense filter_by expression to filter documents on\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n embedded_query = [str(x) for x in self._embedding.embed_query(query)]\n query_obj = {\n \"q\": \"*\",\n \"vector_query\": f'vec:([{\",\".join(embedded_query)}], k:{k})',\n \"filter_by\": filter,\n \"collection\": self._typesense_collection_name,\n }\n docs = []\n response = self._typesense_client.multi_search.perform(\n {\"searches\": [query_obj]}, {}\n )\n for hit in response[\"results\"][0][\"hits\"]:\n document = hit[\"document\"]\n metadata = document[\"metadata\"]\n text = document[self._text_key]\n score = hit[\"vector_distance\"]\n docs.append((Document(page_content=text, metadata=metadata), score))\n return docs\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 10,\n filter: Optional[str] = \"\",\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return typesense documents most similar to query.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/typesense.html"} {"id": "f511801286ec-4", "text": ") -> List[Document]:\n \"\"\"Return typesense documents most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 10.\n Minimum 10 results would be returned.\n filter: typesense filter_by expression to filter documents on\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n docs_and_score = self.similarity_search_with_score(query, k=k, filter=filter)\n return [doc for doc, _ in docs_and_score]\n[docs] @classmethod\n def from_client_params(\n cls,\n embedding: Embeddings,\n *,\n host: str = \"localhost\",\n port: Union[str, int] = \"8108\",\n protocol: str = \"http\",\n typesense_api_key: Optional[str] = None,\n connection_timeout_seconds: int = 2,\n **kwargs: Any,\n ) -> Typesense:\n \"\"\"Initialize Typesense directly from client parameters.\n Example:\n .. code-block:: python\n from langchain.embedding.openai import OpenAIEmbeddings\n from langchain.vectorstores import Typesense\n # Pass in typesense_api_key as kwarg or set env var \"TYPESENSE_API_KEY\".\n vectorstore = Typesense(\n OpenAIEmbeddings(),\n host=\"localhost\",\n port=\"8108\",\n protocol=\"http\",\n typesense_collection_name=\"langchain-memory\",\n )\n \"\"\"\n try:\n from typesense import Client\n except ImportError:\n raise ValueError(\n \"Could not import typesense python package. 
\"\n \"Please install it with `pip install typesense`.\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/typesense.html"} {"id": "f511801286ec-5", "text": "\"Please install it with `pip install typesense`.\"\n )\n node = {\n \"host\": host,\n \"port\": str(port),\n \"protocol\": protocol,\n }\n typesense_api_key = typesense_api_key or get_from_env(\n \"typesense_api_key\", \"TYPESENSE_API_KEY\"\n )\n client_config = {\n \"nodes\": [node],\n \"api_key\": typesense_api_key,\n \"connection_timeout_seconds\": connection_timeout_seconds,\n }\n return cls(Client(client_config), embedding, **kwargs)\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n typesense_client: Optional[Client] = None,\n typesense_client_params: Optional[dict] = None,\n typesense_collection_name: Optional[str] = None,\n text_key: str = \"text\",\n **kwargs: Any,\n ) -> Typesense:\n \"\"\"Construct Typesense wrapper from raw text.\"\"\"\n if typesense_client:\n vectorstore = cls(typesense_client, embedding, **kwargs)\n elif typesense_client_params:\n vectorstore = cls.from_client_params(\n embedding, **typesense_client_params, **kwargs\n )\n else:\n raise ValueError(\n \"Must specify one of typesense_client or typesense_client_params.\"\n )\n vectorstore.add_texts(texts, metadatas=metadatas, ids=ids)\n return vectorstore", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/typesense.html"} {"id": "93c5f0958daa-0", "text": "Source code for langchain.vectorstores.opensearch_vector_search\n\"\"\"Wrapper around OpenSearch vector database.\"\"\"\nfrom __future__ import annotations\nimport uuid\nfrom typing import Any, Dict, Iterable, List, Optional, Tuple\nimport numpy as np\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import Document\nfrom langchain.utils import get_from_dict_or_env\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nIMPORT_OPENSEARCH_PY_ERROR = (\n \"Could not import OpenSearch. 
Source code for langchain.vectorstores.opensearch_vector_search

"""Wrapper around OpenSearch vector database."""
from __future__ import annotations

import uuid
from typing import Any, Dict, Iterable, List, Optional, Tuple

import numpy as np

from langchain.embeddings.base import Embeddings
from langchain.schema import Document
from langchain.utils import get_from_dict_or_env
from langchain.vectorstores.base import VectorStore
from langchain.vectorstores.utils import maximal_marginal_relevance

IMPORT_OPENSEARCH_PY_ERROR = (
    "Could not import OpenSearch. Please install it with `pip install opensearch-py`."
)
SCRIPT_SCORING_SEARCH = "script_scoring"
PAINLESS_SCRIPTING_SEARCH = "painless_scripting"
MATCH_ALL_QUERY = {"match_all": {}}  # type: Dict


def _import_opensearch() -> Any:
    """Import OpenSearch if available, otherwise raise error."""
    try:
        from opensearchpy import OpenSearch
    except ImportError:
        raise ValueError(IMPORT_OPENSEARCH_PY_ERROR)
    return OpenSearch


def _import_bulk() -> Any:
    """Import bulk if available, otherwise raise error."""
    try:
        from opensearchpy.helpers import bulk
    except ImportError:
        raise ValueError(IMPORT_OPENSEARCH_PY_ERROR)
    return bulk


def _import_not_found_error() -> Any:
    """Import not found error if available, otherwise raise error."""
    try:
        from opensearchpy.exceptions import NotFoundError
    except ImportError:
        raise ValueError(IMPORT_OPENSEARCH_PY_ERROR)
    return NotFoundError


def _get_opensearch_client(opensearch_url: str, **kwargs: Any) -> Any:
    """Get OpenSearch client from the opensearch_url, otherwise raise error."""
    try:
        opensearch = _import_opensearch()
        client = opensearch(opensearch_url, **kwargs)
    except ValueError as e:
        raise ValueError(
            f"OpenSearch client string provided is not in proper format. "
            f"Got error: {e} "
        )
    return client


def _validate_embeddings_and_bulk_size(embeddings_length: int, bulk_size: int) -> None:
    """Validate Embeddings Length and Bulk Size."""
    if embeddings_length == 0:
        raise RuntimeError("Embeddings size is zero")
    if bulk_size < embeddings_length:
        raise RuntimeError(
            f"The embeddings count, {embeddings_length} is more than the "
            f"[bulk_size], {bulk_size}. Increase the value of [bulk_size]."
        )


def _bulk_ingest_embeddings(
    client: Any,
    index_name: str,
    embeddings: List[List[float]],
    texts: Iterable[str],
    metadatas: Optional[List[dict]] = None,
    ids: Optional[List[str]] = None,
    vector_field: str = "vector_field",
    text_field: str = "text",
    mapping: Optional[Dict] = None,
    max_chunk_bytes: Optional[int] = 1 * 1024 * 1024,
) -> List[str]:
    """Bulk Ingest Embeddings into given index."""
    if not mapping:
        mapping = dict()

    bulk = _import_bulk()
    not_found_error = _import_not_found_error()
    requests = []
    return_ids = []

    try:
        client.indices.get(index=index_name)
    except not_found_error:
        client.indices.create(index=index_name, body=mapping)

    for i, text in enumerate(texts):
        metadata = metadatas[i] if metadatas else {}
        _id = ids[i] if ids else str(uuid.uuid4())
        request = {
            "_op_type": "index",
            "_index": index_name,
            vector_field: embeddings[i],
            text_field: text,
            "metadata": metadata,
            "_id": _id,
        }
        requests.append(request)
        return_ids.append(_id)

    bulk(client, requests, max_chunk_bytes=max_chunk_bytes)
    client.indices.refresh(index=index_name)
    return return_ids


def _default_scripting_text_mapping(
    dim: int,
    vector_field: str = "vector_field",
) -> Dict:
    """For Painless Scripting or Script Scoring, the default mapping to create index."""
    return {
        "mappings": {
            "properties": {
                vector_field: {"type": "knn_vector", "dimension": dim},
            }
        }
    }


def _default_text_mapping(
    dim: int,
    engine: str = "nmslib",
    space_type: str = "l2",
    ef_search: int = 512,
    ef_construction: int = 512,
    m: int = 16,
    vector_field: str = "vector_field",
) -> Dict:
    """For Approximate k-NN Search, this is the default mapping to create index."""
    return {
        "settings": {"index": {"knn": True, "knn.algo_param.ef_search": ef_search}},
        "mappings": {
            "properties": {
                vector_field: {
                    "type": "knn_vector",
                    "dimension": dim,
                    "method": {
                        "name": "hnsw",
                        "space_type": space_type,
                        "engine": engine,
                        "parameters": {"ef_construction": ef_construction, "m": m},
                    },
                }
            }
        },
    }


def _default_approximate_search_query(
    query_vector: List[float],
    k: int = 4,
    vector_field: str = "vector_field",
) -> Dict:
    """For Approximate k-NN Search, this is the default query."""
    return {
        "size": k,
        "query": {"knn": {vector_field: {"vector": query_vector, "k": k}}},
    }


def _approximate_search_query_with_boolean_filter(
    query_vector: List[float],
    boolean_filter: Dict,
    k: int = 4,
    vector_field: str = "vector_field",
    subquery_clause: str = "must",
) -> Dict:
    """For Approximate k-NN Search, with Boolean Filter."""
    return {
        "size": k,
        "query": {
            "bool": {
                "filter": boolean_filter,
                subquery_clause: [
                    {"knn": {vector_field: {"vector": query_vector, "k": k}}}
                ],
            }
        },
    }


def _approximate_search_query_with_lucene_filter(
    query_vector: List[float],
    lucene_filter: Dict,
    k: int = 4,
    vector_field: str = "vector_field",
) -> Dict:
    """For Approximate k-NN Search, with Lucene Filter."""
    search_query = _default_approximate_search_query(
        query_vector, k=k, vector_field=vector_field
    )
    search_query["query"]["knn"][vector_field]["filter"] = lucene_filter
    return search_query


def _default_script_query(
    query_vector: List[float],
    space_type: str = "l2",
    pre_filter: Optional[Dict] = None,
    vector_field: str = "vector_field",
) -> Dict:
    """For Script Scoring Search, this is the default query."""
    if not pre_filter:
        pre_filter = MATCH_ALL_QUERY
    return {
        "query": {
            "script_score": {
                "query": pre_filter,
                "script": {
                    "source": "knn_score",
                    "lang": "knn",
                    "params": {
                        "field": vector_field,
                        "query_value": query_vector,
                        "space_type": space_type,
                    },
                },
            }
        }
    }


def __get_painless_scripting_source(
    space_type: str, query_vector: List[float], vector_field: str = "vector_field"
) -> str:
    """For Painless Scripting, it returns the script source based on space type."""
    source_value = (
        "(1.0 + "
        + space_type
        + "("
        + str(query_vector)
        + ", doc['"
        + vector_field
        + "']))"
    )
    if space_type == "cosineSimilarity":
        return source_value
    else:
        return "1/" + source_value


def _default_painless_scripting_query(
    query_vector: List[float],
    space_type: str = "l2Squared",
    pre_filter: Optional[Dict] = None,
    vector_field: str = "vector_field",
) -> Dict:
    """For Painless Scripting Search, this is the default query."""
    if not pre_filter:
        pre_filter = MATCH_ALL_QUERY
    source = __get_painless_scripting_source(space_type, query_vector)
    return {
        "query": {
            "script_score": {
                "query": pre_filter,
                "script": {
                    "source": source,
                    "params": {
                        "field": vector_field,
                        "query_value": query_vector,
                    },
                },
            }
        }
    }


def _get_kwargs_value(kwargs: Any, key: str, default_value: Any) -> Any:
    """Get the value of the key if present. Else get the default_value."""
    if key in kwargs:
        return kwargs.get(key)
    return default_value
class OpenSearchVectorSearch(VectorStore):
    """Wrapper around OpenSearch as a vector database.

    Example:
        .. code-block:: python

            from langchain import OpenSearchVectorSearch

            opensearch_vector_search = OpenSearchVectorSearch(
                "http://localhost:9200",
                "embeddings",
                embedding_function
            )
    """

    def __init__(
        self,
        opensearch_url: str,
        index_name: str,
        embedding_function: Embeddings,
        **kwargs: Any,
    ):
        """Initialize with necessary components."""
        self.embedding_function = embedding_function
        self.index_name = index_name
        self.client = _get_opensearch_client(opensearch_url, **kwargs)

    def add_texts(
        self,
        texts: Iterable[str],
        metadatas: Optional[List[dict]] = None,
        ids: Optional[List[str]] = None,
        bulk_size: int = 500,
        **kwargs: Any,
    ) -> List[str]:
        """Run more texts through the embeddings and add to the vectorstore.

        Args:
            texts: Iterable of strings to add to the vectorstore.
            metadatas: Optional list of metadatas associated with the texts.
            ids: Optional list of ids to associate with the texts.
            bulk_size: Bulk API request count; Default: 500

        Returns:
            List of ids from adding the texts into the vectorstore.

        Optional Args:
            vector_field: Document field embeddings are stored in. Defaults to
            "vector_field".

            text_field: Document field the text of the document is stored in.
            Defaults to "text".
        """
        embeddings = self.embedding_function.embed_documents(list(texts))
        _validate_embeddings_and_bulk_size(len(embeddings), bulk_size)
        text_field = _get_kwargs_value(kwargs, "text_field", "text")
        dim = len(embeddings[0])
        engine = _get_kwargs_value(kwargs, "engine", "nmslib")
        space_type = _get_kwargs_value(kwargs, "space_type", "l2")
        ef_search = _get_kwargs_value(kwargs, "ef_search", 512)
        ef_construction = _get_kwargs_value(kwargs, "ef_construction", 512)
        m = _get_kwargs_value(kwargs, "m", 16)
        vector_field = _get_kwargs_value(kwargs, "vector_field", "vector_field")
        max_chunk_bytes = _get_kwargs_value(kwargs, "max_chunk_bytes", 1 * 1024 * 1024)

        mapping = _default_text_mapping(
            dim, engine, space_type, ef_search, ef_construction, m, vector_field
        )

        return _bulk_ingest_embeddings(
            self.client,
            self.index_name,
            embeddings,
            texts,
            metadatas=metadatas,
            ids=ids,
            vector_field=vector_field,
            text_field=text_field,
            mapping=mapping,
            max_chunk_bytes=max_chunk_bytes,
        )

    def similarity_search(
        self, query: str, k: int = 4, **kwargs: Any
    ) -> List[Document]:
        """Return docs most similar to query.

        By default, supports Approximate Search.
        Also supports Script Scoring and Painless Scripting.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.

        Returns:
            List of Documents most similar to the query.

        Optional Args:
            vector_field: Document field embeddings are stored in. Defaults to
            "vector_field".

            text_field: Document field the text of the document is stored in.
            Defaults to "text".

            metadata_field: Document field that metadata is stored in. Defaults to
            "metadata". Can be set to a special value "*" to include the entire
            document.

        Optional Args for Approximate Search:
            search_type: "approximate_search"; default: "approximate_search"

            boolean_filter: A Boolean filter consists of a Boolean query that
            contains a k-NN query and a filter.

            subquery_clause: Query clause on the knn vector field; default: "must"

            lucene_filter: the Lucene algorithm decides whether to perform an exact
            k-NN search with pre-filtering or an approximate search with modified
            post-filtering.

        Optional Args for Script Scoring Search:
            search_type: "script_scoring"; default: "approximate_search"

            space_type: "l2", "l1", "linf", "cosinesimil", "innerproduct",
            "hammingbit"; default: "l2"

            pre_filter: script_score query to pre-filter documents before identifying
            nearest neighbors; default: {"match_all": {}}

        Optional Args for Painless Scripting Search:
            search_type: "painless_scripting"; default: "approximate_search"

            space_type: "l2Squared", "l1Norm", "cosineSimilarity"; default: "l2Squared"

            pre_filter: script_score query to pre-filter documents before identifying
            nearest neighbors; default: {"match_all": {}}
        """
        docs_with_scores = self.similarity_search_with_score(query, k, **kwargs)
        return [doc[0] for doc in docs_with_scores]

    def similarity_search_with_score(
        self, query: str, k: int = 4, **kwargs: Any
    ) -> List[Tuple[Document, float]]:
        """Return docs and their scores most similar to query.

        By default, supports Approximate Search.
        Also supports Script Scoring and Painless Scripting.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.

        Returns:
            List of Documents along with their scores most similar to the query.

        Optional Args:
            same as `similarity_search`
        """
        text_field = _get_kwargs_value(kwargs, "text_field", "text")
        metadata_field = _get_kwargs_value(kwargs, "metadata_field", "metadata")

        hits = self._raw_similarity_search_with_score(query=query, k=k, **kwargs)

        documents_with_scores = [
            (
                Document(
                    page_content=hit["_source"][text_field],
                    metadata=hit["_source"]
                    if metadata_field == "*" or metadata_field not in hit["_source"]
                    else hit["_source"][metadata_field],
                ),
                hit["_score"],
            )
            for hit in hits
        ]
        return documents_with_scores

    def _raw_similarity_search_with_score(
        self, query: str, k: int = 4, **kwargs: Any
    ) -> List[dict]:
        """Return raw OpenSearch documents (dicts), including vectors and
        scores, most similar to the query.

        By default, supports Approximate Search.
        Also supports Script Scoring and Painless Scripting.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.

        Returns:
            List of dicts with scores most similar to the query.

        Optional Args:
            same as `similarity_search`
        """
        embedding = self.embedding_function.embed_query(query)
        search_type = _get_kwargs_value(kwargs, "search_type", "approximate_search")
        vector_field = _get_kwargs_value(kwargs, "vector_field", "vector_field")

        if search_type == "approximate_search":
            boolean_filter = _get_kwargs_value(kwargs, "boolean_filter", {})
            subquery_clause = _get_kwargs_value(kwargs, "subquery_clause", "must")
            lucene_filter = _get_kwargs_value(kwargs, "lucene_filter", {})
            if boolean_filter != {} and lucene_filter != {}:
                raise ValueError(
                    "Both `boolean_filter` and `lucene_filter` are provided which "
                    "is invalid"
                )
            if boolean_filter != {}:
                search_query = _approximate_search_query_with_boolean_filter(
                    embedding,
                    boolean_filter,
                    k=k,
                    vector_field=vector_field,
                    subquery_clause=subquery_clause,
                )
            elif lucene_filter != {}:
                search_query = _approximate_search_query_with_lucene_filter(
                    embedding, lucene_filter, k=k, vector_field=vector_field
                )
            else:
                search_query = _default_approximate_search_query(
                    embedding, k=k, vector_field=vector_field
                )
        elif search_type == SCRIPT_SCORING_SEARCH:
            space_type = _get_kwargs_value(kwargs, "space_type", "l2")
            pre_filter = _get_kwargs_value(kwargs, "pre_filter", MATCH_ALL_QUERY)
            search_query = _default_script_query(
                embedding, space_type, pre_filter, vector_field
            )
        elif search_type == PAINLESS_SCRIPTING_SEARCH:
            space_type = _get_kwargs_value(kwargs, "space_type", "l2Squared")
            pre_filter = _get_kwargs_value(kwargs, "pre_filter", MATCH_ALL_QUERY)
            search_query = _default_painless_scripting_query(
                embedding, space_type, pre_filter, vector_field
            )
        else:
            raise ValueError("Invalid `search_type` provided as an argument")

        response = self.client.search(index=self.index_name, body=search_query)
        return [hit for hit in response["hits"]["hits"][:k]]
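Given the dispatch logic above, the search mode is selected purely through keyword arguments. A couple of hedged call sketches against a hypothetical ``opensearch_vector_search`` instance:

.. code-block:: python

    # Approximate k-NN restricted by a Boolean filter on a metadata field.
    docs = opensearch_vector_search.similarity_search(
        "What did the president say?",
        k=4,
        boolean_filter={"term": {"metadata.source": "speech"}},  # hypothetical field
    )

    # Brute-force script scoring with cosine similarity instead of ANN.
    docs = opensearch_vector_search.similarity_search(
        "What did the president say?",
        search_type="script_scoring",
        space_type="cosinesimil",
    )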
    def max_marginal_relevance_search(
        self,
        query: str,
        k: int = 4,
        fetch_k: int = 20,
        lambda_mult: float = 0.5,
        **kwargs: Any,
    ) -> list[Document]:
        """Return docs selected using the maximal marginal relevance.

        Maximal marginal relevance optimizes for similarity to query AND diversity
        among selected documents.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            fetch_k: Number of Documents to fetch to pass to MMR algorithm.
                Defaults to 20.
            lambda_mult: Number between 0 and 1 that determines the degree
                of diversity among the results, with 0 corresponding
                to maximum diversity and 1 to minimum diversity.
                Defaults to 0.5.

        Returns:
            List of Documents selected by maximal marginal relevance.
        """
        vector_field = _get_kwargs_value(kwargs, "vector_field", "vector_field")
        text_field = _get_kwargs_value(kwargs, "text_field", "text")
        metadata_field = _get_kwargs_value(kwargs, "metadata_field", "metadata")

        # Get embedding of the user query
        embedding = self.embedding_function.embed_query(query)

        # Do ANN/KNN search to get top fetch_k results where fetch_k >= k
        results = self._raw_similarity_search_with_score(query, fetch_k, **kwargs)

        embeddings = [result["_source"][vector_field] for result in results]

        # Rerank top k results using MMR (mmr_selected is a list of indices)
        mmr_selected = maximal_marginal_relevance(
            np.array(embedding), embeddings, k=k, lambda_mult=lambda_mult
        )

        return [
            Document(
                page_content=results[i]["_source"][text_field],
                metadata=results[i]["_source"][metadata_field],
            )
            for i in mmr_selected
        ]

    @classmethod
    def from_texts(
        cls,
        texts: List[str],
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
        bulk_size: int = 500,
        **kwargs: Any,
    ) -> OpenSearchVectorSearch:
        """Construct OpenSearchVectorSearch wrapper from raw documents.

        Example:
            .. code-block:: python

                from langchain import OpenSearchVectorSearch
                from langchain.embeddings import OpenAIEmbeddings

                embeddings = OpenAIEmbeddings()
                opensearch_vector_search = OpenSearchVectorSearch.from_texts(
                    texts,
                    embeddings,
                    opensearch_url="http://localhost:9200"
                )

        OpenSearch by default supports Approximate Search powered by the nmslib,
        faiss and lucene engines, recommended for large datasets. It also supports
        brute-force search through Script Scoring and Painless Scripting.

        Optional Args:
            vector_field: Document field embeddings are stored in. Defaults to
            "vector_field".

            text_field: Document field the text of the document is stored in.
            Defaults to "text".

        Optional Keyword Args for Approximate Search:
            engine: "nmslib", "faiss", "lucene"; default: "nmslib"

            space_type: "l2", "l1", "cosinesimil", "linf", "innerproduct"; default: "l2"

            ef_search: Size of the dynamic list used during k-NN searches. Higher values
            lead to more accurate but slower searches; default: 512

            ef_construction: Size of the dynamic list used during k-NN graph creation.
            Higher values lead to a more accurate graph but slower indexing speed;
            default: 512

            m: Number of bidirectional links created for each new element. Large impact
            on memory consumption. Between 2 and 100; default: 16

        Keyword Args for Script Scoring or Painless Scripting:
            is_appx_search: False
        """
        opensearch_url = get_from_dict_or_env(
            kwargs, "opensearch_url", "OPENSEARCH_URL"
        )
        # List of arguments that need to be removed from kwargs
        # before passing kwargs to get the OpenSearch client.
        keys_list = [
            "opensearch_url",
            "index_name",
            "is_appx_search",
            "vector_field",
            "text_field",
            "engine",
            "space_type",
            "ef_search",
            "ef_construction",
            "m",
            "max_chunk_bytes",
        ]
        embeddings = embedding.embed_documents(texts)
        _validate_embeddings_and_bulk_size(len(embeddings), bulk_size)
        dim = len(embeddings[0])
        # Get the index name either from kwargs or an ENV variable
        # before falling back to random generation.
        index_name = get_from_dict_or_env(
            kwargs, "index_name", "OPENSEARCH_INDEX_NAME", default=uuid.uuid4().hex
        )
        is_appx_search = _get_kwargs_value(kwargs, "is_appx_search", True)
        vector_field = _get_kwargs_value(kwargs, "vector_field", "vector_field")
        text_field = _get_kwargs_value(kwargs, "text_field", "text")
        max_chunk_bytes = _get_kwargs_value(kwargs, "max_chunk_bytes", 1 * 1024 * 1024)
        if is_appx_search:
            engine = _get_kwargs_value(kwargs, "engine", "nmslib")
            space_type = _get_kwargs_value(kwargs, "space_type", "l2")
            ef_search = _get_kwargs_value(kwargs, "ef_search", 512)
            ef_construction = _get_kwargs_value(kwargs, "ef_construction", 512)
            m = _get_kwargs_value(kwargs, "m", 16)

            mapping = _default_text_mapping(
                dim, engine, space_type, ef_search, ef_construction, m, vector_field
            )
        else:
            mapping = _default_scripting_text_mapping(dim)

        for key in keys_list:
            kwargs.pop(key, None)
        client = _get_opensearch_client(opensearch_url, **kwargs)

        _bulk_ingest_embeddings(
            client,
            index_name,
            embeddings,
            texts,
            metadatas=metadatas,
            vector_field=vector_field,
            text_field=text_field,
            mapping=mapping,
            max_chunk_bytes=max_chunk_bytes,
        )
        return cls(opensearch_url, index_name, embedding, **kwargs)
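A usage sketch for the MMR re-ranker defined above: it fetches ``fetch_k`` ANN candidates and keeps the ``k`` that best trade off query similarity against mutual diversity (``lambda_mult=0.5`` weighs the two equally). The instance is assumed to be built as in the ``from_texts`` example:

.. code-block:: python

    docs = opensearch_vector_search.max_marginal_relevance_search(
        "What did the president say?",
        k=4,
        fetch_k=20,
        lambda_mult=0.5,
    )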
Source code for langchain.vectorstores.atlas

"""Wrapper around Atlas by Nomic."""
from __future__ import annotations

import logging
import uuid
from typing import Any, Iterable, List, Optional, Type

import numpy as np

from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore

logger = logging.getLogger(__name__)


class AtlasDB(VectorStore):
    """Wrapper around Atlas: Nomic's neural database and rhizomatic instrument.

    To use, you should have the ``nomic`` python package installed.

    Example:
        .. code-block:: python

            from langchain.vectorstores import AtlasDB
            from langchain.embeddings.openai import OpenAIEmbeddings

            embeddings = OpenAIEmbeddings()
            vectorstore = AtlasDB("my_project", embeddings.embed_query)
    """

    _ATLAS_DEFAULT_ID_FIELD = "atlas_id"

    def __init__(
        self,
        name: str,
        embedding_function: Optional[Embeddings] = None,
        api_key: Optional[str] = None,
        description: str = "A description for your project",
        is_public: bool = True,
        reset_project_if_exists: bool = False,
    ) -> None:
        """Initialize the Atlas Client.

        Args:
            name (str): The name of your project. If the project already exists,
                it will be loaded.
            embedding_function (Optional[Callable]): An optional function used for
                embedding your data. If None, data will be embedded with
                Nomic's embed model.
            api_key (str): Your nomic API key
            description (str): A description for your project.
            is_public (bool): Whether your project is publicly accessible.
                True by default.
            reset_project_if_exists (bool): Whether to reset this project if it
                already exists. Default False.
                Generally useful during development and testing.
        """
        try:
            import nomic
            from nomic import AtlasProject
        except ImportError:
            raise ValueError(
                "Could not import nomic python package. "
                "Please install it with `pip install nomic`."
            )

        if api_key is None:
            raise ValueError("No API key provided. Sign up at atlas.nomic.ai!")
        nomic.login(api_key)

        self._embedding_function = embedding_function
        modality = "text"
        if self._embedding_function is not None:
            modality = "embedding"

        # Check if the project exists, create it if not
        self.project = AtlasProject(
            name=name,
            description=description,
            modality=modality,
            is_public=is_public,
            reset_project_if_exists=reset_project_if_exists,
            unique_id_field=AtlasDB._ATLAS_DEFAULT_ID_FIELD,
        )
        self.project._latest_project_state()

    def add_texts(
        self,
        texts: Iterable[str],
        metadatas: Optional[List[dict]] = None,
        ids: Optional[List[str]] = None,
        refresh: bool = True,
        **kwargs: Any,
    ) -> List[str]:
        """Run more texts through the embeddings and add to the vectorstore.

        Args:
            texts (Iterable[str]): Texts to add to the vectorstore.
            metadatas (Optional[List[dict]], optional): Optional list of metadatas.
            ids (Optional[List[str]]): An optional list of ids.
            refresh (bool): Whether or not to refresh indices with the updated data.
                Default True.

        Returns:
            List[str]: List of IDs of the added texts.
        """
        if (
            metadatas is not None
            and len(metadatas) > 0
            and "text" in metadatas[0].keys()
        ):
            raise ValueError("Cannot accept key text in metadata!")

        texts = list(texts)
        if ids is None:
            ids = [str(uuid.uuid1()) for _ in texts]

        # Embedding upload case
        if self._embedding_function is not None:
            _embeddings = self._embedding_function.embed_documents(texts)
            embeddings = np.stack(_embeddings)
            if metadatas is None:
                data = [
                    {AtlasDB._ATLAS_DEFAULT_ID_FIELD: ids[i], "text": texts[i]}
                    for i, _ in enumerate(texts)
                ]
            else:
                for i in range(len(metadatas)):
                    metadatas[i][AtlasDB._ATLAS_DEFAULT_ID_FIELD] = ids[i]
                    metadatas[i]["text"] = texts[i]
                data = metadatas

            self.project._validate_map_data_inputs(
                [], id_field=AtlasDB._ATLAS_DEFAULT_ID_FIELD, data=data
            )
            with self.project.wait_for_project_lock():
                self.project.add_embeddings(embeddings=embeddings, data=data)
        # Text upload case
        else:
            if metadatas is None:
                data = [
                    {"text": text, AtlasDB._ATLAS_DEFAULT_ID_FIELD: ids[i]}
                    for i, text in enumerate(texts)
                ]
            else:
                for i, text in enumerate(texts):
                    metadatas[i]["text"] = text
                    metadatas[i][AtlasDB._ATLAS_DEFAULT_ID_FIELD] = ids[i]
                data = metadatas

            self.project._validate_map_data_inputs(
                [], id_field=AtlasDB._ATLAS_DEFAULT_ID_FIELD, data=data
            )

            with self.project.wait_for_project_lock():
                self.project.add_text(data)

        if refresh:
            if len(self.project.indices) > 0:
                with self.project.wait_for_project_lock():
                    self.project.rebuild_maps()

        return ids

    def create_index(self, **kwargs: Any) -> Any:
        """Creates an index in your project.

        See
        https://docs.nomic.ai/atlas_api.html#nomic.project.AtlasProject.create_index
        for full detail.
        """
        with self.project.wait_for_project_lock():
            return self.project.create_index(**kwargs)

    def similarity_search(
        self,
        query: str,
        k: int = 4,
        **kwargs: Any,
    ) -> List[Document]:
        """Run similarity search with AtlasDB.

        Args:
            query (str): Query text to search for.
            k (int): Number of results to return. Defaults to 4.

        Returns:
            List[Document]: List of documents most similar to the query text.
        """
        if self._embedding_function is None:
            raise NotImplementedError(
                "AtlasDB requires an embedding_function for text similarity search!"
            )

        _embedding = self._embedding_function.embed_documents([query])[0]
        embedding = np.array(_embedding).reshape(1, -1)
        with self.project.wait_for_project_lock():
            neighbors, _ = self.project.projections[0].vector_search(
                queries=embedding, k=k
            )
            datas = self.project.get_data(ids=neighbors[0])

        docs = [
            Document(page_content=data["text"], metadata=data) for data in datas
        ]
        return docs

    @classmethod
    def from_texts(
        cls: Type[AtlasDB],
        texts: List[str],
        embedding: Optional[Embeddings] = None,
        metadatas: Optional[List[dict]] = None,
        ids: Optional[List[str]] = None,
        name: Optional[str] = None,
        api_key: Optional[str] = None,
        description: str = "A description for your project",
        is_public: bool = True,
        reset_project_if_exists: bool = False,
        index_kwargs: Optional[dict] = None,
        **kwargs: Any,
    ) -> AtlasDB:
        """Create an AtlasDB vectorstore from raw texts.

        Args:
            texts (List[str]): The list of texts to ingest.
            name (str): Name of the project to create.
            api_key (str): Your nomic API key.
            embedding (Optional[Embeddings]): Embedding function. Defaults to None.
            metadatas (Optional[List[dict]]): List of metadatas. Defaults to None.
            ids (Optional[List[str]]): Optional list of document IDs. If None,
                ids will be auto created.
            description (str): A description for your project.
            is_public (bool): Whether your project is publicly accessible.
                True by default.
            reset_project_if_exists (bool): Whether to reset this project if it
                already exists. Default False.
                Generally useful during development and testing.
            index_kwargs (Optional[dict]): Dict of kwargs for index creation.
                See https://docs.nomic.ai/atlas_api.html

        Returns:
            AtlasDB: Nomic's neural database and finest rhizomatic instrument
        """
        if name is None or api_key is None:
            raise ValueError("`name` and `api_key` cannot be None.")

        # Inject relevant kwargs
        all_index_kwargs = {"name": name + "_index", "indexed_field": "text"}
        if index_kwargs is not None:
            for k, v in index_kwargs.items():
                all_index_kwargs[k] = v

        # Build project
        atlasDB = cls(
            name,
            embedding_function=embedding,
            api_key=api_key,
            description=description,
            is_public=is_public,
            reset_project_if_exists=reset_project_if_exists,
        )
        with atlasDB.project.wait_for_project_lock():
            atlasDB.add_texts(texts=texts, metadatas=metadatas, ids=ids)
            atlasDB.create_index(**all_index_kwargs)
        return atlasDB
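A hedged sketch of the ``from_texts`` path just defined, with a placeholder project name and API key; ``reset_project_if_exists=True`` mirrors the testing use the docstring mentions:

.. code-block:: python

    from langchain.embeddings.openai import OpenAIEmbeddings
    from langchain.vectorstores import AtlasDB

    db = AtlasDB.from_texts(
        texts=["a first note", "a second note"],
        embedding=OpenAIEmbeddings(),
        name="langchain_demo_project",   # hypothetical project name
        api_key="<NOMIC_API_KEY>",       # placeholder
        reset_project_if_exists=True,
    )
    docs = db.similarity_search("first", k=1)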
Default False.\n Generally useful during development and testing.\n index_kwargs (Optional[dict]): Dict of kwargs for index creation.\n See https://docs.nomic.ai/atlas_api.html\n Returns:\n AtlasDB: Nomic's neural database and finest rhizomatic instrument\n \"\"\"\n if name is None or api_key is None:\n raise ValueError(\"`name` and `api_key` cannot be None.\")\n texts = [doc.page_content for doc in documents]\n metadatas = [doc.metadata for doc in documents]\n return cls.from_texts(\n name=name,\n api_key=api_key,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/atlas.html"} {"id": "d02369ee9aef-7", "text": "return cls.from_texts(\n name=name,\n api_key=api_key,\n texts=texts,\n embedding=embedding,\n metadatas=metadatas,\n ids=ids,\n description=description,\n is_public=is_public,\n reset_project_if_exists=reset_project_if_exists,\n index_kwargs=index_kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/atlas.html"} {"id": "6ca8eba44877-0", "text": "Source code for langchain.vectorstores.sklearn\n\"\"\" Wrapper around scikit-learn NearestNeighbors implementation.\nThe vector store can be persisted in json, bson or parquet format.\n\"\"\"\nimport json\nimport math\nimport os\nfrom abc import ABC, abstractmethod\nfrom typing import Any, Dict, Iterable, List, Literal, Optional, Tuple, Type\nfrom uuid import uuid4\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import guard_import\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nDEFAULT_K = 4 # Number of Documents to return.\nDEFAULT_FETCH_K = 20 # Number of Documents to initially fetch during MMR search.\n[docs]class BaseSerializer(ABC):\n \"\"\"Abstract base class for saving and loading data.\"\"\"\n def __init__(self, persist_path: str) -> None:\n self.persist_path = persist_path\n[docs] @classmethod\n @abstractmethod\n def extension(cls) -> str:\n \"\"\"The file extension suggested by this serializer (without dot).\"\"\"\n[docs] @abstractmethod\n def save(self, data: Any) -> None:\n \"\"\"Saves the data to the persist_path\"\"\"\n[docs] @abstractmethod\n def load(self) -> Any:\n \"\"\"Loads the data from the persist_path\"\"\"\n[docs]class JsonSerializer(BaseSerializer):\n \"\"\"Serializes data in json using the json package from python standard library.\"\"\"\n[docs] @classmethod\n def extension(cls) -> str:\n return \"json\"\n[docs] def save(self, data: Any) -> None:\n with open(self.persist_path, \"w\") as fp:\n json.dump(data, fp)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html"} {"id": "6ca8eba44877-1", "text": "json.dump(data, fp)\n[docs] def load(self) -> Any:\n with open(self.persist_path, \"r\") as fp:\n return json.load(fp)\n[docs]class BsonSerializer(BaseSerializer):\n \"\"\"Serializes data in binary json using the bson python package.\"\"\"\n def __init__(self, persist_path: str) -> None:\n super().__init__(persist_path)\n self.bson = guard_import(\"bson\")\n[docs] @classmethod\n def extension(cls) -> str:\n return \"bson\"\n[docs] def save(self, data: Any) -> None:\n with open(self.persist_path, \"wb\") as fp:\n fp.write(self.bson.dumps(data))\n[docs] def load(self) -> Any:\n with open(self.persist_path, \"rb\") as fp:\n return self.bson.loads(fp.read())\n[docs]class ParquetSerializer(BaseSerializer):\n \"\"\"Serializes data in Apache Parquet 
format using the pyarrow package.\"\"\"\n def __init__(self, persist_path: str) -> None:\n super().__init__(persist_path)\n self.pd = guard_import(\"pandas\")\n self.pa = guard_import(\"pyarrow\")\n self.pq = guard_import(\"pyarrow.parquet\")\n[docs] @classmethod\n def extension(cls) -> str:\n return \"parquet\"\n[docs] def save(self, data: Any) -> None:\n df = self.pd.DataFrame(data)\n table = self.pa.Table.from_pandas(df)\n if os.path.exists(self.persist_path):\n backup_path = str(self.persist_path) + \"-backup\"\n os.rename(self.persist_path, backup_path)\n try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html"} {"id": "6ca8eba44877-2", "text": "os.rename(self.persist_path, backup_path)\n try:\n self.pq.write_table(table, self.persist_path)\n except Exception as exc:\n os.rename(backup_path, self.persist_path)\n raise exc\n else:\n os.remove(backup_path)\n else:\n self.pq.write_table(table, self.persist_path)\n[docs] def load(self) -> Any:\n table = self.pq.read_table(self.persist_path)\n df = table.to_pandas()\n return {col: series.tolist() for col, series in df.items()}\nSERIALIZER_MAP: Dict[str, Type[BaseSerializer]] = {\n \"json\": JsonSerializer,\n \"bson\": BsonSerializer,\n \"parquet\": ParquetSerializer,\n}\n[docs]class SKLearnVectorStoreException(RuntimeError):\n \"\"\"Exception raised by SKLearnVectorStore.\"\"\"\n pass\n[docs]class SKLearnVectorStore(VectorStore):\n \"\"\"A simple in-memory vector store based on the scikit-learn library\n NearestNeighbors implementation.\"\"\"\n def __init__(\n self,\n embedding: Embeddings,\n *,\n persist_path: Optional[str] = None,\n serializer: Literal[\"json\", \"bson\", \"parquet\"] = \"json\",\n metric: str = \"cosine\",\n **kwargs: Any,\n ) -> None:\n np = guard_import(\"numpy\")\n sklearn_neighbors = guard_import(\"sklearn.neighbors\", pip_name=\"scikit-learn\")\n # non-persistent properties\n self._np = np\n self._neighbors = sklearn_neighbors.NearestNeighbors(metric=metric, **kwargs)\n self._neighbors_fitted = False\n self._embedding_function = embedding\n self._persist_path = persist_path", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html"} {"id": "6ca8eba44877-3", "text": "self._embedding_function = embedding\n self._persist_path = persist_path\n self._serializer: Optional[BaseSerializer] = None\n if self._persist_path is not None:\n serializer_cls = SERIALIZER_MAP[serializer]\n self._serializer = serializer_cls(persist_path=self._persist_path)\n # data properties\n self._embeddings: List[List[float]] = []\n self._texts: List[str] = []\n self._metadatas: List[dict] = []\n self._ids: List[str] = []\n # cache properties\n self._embeddings_np: Any = np.asarray([])\n if self._persist_path is not None and os.path.isfile(self._persist_path):\n self._load()\n[docs] def persist(self) -> None:\n if self._serializer is None:\n raise SKLearnVectorStoreException(\n \"You must specify a persist_path on creation to persist the \"\n \"collection.\"\n )\n data = {\n \"ids\": self._ids,\n \"texts\": self._texts,\n \"metadatas\": self._metadatas,\n \"embeddings\": self._embeddings,\n }\n self._serializer.save(data)\n def _load(self) -> None:\n if self._serializer is None:\n raise SKLearnVectorStoreException(\n \"You must specify a persist_path on creation to load the \" \"collection.\"\n )\n data = self._serializer.load()\n self._embeddings = data[\"embeddings\"]\n self._texts = data[\"texts\"]\n self._metadatas = data[\"metadatas\"]\n self._ids = 
data[\"ids\"]\n self._update_neighbors()\n[docs] def add_texts(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html"} {"id": "6ca8eba44877-4", "text": "self._update_neighbors()\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n _texts = list(texts)\n _ids = ids or [str(uuid4()) for _ in _texts]\n self._texts.extend(_texts)\n self._embeddings.extend(self._embedding_function.embed_documents(_texts))\n self._metadatas.extend(metadatas or ([{}] * len(_texts)))\n self._ids.extend(_ids)\n self._update_neighbors()\n return _ids\n def _update_neighbors(self) -> None:\n if len(self._embeddings) == 0:\n raise SKLearnVectorStoreException(\n \"No data was added to SKLearnVectorStore.\"\n )\n self._embeddings_np = self._np.asarray(self._embeddings)\n self._neighbors.fit(self._embeddings_np)\n self._neighbors_fitted = True\n def _similarity_index_search_with_score(\n self, query_embedding: List[float], *, k: int = DEFAULT_K, **kwargs: Any\n ) -> List[Tuple[int, float]]:\n \"\"\"Search k embeddings similar to the query embedding. Returns a list of\n (index, distance) tuples.\"\"\"\n if not self._neighbors_fitted:\n raise SKLearnVectorStoreException(\n \"No data was added to SKLearnVectorStore.\"\n )\n neigh_dists, neigh_idxs = self._neighbors.kneighbors(\n [query_embedding], n_neighbors=k\n )\n return list(zip(neigh_idxs[0], neigh_dists[0]))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html"} {"id": "6ca8eba44877-5", "text": ")\n return list(zip(neigh_idxs[0], neigh_dists[0]))\n[docs] def similarity_search_with_score(\n self, query: str, *, k: int = DEFAULT_K, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n query_embedding = self._embedding_function.embed_query(query)\n indices_dists = self._similarity_index_search_with_score(\n query_embedding, k=k, **kwargs\n )\n return [\n (\n Document(\n page_content=self._texts[idx],\n metadata={\"id\": self._ids[idx], **self._metadatas[idx]},\n ),\n dist,\n )\n for idx, dist in indices_dists\n ]\n[docs] def similarity_search(\n self, query: str, k: int = DEFAULT_K, **kwargs: Any\n ) -> List[Document]:\n docs_scores = self.similarity_search_with_score(query, k=k, **kwargs)\n return [doc for doc, _ in docs_scores]\n def _similarity_search_with_relevance_scores(\n self, query: str, k: int = DEFAULT_K, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n docs_dists = self.similarity_search_with_score(query, k=k, **kwargs)\n docs, dists = zip(*docs_dists)\n scores = [1 / math.exp(dist) for dist in dists]\n return list(zip(list(docs), scores))\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = DEFAULT_K,\n fetch_k: int = DEFAULT_FETCH_K,\n lambda_mult: float = 0.5,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html"} {"id": "6ca8eba44877-6", "text": "lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. 
Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n indices_dists = self._similarity_index_search_with_score(\n embedding, k=fetch_k, **kwargs\n )\n indices, _ = zip(*indices_dists)\n result_embeddings = self._embeddings_np[indices,]\n mmr_selected = maximal_marginal_relevance(\n self._np.array(embedding, dtype=self._np.float32),\n result_embeddings,\n k=k,\n lambda_mult=lambda_mult,\n )\n mmr_indices = [indices[i] for i in mmr_selected]\n return [\n Document(\n page_content=self._texts[idx],\n metadata={\"id\": self._ids[idx], **self._metadatas[idx]},\n )\n for idx in mmr_indices\n ]\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = DEFAULT_K,\n fetch_k: int = DEFAULT_FETCH_K,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html"} {"id": "6ca8eba44877-7", "text": "k: int = DEFAULT_K,\n fetch_k: int = DEFAULT_FETCH_K,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n if self._embedding_function is None:\n raise ValueError(\n \"For MMR search, you must specify an embedding function on creation.\"\n )\n embedding = self._embedding_function.embed_query(query)\n docs = self.max_marginal_relevance_search_by_vector(\n embedding, k, fetch_k, lambda_mult=lambda_mult\n )\n return docs\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n persist_path: Optional[str] = None,\n **kwargs: Any,\n ) -> \"SKLearnVectorStore\":\n vs = SKLearnVectorStore(embedding, persist_path=persist_path, **kwargs)\n vs.add_texts(texts, metadatas=metadatas, ids=ids)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html"} {"id": "6ca8eba44877-8", "text": "vs.add_texts(texts, metadatas=metadatas, ids=ids)\n return vs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html"} {"id": "25f2a7ded9f5-0", "text": "Source code for langchain.vectorstores.chroma\n\"\"\"Wrapper around ChromaDB embeddings platform.\"\"\"\nfrom __future__ import annotations\nimport logging\nimport uuid\nfrom typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Tuple, Type\nimport numpy as np\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import xor_args\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nif TYPE_CHECKING:\n import chromadb\n import chromadb.config\n from chromadb.api.types import ID, 
OneOrMany, Where, WhereDocument\nlogger = logging.getLogger()\nDEFAULT_K = 4 # Number of Documents to return.\ndef _results_to_docs(results: Any) -> List[Document]:\n return [doc for doc, _ in _results_to_docs_and_scores(results)]\ndef _results_to_docs_and_scores(results: Any) -> List[Tuple[Document, float]]:\n return [\n # TODO: Chroma can do batch querying,\n # we shouldn't hard code to the 1st result\n (Document(page_content=result[0], metadata=result[1] or {}), result[2])\n for result in zip(\n results[\"documents\"][0],\n results[\"metadatas\"][0],\n results[\"distances\"][0],\n )\n ]\n[docs]class Chroma(VectorStore):\n \"\"\"Wrapper around ChromaDB embeddings platform.\n To use, you should have the ``chromadb`` python package installed.\n Example:\n .. code-block:: python\n from langchain.vectorstores import Chroma\n from langchain.embeddings.openai import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"} {"id": "25f2a7ded9f5-1", "text": "embeddings = OpenAIEmbeddings()\n vectorstore = Chroma(\"langchain_store\", embeddings)\n \"\"\"\n _LANGCHAIN_DEFAULT_COLLECTION_NAME = \"langchain\"\n def __init__(\n self,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n embedding_function: Optional[Embeddings] = None,\n persist_directory: Optional[str] = None,\n client_settings: Optional[chromadb.config.Settings] = None,\n collection_metadata: Optional[Dict] = None,\n client: Optional[chromadb.Client] = None,\n ) -> None:\n \"\"\"Initialize with Chroma client.\"\"\"\n try:\n import chromadb\n import chromadb.config\n except ImportError:\n raise ValueError(\n \"Could not import chromadb python package. \"\n \"Please install it with `pip install chromadb`.\"\n )\n if client is not None:\n self._client = client\n else:\n if client_settings:\n self._client_settings = client_settings\n else:\n self._client_settings = chromadb.config.Settings()\n if persist_directory is not None:\n self._client_settings = chromadb.config.Settings(\n chroma_db_impl=\"duckdb+parquet\",\n persist_directory=persist_directory,\n )\n self._client = chromadb.Client(self._client_settings)\n self._embedding_function = embedding_function\n self._persist_directory = (\n self._client_settings.persist_directory or persist_directory\n )\n self._collection = self._client.get_or_create_collection(\n name=collection_name,\n embedding_function=self._embedding_function.embed_documents\n if self._embedding_function is not None\n else None,\n metadata=collection_metadata,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"} {"id": "25f2a7ded9f5-2", "text": "else None,\n metadata=collection_metadata,\n )\n @xor_args((\"query_texts\", \"query_embeddings\"))\n def __query_collection(\n self,\n query_texts: Optional[List[str]] = None,\n query_embeddings: Optional[List[List[float]]] = None,\n n_results: int = 4,\n where: Optional[Dict[str, str]] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Query the chroma collection.\"\"\"\n try:\n import chromadb # noqa: F401\n except ImportError:\n raise ValueError(\n \"Could not import chromadb python package. 
\"\n \"Please install it with `pip install chromadb`.\"\n )\n return self._collection.query(\n query_texts=query_texts,\n query_embeddings=query_embeddings,\n n_results=n_results,\n where=where,\n **kwargs,\n )\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts (Iterable[str]): Texts to add to the vectorstore.\n metadatas (Optional[List[dict]], optional): Optional list of metadatas.\n ids (Optional[List[str]], optional): Optional list of IDs.\n Returns:\n List[str]: List of IDs of the added texts.\n \"\"\"\n # TODO: Handle the case where the user doesn't provide ids on the Collection\n if ids is None:\n ids = [str(uuid.uuid1()) for _ in texts]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"} {"id": "25f2a7ded9f5-3", "text": "ids = [str(uuid.uuid1()) for _ in texts]\n embeddings = None\n if self._embedding_function is not None:\n embeddings = self._embedding_function.embed_documents(list(texts))\n self._collection.upsert(\n metadatas=metadatas, embeddings=embeddings, documents=texts, ids=ids\n )\n return ids\n[docs] def similarity_search(\n self,\n query: str,\n k: int = DEFAULT_K,\n filter: Optional[Dict[str, str]] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Run similarity search with Chroma.\n Args:\n query (str): Query text to search for.\n k (int): Number of results to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n Returns:\n List[Document]: List of documents most similar to the query text.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(query, k, filter=filter)\n return [doc for doc, _ in docs_and_scores]\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = DEFAULT_K,\n filter: Optional[Dict[str, str]] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\n Args:\n embedding (List[float]): Embedding to look up documents similar to.\n k (int): Number of Documents to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n Returns:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"} {"id": "25f2a7ded9f5-4", "text": "Returns:\n List of Documents most similar to the query vector.\n \"\"\"\n results = self.__query_collection(\n query_embeddings=embedding, n_results=k, where=filter\n )\n return _results_to_docs(results)\n[docs] def similarity_search_by_vector_with_relevance_scores(\n self,\n embedding: List[float],\n k: int = DEFAULT_K,\n filter: Optional[Dict[str, str]] = None,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"\n Return docs most similar to embedding vector and similarity score.\n Args:\n embedding (List[float]): Embedding to look up documents similar to.\n k (int): Number of Documents to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. 
Defaults to None.\n Returns:\n List[Tuple[Document, float]]: List of documents most similar to\n the query text and cosine distance in float for each.\n Lower score represents more similarity.\n \"\"\"\n results = self.__query_collection(\n query_embeddings=embedding, n_results=k, where=filter\n )\n return _results_to_docs_and_scores(results)\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = DEFAULT_K,\n filter: Optional[Dict[str, str]] = None,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Run similarity search with Chroma with distance.\n Args:\n query (str): Query text to search for.\n k (int): Number of results to return. Defaults to 4.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"} {"id": "25f2a7ded9f5-5", "text": "k (int): Number of results to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n Returns:\n List[Tuple[Document, float]]: List of documents most similar to\n the query text and cosine distance in float for each.\n Lower score represents more similarity.\n \"\"\"\n if self._embedding_function is None:\n results = self.__query_collection(\n query_texts=[query], n_results=k, where=filter\n )\n else:\n query_embedding = self._embedding_function.embed_query(query)\n results = self.__query_collection(\n query_embeddings=[query_embedding], n_results=k, where=filter\n )\n return _results_to_docs_and_scores(results)\n def _similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n return self.similarity_search_with_score(query, k, **kwargs)\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = DEFAULT_K,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n filter: Optional[Dict[str, str]] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"} {"id": "25f2a7ded9f5-6", "text": "k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n filter (Optional[Dict[str, str]]): Filter by metadata. 
Defaults to None.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n results = self.__query_collection(\n query_embeddings=embedding,\n n_results=fetch_k,\n where=filter,\n include=[\"metadatas\", \"documents\", \"distances\", \"embeddings\"],\n )\n mmr_selected = maximal_marginal_relevance(\n np.array(embedding, dtype=np.float32),\n results[\"embeddings\"][0],\n k=k,\n lambda_mult=lambda_mult,\n )\n candidates = _results_to_docs(results)\n selected_results = [r for i, r in enumerate(candidates) if i in mmr_selected]\n return selected_results\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = DEFAULT_K,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n filter: Optional[Dict[str, str]] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"} {"id": "25f2a7ded9f5-7", "text": "Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n if self._embedding_function is None:\n raise ValueError(\n \"For MMR search, you must specify an embedding function on \" \"creation.\"\n )\n embedding = self._embedding_function.embed_query(query)\n docs = self.max_marginal_relevance_search_by_vector(\n embedding, k, fetch_k, lambda_mult=lambda_mult, filter=filter\n )\n return docs\n[docs] def delete_collection(self) -> None:\n \"\"\"Delete the collection.\"\"\"\n self._client.delete_collection(self._collection.name)\n[docs] def get(\n self,\n ids: Optional[OneOrMany[ID]] = None,\n where: Optional[Where] = None,\n limit: Optional[int] = None,\n offset: Optional[int] = None,\n where_document: Optional[WhereDocument] = None,\n include: Optional[List[str]] = None,\n ) -> Dict[str, Any]:\n \"\"\"Gets the collection.\n Args:\n ids: The ids of the embeddings to get. Optional.\n where: A Where type dict used to filter results by.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"} {"id": "25f2a7ded9f5-8", "text": "where: A Where type dict used to filter results by.\n E.g. `{\"color\" : \"red\", \"price\": 4.20}`. Optional.\n limit: The number of documents to return. Optional.\n offset: The offset to start returning results from.\n Useful for paging results with limit. Optional.\n where_document: A WhereDocument type dict used to filter by the documents.\n E.g. `{$contains: {\"text\": \"hello\"}}`. Optional.\n include: A list of what to include in the results.\n Can contain `\"embeddings\"`, `\"metadatas\"`, `\"documents\"`.\n Ids are always included.\n Defaults to `[\"metadatas\", \"documents\"]`. 
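A brief sketch of how k, fetch_k, and lambda_mult interact in the MMR methods above; vectorstore is assumed to be a Chroma instance created with an embedding function.
.. code-block:: python

    docs = vectorstore.max_marginal_relevance_search(
        "vector databases",
        k=4,               # documents ultimately returned
        fetch_k=20,        # candidates retrieved before diversity re-ranking
        lambda_mult=0.25,  # closer to 0 favors diversity over similarity
    )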
Optional.\n \"\"\"\n kwargs = {\n \"ids\": ids,\n \"where\": where,\n \"limit\": limit,\n \"offset\": offset,\n \"where_document\": where_document,\n }\n if include is not None:\n kwargs[\"include\"] = include\n return self._collection.get(**kwargs)\n[docs] def persist(self) -> None:\n \"\"\"Persist the collection.\n This can be used to explicitly persist the data to disk.\n It will also be called automatically when the object is destroyed.\n \"\"\"\n if self._persist_directory is None:\n raise ValueError(\n \"You must specify a persist_directory on \"\n \"creation to persist the collection.\"\n )\n self._client.persist()\n[docs] def update_document(self, document_id: str, document: Document) -> None:\n \"\"\"Update a document in the collection.\n Args:\n document_id (str): ID of the document to update.\n document (Document): Document to update.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"} {"id": "25f2a7ded9f5-9", "text": "document (Document): Document to update.\n \"\"\"\n text = document.page_content\n metadata = document.metadata\n if self._embedding_function is None:\n raise ValueError(\n \"For update, you must specify an embedding function on creation.\"\n )\n embeddings = self._embedding_function.embed_documents([text])\n self._collection.update(\n ids=[document_id],\n embeddings=embeddings,\n documents=[text],\n metadatas=[metadata],\n )\n[docs] @classmethod\n def from_texts(\n cls: Type[Chroma],\n texts: List[str],\n embedding: Optional[Embeddings] = None,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n persist_directory: Optional[str] = None,\n client_settings: Optional[chromadb.config.Settings] = None,\n client: Optional[chromadb.Client] = None,\n **kwargs: Any,\n ) -> Chroma:\n \"\"\"Create a Chroma vectorstore from raw documents.\n If a persist_directory is specified, the collection will be persisted there.\n Otherwise, the data will be ephemeral in-memory.\n Args:\n texts (List[str]): List of texts to add to the collection.\n collection_name (str): Name of the collection to create.\n persist_directory (Optional[str]): Directory to persist the collection.\n embedding (Optional[Embeddings]): Embedding function. Defaults to None.\n metadatas (Optional[List[dict]]): List of metadatas. Defaults to None.\n ids (Optional[List[str]]): List of document IDs. 
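Combining from_texts with persist, a minimal persistence sketch; the directory and embedding model are placeholders.
.. code-block:: python

    from langchain.embeddings.openai import OpenAIEmbeddings
    from langchain.vectorstores import Chroma

    db = Chroma.from_texts(
        texts=["doc one", "doc two"],
        embedding=OpenAIEmbeddings(),
        persist_directory="./chroma_db",  # collection is persisted here
    )
    db.persist()  # flush to disk explicitly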
Defaults to None.\n client_settings (Optional[chromadb.config.Settings]): Chroma client settings\n Returns:\n Chroma: Chroma vectorstore.\n \"\"\"\n chroma_collection = cls(\n collection_name=collection_name,\n embedding_function=embedding,\n persist_directory=persist_directory,\n client_settings=client_settings,\n client=client,\n )\n chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)\n return chroma_collection\n[docs] @classmethod\n def from_documents(\n cls: Type[Chroma],\n documents: List[Document],\n embedding: Optional[Embeddings] = None,\n ids: Optional[List[str]] = None,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n persist_directory: Optional[str] = None,\n client_settings: Optional[chromadb.config.Settings] = None,\n client: Optional[chromadb.Client] = None,\n **kwargs: Any,\n ) -> Chroma:\n \"\"\"Create a Chroma vectorstore from a list of documents.\n If a persist_directory is specified, the collection will be persisted there.\n Otherwise, the data will be ephemeral in-memory.\n Args:\n collection_name (str): Name of the collection to create.\n persist_directory (Optional[str]): Directory to persist the collection.\n ids (Optional[List[str]]): List of document IDs. Defaults to None.\n documents (List[Document]): List of documents to add to the vectorstore.\n embedding (Optional[Embeddings]): Embedding function. Defaults to None.\n client_settings (Optional[chromadb.config.Settings]): Chroma client settings", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"} {"id": "25f2a7ded9f5-11", "text": "client_settings (Optional[chromadb.config.Settings]): Chroma client settings\n Returns:\n Chroma: Chroma vectorstore.\n \"\"\"\n texts = [doc.page_content for doc in documents]\n metadatas = [doc.metadata for doc in documents]\n return cls.from_texts(\n texts=texts,\n embedding=embedding,\n metadatas=metadatas,\n ids=ids,\n collection_name=collection_name,\n persist_directory=persist_directory,\n client_settings=client_settings,\n client=client,\n )\n[docs] def delete(self, ids: Optional[List[str]] = None, **kwargs: Any) -> None:\n \"\"\"Delete by vector IDs.\n Args:\n ids: List of ids to delete.\n \"\"\"\n self._collection.delete(ids=ids)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"} {"id": "8c2caf3f4462-0", "text": "Source code for langchain.vectorstores.mongodb_atlas\nfrom __future__ import annotations\nimport logging\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Dict,\n Generator,\n Iterable,\n List,\n Optional,\n Tuple,\n TypeVar,\n Union,\n)\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nif TYPE_CHECKING:\n from pymongo.collection import Collection\nMongoDBDocumentType = TypeVar(\"MongoDBDocumentType\", bound=Dict[str, Any])\nlogger = logging.getLogger(__name__)\nDEFAULT_INSERT_BATCH_SIZE = 100\n[docs]class MongoDBAtlasVectorSearch(VectorStore):\n \"\"\"Wrapper around MongoDB Atlas Vector Search.\n To use, you should have both:\n - the ``pymongo`` python package installed\n - a connection string associated with a MongoDB Atlas Cluster having deployed an\n Atlas Search index\n Example:\n .. 
code-block:: python\n from langchain.vectorstores import MongoDBAtlasVectorSearch\n from langchain.embeddings.openai import OpenAIEmbeddings\n from pymongo import MongoClient\n mongo_client = MongoClient(\"\")\n collection = mongo_client[\"\"][\"\"]\n embeddings = OpenAIEmbeddings()\n vectorstore = MongoDBAtlasVectorSearch(collection, embeddings)\n \"\"\"\n def __init__(\n self,\n collection: Collection[MongoDBDocumentType],\n embedding: Embeddings,\n *,\n index_name: str = \"default\",\n text_key: str = \"text\",\n embedding_key: str = \"embedding\",\n ):\n \"\"\"\n Args:\n collection: MongoDB collection to add the texts to.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/mongodb_atlas.html"} {"id": "8c2caf3f4462-1", "text": "\"\"\"\n Args:\n collection: MongoDB collection to add the texts to.\n embedding: Text embedding model to use.\n text_key: MongoDB field that will contain the text for each\n document.\n embedding_key: MongoDB field that will contain the embedding for\n each document.\n \"\"\"\n self._collection = collection\n self._embedding = embedding\n self._index_name = index_name\n self._text_key = text_key\n self._embedding_key = embedding_key\n[docs] @classmethod\n def from_connection_string(\n cls,\n connection_string: str,\n namespace: str,\n embedding: Embeddings,\n **kwargs: Any,\n ) -> MongoDBAtlasVectorSearch:\n try:\n from pymongo import MongoClient\n except ImportError:\n raise ImportError(\n \"Could not import pymongo, please install it with \"\n \"`pip install pymongo`.\"\n )\n client: MongoClient = MongoClient(connection_string)\n db_name, collection_name = namespace.split(\".\")\n collection = client[db_name][collection_name]\n return cls(collection, embedding, **kwargs)\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[Dict[str, Any]]] = None,\n **kwargs: Any,\n ) -> List:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n batch_size = kwargs.get(\"batch_size\", DEFAULT_INSERT_BATCH_SIZE)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/mongodb_atlas.html"} {"id": "8c2caf3f4462-2", "text": "\"\"\"\n batch_size = kwargs.get(\"batch_size\", DEFAULT_INSERT_BATCH_SIZE)\n _metadatas: Union[List, Generator] = metadatas or ({} for _ in texts)\n texts_batch = []\n metadatas_batch = []\n result_ids = []\n for i, (text, metadata) in enumerate(zip(texts, _metadatas)):\n texts_batch.append(text)\n metadatas_batch.append(metadata)\n if (i + 1) % batch_size == 0:\n result_ids.extend(self._insert_texts(texts_batch, metadatas_batch))\n texts_batch = []\n metadatas_batch = []\n if texts_batch:\n result_ids.extend(self._insert_texts(texts_batch, metadatas_batch))\n return result_ids\n def _insert_texts(self, texts: List[str], metadatas: List[Dict[str, Any]]) -> List:\n if not texts:\n return []\n # Embed and create the documents\n embeddings = self._embedding.embed_documents(texts)\n to_insert = [\n {self._text_key: t, self._embedding_key: embedding, **m}\n for t, m, embedding in zip(texts, metadatas, embeddings)\n ]\n # insert the documents in MongoDB Atlas\n insert_result = self._collection.insert_many(to_insert)\n return insert_result.inserted_ids\n[docs] def similarity_search_with_score(\n self,\n query: str,\n *,\n k: int = 4,\n 
pre_filter: Optional[dict] = None,\n post_filter_pipeline: Optional[List[Dict]] = None,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return MongoDB documents most similar to query, along with scores.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/mongodb_atlas.html"} {"id": "8c2caf3f4462-3", "text": "\"\"\"Return MongoDB documents most similar to query, along with scores.\n Use the knnBeta Operator available in MongoDB Atlas Search\n This feature is in early access and available only for evaluation purposes, to\n validate functionality, and to gather feedback from a small closed group of\n early access users. It is not recommended for production deployments as we\n may introduce breaking changes.\n For more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta\n Args:\n query: Text to look up documents similar to.\n k: Optional Number of Documents to return. Defaults to 4.\n pre_filter: Optional Dictionary of argument(s) to prefilter on document\n fields.\n post_filter_pipeline: Optional Pipeline of MongoDB aggregation stages\n following the knnBeta search.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n knn_beta = {\n \"vector\": self._embedding.embed_query(query),\n \"path\": self._embedding_key,\n \"k\": k,\n }\n if pre_filter:\n knn_beta[\"filter\"] = pre_filter\n pipeline = [\n {\n \"$search\": {\n \"index\": self._index_name,\n \"knnBeta\": knn_beta,\n }\n },\n {\"$project\": {\"score\": {\"$meta\": \"searchScore\"}, self._embedding_key: 0}},\n ]\n if post_filter_pipeline is not None:\n pipeline.extend(post_filter_pipeline)\n cursor = self._collection.aggregate(pipeline)\n docs = []\n for res in cursor:\n text = res.pop(self._text_key)\n score = res.pop(\"score\")\n docs.append((Document(page_content=text, metadata=res), score))\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/mongodb_atlas.html"} {"id": "8c2caf3f4462-4", "text": "docs.append((Document(page_content=text, metadata=res), score))\n return docs\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n pre_filter: Optional[dict] = None,\n post_filter_pipeline: Optional[List[Dict]] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return MongoDB documents most similar to query.\n Use the knnBeta Operator available in MongoDB Atlas Search\n This feature is in early access and available only for evaluation purposes, to\n validate functionality, and to gather feedback from a small closed group of\n early access users. It is not recommended for production deployments as we may\n introduce breaking changes.\n For more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta\n Args:\n query: Text to look up documents similar to.\n k: Optional Number of Documents to return. 
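A hedged sketch of the filtering hooks above; the pre_filter shape follows Atlas Search operator syntax and the field name is hypothetical, as is the vectorstore instance.
.. code-block:: python

    docs_and_scores = vectorstore.similarity_search_with_score(
        "release notes",
        k=4,
        pre_filter={"range": {"path": "year", "gte": 2022}},  # hypothetical field filter
        post_filter_pipeline=[{"$limit": 2}],  # ordinary aggregation stages run afterwards
    )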
Defaults to 4.\n pre_filter: Optional Dictionary of argument(s) to prefilter on document\n fields.\n post_filter_pipeline: Optional Pipeline of MongoDB aggregation stages\n following the knnBeta search.\n Returns:\n List of Documents most similar to the query\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(\n query,\n k=k,\n pre_filter=pre_filter,\n post_filter_pipeline=post_filter_pipeline,\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n collection: Optional[Collection[MongoDBDocumentType]] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/mongodb_atlas.html"} {"id": "8c2caf3f4462-5", "text": "collection: Optional[Collection[MongoDBDocumentType]] = None,\n **kwargs: Any,\n ) -> MongoDBAtlasVectorSearch:\n \"\"\"Construct MongoDBAtlasVectorSearch wrapper from raw documents.\n This is a user-friendly interface that:\n 1. Embeds documents.\n 2. Adds the documents to a provided MongoDB Atlas Vector Search index\n (Lucene)\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python\n from pymongo import MongoClient\n from langchain.vectorstores import MongoDBAtlasVectorSearch\n from langchain.embeddings import OpenAIEmbeddings\n client = MongoClient(\"\")\n collection = client[\"\"][\"\"]\n embeddings = OpenAIEmbeddings()\n vectorstore = MongoDBAtlasVectorSearch.from_texts(\n texts,\n embeddings,\n metadatas=metadatas,\n collection=collection\n )\n \"\"\"\n if collection is None:\n raise ValueError(\"Must provide 'collection' named parameter.\")\n vecstore = cls(collection, embedding, **kwargs)\n vecstore.add_texts(texts, metadatas=metadatas)\n return vecstore", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/mongodb_atlas.html"} {"id": "df98879a4d64-0", "text": "Source code for langchain.vectorstores.zilliz\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, List, Optional\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.milvus import Milvus\nlogger = logging.getLogger(__name__)\n[docs]class Zilliz(Milvus):\n \"\"\"Initialize wrapper around the Zilliz vector database.\n In order to use this you need to have `pymilvus` installed and a\n running Zilliz database.\n See the following documentation for how to run a Zilliz instance:\n https://docs.zilliz.com/docs/create-cluster\n IF USING L2/IP metric IT IS HIGHLY SUGGESTED TO NORMALIZE YOUR DATA.\n Args:\n embedding_function (Embeddings): Function used to embed the text.\n collection_name (str): Which Zilliz collection to use. Defaults to\n \"LangChainCollection\".\n connection_args (Optional[dict[str, any]]): The connection args used for\n this class come in the form of a dict.\n consistency_level (str): The consistency level to use for a collection.\n Defaults to \"Session\".\n index_params (Optional[dict]): Which index params to use. Defaults to\n HNSW/AUTOINDEX depending on service.\n search_params (Optional[dict]): Which search params to use. Defaults to\n default of index.\n drop_old (Optional[bool]): Whether to drop the current collection. Defaults\n to False.\n The connection args used for this class come in the form of a dict,\n here are a few of the options:\n address (str): The actual address of Zilliz\n instance. 
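The normalization advice above can be followed with a small helper applied to embeddings before insert and query; this is a generic sketch, not part of the Zilliz wrapper.
.. code-block:: python

    import numpy as np

    def normalize(vec: np.ndarray) -> np.ndarray:
        # Unit-length vectors make L2 and inner-product rankings agree with cosine.
        return vec / np.linalg.norm(vec)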
Example address: "localhost:19530"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/zilliz.html"} {"id": "df98879a4d64-1", "text": "instance. Example address: \"localhost:19530\"\n uri (str): The uri of Zilliz instance. Example uri:\n \"https://in03-ba4234asae.api.gcp-us-west1.zillizcloud.com\",\n host (str): The host of Zilliz instance. Default at \"localhost\",\n PyMilvus will fill in the default host if only port is provided.\n port (str/int): The port of Zilliz instance. Default at 19530, PyMilvus\n will fill in the default port if only host is provided.\n user (str): Which user to connect to the Zilliz instance as. If user and\n password are provided, we will add related header in every RPC call.\n password (str): Required when user is provided. The password\n corresponding to the user.\n secure (bool): Default is false. If set to true, tls will be enabled.\n client_key_path (str): Path to the client.key file when using TLS\n two-way authentication.\n client_pem_path (str): Path to the client.pem file when using TLS\n two-way authentication.\n ca_pem_path (str): Path to the ca.pem file when using TLS two-way\n authentication.\n server_pem_path (str): Path to the server.pem file when using TLS\n one-way authentication.\n server_name (str): The common name to use when TLS is enabled.\n Example:\n .. code-block:: python\n from langchain import Zilliz\n from langchain.embeddings import OpenAIEmbeddings\n embedding = OpenAIEmbeddings()\n # Connect to a Zilliz instance\n milvus_store = Zilliz(\n embedding_function = embedding,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/zilliz.html"} {"id": "df98879a4d64-2", "text": "milvus_store = Zilliz(\n embedding_function = embedding,\n collection_name = \"LangChainCollection\",\n connection_args = {\n \"uri\": \"https://in03-ba4234asae.api.gcp-us-west1.zillizcloud.com\",\n \"user\": \"temp\",\n \"password\": \"temp\",\n \"secure\": True\n },\n drop_old=True,\n )\n Raises:\n ValueError: If the pymilvus python package is not installed.\n \"\"\"\n def _create_index(self) -> None:\n \"\"\"Create an index on the collection\"\"\"\n from pymilvus import Collection, MilvusException\n if isinstance(self.col, Collection) and self._get_index() is None:\n try:\n # If no index params, use a default AutoIndex based one\n if self.index_params is None:\n self.index_params = {\n \"metric_type\": \"L2\",\n \"index_type\": \"AUTOINDEX\",\n \"params\": {},\n }\n try:\n self.col.create_index(\n self._vector_field,\n index_params=self.index_params,\n using=self.alias,\n )\n # If default did not work, most likely Milvus self-hosted\n except MilvusException:\n # Use HNSW based index\n self.index_params = {\n \"metric_type\": \"L2\",\n \"index_type\": \"HNSW\",\n \"params\": {\"M\": 8, \"efConstruction\": 64},\n }\n self.col.create_index(\n self._vector_field,\n index_params=self.index_params,\n using=self.alias,\n )\n logger.debug(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/zilliz.html"} {"id": "df98879a4d64-3", "text": "using=self.alias,\n )\n logger.debug(\n \"Successfully created an index on collection: %s\",\n self.collection_name,\n )\n except MilvusException as e:\n logger.error(\n \"Failed to create an index on collection: %s\", self.collection_name\n )\n raise e\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n collection_name: str = 
\"LangChainCollection\",\n connection_args: dict[str, Any] = {},\n consistency_level: str = \"Session\",\n index_params: Optional[dict] = None,\n search_params: Optional[dict] = None,\n drop_old: bool = False,\n **kwargs: Any,\n ) -> Zilliz:\n \"\"\"Create a Zilliz collection, indexes it with HNSW, and insert data.\n Args:\n texts (List[str]): Text data.\n embedding (Embeddings): Embedding function.\n metadatas (Optional[List[dict]]): Metadata for each text if it exists.\n Defaults to None.\n collection_name (str, optional): Collection name to use. Defaults to\n \"LangChainCollection\".\n connection_args (dict[str, Any], optional): Connection args to use. Defaults\n to DEFAULT_MILVUS_CONNECTION.\n consistency_level (str, optional): Which consistency level to use. Defaults\n to \"Session\".\n index_params (Optional[dict], optional): Which index_params to use.\n Defaults to None.\n search_params (Optional[dict], optional): Which search params to use.\n Defaults to None.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/zilliz.html"} {"id": "df98879a4d64-4", "text": "Defaults to None.\n drop_old (Optional[bool], optional): Whether to drop the collection with\n that name if it exists. Defaults to False.\n Returns:\n Zilliz: Zilliz Vector Store\n \"\"\"\n vector_db = cls(\n embedding_function=embedding,\n collection_name=collection_name,\n connection_args=connection_args,\n consistency_level=consistency_level,\n index_params=index_params,\n search_params=search_params,\n drop_old=drop_old,\n **kwargs,\n )\n vector_db.add_texts(texts=texts, metadatas=metadatas)\n return vector_db", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/zilliz.html"} {"id": "92665c342e7a-0", "text": "Source code for langchain.vectorstores.clickhouse\n\"\"\"Wrapper around open source ClickHouse VectorSearch capability.\"\"\"\nfrom __future__ import annotations\nimport json\nimport logging\nfrom hashlib import sha1\nfrom threading import Thread\nfrom typing import Any, Dict, Iterable, List, Optional, Tuple, Union\nfrom pydantic import BaseSettings\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nlogger = logging.getLogger()\n[docs]def has_mul_sub_str(s: str, *args: Any) -> bool:\n \"\"\"\n Check if a string contains multiple substrings.\n Args:\n s: string to check.\n *args: substrings to check.\n Returns:\n True if all substrings are in the string, False otherwise.\n \"\"\"\n for a in args:\n if a not in s:\n return False\n return True\n[docs]class ClickhouseSettings(BaseSettings):\n \"\"\"ClickHouse Client Configuration\n Attribute:\n clickhouse_host (str) : An URL to connect to MyScale backend.\n Defaults to 'localhost'.\n clickhouse_port (int) : URL port to connect with HTTP. Defaults to 8443.\n username (str) : Username to login. Defaults to None.\n password (str) : Password to login. Defaults to None.\n index_type (str): index type string.\n index_param (list): index build parameter.\n index_query_params(dict): index query parameters.\n database (str) : Database name to find the table. 
Defaults to 'default'.\n table (str) : Table name to operate on.\n Defaults to 'langchain'.\n metric (str) : Metric to compute distance,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"} {"id": "92665c342e7a-1", "text": "Defaults to 'langchain'.\n metric (str) : Metric to compute distance,\n supported are ('angular', 'euclidean', 'manhattan', 'hamming',\n 'dot'). Defaults to 'angular'.\n https://github.com/spotify/annoy/blob/main/src/annoymodule.cc#L149-L169\n column_map (Dict) : Column type map to project column name onto langchain\n semantics. Must have keys: `text`, `id`, `vector`,\n and must be the same size as the number of columns. For example:\n .. code-block:: python\n {\n 'id': 'text_id',\n 'uuid': 'global_unique_id',\n 'embedding': 'text_embedding',\n 'document': 'text_plain',\n 'metadata': 'metadata_dictionary_in_json',\n }\n Defaults to identity map.\n \"\"\"\n host: str = \"localhost\"\n port: int = 8123\n username: Optional[str] = None\n password: Optional[str] = None\n index_type: str = \"annoy\"\n # Annoy supports L2Distance and cosineDistance.\n index_param: Optional[Union[List, Dict]] = [\"'L2Distance'\", 100]\n index_query_params: Dict[str, str] = {}\n column_map: Dict[str, str] = {\n \"id\": \"id\",\n \"uuid\": \"uuid\",\n \"document\": \"document\",\n \"embedding\": \"embedding\",\n \"metadata\": \"metadata\",\n }\n database: str = \"default\"\n table: str = \"langchain\"\n metric: str = \"angular\"\n def __getitem__(self, item: str) -> Any:\n return getattr(self, item)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"} {"id": "92665c342e7a-2", "text": "return getattr(self, item)\n[docs] class Config:\n env_file = \".env\"\n env_prefix = \"clickhouse_\"\n env_file_encoding = \"utf-8\"\n[docs]class Clickhouse(VectorStore):\n \"\"\"Wrapper around ClickHouse vector database\n You need the `clickhouse-connect` python package, and a valid account\n to connect to ClickHouse.\n ClickHouse can not only search with simple vector indexes,\n it also supports complex queries with multiple conditions,\n constraints and even sub-queries.\n For more information, please visit\n [ClickHouse official site](https://clickhouse.com/clickhouse)\n \"\"\"\n def __init__(\n self,\n embedding: Embeddings,\n config: Optional[ClickhouseSettings] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"ClickHouse Wrapper to LangChain\n embedding_function (Embeddings):\n config (ClickHouseSettings): Configuration to ClickHouse Client\n Other keyword arguments will pass into\n [clickhouse-connect](https://docs.clickhouse.com/)\n \"\"\"\n try:\n from clickhouse_connect import get_client\n except ImportError:\n raise ValueError(\n \"Could not import clickhouse connect python package. 
\"\n \"Please install it with `pip install clickhouse-connect`.\"\n )\n try:\n from tqdm import tqdm\n self.pgbar = tqdm\n except ImportError:\n # Just in case if tqdm is not installed\n self.pgbar = lambda x, **kwargs: x\n super().__init__()\n if config is not None:\n self.config = config\n else:\n self.config = ClickhouseSettings()\n assert self.config\n assert self.config.host and self.config.port\n assert (", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"} {"id": "92665c342e7a-3", "text": "assert self.config\n assert self.config.host and self.config.port\n assert (\n self.config.column_map\n and self.config.database\n and self.config.table\n and self.config.metric\n )\n for k in [\"id\", \"embedding\", \"document\", \"metadata\", \"uuid\"]:\n assert k in self.config.column_map\n assert self.config.metric in [\n \"angular\",\n \"euclidean\",\n \"manhattan\",\n \"hamming\",\n \"dot\",\n ]\n # initialize the schema\n dim = len(embedding.embed_query(\"test\"))\n index_params = (\n (\n \",\".join([f\"'{k}={v}'\" for k, v in self.config.index_param.items()])\n if self.config.index_param\n else \"\"\n )\n if isinstance(self.config.index_param, Dict)\n else \",\".join([str(p) for p in self.config.index_param])\n if isinstance(self.config.index_param, List)\n else self.config.index_param\n )\n self.schema = f\"\"\"\\\nCREATE TABLE IF NOT EXISTS {self.config.database}.{self.config.table}(\n {self.config.column_map['id']} Nullable(String),\n {self.config.column_map['document']} Nullable(String),\n {self.config.column_map['embedding']} Array(Float32),\n {self.config.column_map['metadata']} JSON,\n {self.config.column_map['uuid']} UUID DEFAULT generateUUIDv4(),\n CONSTRAINT cons_vec_len CHECK length({self.config.column_map['embedding']}) = {dim},\n INDEX vec_idx {self.config.column_map['embedding']} TYPE \\\n{self.config.index_type}({index_params}) GRANULARITY 1000\n) ENGINE = MergeTree ORDER BY uuid SETTINGS index_granularity = 8192\\\n\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"} {"id": "92665c342e7a-4", "text": "\"\"\"\n self.dim = dim\n self.BS = \"\\\\\"\n self.must_escape = (\"\\\\\", \"'\")\n self.embedding_function = embedding\n self.dist_order = \"ASC\" # Only support ConsingDistance and L2Distance\n # Create a connection to clickhouse\n self.client = get_client(\n host=self.config.host,\n port=self.config.port,\n username=self.config.username,\n password=self.config.password,\n **kwargs,\n )\n # Enable JSON type\n self.client.command(\"SET allow_experimental_object_type=1\")\n # Enable Annoy index\n self.client.command(\"SET allow_experimental_annoy_index=1\")\n self.client.command(self.schema)\n[docs] def escape_str(self, value: str) -> str:\n return \"\".join(f\"{self.BS}{c}\" if c in self.must_escape else c for c in value)\n def _build_insert_sql(self, transac: Iterable, column_names: Iterable[str]) -> str:\n ks = \",\".join(column_names)\n _data = []\n for n in transac:\n n = \",\".join([f\"'{self.escape_str(str(_n))}'\" for _n in n])\n _data.append(f\"({n})\")\n i_str = f\"\"\"\n INSERT INTO TABLE \n {self.config.database}.{self.config.table}({ks})\n VALUES\n {','.join(_data)}\n \"\"\"\n return i_str\n def _insert(self, transac: Iterable, column_names: Iterable[str]) -> None:\n _insert_query = self._build_insert_sql(transac, column_names)\n self.client.command(_insert_query)\n[docs] def add_texts(\n self,\n texts: Iterable[str],", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"} {"id": "92665c342e7a-5", "text": "[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n batch_size: int = 32,\n ids: Optional[Iterable[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Insert more texts through the embeddings and add to the VectorStore.\n Args:\n texts: Iterable of strings to add to the VectorStore.\n ids: Optional list of ids to associate with the texts.\n batch_size: Batch size of insertion\n metadata: Optional column data to be inserted\n Returns:\n List of ids from adding the texts into the VectorStore.\n \"\"\"\n # Embed and create the documents\n ids = ids or [sha1(t.encode(\"utf-8\")).hexdigest() for t in texts]\n colmap_ = self.config.column_map\n transac = []\n column_names = {\n colmap_[\"id\"]: ids,\n colmap_[\"document\"]: texts,\n colmap_[\"embedding\"]: self.embedding_function.embed_documents(list(texts)),\n }\n metadatas = metadatas or [{} for _ in texts]\n column_names[colmap_[\"metadata\"]] = map(json.dumps, metadatas)\n assert len(set(colmap_) - set(column_names)) >= 0\n keys, values = zip(*column_names.items())\n try:\n t = None\n for v in self.pgbar(\n zip(*values), desc=\"Inserting data...\", total=len(metadatas)\n ):\n assert (\n len(v[keys.index(self.config.column_map[\"embedding\"])]) == self.dim\n )\n transac.append(v)\n if len(transac) == batch_size:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"} {"id": "92665c342e7a-6", "text": "transac.append(v)\n if len(transac) == batch_size:\n if t:\n t.join()\n t = Thread(target=self._insert, args=[transac, keys])\n t.start()\n transac = []\n if len(transac) > 0:\n if t:\n t.join()\n self._insert(transac, keys)\n return [i for i in ids]\n except Exception as e:\n logger.error(f\"\\033[91m\\033[1m{type(e)}\\033[0m \\033[95m{str(e)}\\033[0m\")\n return []\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[Dict[Any, Any]]] = None,\n config: Optional[ClickhouseSettings] = None,\n text_ids: Optional[Iterable[str]] = None,\n batch_size: int = 32,\n **kwargs: Any,\n ) -> Clickhouse:\n \"\"\"Create ClickHouse wrapper with existing texts\n Args:\n embedding_function (Embeddings): Function to extract text embedding\n texts (Iterable[str]): List or tuple of strings to be added\n config (ClickHouseSettings, Optional): ClickHouse configuration\n text_ids (Optional[Iterable], optional): IDs for the texts.\n Defaults to None.\n batch_size (int, optional): Batchsize when transmitting data to ClickHouse.\n Defaults to 32.\n metadata (List[dict], optional): metadata to texts. Defaults to None.\n Other keyword arguments will pass into\n [clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"} {"id": "92665c342e7a-7", "text": "Returns:\n ClickHouse Index\n \"\"\"\n ctx = cls(embedding, config, **kwargs)\n ctx.add_texts(texts, ids=text_ids, batch_size=batch_size, metadatas=metadatas)\n return ctx\n def __repr__(self) -> str:\n \"\"\"Text representation for ClickHouse Vector Store, prints backends, username\n and schemas. 
Easy to use with `str(ClickHouse())`\n Returns:\n repr: string to show connection info and data schema\n \"\"\"\n _repr = f\"\\033[92m\\033[1m{self.config.database}.{self.config.table} @ \"\n _repr += f\"{self.config.host}:{self.config.port}\\033[0m\\n\\n\"\n _repr += f\"\\033[1musername: {self.config.username}\\033[0m\\n\\nTable Schema:\\n\"\n _repr += \"-\" * 51 + \"\\n\"\n for r in self.client.query(\n f\"DESC {self.config.database}.{self.config.table}\"\n ).named_results():\n _repr += (\n f\"|\\033[94m{r['name']:24s}\\033[0m|\\033[96m{r['type']:24s}\\033[0m|\\n\"\n )\n _repr += \"-\" * 51 + \"\\n\"\n return _repr\n def _build_query_sql(\n self, q_emb: List[float], topk: int, where_str: Optional[str] = None\n ) -> str:\n q_emb_str = \",\".join(map(str, q_emb))\n if where_str:\n where_str = f\"PREWHERE {where_str}\"\n else:\n where_str = \"\"\n settings_strs = []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"} {"id": "92665c342e7a-8", "text": "else:\n where_str = \"\"\n settings_strs = []\n if self.config.index_query_params:\n for k in self.config.index_query_params:\n settings_strs.append(f\"SETTING {k}={self.config.index_query_params[k]}\")\n q_str = f\"\"\"\n SELECT {self.config.column_map['document']}, \n {self.config.column_map['metadata']}, dist\n FROM {self.config.database}.{self.config.table}\n {where_str}\n ORDER BY L2Distance({self.config.column_map['embedding']}, [{q_emb_str}]) \n AS dist {self.dist_order}\n LIMIT {topk} {' '.join(settings_strs)}\n \"\"\"\n return q_str\n[docs] def similarity_search(\n self, query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Perform a similarity search with ClickHouse\n Args:\n query (str): query string\n k (int, optional): Top K neighbors to retrieve. Defaults to 4.\n where_str (Optional[str], optional): where condition string.\n Defaults to None.\n NOTE: Please do not let end-user to fill this and always be aware\n of SQL injection. When dealing with metadatas, remember to\n use `{self.metadata_column}.attribute` instead of `attribute`\n alone. The default name for it is `metadata`.\n Returns:\n List[Document]: List of Documents\n \"\"\"\n return self.similarity_search_by_vector(\n self.embedding_function.embed_query(query), k, where_str, **kwargs\n )\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"} {"id": "92665c342e7a-9", "text": "self,\n embedding: List[float],\n k: int = 4,\n where_str: Optional[str] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Perform a similarity search with ClickHouse by vectors\n Args:\n query (str): query string\n k (int, optional): Top K neighbors to retrieve. Defaults to 4.\n where_str (Optional[str], optional): where condition string.\n Defaults to None.\n NOTE: Please do not let end-user to fill this and always be aware\n of SQL injection. When dealing with metadatas, remember to\n use `{self.metadata_column}.attribute` instead of `attribute`\n alone. 
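For orientation, this is roughly the SELECT statement that `_build_query_sql` assembles; the database, table, and column names below are the assumed defaults, and the query vector is abbreviated:

.. code-block:: python

    # Shape of the query built above (illustrative values only).
    q_emb = [0.12, -0.07, 0.33]        # truncated; real length == self.dim
    cond = "metadata.topic = 'news'"   # caller-supplied where_str, raw SQL
    where_clause = f"PREWHERE {cond}" if cond else ""
    q_str = f"""
        SELECT document, metadata, dist
        FROM default.langchain
        {where_clause}
        ORDER BY L2Distance(embedding, [{",".join(map(str, q_emb))}]) AS dist ASC
        LIMIT 4
    """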
The default name for it is `metadata`.\n Returns:\n List[Document]: List of (Document, similarity)\n \"\"\"\n q_str = self._build_query_sql(embedding, k, where_str)\n try:\n return [\n Document(\n page_content=r[self.config.column_map[\"document\"]],\n metadata=r[self.config.column_map[\"metadata\"]],\n )\n for r in self.client.query(q_str).named_results()\n ]\n except Exception as e:\n logger.error(f\"\\033[91m\\033[1m{type(e)}\\033[0m \\033[95m{str(e)}\\033[0m\")\n return []\n[docs] def similarity_search_with_relevance_scores(\n self, query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n \"\"\"Perform a similarity search with ClickHouse\n Args:\n query (str): query string", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"} {"id": "92665c342e7a-10", "text": "Args:\n query (str): query string\n k (int, optional): Top K neighbors to retrieve. Defaults to 4.\n where_str (Optional[str], optional): where condition string.\n Defaults to None.\n NOTE: Please do not let end-user to fill this and always be aware\n of SQL injection. When dealing with metadatas, remember to\n use `{self.metadata_column}.attribute` instead of `attribute`\n alone. The default name for it is `metadata`.\n Returns:\n List[Document]: List of documents\n \"\"\"\n q_str = self._build_query_sql(\n self.embedding_function.embed_query(query), k, where_str\n )\n try:\n return [\n (\n Document(\n page_content=r[self.config.column_map[\"document\"]],\n metadata=r[self.config.column_map[\"metadata\"]],\n ),\n r[\"dist\"],\n )\n for r in self.client.query(q_str).named_results()\n ]\n except Exception as e:\n logger.error(f\"\\033[91m\\033[1m{type(e)}\\033[0m \\033[95m{str(e)}\\033[0m\")\n return []\n[docs] def drop(self) -> None:\n \"\"\"\n Helper function: Drop data\n \"\"\"\n self.client.command(\n f\"DROP TABLE IF EXISTS {self.config.database}.{self.config.table}\"\n )\n @property\n def metadata_column(self) -> str:\n return self.config.column_map[\"metadata\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"} {"id": "de699fd05654-0", "text": "Source code for langchain.vectorstores.myscale\n\"\"\"Wrapper around MyScale vector database.\"\"\"\nfrom __future__ import annotations\nimport json\nimport logging\nfrom hashlib import sha1\nfrom threading import Thread\nfrom typing import Any, Dict, Iterable, List, Optional, Tuple\nfrom pydantic import BaseSettings\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nlogger = logging.getLogger()\n[docs]def has_mul_sub_str(s: str, *args: Any) -> bool:\n \"\"\"\n Check if a string contains multiple substrings.\n Args:\n s: string to check.\n *args: substrings to check.\n Returns:\n True if all substrings are in the string, False otherwise.\n \"\"\"\n for a in args:\n if a not in s:\n return False\n return True\n[docs]class MyScaleSettings(BaseSettings):\n \"\"\"MyScale Client Configuration\n Attribute:\n myscale_host (str) : An URL to connect to MyScale backend.\n Defaults to 'localhost'.\n myscale_port (int) : URL port to connect with HTTP. Defaults to 8443.\n username (str) : Username to login. Defaults to None.\n password (str) : Password to login. Defaults to None.\n index_type (str): index type string.\n index_param (dict): index build parameter.\n database (str) : Database name to find the table. 
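A minimal end-to-end sketch for the `Clickhouse` wrapper above, assuming a reachable ClickHouse server, the `clickhouse-connect` package, and an OpenAI key in the environment; the host and port are illustrative:

.. code-block:: python

    from langchain.embeddings.openai import OpenAIEmbeddings
    from langchain.vectorstores import Clickhouse, ClickhouseSettings

    settings = ClickhouseSettings(host="localhost", port=8123)  # placeholders
    docsearch = Clickhouse.from_texts(
        ["harrison worked at kensho", "bears like to eat honey"],
        OpenAIEmbeddings(),
        config=settings,
    )
    docs = docsearch.similarity_search("Where did harrison work?", k=1)
    # where_str is interpolated into SQL verbatim; never accept it from
    # end users (see the NOTE in similarity_search above).
    docs = docsearch.similarity_search(
        "food", k=2,
        where_str=f"{docsearch.metadata_column}.topic = 'animals'",
    )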
Defaults to 'default'.\n table (str) : Table name to operate on.\n Defaults to 'vector_table'.\n metric (str) : Metric to compute distance,\n supported are ('l2', 'cosine', 'ip'). Defaults to 'cosine'.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"} {"id": "de699fd05654-1", "text": "column_map (Dict) : Column type map to project column name onto langchain\n semantics. Must have keys: `text`, `id`, `vector`,\n must be same size to number of columns. For example:\n .. code-block:: python\n {\n 'id': 'text_id',\n 'vector': 'text_embedding',\n 'text': 'text_plain',\n 'metadata': 'metadata_dictionary_in_json',\n }\n Defaults to identity map.\n \"\"\"\n host: str = \"localhost\"\n port: int = 8443\n username: Optional[str] = None\n password: Optional[str] = None\n index_type: str = \"IVFFLAT\"\n index_param: Optional[Dict[str, str]] = None\n column_map: Dict[str, str] = {\n \"id\": \"id\",\n \"text\": \"text\",\n \"vector\": \"vector\",\n \"metadata\": \"metadata\",\n }\n database: str = \"default\"\n table: str = \"langchain\"\n metric: str = \"cosine\"\n def __getitem__(self, item: str) -> Any:\n return getattr(self, item)\n[docs] class Config:\n env_file = \".env\"\n env_prefix = \"myscale_\"\n env_file_encoding = \"utf-8\"\n[docs]class MyScale(VectorStore):\n \"\"\"Wrapper around MyScale vector database\n You need a `clickhouse-connect` python package, and a valid account\n to connect to MyScale.\n MyScale can not only search with simple vector indexes,\n it also supports complex query with multiple conditions,\n constraints and even sub-queries.\n For more information, please visit", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"} {"id": "de699fd05654-2", "text": "constraints and even sub-queries.\n For more information, please visit\n [myscale official site](https://docs.myscale.com/en/overview/)\n \"\"\"\n def __init__(\n self,\n embedding: Embeddings,\n config: Optional[MyScaleSettings] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"MyScale Wrapper to LangChain\n embedding_function (Embeddings):\n config (MyScaleSettings): Configuration to MyScale Client\n Other keyword arguments will pass into\n [clickhouse-connect](https://docs.myscale.com/)\n \"\"\"\n try:\n from clickhouse_connect import get_client\n except ImportError:\n raise ValueError(\n \"Could not import clickhouse connect python package. 
\"\n \"Please install it with `pip install clickhouse-connect`.\"\n )\n try:\n from tqdm import tqdm\n self.pgbar = tqdm\n except ImportError:\n # Just in case if tqdm is not installed\n self.pgbar = lambda x: x\n super().__init__()\n if config is not None:\n self.config = config\n else:\n self.config = MyScaleSettings()\n assert self.config\n assert self.config.host and self.config.port\n assert (\n self.config.column_map\n and self.config.database\n and self.config.table\n and self.config.metric\n )\n for k in [\"id\", \"vector\", \"text\", \"metadata\"]:\n assert k in self.config.column_map\n assert self.config.metric in [\"ip\", \"cosine\", \"l2\"]\n # initialize the schema\n dim = len(embedding.embed_query(\"try this out\"))\n index_params = (", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"} {"id": "de699fd05654-3", "text": "dim = len(embedding.embed_query(\"try this out\"))\n index_params = (\n \", \" + \",\".join([f\"'{k}={v}'\" for k, v in self.config.index_param.items()])\n if self.config.index_param\n else \"\"\n )\n schema_ = f\"\"\"\n CREATE TABLE IF NOT EXISTS {self.config.database}.{self.config.table}(\n {self.config.column_map['id']} String,\n {self.config.column_map['text']} String,\n {self.config.column_map['vector']} Array(Float32),\n {self.config.column_map['metadata']} JSON,\n CONSTRAINT cons_vec_len CHECK length(\\\n {self.config.column_map['vector']}) = {dim},\n VECTOR INDEX vidx {self.config.column_map['vector']} \\\n TYPE {self.config.index_type}(\\\n 'metric_type={self.config.metric}'{index_params})\n ) ENGINE = MergeTree ORDER BY {self.config.column_map['id']}\n \"\"\"\n self.dim = dim\n self.BS = \"\\\\\"\n self.must_escape = (\"\\\\\", \"'\")\n self.embedding_function = embedding.embed_query\n self.dist_order = \"ASC\" if self.config.metric in [\"cosine\", \"l2\"] else \"DESC\"\n # Create a connection to myscale\n self.client = get_client(\n host=self.config.host,\n port=self.config.port,\n username=self.config.username,\n password=self.config.password,\n **kwargs,\n )\n self.client.command(\"SET allow_experimental_object_type=1\")\n self.client.command(schema_)\n[docs] def escape_str(self, value: str) -> str:\n return \"\".join(f\"{self.BS}{c}\" if c in self.must_escape else c for c in value)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"} {"id": "de699fd05654-4", "text": "def _build_istr(self, transac: Iterable, column_names: Iterable[str]) -> str:\n ks = \",\".join(column_names)\n _data = []\n for n in transac:\n n = \",\".join([f\"'{self.escape_str(str(_n))}'\" for _n in n])\n _data.append(f\"({n})\")\n i_str = f\"\"\"\n INSERT INTO TABLE \n {self.config.database}.{self.config.table}({ks})\n VALUES\n {','.join(_data)}\n \"\"\"\n return i_str\n def _insert(self, transac: Iterable, column_names: Iterable[str]) -> None:\n _i_str = self._build_istr(transac, column_names)\n self.client.command(_i_str)\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n batch_size: int = 32,\n ids: Optional[Iterable[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n ids: Optional list of ids to associate with the texts.\n batch_size: Batch size of insertion\n metadata: Optional column data to be inserted\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n # Embed 
and create the documents\n ids = ids or [sha1(t.encode(\"utf-8\")).hexdigest() for t in texts]\n colmap_ = self.config.column_map\n transac = []\n column_names = {\n colmap_[\"id\"]: ids,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"} {"id": "de699fd05654-5", "text": "column_names = {\n colmap_[\"id\"]: ids,\n colmap_[\"text\"]: texts,\n colmap_[\"vector\"]: map(self.embedding_function, texts),\n }\n metadatas = metadatas or [{} for _ in texts]\n column_names[colmap_[\"metadata\"]] = map(json.dumps, metadatas)\n assert len(set(colmap_) - set(column_names)) >= 0\n keys, values = zip(*column_names.items())\n try:\n t = None\n for v in self.pgbar(\n zip(*values), desc=\"Inserting data...\", total=len(metadatas)\n ):\n assert len(v[keys.index(self.config.column_map[\"vector\"])]) == self.dim\n transac.append(v)\n if len(transac) == batch_size:\n if t:\n t.join()\n t = Thread(target=self._insert, args=[transac, keys])\n t.start()\n transac = []\n if len(transac) > 0:\n if t:\n t.join()\n self._insert(transac, keys)\n return [i for i in ids]\n except Exception as e:\n logger.error(f\"\\033[91m\\033[1m{type(e)}\\033[0m \\033[95m{str(e)}\\033[0m\")\n return []\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[Dict[Any, Any]]] = None,\n config: Optional[MyScaleSettings] = None,\n text_ids: Optional[Iterable[str]] = None,\n batch_size: int = 32,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"} {"id": "de699fd05654-6", "text": "batch_size: int = 32,\n **kwargs: Any,\n ) -> MyScale:\n \"\"\"Create Myscale wrapper with existing texts\n Args:\n embedding_function (Embeddings): Function to extract text embedding\n texts (Iterable[str]): List or tuple of strings to be added\n config (MyScaleSettings, Optional): Myscale configuration\n text_ids (Optional[Iterable], optional): IDs for the texts.\n Defaults to None.\n batch_size (int, optional): Batchsize when transmitting data to MyScale.\n Defaults to 32.\n metadata (List[dict], optional): metadata to texts. 
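The loop in `add_texts` above overlaps network I/O with batch accumulation by keeping at most one insert thread in flight; distilled, the pattern is as follows (`rows` and `insert` stand in for `zip(*values)` and `self._insert`):

.. code-block:: python

    from threading import Thread

    def insert(batch):              # stand-in for self._insert(transac, keys)
        ...

    rows = [...]                    # stand-in for zip(*values)
    batch, batch_size, t = [], 32, None
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            if t:
                t.join()            # wait for the previous background flush
            t = Thread(target=insert, args=[batch])
            t.start()               # flush while the next batch accumulates
            batch = []
    if batch:                       # final partial batch, synchronously
        if t:
            t.join()
        insert(batch)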
Defaults to None.\n Other keyword arguments will pass into\n [clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api)\n Returns:\n MyScale Index\n \"\"\"\n ctx = cls(embedding, config, **kwargs)\n ctx.add_texts(texts, ids=text_ids, batch_size=batch_size, metadatas=metadatas)\n return ctx\n def __repr__(self) -> str:\n \"\"\"Text representation for myscale, prints backends, username and schemas.\n Easy to use with `str(Myscale())`\n Returns:\n repr: string to show connection info and data schema\n \"\"\"\n _repr = f\"\\033[92m\\033[1m{self.config.database}.{self.config.table} @ \"\n _repr += f\"{self.config.host}:{self.config.port}\\033[0m\\n\\n\"\n _repr += f\"\\033[1musername: {self.config.username}\\033[0m\\n\\nTable Schema:\\n\"\n _repr += \"-\" * 51 + \"\\n\"\n for r in self.client.query(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"} {"id": "de699fd05654-7", "text": "for r in self.client.query(\n f\"DESC {self.config.database}.{self.config.table}\"\n ).named_results():\n _repr += (\n f\"|\\033[94m{r['name']:24s}\\033[0m|\\033[96m{r['type']:24s}\\033[0m|\\n\"\n )\n _repr += \"-\" * 51 + \"\\n\"\n return _repr\n def _build_qstr(\n self, q_emb: List[float], topk: int, where_str: Optional[str] = None\n ) -> str:\n q_emb_str = \",\".join(map(str, q_emb))\n if where_str:\n where_str = f\"PREWHERE {where_str}\"\n else:\n where_str = \"\"\n q_str = f\"\"\"\n SELECT {self.config.column_map['text']}, \n {self.config.column_map['metadata']}, dist\n FROM {self.config.database}.{self.config.table}\n {where_str}\n ORDER BY distance({self.config.column_map['vector']}, [{q_emb_str}]) \n AS dist {self.dist_order}\n LIMIT {topk}\n \"\"\"\n return q_str\n[docs] def similarity_search(\n self, query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Perform a similarity search with MyScale\n Args:\n query (str): query string\n k (int, optional): Top K neighbors to retrieve. Defaults to 4.\n where_str (Optional[str], optional): where condition string.\n Defaults to None.\n NOTE: Please do not let end-user to fill this and always be aware", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"} {"id": "de699fd05654-8", "text": "NOTE: Please do not let end-user to fill this and always be aware\n of SQL injection. When dealing with metadatas, remember to\n use `{self.metadata_column}.attribute` instead of `attribute`\n alone. The default name for it is `metadata`.\n Returns:\n List[Document]: List of Documents\n \"\"\"\n return self.similarity_search_by_vector(\n self.embedding_function(query), k, where_str, **kwargs\n )\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n where_str: Optional[str] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Perform a similarity search with MyScale by vectors\n Args:\n query (str): query string\n k (int, optional): Top K neighbors to retrieve. Defaults to 4.\n where_str (Optional[str], optional): where condition string.\n Defaults to None.\n NOTE: Please do not let end-user to fill this and always be aware\n of SQL injection. When dealing with metadatas, remember to\n use `{self.metadata_column}.attribute` instead of `attribute`\n alone. 
The default name for it is `metadata`.\n Returns:\n List[Document]: List of (Document, similarity)\n \"\"\"\n q_str = self._build_qstr(embedding, k, where_str)\n try:\n return [\n Document(\n page_content=r[self.config.column_map[\"text\"]],\n metadata=r[self.config.column_map[\"metadata\"]],\n )\n for r in self.client.query(q_str).named_results()\n ]\n except Exception as e:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"} {"id": "de699fd05654-9", "text": "]\n except Exception as e:\n logger.error(f\"\\033[91m\\033[1m{type(e)}\\033[0m \\033[95m{str(e)}\\033[0m\")\n return []\n[docs] def similarity_search_with_relevance_scores(\n self, query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n \"\"\"Perform a similarity search with MyScale\n Args:\n query (str): query string\n k (int, optional): Top K neighbors to retrieve. Defaults to 4.\n where_str (Optional[str], optional): where condition string.\n Defaults to None.\n NOTE: Please do not let end-user to fill this and always be aware\n of SQL injection. When dealing with metadatas, remember to\n use `{self.metadata_column}.attribute` instead of `attribute`\n alone. The default name for it is `metadata`.\n Returns:\n List[Document]: List of documents most similar to the query text\n and cosine distance in float for each.\n Lower score represents more similarity.\n \"\"\"\n q_str = self._build_qstr(self.embedding_function(query), k, where_str)\n try:\n return [\n (\n Document(\n page_content=r[self.config.column_map[\"text\"]],\n metadata=r[self.config.column_map[\"metadata\"]],\n ),\n r[\"dist\"],\n )\n for r in self.client.query(q_str).named_results()\n ]\n except Exception as e:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"} {"id": "de699fd05654-10", "text": "]\n except Exception as e:\n logger.error(f\"\\033[91m\\033[1m{type(e)}\\033[0m \\033[95m{str(e)}\\033[0m\")\n return []\n[docs] def drop(self) -> None:\n \"\"\"\n Helper function: Drop data\n \"\"\"\n self.client.command(\n f\"DROP TABLE IF EXISTS {self.config.database}.{self.config.table}\"\n )\n @property\n def metadata_column(self) -> str:\n return self.config.column_map[\"metadata\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"} {"id": "82e83aa0b6e7-0", "text": "Source code for langchain.vectorstores.alibabacloud_opensearch\nimport json\nimport logging\nimport numbers\nfrom hashlib import sha1\nfrom typing import Any, Dict, Iterable, List, Optional, Tuple\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import Document\nfrom langchain.vectorstores.base import VectorStore\nlogger = logging.getLogger()\nclass AlibabaCloudOpenSearchSettings:\n \"\"\"Opensearch Client Configuration\n Attribute:\n endpoint (str) : The endpoint of opensearch instance, You can find it\n from the console of Alibaba Cloud OpenSearch.\n instance_id (str) : The identify of opensearch instance, You can find\n it from the console of Alibaba Cloud OpenSearch.\n datasource_name (str): The name of the data source specified when creating it.\n username (str) : The username specified when purchasing the instance.\n password (str) : The password specified when purchasing the instance.\n embedding_index_name (str) : The name of the vector attribute specified\n when configuring the instance attributes.\n field_name_mapping (Dict) : Using field name mapping between 
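A hedged usage sketch for the `MyScale` wrapper above; the connection details are placeholders, and `MyScaleSettings` can equally be populated from `MYSCALE_`-prefixed environment variables or a `.env` file via its `Config`:

.. code-block:: python

    from langchain.embeddings.openai import OpenAIEmbeddings
    from langchain.vectorstores import MyScale, MyScaleSettings

    config = MyScaleSettings(
        host="msc-example.mycompany.com",   # placeholder backend URL
        port=8443, username="user", password="***",
    )
    vectorstore = MyScale.from_texts(
        ["harrison worked at kensho"], OpenAIEmbeddings(), config=config
    )
    for doc, dist in vectorstore.similarity_search_with_relevance_scores(
        "kensho", k=1
    ):
        print(doc.page_content, dist)   # ASC for cosine/l2: lower == closer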
opensearch\n vector store and opensearch instance configuration table field names:\n {\n 'id': 'The id field name map of index document.',\n 'document': 'The text field name map of index document.',\n 'embedding': 'In the embedding field of the opensearch instance,\n the values must be in float16 multivalue type and separated by commas.',\n 'metadata_field_x': 'Metadata field mapping includes the mapped\n field name and operator in the mapping value, separated by a comma\n between the mapped field name and the operator.',\n }\n \"\"\"\n endpoint: str\n instance_id: str\n username: str\n password: str", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/alibabacloud_opensearch.html"} {"id": "82e83aa0b6e7-1", "text": "instance_id: str\n username: str\n password: str\n datasource_name: str\n embedding_index_name: str\n field_name_mapping: Dict[str, str] = {\n \"id\": \"id\",\n \"document\": \"document\",\n \"embedding\": \"embedding\",\n \"metadata_field_x\": \"metadata_field_x,operator\",\n }\n def __init__(\n self,\n endpoint: str,\n instance_id: str,\n username: str,\n password: str,\n datasource_name: str,\n embedding_index_name: str,\n field_name_mapping: Dict[str, str],\n ) -> None:\n self.endpoint = endpoint\n self.instance_id = instance_id\n self.username = username\n self.password = password\n self.datasource_name = datasource_name\n self.embedding_index_name = embedding_index_name\n self.field_name_mapping = field_name_mapping\n def __getitem__(self, item: str) -> Any:\n return getattr(self, item)\n[docs]def create_metadata(fields: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Create metadata from fields.\n Args:\n fields: The fields of the document. The fields must be a dict.\n Returns:\n metadata: The metadata of the document. The metadata must be a dict.\n \"\"\"\n metadata: Dict[str, Any] = {}\n for key, value in fields.items():\n if key == \"id\" or key == \"document\" or key == \"embedding\":\n continue\n metadata[key] = value\n return metadata\n[docs]class AlibabaCloudOpenSearch(VectorStore):\n \"\"\"Alibaba Cloud OpenSearch Vector Store\"\"\"\n def __init__(\n self,\n embedding: Embeddings,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/alibabacloud_opensearch.html"} {"id": "82e83aa0b6e7-2", "text": "def __init__(\n self,\n embedding: Embeddings,\n config: AlibabaCloudOpenSearchSettings,\n **kwargs: Any,\n ) -> None:\n try:\n from alibabacloud_ha3engine import client, models\n from alibabacloud_tea_util import models as util_models\n except ImportError:\n raise ValueError(\n \"Could not import alibaba cloud opensearch python package. 
\"\n \"Please install it with `pip install alibabacloud-ha3engine`.\"\n )\n self.config = config\n self.embedding = embedding\n self.runtime = util_models.RuntimeOptions(\n connect_timeout=5000,\n read_timeout=10000,\n autoretry=False,\n ignore_ssl=False,\n max_idle_conns=50,\n )\n self.ha3EngineClient = client.Client(\n models.Config(\n endpoint=config.endpoint,\n instance_id=config.instance_id,\n protocol=\"http\",\n access_user_name=config.username,\n access_pass_word=config.password,\n )\n )\n self.options_headers: Dict[str, str] = {}\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n def _upsert(push_doc_list: List[Dict]) -> List[str]:\n if push_doc_list is None or len(push_doc_list) == 0:\n return []\n try:\n push_request = models.PushDocumentsRequestModel(\n self.options_headers, push_doc_list\n )\n push_response = self.ha3EngineClient.push_documents(\n self.config.datasource_name, field_name_map[\"id\"], push_request", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/alibabacloud_opensearch.html"} {"id": "82e83aa0b6e7-3", "text": "self.config.datasource_name, field_name_map[\"id\"], push_request\n )\n json_response = json.loads(push_response.body)\n if json_response[\"status\"] == \"OK\":\n return [\n push_doc[\"fields\"][field_name_map[\"id\"]]\n for push_doc in push_doc_list\n ]\n return []\n except Exception as e:\n logger.error(\n f\"add doc to endpoint:{self.config.endpoint} \"\n f\"instance_id:{self.config.instance_id} failed.\",\n e,\n )\n raise e\n from alibabacloud_ha3engine import models\n ids = [sha1(t.encode(\"utf-8\")).hexdigest() for t in texts]\n embeddings = self.embedding.embed_documents(list(texts))\n metadatas = metadatas or [{} for _ in texts]\n field_name_map = self.config.field_name_mapping\n add_doc_list = []\n text_list = list(texts)\n for idx, doc_id in enumerate(ids):\n embedding = embeddings[idx] if idx < len(embeddings) else None\n metadata = metadatas[idx] if idx < len(metadatas) else None\n text = text_list[idx] if idx < len(text_list) else None\n add_doc: Dict[str, Any] = dict()\n add_doc_fields: Dict[str, Any] = dict()\n add_doc_fields.__setitem__(field_name_map[\"id\"], doc_id)\n add_doc_fields.__setitem__(field_name_map[\"document\"], text)\n if embedding is not None:\n add_doc_fields.__setitem__(\n field_name_map[\"embedding\"],\n \",\".join(str(unit) for unit in embedding),\n )\n if metadata is not None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/alibabacloud_opensearch.html"} {"id": "82e83aa0b6e7-4", "text": ")\n if metadata is not None:\n for md_key, md_value in metadata.items():\n add_doc_fields.__setitem__(\n field_name_map[md_key].split(\",\")[0], md_value\n )\n add_doc.__setitem__(\"fields\", add_doc_fields)\n add_doc.__setitem__(\"cmd\", \"add\")\n add_doc_list.append(add_doc)\n return _upsert(add_doc_list)\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n search_filter: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> List[Document]:\n embedding = self.embedding.embed_query(query)\n return self.create_results(\n self.inner_embedding_query(\n embedding=embedding, search_filter=search_filter, k=k\n )\n )\n[docs] def similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n search_filter: Optional[dict] = None,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n embedding: List[float] = self.embedding.embed_query(query)\n return 
self.create_results_with_score(\n self.inner_embedding_query(\n embedding=embedding, search_filter=search_filter, k=k\n )\n )\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n search_filter: Optional[dict] = None,\n **kwargs: Any,\n ) -> List[Document]:\n return self.create_results(\n self.inner_embedding_query(\n embedding=embedding, search_filter=search_filter, k=k\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/alibabacloud_opensearch.html"} {"id": "82e83aa0b6e7-5", "text": "embedding=embedding, search_filter=search_filter, k=k\n )\n )\n[docs] def inner_embedding_query(\n self,\n embedding: List[float],\n search_filter: Optional[Dict[str, Any]] = None,\n k: int = 4,\n ) -> Dict[str, Any]:\n def generate_embedding_query() -> str:\n tmp_search_config_str = (\n f\"config=start:0,hit:{k},format:json&&cluster=general&&kvpairs=\"\n f\"first_formula:proxima_score({self.config.embedding_index_name})&&sort=+RANK\"\n )\n tmp_query_str = (\n f\"&&query={self.config.embedding_index_name}:\"\n + \"'\"\n + \",\".join(str(x) for x in embedding)\n + \"'\"\n )\n if search_filter is not None:\n filter_clause = \"&&filter=\" + \" AND \".join(\n [\n create_filter(md_key, md_value)\n for md_key, md_value in search_filter.items()\n ]\n )\n tmp_query_str += filter_clause\n return tmp_search_config_str + tmp_query_str\n def create_filter(md_key: str, md_value: Any) -> str:\n md_filter_expr = self.config.field_name_mapping[md_key]\n if md_filter_expr is None:\n return \"\"\n expr = md_filter_expr.split(\",\")\n if len(expr) != 2:\n logger.error(\n f\"filter {md_filter_expr} express is not correct, \"\n f\"must contain mapping field and operator.\"\n )\n return \"\"\n md_filter_key = expr[0].strip()\n md_filter_operator = expr[1].strip()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/alibabacloud_opensearch.html"} {"id": "82e83aa0b6e7-6", "text": "md_filter_operator = expr[1].strip()\n if isinstance(md_value, numbers.Number):\n return f\"{md_filter_key} {md_filter_operator} {md_value}\"\n return f'{md_filter_key}{md_filter_operator}\"{md_value}\"'\n def search_data(single_query_str: str) -> Dict[str, Any]:\n search_query = models.SearchQuery(query=single_query_str)\n search_request = models.SearchRequestModel(\n self.options_headers, search_query\n )\n return json.loads(self.ha3EngineClient.search(search_request).body)\n from alibabacloud_ha3engine import models\n try:\n query_str = generate_embedding_query()\n json_response = search_data(query_str)\n if len(json_response[\"errors\"]) != 0:\n logger.error(\n f\"query {self.config.endpoint} {self.config.instance_id} \"\n f\"errors:{json_response['errors']} failed.\"\n )\n else:\n return json_response\n except Exception as e:\n logger.error(\n f\"query instance endpoint:{self.config.endpoint} \"\n f\"instance_id:{self.config.instance_id} failed.\",\n e,\n )\n return {}\n[docs] def create_results(self, json_result: Dict[str, Any]) -> List[Document]:\n items = json_result[\"result\"][\"items\"]\n query_result_list: List[Document] = []\n for item in items:\n fields = item[\"fields\"]\n query_result_list.append(\n Document(\n page_content=fields[self.config.field_name_mapping[\"document\"]],\n metadata=create_metadata(fields),\n )\n )\n return query_result_list\n[docs] def create_results_with_score(\n self, json_result: Dict[str, Any]", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/alibabacloud_opensearch.html"} {"id": "82e83aa0b6e7-7", "text": "self, json_result: Dict[str, Any]\n ) -> List[Tuple[Document, float]]:\n items = json_result[\"result\"][\"items\"]\n query_result_list: List[Tuple[Document, float]] = []\n for item in items:\n fields = item[\"fields\"]\n query_result_list.append(\n (\n Document(\n page_content=fields[self.config.field_name_mapping[\"document\"]],\n metadata=create_metadata(fields),\n ),\n float(item[\"sortExprValues\"][0]),\n )\n )\n return query_result_list\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n config: Optional[AlibabaCloudOpenSearchSettings] = None,\n **kwargs: Any,\n ) -> \"AlibabaCloudOpenSearch\":\n if config is None:\n raise Exception(\"config can't be none\")\n ctx = cls(embedding, config, **kwargs)\n ctx.add_texts(texts=texts, metadatas=metadatas)\n return ctx\n[docs] @classmethod\n def from_documents(\n cls,\n documents: List[Document],\n embedding: Embeddings,\n ids: Optional[List[str]] = None,\n config: Optional[AlibabaCloudOpenSearchSettings] = None,\n **kwargs: Any,\n ) -> \"AlibabaCloudOpenSearch\":\n if config is None:\n raise Exception(\"config can't be none\")\n texts = [d.page_content for d in documents]\n metadatas = [d.metadata for d in documents]\n return cls.from_texts(\n texts=texts,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/alibabacloud_opensearch.html"} {"id": "82e83aa0b6e7-8", "text": "return cls.from_texts(\n texts=texts,\n embedding=embedding,\n metadatas=metadatas,\n config=config,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/alibabacloud_opensearch.html"} {"id": "c30433a9b8b5-0", "text": "Source code for langchain.vectorstores.azuresearch\n\"\"\"Wrapper around Azure Cognitive Search.\"\"\"\nfrom __future__ import annotations\nimport base64\nimport json\nimport logging\nimport uuid\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Dict,\n Iterable,\n List,\n Optional,\n Tuple,\n Type,\n)\nimport numpy as np\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import BaseRetriever\nfrom langchain.utils import get_from_env\nfrom langchain.vectorstores.base import VectorStore\nlogger = logging.getLogger()\nif TYPE_CHECKING:\n from azure.search.documents import SearchClient\n# Allow overriding field names for Azure Search\nFIELDS_ID = get_from_env(\n key=\"AZURESEARCH_FIELDS_ID\", env_key=\"AZURESEARCH_FIELDS_ID\", default=\"id\"\n)\nFIELDS_CONTENT = get_from_env(\n key=\"AZURESEARCH_FIELDS_CONTENT\",\n env_key=\"AZURESEARCH_FIELDS_CONTENT\",\n default=\"content\",\n)\nFIELDS_CONTENT_VECTOR = get_from_env(\n key=\"AZURESEARCH_FIELDS_CONTENT_VECTOR\",\n env_key=\"AZURESEARCH_FIELDS_CONTENT_VECTOR\",\n default=\"content_vector\",\n)\nFIELDS_METADATA = get_from_env(\n key=\"AZURESEARCH_FIELDS_TAG\", env_key=\"AZURESEARCH_FIELDS_TAG\", default=\"metadata\"\n)\nMAX_UPLOAD_BATCH_SIZE = 1000\ndef _get_search_client(\n endpoint: str,\n key: str,\n index_name: str,\n embedding_function: Callable,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/azuresearch.html"} {"id": 
"c30433a9b8b5-1", "text": "key: str,\n index_name: str,\n embedding_function: Callable,\n semantic_configuration_name: Optional[str] = None,\n) -> SearchClient:\n from azure.core.credentials import AzureKeyCredential\n from azure.core.exceptions import ResourceNotFoundError\n from azure.identity import DefaultAzureCredential\n from azure.search.documents import SearchClient\n from azure.search.documents.indexes import SearchIndexClient\n from azure.search.documents.indexes.models import (\n PrioritizedFields,\n SearchableField,\n SearchField,\n SearchFieldDataType,\n SearchIndex,\n SemanticConfiguration,\n SemanticField,\n SemanticSettings,\n SimpleField,\n VectorSearch,\n VectorSearchAlgorithmConfiguration,\n )\n if key is None:\n credential = DefaultAzureCredential()\n else:\n credential = AzureKeyCredential(key)\n index_client: SearchIndexClient = SearchIndexClient(\n endpoint=endpoint, credential=credential\n )\n try:\n index_client.get_index(name=index_name)\n except ResourceNotFoundError:\n # Fields configuration\n fields = [\n SimpleField(\n name=FIELDS_ID,\n type=SearchFieldDataType.String,\n key=True,\n filterable=True,\n ),\n SearchableField(\n name=FIELDS_CONTENT,\n type=SearchFieldDataType.String,\n searchable=True,\n retrievable=True,\n ),\n SearchField(\n name=FIELDS_CONTENT_VECTOR,\n type=SearchFieldDataType.Collection(SearchFieldDataType.Single),\n searchable=True,\n dimensions=len(embedding_function(\"Text\")),\n vector_search_configuration=\"default\",\n ),\n SearchableField(\n name=FIELDS_METADATA,\n type=SearchFieldDataType.String,\n searchable=True,\n retrievable=True,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/azuresearch.html"} {"id": "c30433a9b8b5-2", "text": "type=SearchFieldDataType.String,\n searchable=True,\n retrievable=True,\n ),\n ]\n # Vector search configuration\n vector_search = VectorSearch(\n algorithm_configurations=[\n VectorSearchAlgorithmConfiguration(\n name=\"default\",\n kind=\"hnsw\",\n hnsw_parameters={\n \"m\": 4,\n \"efConstruction\": 400,\n \"efSearch\": 500,\n \"metric\": \"cosine\",\n },\n )\n ]\n )\n # Create the semantic settings with the configuration\n semantic_settings = (\n None\n if semantic_configuration_name is None\n else SemanticSettings(\n configurations=[\n SemanticConfiguration(\n name=semantic_configuration_name,\n prioritized_fields=PrioritizedFields(\n prioritized_content_fields=[\n SemanticField(field_name=FIELDS_CONTENT)\n ],\n ),\n )\n ]\n )\n )\n # Create the search index with the semantic settings and vector search\n index = SearchIndex(\n name=index_name,\n fields=fields,\n vector_search=vector_search,\n semantic_settings=semantic_settings,\n )\n index_client.create_index(index)\n # Create the search client\n return SearchClient(endpoint=endpoint, index_name=index_name, credential=credential)\n[docs]class AzureSearch(VectorStore):\n def __init__(\n self,\n azure_search_endpoint: str,\n azure_search_key: str,\n index_name: str,\n embedding_function: Callable,\n search_type: str = \"hybrid\",\n semantic_configuration_name: Optional[str] = None,\n semantic_query_language: str = \"en-us\",\n **kwargs: Any,\n ):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/azuresearch.html"} {"id": "c30433a9b8b5-3", "text": "**kwargs: Any,\n ):\n \"\"\"Initialize with necessary components.\"\"\"\n # Initialize base class\n self.embedding_function = embedding_function\n self.client = _get_search_client(\n azure_search_endpoint,\n azure_search_key,\n 
index_name,\n embedding_function,\n semantic_configuration_name,\n )\n self.search_type = search_type\n self.semantic_configuration_name = semantic_configuration_name\n self.semantic_query_language = semantic_query_language\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Add texts data to an existing index.\"\"\"\n keys = kwargs.get(\"keys\")\n ids = []\n # Write data to index\n data = []\n for i, text in enumerate(texts):\n # Use provided key otherwise use default key\n key = keys[i] if keys else str(uuid.uuid4())\n # Encoding key for Azure Search valid characters\n key = base64.urlsafe_b64encode(bytes(key, \"utf-8\")).decode(\"ascii\")\n metadata = metadatas[i] if metadatas else {}\n # Add data to index\n data.append(\n {\n \"@search.action\": \"upload\",\n FIELDS_ID: key,\n FIELDS_CONTENT: text,\n FIELDS_CONTENT_VECTOR: np.array(\n self.embedding_function(text), dtype=np.float32\n ).tolist(),\n FIELDS_METADATA: json.dumps(metadata),\n }\n )\n ids.append(key)\n # Upload data in batches\n if len(data) == MAX_UPLOAD_BATCH_SIZE:\n response = self.client.upload_documents(documents=data)\n # Check if all documents were successfully uploaded", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/azuresearch.html"} {"id": "c30433a9b8b5-4", "text": "# Check if all documents were successfully uploaded\n if not all([r.succeeded for r in response]):\n raise Exception(response)\n # Reset data\n data = []\n # Considering case where data is an exact multiple of batch-size entries\n if len(data) == 0:\n return ids\n # Upload data to index\n response = self.client.upload_documents(documents=data)\n # Check if all documents were successfully uploaded\n if all([r.succeeded for r in response]):\n return ids\n else:\n raise Exception(response)\n[docs] def similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n search_type = kwargs.get(\"search_type\", self.search_type)\n if search_type == \"similarity\":\n docs = self.vector_search(query, k=k, **kwargs)\n elif search_type == \"hybrid\":\n docs = self.hybrid_search(query, k=k, **kwargs)\n elif search_type == \"semantic_hybrid\":\n docs = self.semantic_hybrid_search(query, k=k, **kwargs)\n else:\n raise ValueError(f\"search_type of {search_type} not allowed.\")\n return docs\n[docs] def vector_search(self, query: str, k: int = 4, **kwargs: Any) -> List[Document]:\n \"\"\"\n Returns the most similar indexed documents to the query text.\n Args:\n query (str): The query text for which to find similar documents.\n k (int): The number of documents to return. Default is 4.\n Returns:\n List[Document]: A list of documents that are most similar to the query text.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/azuresearch.html"} {"id": "c30433a9b8b5-5", "text": "\"\"\"\n docs_and_scores = self.vector_search_with_score(\n query, k=k, filters=kwargs.get(\"filters\", None)\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] def vector_search_with_score(\n self, query: str, k: int = 4, filters: Optional[str] = None\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. 
Defaults to 4.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n from azure.search.documents.models import Vector\n results = self.client.search(\n search_text=\"\",\n vector=Vector(\n value=np.array(\n self.embedding_function(query), dtype=np.float32\n ).tolist(),\n k=k,\n fields=FIELDS_CONTENT_VECTOR,\n ),\n select=[f\"{FIELDS_ID},{FIELDS_CONTENT},{FIELDS_METADATA}\"],\n filter=filters,\n )\n # Convert results to Document objects\n docs = [\n (\n Document(\n page_content=result[FIELDS_CONTENT],\n metadata=json.loads(result[FIELDS_METADATA]),\n ),\n float(result[\"@search.score\"]),\n )\n for result in results\n ]\n return docs\n[docs] def hybrid_search(self, query: str, k: int = 4, **kwargs: Any) -> List[Document]:\n \"\"\"\n Returns the most similar indexed documents to the query text.\n Args:\n query (str): The query text for which to find similar documents.\n k (int): The number of documents to return. Default is 4.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/azuresearch.html"} {"id": "c30433a9b8b5-6", "text": "k (int): The number of documents to return. Default is 4.\n Returns:\n List[Document]: A list of documents that are most similar to the query text.\n \"\"\"\n docs_and_scores = self.hybrid_search_with_score(\n query, k=k, filters=kwargs.get(\"filters\", None)\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] def hybrid_search_with_score(\n self, query: str, k: int = 4, filters: Optional[str] = None\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query with an hybrid query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n from azure.search.documents.models import Vector\n results = self.client.search(\n search_text=query,\n vector=Vector(\n value=np.array(\n self.embedding_function(query), dtype=np.float32\n ).tolist(),\n k=k,\n fields=FIELDS_CONTENT_VECTOR,\n ),\n select=[f\"{FIELDS_ID},{FIELDS_CONTENT},{FIELDS_METADATA}\"],\n filter=filters,\n top=k,\n )\n # Convert results to Document objects\n docs = [\n (\n Document(\n page_content=result[FIELDS_CONTENT],\n metadata=json.loads(result[FIELDS_METADATA]),\n ),\n float(result[\"@search.score\"]),\n )\n for result in results\n ]\n return docs\n[docs] def semantic_hybrid_search(\n self, query: str, k: int = 4, **kwargs: Any", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/azuresearch.html"} {"id": "c30433a9b8b5-7", "text": "self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"\n Returns the most similar indexed documents to the query text.\n Args:\n query (str): The query text for which to find similar documents.\n k (int): The number of documents to return. Default is 4.\n Returns:\n List[Document]: A list of documents that are most similar to the query text.\n \"\"\"\n docs_and_scores = self.semantic_hybrid_search_with_score(\n query, k=k, filters=kwargs.get(\"filters\", None)\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] def semantic_hybrid_search_with_score(\n self, query: str, k: int = 4, filters: Optional[str] = None\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query with an hybrid query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. 
Defaults to 4.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n from azure.search.documents.models import Vector\n results = self.client.search(\n search_text=query,\n vector=Vector(\n value=np.array(\n self.embedding_function(query), dtype=np.float32\n ).tolist(),\n k=50, # Hardcoded value to maximize L2 retrieval\n fields=FIELDS_CONTENT_VECTOR,\n ),\n select=[f\"{FIELDS_ID},{FIELDS_CONTENT},{FIELDS_METADATA}\"],\n filter=filters,\n query_type=\"semantic\",\n query_language=self.semantic_query_language,\n semantic_configuration_name=self.semantic_configuration_name,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/azuresearch.html"} {"id": "c30433a9b8b5-8", "text": "query_language=self.semantic_query_language,\n semantic_configuration_name=self.semantic_configuration_name,\n query_caption=\"extractive\",\n query_answer=\"extractive\",\n top=k,\n )\n # Get Semantic Answers\n semantic_answers = results.get_answers()\n semantic_answers_dict = {}\n for semantic_answer in semantic_answers:\n semantic_answers_dict[semantic_answer.key] = {\n \"text\": semantic_answer.text,\n \"highlights\": semantic_answer.highlights,\n }\n # Convert results to Document objects\n docs = [\n (\n Document(\n page_content=result[\"content\"],\n metadata={\n **json.loads(result[\"metadata\"]),\n **{\n \"captions\": {\n \"text\": result.get(\"@search.captions\", [{}])[0].text,\n \"highlights\": result.get(\"@search.captions\", [{}])[\n 0\n ].highlights,\n }\n if result.get(\"@search.captions\")\n else {},\n \"answers\": semantic_answers_dict.get(\n json.loads(result[\"metadata\"]).get(\"key\"), \"\"\n ),\n },\n },\n ),\n float(result[\"@search.score\"]),\n )\n for result in results\n ]\n return docs\n[docs] @classmethod\n def from_texts(\n cls: Type[AzureSearch],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n azure_search_endpoint: str = \"\",\n azure_search_key: str = \"\",\n index_name: str = \"langchain-index\",\n **kwargs: Any,\n ) -> AzureSearch:\n # Creating a new Azure Search instance\n azure_search = cls(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/azuresearch.html"} {"id": "c30433a9b8b5-9", "text": "# Creating a new Azure Search instance\n azure_search = cls(\n azure_search_endpoint,\n azure_search_key,\n index_name,\n embedding.embed_query,\n )\n azure_search.add_texts(texts, metadatas, **kwargs)\n return azure_search\n[docs]class AzureSearchVectorStoreRetriever(BaseRetriever):\n vectorstore: AzureSearch\n search_type: str = \"hybrid\"\n k: int = 4\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] @root_validator()\n def validate_search_type(cls, values: Dict) -> Dict:\n \"\"\"Validate search type.\"\"\"\n if \"search_type\" in values:\n search_type = values[\"search_type\"]\n if search_type not in (\"similarity\", \"hybrid\", \"semantic_hybrid\"):\n raise ValueError(f\"search_type of {search_type} not allowed.\")\n return values\n def _get_relevant_documents(\n self,\n query: str,\n *,\n run_manager: CallbackManagerForRetrieverRun,\n ) -> List[Document]:\n if self.search_type == \"similarity\":\n docs = self.vectorstore.vector_search(query, k=self.k)\n elif self.search_type == \"hybrid\":\n docs = self.vectorstore.hybrid_search(query, k=self.k)\n elif self.search_type == \"semantic_hybrid\":\n docs = self.vectorstore.semantic_hybrid_search(query, k=self.k)\n else:\n raise 
ValueError(f\"search_type of {self.search_type} not allowed.\")\n return docs\n async def _aget_relevant_documents(\n self,\n query: str,\n *,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/azuresearch.html"} {"id": "c30433a9b8b5-10", "text": "self,\n query: str,\n *,\n run_manager: AsyncCallbackManagerForRetrieverRun,\n ) -> List[Document]:\n raise NotImplementedError(\n \"AzureSearchVectorStoreRetriever does not support async\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/azuresearch.html"} {"id": "29f3763734d8-0", "text": "Source code for langchain.vectorstores.rocksetdb\n\"\"\"Wrapper around Rockset vector database.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom enum import Enum\nfrom typing import Any, Iterable, List, Optional, Tuple\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nlogger = logging.getLogger(__name__)\n[docs]class Rockset(VectorStore):\n \"\"\"Wrapper arpund Rockset vector database.\n To use, you should have the `rockset` python package installed. Note that to use\n this, the collection being used must already exist in your Rockset instance.\n You must also ensure you use a Rockset ingest transformation to apply\n `VECTOR_ENFORCE` on the column being used to store `embedding_key` in the\n collection.\n See: https://rockset.com/blog/introducing-vector-search-on-rockset/ for more details\n Everything below assumes `commons` Rockset workspace.\n TODO: Add support for workspace args.\n Example:\n .. code-block:: python\n from langchain.vectorstores import Rockset\n from langchain.embeddings.openai import OpenAIEmbeddings\n import rockset\n # Make sure you use the right host (region) for your Rockset instance\n # and APIKEY has both read-write access to your collection.\n rs = rockset.RocksetClient(host=rockset.Regions.use1a1, api_key=\"***\")\n collection_name = \"langchain_demo\"\n embeddings = OpenAIEmbeddings()\n vectorstore = Rockset(rs, collection_name, embeddings,\n \"description\", \"description_embedding\")\n \"\"\"\n def __init__(\n self,\n client: Any,\n embeddings: Embeddings,\n collection_name: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/rocksetdb.html"} {"id": "29f3763734d8-1", "text": "client: Any,\n embeddings: Embeddings,\n collection_name: str,\n text_key: str,\n embedding_key: str,\n ):\n \"\"\"Initialize with Rockset client.\n Args:\n client: Rockset client object\n collection: Rockset collection to insert docs / query\n embeddings: Langchain Embeddings object to use to generate\n embedding for given text.\n text_key: column in Rockset collection to use to store the text\n embedding_key: column in Rockset collection to use to store the embedding.\n Note: We must apply `VECTOR_ENFORCE()` on this column via\n Rockset ingest transformation.\n \"\"\"\n try:\n from rockset import RocksetClient\n except ImportError:\n raise ImportError(\n \"Could not import rockset client python package. \"\n \"Please install it with `pip install rockset`.\"\n )\n if not isinstance(client, RocksetClient):\n raise ValueError(\n f\"client should be an instance of rockset.RocksetClient, \"\n f\"got {type(client)}\"\n )\n # TODO: check that `collection_name` exists in rockset. 
Create if not.\n self._client = client\n self._collection_name = collection_name\n self._embeddings = embeddings\n self._text_key = text_key\n self._embedding_key = embedding_key\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n batch_size: int = 32,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/rocksetdb.html"} {"id": "29f3763734d8-2", "text": "\"\"\"Run more texts through the embeddings and add to the vectorstore\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n ids: Optional list of ids to associate with the texts.\n batch_size: Send documents in batches to rockset.\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n batch: list[dict] = []\n stored_ids = []\n for i, text in enumerate(texts):\n if len(batch) == batch_size:\n stored_ids += self._write_documents_to_rockset(batch)\n batch = []\n doc = {}\n if metadatas and len(metadatas) > i:\n doc = metadatas[i]\n if ids and len(ids) > i:\n doc[\"_id\"] = ids[i]\n doc[self._text_key] = text\n doc[self._embedding_key] = self._embeddings.embed_query(text)\n batch.append(doc)\n if len(batch) > 0:\n stored_ids += self._write_documents_to_rockset(batch)\n batch = []\n return stored_ids\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n client: Any = None,\n collection_name: str = \"\",\n text_key: str = \"\",\n embedding_key: str = \"\",\n ids: Optional[List[str]] = None,\n batch_size: int = 32,\n **kwargs: Any,\n ) -> Rockset:\n \"\"\"Create Rockset wrapper with existing texts.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/rocksetdb.html"} {"id": "29f3763734d8-3", "text": ") -> Rockset:\n \"\"\"Create Rockset wrapper with existing texts.\n This is intended as a quicker way to get started.\n \"\"\"\n # Sanitize imputs\n assert client is not None, \"Rockset Client cannot be None\"\n assert collection_name, \"Collection name cannot be empty\"\n assert text_key, \"Text key name cannot be empty\"\n assert embedding_key, \"Embedding key cannot be empty\"\n rockset = cls(client, embedding, collection_name, text_key, embedding_key)\n rockset.add_texts(texts, metadatas, ids, batch_size)\n return rockset\n # Rockset supports these vector distance functions.\n[docs] class DistanceFunction(Enum):\n COSINE_SIM = \"COSINE_SIM\"\n EUCLIDEAN_DIST = \"EUCLIDEAN_DIST\"\n DOT_PRODUCT = \"DOT_PRODUCT\"\n # how to sort results for \"similarity\"\n[docs] def order_by(self) -> str:\n if self.value == \"EUCLIDEAN_DIST\":\n return \"ASC\"\n return \"DESC\"\n[docs] def similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n distance_func: DistanceFunction = DistanceFunction.COSINE_SIM,\n where_str: Optional[str] = None,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Perform a similarity search with Rockset\n Args:\n query (str): Text to look up documents similar to.\n distance_func (DistanceFunction): how to compute distance between two\n vectors in Rockset.\n k (int, optional): Top K neighbors to retrieve. 
Defaults to 4.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/rocksetdb.html"} {"id": "29f3763734d8-4", "text": "k (int, optional): Top K neighbors to retrieve. Defaults to 4.\n where_str (Optional[str], optional): Metadata filters supplied as a\n SQL `where` condition string. Defaults to None.\n eg. \"price<=70.0 AND brand='Nintendo'\"\n NOTE: Please do not let end-user to fill this and always be aware\n of SQL injection.\n Returns:\n List[Tuple[Document, float]]: List of documents with their relevance score\n \"\"\"\n return self.similarity_search_by_vector_with_relevance_scores(\n self._embeddings.embed_query(query),\n k,\n distance_func,\n where_str,\n **kwargs,\n )\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n distance_func: DistanceFunction = DistanceFunction.COSINE_SIM,\n where_str: Optional[str] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Same as `similarity_search_with_relevance_scores` but\n doesn't return the scores.\n \"\"\"\n return self.similarity_search_by_vector(\n self._embeddings.embed_query(query),\n k,\n distance_func,\n where_str,\n **kwargs,\n )\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n distance_func: DistanceFunction = DistanceFunction.COSINE_SIM,\n where_str: Optional[str] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Accepts a query_embedding (vector), and returns documents with\n similar embeddings.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/rocksetdb.html"} {"id": "29f3763734d8-5", "text": "\"\"\"Accepts a query_embedding (vector), and returns documents with\n similar embeddings.\"\"\"\n docs_and_scores = self.similarity_search_by_vector_with_relevance_scores(\n embedding, k, distance_func, where_str, **kwargs\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] def similarity_search_by_vector_with_relevance_scores(\n self,\n embedding: List[float],\n k: int = 4,\n distance_func: DistanceFunction = DistanceFunction.COSINE_SIM,\n where_str: Optional[str] = None,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Accepts a query_embedding (vector), and returns documents with\n similar embeddings along with their relevance scores.\"\"\"\n q_str = self._build_query_sql(embedding, distance_func, k, where_str)\n try:\n query_response = self._client.Queries.query(sql={\"query\": q_str})\n except Exception as e:\n logger.error(\"Exception when querying Rockset: %s\\n\", e)\n return []\n finalResult: list[Tuple[Document, float]] = []\n for document in query_response.results:\n metadata = {}\n assert isinstance(\n document, dict\n ), \"document should be of type `dict[str,Any]`. But found: `{}`\".format(\n type(document)\n )\n for k, v in document.items():\n if k == self._text_key:\n assert isinstance(\n v, str\n ), \"page content stored in column `{}` must be of type `str`. \\\n But found: `{}`\".format(\n self._text_key, type(v)\n )\n page_content = v", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/rocksetdb.html"} {"id": "29f3763734d8-6", "text": "self._text_key, type(v)\n )\n page_content = v\n elif k == \"dist\":\n assert isinstance(\n v, float\n ), \"Computed distance between vectors must of type `float`. \\\n But found {}\".format(\n type(v)\n )\n score = v\n elif k not in [\"_id\", \"_event_time\", \"_meta\"]:\n # These columns are populated by Rockset when documents are\n # inserted. 
                    # These columns are populated by Rockset when documents are
                    # inserted. No need to return them in the metadata dict.
                    metadata[k] = v
            finalResult.append(
                (Document(page_content=page_content, metadata=metadata), score)
            )
        return finalResult

    # Helper functions
    def _build_query_sql(
        self,
        query_embedding: List[float],
        distance_func: DistanceFunction,
        k: int = 4,
        where_str: Optional[str] = None,
    ) -> str:
        """Build the Rockset SQL query that finds vectors similar to query_embedding."""
        q_embedding_str = ",".join(map(str, query_embedding))
        distance_str = f"""{distance_func.value}({self._embedding_key}, \
[{q_embedding_str}]) as dist"""
        where_str = f"WHERE {where_str}\n" if where_str else ""
        return f"""\
SELECT * EXCEPT({self._embedding_key}), {distance_str}
FROM {self._collection_name}
{where_str}\
ORDER BY dist {distance_func.order_by()}
LIMIT {str(k)}
"""

    def _write_documents_to_rockset(self, batch: List[dict]) -> List[str]:
        add_doc_res = self._client.Documents.add_documents(
            collection=self._collection_name, data=batch
        )
        return [doc_status._id for doc_status in add_doc_res.data]

[docs]    def delete_texts(self, ids: List[str]) -> None:
        """Delete a list of docs from the Rockset collection."""
        try:
            from rockset.models import DeleteDocumentsRequestData
        except ImportError:
            raise ImportError(
                "Could not import rockset client python package. "
                "Please install it with `pip install rockset`."
            )
        self._client.Documents.delete_documents(
            collection=self._collection_name,
            data=[DeleteDocumentsRequestData(id=i) for i in ids],
        )
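For orientation before the next module, a minimal usage sketch of the Rockset wrapper above follows. The region, API-key handling, collection name, field names, and embedding model are illustrative assumptions rather than part of the module, and the Rockset collection is assumed to already exist.

.. code-block:: python

    # Minimal sketch (assumptions: an existing collection "langchain_demo"
    # with "text" and "embedding" fields, a ROCKSET_API_KEY environment
    # variable, and the us-west-2 region; adjust to your own setup).
    import os

    import rockset
    from langchain.embeddings.openai import OpenAIEmbeddings
    from langchain.vectorstores import Rockset

    client = rockset.RocksetClient(
        host=rockset.Regions.usw2a1, api_key=os.environ["ROCKSET_API_KEY"]
    )
    vectorstore = Rockset.from_texts(
        texts=["hello rockset", "goodbye rockset"],
        embedding=OpenAIEmbeddings(),
        client=client,
        collection_name="langchain_demo",
        text_key="text",
        embedding_key="embedding",
    )
    docs = vectorstore.similarity_search("a greeting", k=1)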
Source code for langchain.vectorstores.marqo

"""Wrapper around the Marqo vector database."""
from __future__ import annotations
import json
import uuid
from typing import (
    TYPE_CHECKING,
    Any,
    Callable,
    Dict,
    Iterable,
    List,
    Optional,
    Tuple,
    Type,
    Union,
)
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore

if TYPE_CHECKING:
    import marqo

[docs]class Marqo(VectorStore):
    """Wrapper around the Marqo database.
    Marqo indexes have their own models associated with them to generate your
    embeddings. This means that you can select from a range of different models
    and also use CLIP models to create multimodal indexes
    with images and text together.
    Marqo also supports more advanced queries with multiple weighted terms; see
    https://docs.marqo.ai/latest/#searching-using-weights-in-queries.
    This class can flexibly take strings or dictionaries for weighted queries
    in its similarity search methods.
    To use, you should have the `marqo` python package installed; you can do this with
    `pip install marqo`.
    Example:
        .. code-block:: python

            import marqo
            from langchain.vectorstores import Marqo
            client = marqo.Client(url=os.environ["MARQO_URL"], ...)
            vectorstore = Marqo(client, index_name)
    """

    def __init__(
        self,
        client: marqo.Client,
        index_name: str,
        add_documents_settings: Optional[Dict[str, Any]] = None,
        searchable_attributes: Optional[List[str]] = None,
        page_content_builder: Optional[Callable[[Dict[str, Any]], str]] = None,
    ):
        """Initialize with Marqo client."""
        try:
            import marqo
        except ImportError:
            raise ValueError(
                "Could not import marqo python package. "
                "Please install it with `pip install marqo`."
            )
        if not isinstance(client, marqo.Client):
            raise ValueError(
                f"client should be an instance of marqo.Client, got {type(client)}"
            )
        self._client = client
        self._index_name = index_name
        self._add_documents_settings = (
            {} if add_documents_settings is None else add_documents_settings
        )
        self._searchable_attributes = searchable_attributes
        self.page_content_builder = page_content_builder
        self._non_tensor_fields = ["metadata"]
        self._document_batch_size = 1024

[docs]    def add_texts(
        self,
        texts: Iterable[str],
        metadatas: Optional[List[dict]] = None,
        **kwargs: Any,
    ) -> List[str]:
        """Upload texts with metadata (properties) to Marqo.
        You can either have Marqo generate ids for each document or you can provide
        your own by including a "_id" field in the metadata objects.

        Args:
            texts (Iterable[str]): an iterable of texts - assumed to preserve an
                order that matches the metadatas.
            metadatas (Optional[List[dict]], optional): a list of metadatas.

        Raises:
            ValueError: if metadatas is provided and the number of metadatas differs
                from the number of texts.

        Returns:
            List[str]: The list of ids that were added.
        """
        if self._client.index(self._index_name).get_settings()["index_defaults"][
            "treat_urls_and_pointers_as_images"
        ]:
            raise ValueError(
                "Marqo.add_texts is disabled for multimodal indexes. To add documents "
                "with a multimodal index use the Python client for Marqo directly."
            )
        documents: List[Dict[str, str]] = []
        num_docs = 0
        for i, text in enumerate(texts):
            doc = {
                "text": text,
                "metadata": json.dumps(metadatas[i]) if metadatas else json.dumps({}),
            }
            documents.append(doc)
            num_docs += 1
        ids = []
        for i in range(0, num_docs, self._document_batch_size):
            response = self._client.index(self._index_name).add_documents(
                documents[i : i + self._document_batch_size],
                non_tensor_fields=self._non_tensor_fields,
                **self._add_documents_settings,
            )
            if response["errors"]:
                err_msg = (
                    f"Error in upload for documents in index range [{i},"
                    f"{i + self._document_batch_size}], "
                    f"check Marqo logs."
                )
                raise RuntimeError(err_msg)
            ids += [item["_id"] for item in response["items"]]
        return ids

[docs]    def similarity_search(
        self,
        query: Union[str, Dict[str, float]],
        k: int = 4,
        **kwargs: Any,
    ) -> List[Document]:
        """Search the marqo index for the most similar documents.

        Args:
            query (Union[str, Dict[str, float]]): The query for the search, either
                as a string or a weighted query.
            k (int, optional): The number of documents to return. Defaults to 4.

        Returns:
            List[Document]: k documents ordered from best to worst match.
        """
        results = self.marqo_similarity_search(query=query, k=k)
        documents = self._construct_documents_from_results_without_score(results)
        return documents

[docs]    def similarity_search_with_score(
        self,
        query: Union[str, Dict[str, float]],
        k: int = 4,
    ) -> List[Tuple[Document, float]]:
        """Return documents from Marqo that are similar to the query as well
        as their scores.

        Args:
            query (Union[str, Dict[str, float]]): The query to search with, either
                as a string or a weighted query.
            k (int, optional): The number of documents to return. Defaults to 4.

        Returns:
            List[Tuple[Document, float]]: The matching documents and their scores,
                ordered by descending score.
        """
        results = self.marqo_similarity_search(query=query, k=k)
        scored_documents = self._construct_documents_from_results_with_score(results)
        return scored_documents

[docs]    def bulk_similarity_search(
        self,
        queries: Iterable[Union[str, Dict[str, float]]],
        k: int = 4,
        **kwargs: Any,
    ) -> List[List[Document]]:
        """Search the marqo index for the most similar documents in bulk with multiple
        queries.

        Args:
            queries (Iterable[Union[str, Dict[str, float]]]): An iterable of queries to
                execute in bulk; queries in the list can be strings or dictionaries of
                weighted queries.
            k (int, optional): The number of documents to return for each query.
                Defaults to 4.

        Returns:
            List[List[Document]]: A list of results for each query.
        """
        bulk_results = self.marqo_bulk_similarity_search(queries=queries, k=k)
        bulk_documents: List[List[Document]] = []
        for results in bulk_results["result"]:
            documents = self._construct_documents_from_results_without_score(results)
            bulk_documents.append(documents)
        return bulk_documents

[docs]    def bulk_similarity_search_with_score(
        self,
        queries: Iterable[Union[str, Dict[str, float]]],
        k: int = 4,
        **kwargs: Any,
    ) -> List[List[Tuple[Document, float]]]:
        """Return documents from Marqo that are similar to the query as well as
        their scores using a batch of queries.

        Args:
            queries (Iterable[Union[str, Dict[str, float]]]): An iterable of queries
                to execute in bulk; queries in the list can be strings or dictionaries
                of weighted queries.
            k (int, optional): The number of documents to return. Defaults to 4.

        Returns:
            List[List[Tuple[Document, float]]]: A list of lists of the matching
                documents and their scores for each query.
        """
        bulk_results = self.marqo_bulk_similarity_search(queries=queries, k=k)
        bulk_documents: List[List[Tuple[Document, float]]] = []
        for results in bulk_results["result"]:
            documents = self._construct_documents_from_results_with_score(results)
            bulk_documents.append(documents)
        return bulk_documents

    def _construct_documents_from_results_with_score(
        self, results: Dict[str, List[Dict[str, str]]]
    ) -> List[Tuple[Document, Any]]:
        """Helper to convert Marqo results into document-score pairs.

        Args:
            results (Dict[str, List[Dict[str, str]]]): A Marqo results object
                with the 'hits'.

        Returns:
            List[Tuple[Document, Any]]: The documents paired with their scores.
        """
        documents: List[Tuple[Document, Any]] = []
        for res in results["hits"]:
            if self.page_content_builder is None:
                text = res["text"]
            else:
                text = self.page_content_builder(res)
            metadata = json.loads(res.get("metadata", "{}"))
            documents.append(
                (Document(page_content=text, metadata=metadata), res["_score"])
            )
        return documents

    def _construct_documents_from_results_without_score(
        self, results: Dict[str, List[Dict[str, str]]]
    ) -> List[Document]:
        """Helper to convert Marqo results into documents, without scores.

        Args:
            results (Dict[str, List[Dict[str, str]]]): A Marqo results object
                with the 'hits'.

        Returns:
            List[Document]: The documents.
        """
        documents: List[Document] = []
        for res in results["hits"]:
            if self.page_content_builder is None:
                text = res["text"]
            else:
                text = self.page_content_builder(res)
            metadata = json.loads(res.get("metadata", "{}"))
            documents.append(Document(page_content=text, metadata=metadata))
        return documents

[docs]    def marqo_similarity_search(
        self,
        query: Union[str, Dict[str, float]],
        k: int = 4,
    ) -> Dict[str, List[Dict[str, str]]]:
        """Return documents from Marqo, exposing Marqo's output directly.

        Args:
            query (Union[str, Dict[str, float]]): The query to search with.
            k (int, optional): The number of documents to return. Defaults to 4.

        Returns:
            Dict[str, List[Dict[str, str]]]: The hits from Marqo.
        """
        results = self._client.index(self._index_name).search(
            q=query, searchable_attributes=self._searchable_attributes, limit=k
        )
        return results

[docs]    def marqo_bulk_similarity_search(
        self, queries: Iterable[Union[str, Dict[str, float]]], k: int = 4
    ) -> Dict[str, List[Dict[str, List[Dict[str, str]]]]]:
        """Return documents from Marqo using a bulk search, exposing Marqo's
        output directly.

        Args:
            queries (Iterable[Union[str, Dict[str, float]]]): A list of queries.
            k (int, optional): The number of documents to return for each query.
                Defaults to 4.

        Returns:
            Dict[str, List[Dict[str, List[Dict[str, str]]]]]: A bulk search results
                object.
        """
        bulk_results = self._client.bulk_search(
            [
                {
                    "index": self._index_name,
                    "q": query,
                    "searchableAttributes": self._searchable_attributes,
                    "limit": k,
                }
                for query in queries
            ]
        )
        return bulk_results

[docs]    @classmethod
    def from_documents(
        cls: Type[Marqo],
        documents: List[Document],
        embedding: Union[Embeddings, None] = None,
        **kwargs: Any,
    ) -> Marqo:
        """Return VectorStore initialized from documents. Note that Marqo does not
        need embeddings; we retain the parameter to adhere to the Liskov substitution
        principle.

        Args:
            documents (List[Document]): Input documents
            embedding (Any, optional): Embeddings (not required). Defaults to None.

        Returns:
            VectorStore: A Marqo vectorstore
        """
        texts = [d.page_content for d in documents]
        metadatas = [d.metadata for d in documents]
        return cls.from_texts(texts, metadatas=metadatas, **kwargs)

[docs]    @classmethod
    def from_texts(
        cls,
        texts: List[str],
        embedding: Any = None,
        metadatas: Optional[List[dict]] = None,
        index_name: str = "",
        url: str = "http://localhost:8882",
        api_key: str = "",
        add_documents_settings: Optional[Dict[str, Any]] = {},
        searchable_attributes: Optional[List[str]] = None,
        page_content_builder: Optional[Callable[[Dict[str, str]], str]] = None,
        index_settings: Optional[Dict[str, Any]] = {},
        verbose: bool = True,
        **kwargs: Any,
    ) -> Marqo:
        """Return Marqo initialized from texts. Note that Marqo does not need
        embeddings; we retain the parameter to adhere to the Liskov
        substitution principle.
        This is a quick way to get started with marqo - simply provide your texts and
        metadatas and this will create an instance of the data store and index the
        provided data.
        To know the ids of your documents with this approach you will need to include
        them under the key "_id" in your metadatas for each text.

        Example:
            .. code-block:: python

                from langchain.vectorstores import Marqo

                datastore = Marqo.from_texts(texts=['text'], index_name='my-first-index',
                    url='http://localhost:8882')

        Args:
            texts (List[str]): A list of texts to index into marqo upon creation.
            embedding (Any, optional): Embeddings (not required). Defaults to None.
            index_name (str, optional): The name of the index to use; if none is
                provided then one will be created with a UUID. Defaults to "".
            url (str, optional): The URL for Marqo. Defaults to "http://localhost:8882".
            api_key (str, optional): The API key for Marqo. Defaults to "".
            metadatas (Optional[List[dict]], optional): A list of metadatas, to
                accompany the texts. Defaults to None.
            add_documents_settings (Optional[Dict[str, Any]], optional): Settings
                for adding documents, see
                https://docs.marqo.ai/0.0.16/API-Reference/documents/#query-parameters.
                Defaults to {}.
            index_settings (Optional[Dict[str, Any]], optional): Index settings if
                the index doesn't exist, see
                https://docs.marqo.ai/0.0.16/API-Reference/indexes/#index-defaults-object.
                Defaults to {}.

        Returns:
            Marqo: An instance of the Marqo vector store
        """
        try:
            import marqo
        except ImportError:
            raise ValueError(
                "Could not import marqo python package. "
                "Please install it with `pip install marqo`."
            )
        if not index_name:
            index_name = str(uuid.uuid4())
        client = marqo.Client(url=url, api_key=api_key)
        try:
            client.create_index(index_name, settings_dict=index_settings)
            if verbose:
                print(f"Created {index_name} successfully.")
        except Exception:
            if verbose:
                print(f"Index {index_name} exists.")
        instance: Marqo = cls(
            client,
            index_name,
            searchable_attributes=searchable_attributes,
            add_documents_settings=add_documents_settings,
            page_content_builder=page_content_builder,
        )
        instance.add_texts(texts, metadatas)
        return instance

[docs]    def get_indexes(self) -> List[Dict[str, str]]:
        """Helper to see your available indexes in marqo, useful if the
        from_texts method was used without an index name specified.

        Returns:
            List[Dict[str, str]]: The list of indexes
        """
        return self._client.get_indexes()["results"]

[docs]    def get_number_of_documents(self) -> int:
        """Helper to see the number of documents in the index.

        Returns:
            int: The number of documents
        """
        return self._client.index(self._index_name).get_stats()["numberOfDocuments"]
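Since the search methods above accept either a plain string or a dictionary of weighted terms, a short sketch of both query forms follows. The local URL and index name are illustrative assumptions; a running Marqo server and an existing text index are required.

.. code-block:: python

    # Illustrative sketch of plain vs. weighted queries (assumes a Marqo
    # server at localhost:8882 and an existing text index "demo-index").
    import marqo
    from langchain.vectorstores import Marqo

    client = marqo.Client(url="http://localhost:8882")
    store = Marqo(client, index_name="demo-index")
    store.add_texts(["a photo of a cat", "a photo of a dog"])

    # Plain string query.
    docs = store.similarity_search("cat", k=1)

    # Weighted query: a positive weight boosts a term, a negative weight
    # demotes it.
    weighted = {"animal photo": 1.0, "dog": -0.5}
    docs_and_scores = store.similarity_search_with_score(weighted, k=2)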
Source code for langchain.vectorstores.pinecone

"""Wrapper around Pinecone vector database."""
from __future__ import annotations
import logging
import uuid
from typing import Any, Callable, Iterable, List, Optional, Tuple

import numpy as np

from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore
from langchain.vectorstores.utils import maximal_marginal_relevance

logger = logging.getLogger(__name__)

[docs]class Pinecone(VectorStore):
    """Wrapper around Pinecone vector database.
    To use, you should have the ``pinecone-client`` python package installed.
    Example:
        .. code-block:: python

            from langchain.vectorstores import Pinecone
            from langchain.embeddings.openai import OpenAIEmbeddings
            import pinecone

            # The environment should be the one specified next to the API key
            # in your Pinecone console
            pinecone.init(api_key="***", environment="...")
            index = pinecone.Index("langchain-demo")
            embeddings = OpenAIEmbeddings()
            vectorstore = Pinecone(index, embeddings.embed_query, "text")
    """

    def __init__(
        self,
        index: Any,
        embedding_function: Callable,
        text_key: str,
        namespace: Optional[str] = None,
    ):
        """Initialize with Pinecone client."""
        try:
            import pinecone
        except ImportError:
            raise ValueError(
                "Could not import pinecone python package. "
                "Please install it with `pip install pinecone-client`."
            )
        if not isinstance(index, pinecone.index.Index):
            raise ValueError(
                f"client should be an instance of pinecone.index.Index, "
                f"got {type(index)}"
            )
        self._index = index
        self._embedding_function = embedding_function
        self._text_key = text_key
        self._namespace = namespace

[docs]    def add_texts(
        self,
        texts: Iterable[str],
        metadatas: Optional[List[dict]] = None,
        ids: Optional[List[str]] = None,
        namespace: Optional[str] = None,
        batch_size: int = 32,
        **kwargs: Any,
    ) -> List[str]:
        """Run more texts through the embeddings and add to the vectorstore.

        Args:
            texts: Iterable of strings to add to the vectorstore.
            metadatas: Optional list of metadatas associated with the texts.
            ids: Optional list of ids to associate with the texts.
            namespace: Optional pinecone namespace to add the texts to.

        Returns:
            List of ids from adding the texts into the vectorstore.
        """
        if namespace is None:
            namespace = self._namespace
        # Embed and create the documents
        docs = []
        ids = ids or [str(uuid.uuid4()) for _ in texts]
        for i, text in enumerate(texts):
            embedding = self._embedding_function(text)
            metadata = metadatas[i] if metadatas else {}
            metadata[self._text_key] = text
            docs.append((ids[i], embedding, metadata))
        # upsert to Pinecone
        self._index.upsert(vectors=docs, namespace=namespace, batch_size=batch_size)
        return ids

[docs]    def similarity_search_with_score(
        self,
        query: str,
        k: int = 4,
        filter: Optional[dict] = None,
        namespace: Optional[str] = None,
    ) -> List[Tuple[Document, float]]:
        """Return pinecone documents most similar to query, along with scores.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            filter: Dictionary of argument(s) to filter on metadata
            namespace: Namespace to search in. Default will search in '' namespace.

        Returns:
            List of Documents most similar to the query and score for each
        """
        if namespace is None:
            namespace = self._namespace
        query_obj = self._embedding_function(query)
        docs = []
        results = self._index.query(
            [query_obj],
            top_k=k,
            include_metadata=True,
            namespace=namespace,
            filter=filter,
        )
        for res in results["matches"]:
            metadata = res["metadata"]
            if self._text_key in metadata:
                text = metadata.pop(self._text_key)
                score = res["score"]
                docs.append((Document(page_content=text, metadata=metadata), score))
            else:
                logger.warning(
                    f"Found document with no `{self._text_key}` key. Skipping."
                )
        return docs

[docs]    def similarity_search(
        self,
        query: str,
        k: int = 4,
        filter: Optional[dict] = None,
        namespace: Optional[str] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Return pinecone documents most similar to query.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            filter: Dictionary of argument(s) to filter on metadata
            namespace: Namespace to search in. Default will search in '' namespace.

        Returns:
            List of Documents most similar to the query.
        """
        docs_and_scores = self.similarity_search_with_score(
            query, k=k, filter=filter, namespace=namespace, **kwargs
        )
        return [doc for doc, _ in docs_and_scores]

    def _similarity_search_with_relevance_scores(
        self,
        query: str,
        k: int = 4,
        **kwargs: Any,
    ) -> List[Tuple[Document, float]]:
        kwargs.pop("score_threshold", None)
        return self.similarity_search_with_score(query, k, **kwargs)

[docs]    def max_marginal_relevance_search_by_vector(
        self,
        embedding: List[float],
        k: int = 4,
        fetch_k: int = 20,
        lambda_mult: float = 0.5,
        filter: Optional[dict] = None,
        namespace: Optional[str] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Return docs selected using the maximal marginal relevance.
        Maximal marginal relevance optimizes for similarity to query AND diversity
        among selected documents.

        Args:
            embedding: Embedding to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            fetch_k: Number of Documents to fetch to pass to MMR algorithm.
            lambda_mult: Number between 0 and 1 that determines the degree
                of diversity among the results with 0 corresponding
                to maximum diversity and 1 to minimum diversity.
                Defaults to 0.5.

        Returns:
            List of Documents selected by maximal marginal relevance.
        """
        if namespace is None:
            namespace = self._namespace
        results = self._index.query(
            [embedding],
            top_k=fetch_k,
            include_values=True,
            include_metadata=True,
            namespace=namespace,
            filter=filter,
        )
        mmr_selected = maximal_marginal_relevance(
            np.array([embedding], dtype=np.float32),
            [item["values"] for item in results["matches"]],
            k=k,
            lambda_mult=lambda_mult,
        )
        selected = [results["matches"][i]["metadata"] for i in mmr_selected]
        return [
            Document(page_content=metadata.pop(self._text_key), metadata=metadata)
            for metadata in selected
        ]

[docs]    def max_marginal_relevance_search(
        self,
        query: str,
        k: int = 4,
        fetch_k: int = 20,
        lambda_mult: float = 0.5,
        filter: Optional[dict] = None,
        namespace: Optional[str] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Return docs selected using the maximal marginal relevance.
        Maximal marginal relevance optimizes for similarity to query AND diversity
        among selected documents.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            fetch_k: Number of Documents to fetch to pass to MMR algorithm.
            lambda_mult: Number between 0 and 1 that determines the degree
                of diversity among the results with 0 corresponding
                to maximum diversity and 1 to minimum diversity.
                Defaults to 0.5.

        Returns:
            List of Documents selected by maximal marginal relevance.
        """
        embedding = self._embedding_function(query)
        return self.max_marginal_relevance_search_by_vector(
            embedding, k, fetch_k, lambda_mult, filter, namespace
        )

[docs]    @classmethod
    def from_texts(
        cls,
        texts: List[str],
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
        ids: Optional[List[str]] = None,
        batch_size: int = 32,
        text_key: str = "text",
        index_name: Optional[str] = None,
        namespace: Optional[str] = None,
        **kwargs: Any,
    ) -> Pinecone:
        """Construct Pinecone wrapper from raw documents.

        This is a user-friendly interface that:
            1. Embeds documents.
            2. Adds the documents to a provided Pinecone index.

        This is intended to be a quick way to get started.

        Example:
            .. code-block:: python

                from langchain import Pinecone
                from langchain.embeddings import OpenAIEmbeddings
                import pinecone

                # The environment should be the one specified next to the API key
                # in your Pinecone console
                pinecone.init(api_key="***", environment="...")
                embeddings = OpenAIEmbeddings()
                pinecone = Pinecone.from_texts(
                    texts,
                    embeddings,
                    index_name="langchain-demo"
                )
        """
        try:
            import pinecone
        except ImportError:
            raise ValueError(
                "Could not import pinecone python package. "
                "Please install it with `pip install pinecone-client`."
            )
        indexes = pinecone.list_indexes()  # checks if provided index exists
        if index_name in indexes:
            index = pinecone.Index(index_name)
        elif len(indexes) == 0:
            raise ValueError(
                "No active indexes found in your Pinecone project, "
                "are you sure you're using the right API key and environment?"
            )
        else:
            raise ValueError(
                f"Index '{index_name}' not found in your Pinecone project. "
                f"Did you mean one of the following indexes: {', '.join(indexes)}"
            )
        for i in range(0, len(texts), batch_size):
            # set end position of batch
            i_end = min(i + batch_size, len(texts))
            # get batch of texts and ids
            lines_batch = texts[i:i_end]
            # create ids if not provided
            if ids:
                ids_batch = ids[i:i_end]
            else:
                ids_batch = [str(uuid.uuid4()) for n in range(i, i_end)]
            # create embeddings
            embeds = embedding.embed_documents(lines_batch)
            # prep metadata and upsert batch
            if metadatas:
                metadata = metadatas[i:i_end]
            else:
                metadata = [{} for _ in range(i, i_end)]
            for j, line in enumerate(lines_batch):
                metadata[j][text_key] = line
            to_upsert = zip(ids_batch, embeds, metadata)
            # upsert to Pinecone
            index.upsert(vectors=list(to_upsert), namespace=namespace)
        return cls(index, embedding.embed_query, text_key, namespace)

[docs]    @classmethod
    def from_existing_index(
        cls,
        index_name: str,
        embedding: Embeddings,
        text_key: str = "text",
        namespace: Optional[str] = None,
    ) -> Pinecone:
        """Load pinecone vectorstore from index name."""
        try:
            import pinecone
        except ImportError:
            raise ValueError(
\"\n \"Please install it with `pip install pinecone-client`.\"\n )\n return cls(\n pinecone.Index(index_name), embedding.embed_query, text_key, namespace\n )\n[docs] def delete(\n self,\n ids: Optional[List[str]] = None,\n delete_all: Optional[bool] = None,\n namespace: Optional[str] = None,\n filter: Optional[dict] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Delete by vector IDs or filter.\n Args:\n ids: List of ids to delete.\n filter: Dictionary of conditions to filter vectors to delete.\n \"\"\"\n if namespace is None:\n namespace = self._namespace\n if delete_all:\n self._index.delete(delete_all=True, namespace=namespace, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pinecone.html"} {"id": "5e1bce9d3d44-8", "text": "self._index.delete(delete_all=True, namespace=namespace, **kwargs)\n elif ids is not None:\n chunk_size = 1000\n for i in range(0, len(ids), chunk_size):\n chunk = ids[i : i + chunk_size]\n self._index.delete(ids=chunk, namespace=namespace, **kwargs)\n elif filter is not None:\n self._index.delete(filter=filter, namespace=namespace, **kwargs)\n else:\n raise ValueError(\"Either ids, delete_all, or filter must be provided.\")\n return None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pinecone.html"} {"id": "bea56205a983-0", "text": "Source code for langchain.vectorstores.singlestoredb\n\"\"\"Wrapper around SingleStore DB.\"\"\"\nfrom __future__ import annotations\nimport enum\nimport json\nfrom typing import Any, ClassVar, Collection, Iterable, List, Optional, Tuple, Type\nfrom sqlalchemy.pool import QueuePool\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForRetrieverRun,\n CallbackManagerForRetrieverRun,\n)\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore, VectorStoreRetriever\n[docs]class DistanceStrategy(str, enum.Enum):\n \"\"\"Enumerator of the Distance strategies for SingleStoreDB.\"\"\"\n EUCLIDEAN_DISTANCE = \"EUCLIDEAN_DISTANCE\"\n DOT_PRODUCT = \"DOT_PRODUCT\"\nDEFAULT_DISTANCE_STRATEGY = DistanceStrategy.DOT_PRODUCT\nORDERING_DIRECTIVE: dict = {\n DistanceStrategy.EUCLIDEAN_DISTANCE: \"\",\n DistanceStrategy.DOT_PRODUCT: \"DESC\",\n}\n[docs]class SingleStoreDB(VectorStore):\n \"\"\"\n This class serves as a Pythonic interface to the SingleStore DB database.\n The prerequisite for using this class is the installation of the ``singlestoredb``\n Python package.\n The SingleStoreDB vectorstore can be created by providing an embedding function and\n the relevant parameters for the database connection, connection pool, and\n optionally, the names of the table and the fields to use.\n \"\"\"\n def _get_connection(self: SingleStoreDB) -> Any:\n try:\n import singlestoredb as s2\n except ImportError:\n raise ImportError(\n \"Could not import singlestoredb python package. 
\"\n \"Please install it with `pip install singlestoredb`.\"\n )\n return s2.connect(**self.connection_kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"} {"id": "bea56205a983-1", "text": ")\n return s2.connect(**self.connection_kwargs)\n def __init__(\n self,\n embedding: Embeddings,\n *,\n distance_strategy: DistanceStrategy = DEFAULT_DISTANCE_STRATEGY,\n table_name: str = \"embeddings\",\n content_field: str = \"content\",\n metadata_field: str = \"metadata\",\n vector_field: str = \"vector\",\n pool_size: int = 5,\n max_overflow: int = 10,\n timeout: float = 30,\n **kwargs: Any,\n ):\n \"\"\"Initialize with necessary components.\n Args:\n embedding (Embeddings): A text embedding model.\n distance_strategy (DistanceStrategy, optional):\n Determines the strategy employed for calculating\n the distance between vectors in the embedding space.\n Defaults to DOT_PRODUCT.\n Available options are:\n - DOT_PRODUCT: Computes the scalar product of two vectors.\n This is the default behavior\n - EUCLIDEAN_DISTANCE: Computes the Euclidean distance between\n two vectors. This metric considers the geometric distance in\n the vector space, and might be more suitable for embeddings\n that rely on spatial relationships.\n table_name (str, optional): Specifies the name of the table in use.\n Defaults to \"embeddings\".\n content_field (str, optional): Specifies the field to store the content.\n Defaults to \"content\".\n metadata_field (str, optional): Specifies the field to store metadata.\n Defaults to \"metadata\".\n vector_field (str, optional): Specifies the field to store the vector.\n Defaults to \"vector\".\n Following arguments pertain to the connection pool:\n pool_size (int, optional): Determines the number of active connections in\n the pool. Defaults to 5.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"} {"id": "bea56205a983-2", "text": "the pool. Defaults to 5.\n max_overflow (int, optional): Determines the maximum number of connections\n allowed beyond the pool_size. Defaults to 10.\n timeout (float, optional): Specifies the maximum wait time in seconds for\n establishing a connection. Defaults to 30.\n Following arguments pertain to the database connection:\n host (str, optional): Specifies the hostname, IP address, or URL for the\n database connection. The default scheme is \"mysql\".\n user (str, optional): Database username.\n password (str, optional): Database password.\n port (int, optional): Database port. Defaults to 3306 for non-HTTP\n connections, 80 for HTTP connections, and 443 for HTTPS connections.\n database (str, optional): Database name.\n Additional optional arguments provide further customization over the\n database connection:\n pure_python (bool, optional): Toggles the connector mode. 
If True,\n operates in pure Python mode.\n local_infile (bool, optional): Allows local file uploads.\n charset (str, optional): Specifies the character set for string values.\n ssl_key (str, optional): Specifies the path of the file containing the SSL\n key.\n ssl_cert (str, optional): Specifies the path of the file containing the SSL\n certificate.\n ssl_ca (str, optional): Specifies the path of the file containing the SSL\n certificate authority.\n ssl_cipher (str, optional): Sets the SSL cipher list.\n ssl_disabled (bool, optional): Disables SSL usage.\n ssl_verify_cert (bool, optional): Verifies the server's certificate.\n Automatically enabled if ``ssl_ca`` is specified.\n ssl_verify_identity (bool, optional): Verifies the server's identity.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"} {"id": "bea56205a983-3", "text": "ssl_verify_identity (bool, optional): Verifies the server's identity.\n conv (dict[int, Callable], optional): A dictionary of data conversion\n functions.\n credential_type (str, optional): Specifies the type of authentication to\n use: auth.PASSWORD, auth.JWT, or auth.BROWSER_SSO.\n autocommit (bool, optional): Enables autocommits.\n results_type (str, optional): Determines the structure of the query results:\n tuples, namedtuples, dicts.\n results_format (str, optional): Deprecated. This option has been renamed to\n results_type.\n Examples:\n Basic Usage:\n .. code-block:: python\n from langchain.embeddings import OpenAIEmbeddings\n from langchain.vectorstores import SingleStoreDB\n vectorstore = SingleStoreDB(\n OpenAIEmbeddings(),\n host=\"https://user:password@127.0.0.1:3306/database\"\n )\n Advanced Usage:\n .. code-block:: python\n from langchain.embeddings import OpenAIEmbeddings\n from langchain.vectorstores import SingleStoreDB\n vectorstore = SingleStoreDB(\n OpenAIEmbeddings(),\n distance_strategy=DistanceStrategy.EUCLIDEAN_DISTANCE,\n host=\"127.0.0.1\",\n port=3306,\n user=\"user\",\n password=\"password\",\n database=\"db\",\n table_name=\"my_custom_table\",\n pool_size=10,\n timeout=60,\n )\n Using environment variables:\n .. 
code-block:: python\n from langchain.embeddings import OpenAIEmbeddings\n from langchain.vectorstores import SingleStoreDB", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"} {"id": "bea56205a983-4", "text": "from langchain.vectorstores import SingleStoreDB\n os.environ['SINGLESTOREDB_URL'] = 'me:p455w0rd@s2-host.com/my_db'\n vectorstore = SingleStoreDB(OpenAIEmbeddings())\n \"\"\"\n self.embedding = embedding\n self.distance_strategy = distance_strategy\n self.table_name = table_name\n self.content_field = content_field\n self.metadata_field = metadata_field\n self.vector_field = vector_field\n \"\"\"Pass the rest of the kwargs to the connection.\"\"\"\n self.connection_kwargs = kwargs\n \"\"\"Add program name and version to connection attributes.\"\"\"\n if \"conn_attrs\" not in self.connection_kwargs:\n self.connection_kwargs[\"conn_attrs\"] = dict()\n self.connection_kwargs[\"conn_attrs\"][\"_connector_name\"] = \"langchain python sdk\"\n self.connection_kwargs[\"conn_attrs\"][\"_connector_version\"] = \"1.0.0\"\n \"\"\"Create connection pool.\"\"\"\n self.connection_pool = QueuePool(\n self._get_connection,\n max_overflow=max_overflow,\n pool_size=pool_size,\n timeout=timeout,\n )\n self._create_table()\n def _create_table(self: SingleStoreDB) -> None:\n \"\"\"Create table if it doesn't exist.\"\"\"\n conn = self.connection_pool.connect()\n try:\n cur = conn.cursor()\n try:\n cur.execute(\n \"\"\"CREATE TABLE IF NOT EXISTS {}\n ({} TEXT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci,\n {} BLOB, {} JSON);\"\"\".format(\n self.table_name,\n self.content_field,\n self.vector_field,\n self.metadata_field,\n ),\n )\n finally:\n cur.close()\n finally:\n conn.close()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"} {"id": "bea56205a983-5", "text": "finally:\n cur.close()\n finally:\n conn.close()\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n embeddings: Optional[List[List[float]]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Add more texts to the vectorstore.\n Args:\n texts (Iterable[str]): Iterable of strings/text to add to the vectorstore.\n metadatas (Optional[List[dict]], optional): Optional list of metadatas.\n Defaults to None.\n embeddings (Optional[List[List[float]]], optional): Optional pre-generated\n embeddings. 
Defaults to None.\n Returns:\n List[str]: empty list\n \"\"\"\n conn = self.connection_pool.connect()\n try:\n cur = conn.cursor()\n try:\n # Write data to singlestore db\n for i, text in enumerate(texts):\n # Use provided values by default or fallback\n metadata = metadatas[i] if metadatas else {}\n embedding = (\n embeddings[i]\n if embeddings\n else self.embedding.embed_documents([text])[0]\n )\n cur.execute(\n \"INSERT INTO {} VALUES (%s, JSON_ARRAY_PACK(%s), %s)\".format(\n self.table_name\n ),\n (\n text,\n \"[{}]\".format(\",\".join(map(str, embedding))),\n json.dumps(metadata),\n ),\n )\n finally:\n cur.close()\n finally:\n conn.close()\n return []\n[docs] def similarity_search(\n self, query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"} {"id": "bea56205a983-6", "text": ") -> List[Document]:\n \"\"\"Returns the most similar indexed documents to the query text.\n Uses cosine similarity.\n Args:\n query (str): The query text for which to find similar documents.\n k (int): The number of documents to return. Default is 4.\n filter (dict): A dictionary of metadata fields and values to filter by.\n Returns:\n List[Document]: A list of documents that are most similar to the query text.\n Examples:\n .. code-block:: python\n from langchain.vectorstores import SingleStoreDB\n from langchain.embeddings import OpenAIEmbeddings\n s2 = SingleStoreDB.from_documents(\n docs,\n OpenAIEmbeddings(),\n host=\"username:password@localhost:3306/database\"\n )\n s2.similarity_search(\"query text\", 1,\n {\"metadata_field\": \"metadata_value\"})\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(\n query=query, k=k, filter=filter\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] def similarity_search_with_score(\n self, query: str, k: int = 4, filter: Optional[dict] = None\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query. Uses cosine similarity.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. 
Defaults to 4.\n filter: A dictionary of metadata fields and values to filter by.\n Defaults to None.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n # Creates embedding vector from user query\n embedding = self.embedding.embed_query(query)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"} {"id": "bea56205a983-7", "text": "# Creates embedding vector from user query\n embedding = self.embedding.embed_query(query)\n conn = self.connection_pool.connect()\n result = []\n where_clause: str = \"\"\n where_clause_values: List[Any] = []\n if filter:\n where_clause = \"WHERE \"\n arguments = []\n def build_where_clause(\n where_clause_values: List[Any],\n sub_filter: dict,\n prefix_args: List[str] = [],\n ) -> None:\n for key in sub_filter.keys():\n if isinstance(sub_filter[key], dict):\n build_where_clause(\n where_clause_values, sub_filter[key], prefix_args + [key]\n )\n else:\n arguments.append(\n \"JSON_EXTRACT_JSON({}, {}) = %s\".format(\n self.metadata_field,\n \", \".join([\"%s\"] * (len(prefix_args) + 1)),\n )\n )\n where_clause_values += prefix_args + [key]\n where_clause_values.append(json.dumps(sub_filter[key]))\n build_where_clause(where_clause_values, filter)\n where_clause += \" AND \".join(arguments)\n try:\n cur = conn.cursor()\n try:\n cur.execute(\n \"\"\"SELECT {}, {}, {}({}, JSON_ARRAY_PACK(%s)) as __score\n FROM {} {} ORDER BY __score {} LIMIT %s\"\"\".format(\n self.content_field,\n self.metadata_field,\n self.distance_strategy,\n self.vector_field,\n self.table_name,\n where_clause,\n ORDERING_DIRECTIVE[self.distance_strategy],\n ),\n (\"[{}]\".format(\",\".join(map(str, embedding))),)\n + tuple(where_clause_values)\n + (k,),\n )\n for row in cur.fetchall():", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"} {"id": "bea56205a983-8", "text": "+ (k,),\n )\n for row in cur.fetchall():\n doc = Document(page_content=row[0], metadata=row[1])\n result.append((doc, float(row[2])))\n finally:\n cur.close()\n finally:\n conn.close()\n return result\n[docs] @classmethod\n def from_texts(\n cls: Type[SingleStoreDB],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n distance_strategy: DistanceStrategy = DEFAULT_DISTANCE_STRATEGY,\n table_name: str = \"embeddings\",\n content_field: str = \"content\",\n metadata_field: str = \"metadata\",\n vector_field: str = \"vector\",\n pool_size: int = 5,\n max_overflow: int = 10,\n timeout: float = 30,\n **kwargs: Any,\n ) -> SingleStoreDB:\n \"\"\"Create a SingleStoreDB vectorstore from raw documents.\n This is a user-friendly interface that:\n 1. Embeds documents.\n 2. Creates a new table for the embeddings in SingleStoreDB.\n 3. Adds the documents to the newly created table.\n This is intended to be a quick way to get started.\n Example:\n .. 
code-block:: python\n from langchain.vectorstores import SingleStoreDB\n from langchain.embeddings import OpenAIEmbeddings\n s2 = SingleStoreDB.from_texts(\n texts,\n OpenAIEmbeddings(),\n host=\"username:password@localhost:3306/database\"\n )\n \"\"\"\n instance = cls(\n embedding,\n distance_strategy=distance_strategy,\n table_name=table_name,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"} {"id": "bea56205a983-9", "text": "embedding,\n distance_strategy=distance_strategy,\n table_name=table_name,\n content_field=content_field,\n metadata_field=metadata_field,\n vector_field=vector_field,\n pool_size=pool_size,\n max_overflow=max_overflow,\n timeout=timeout,\n **kwargs,\n )\n instance.add_texts(texts, metadatas, embedding.embed_documents(texts), **kwargs)\n return instance\n[docs] def as_retriever(self, **kwargs: Any) -> SingleStoreDBRetriever:\n return SingleStoreDBRetriever(vectorstore=self, **kwargs)\n[docs]class SingleStoreDBRetriever(VectorStoreRetriever):\n \"\"\"Retriever for SingleStoreDB vector stores.\"\"\"\n vectorstore: SingleStoreDB\n k: int = 4\n allowed_search_types: ClassVar[Collection[str]] = (\"similarity\",)\n def _get_relevant_documents(\n self, query: str, *, run_manager: CallbackManagerForRetrieverRun\n ) -> List[Document]:\n if self.search_type == \"similarity\":\n docs = self.vectorstore.similarity_search(query, k=self.k)\n else:\n raise ValueError(f\"search_type of {self.search_type} not allowed.\")\n return docs\n async def _aget_relevant_documents(\n self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun\n ) -> List[Document]:\n raise NotImplementedError(\n \"SingleStoreDBVectorStoreRetriever does not support async\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"} {"id": "f731db9a5d89-0", "text": "Source code for langchain.vectorstores.pgvector\n\"\"\"VectorStore wrapper around a Postgres/PGVector database.\"\"\"\nfrom __future__ import annotations\nimport enum\nimport logging\nimport uuid\nfrom typing import Any, Dict, Iterable, List, Optional, Tuple, Type\nimport sqlalchemy\nfrom sqlalchemy.dialects.postgresql import JSON, UUID\nfrom sqlalchemy.orm import Session, declarative_base, relationship\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nfrom langchain.vectorstores.base import VectorStore\n[docs]class DistanceStrategy(str, enum.Enum):\n \"\"\"Enumerator of the Distance strategies.\"\"\"\n EUCLIDEAN = \"l2\"\n COSINE = \"cosine\"\n MAX_INNER_PRODUCT = \"inner\"\nDEFAULT_DISTANCE_STRATEGY = DistanceStrategy.COSINE\nBase = declarative_base() # type: Any\n_LANGCHAIN_DEFAULT_COLLECTION_NAME = \"langchain\"\n[docs]class BaseModel(Base):\n __abstract__ = True\n uuid = sqlalchemy.Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)\n[docs]class CollectionStore(BaseModel):\n __tablename__ = \"langchain_pg_collection\"\n name = sqlalchemy.Column(sqlalchemy.String)\n cmetadata = sqlalchemy.Column(JSON)\n embeddings = relationship(\n \"EmbeddingStore\",\n back_populates=\"collection\",\n passive_deletes=True,\n )\n[docs] @classmethod\n def get_by_name(cls, session: Session, name: str) -> Optional[\"CollectionStore\"]:\n return session.query(cls).filter(cls.name == name).first() # type: ignore\n[docs] @classmethod\n def get_or_create(\n cls,\n session: Session,\n name: str,", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pgvector.html"} {"id": "f731db9a5d89-1", "text": "cls,\n session: Session,\n name: str,\n cmetadata: Optional[dict] = None,\n ) -> Tuple[\"CollectionStore\", bool]:\n \"\"\"\n Get or create a collection.\n Returns [Collection, bool] where the bool is True if the collection was created.\n \"\"\"\n created = False\n collection = cls.get_by_name(session, name)\n if collection:\n return collection, created\n collection = cls(name=name, cmetadata=cmetadata)\n session.add(collection)\n session.commit()\n created = True\n return collection, created\n[docs]class PGVector(VectorStore):\n \"\"\"VectorStore implementation using Postgres and pgvector.\n To use, you should have the ``pgvector`` python package installed.\n Args:\n connection_string: Postgres connection string.\n embedding_function: Any embedding function implementing\n `langchain.embeddings.base.Embeddings` interface.\n collection_name: The name of the collection to use. (default: langchain)\n NOTE: This is not the name of the table, but the name of the collection.\n The tables will be created when initializing the store (if not exists)\n So, make sure the user has the right permissions to create tables.\n distance_strategy: The distance strategy to use. (default: COSINE)\n pre_delete_collection: If True, will delete the collection if it exists.\n (default: False). Useful for testing.\n Example:\n .. code-block:: python\n from langchain.vectorstores import PGVector\n from langchain.embeddings.openai import OpenAIEmbeddings\n CONNECTION_STRING = \"postgresql+psycopg2://hwc@localhost:5432/test3\"\n COLLECTION_NAME = \"state_of_the_union_test\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pgvector.html"} {"id": "f731db9a5d89-2", "text": "COLLECTION_NAME = \"state_of_the_union_test\"\n embeddings = OpenAIEmbeddings()\n vectorestore = PGVector.from_documents(\n embedding=embeddings,\n documents=docs,\n collection_name=COLLECTION_NAME,\n connection_string=CONNECTION_STRING,\n )\n \"\"\"\n def __init__(\n self,\n connection_string: str,\n embedding_function: Embeddings,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n collection_metadata: Optional[dict] = None,\n distance_strategy: DistanceStrategy = DEFAULT_DISTANCE_STRATEGY,\n pre_delete_collection: bool = False,\n logger: Optional[logging.Logger] = None,\n ) -> None:\n self.connection_string = connection_string\n self.embedding_function = embedding_function\n self.collection_name = collection_name\n self.collection_metadata = collection_metadata\n self._distance_strategy = distance_strategy\n self.pre_delete_collection = pre_delete_collection\n self.logger = logger or logging.getLogger(__name__)\n self.__post_init__()\n def __post_init__(\n self,\n ) -> None:\n \"\"\"\n Initialize the store.\n \"\"\"\n self._conn = self.connect()\n # self.create_vector_extension()\n from langchain.vectorstores._pgvector_data_models import EmbeddingStore\n self.EmbeddingStore = EmbeddingStore\n self.create_tables_if_not_exists()\n self.create_collection()\n[docs] def connect(self) -> sqlalchemy.engine.Connection:\n engine = sqlalchemy.create_engine(self.connection_string)\n conn = engine.connect()\n return conn\n[docs] def create_vector_extension(self) -> None:\n try:\n with Session(self._conn) as session:\n statement = sqlalchemy.text(\"CREATE EXTENSION IF NOT EXISTS vector\")", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pgvector.html"} {"id": "f731db9a5d89-3", "text": "statement = sqlalchemy.text(\"CREATE EXTENSION IF NOT EXISTS vector\")\n session.execute(statement)\n session.commit()\n except Exception as e:\n self.logger.exception(e)\n[docs] def create_tables_if_not_exists(self) -> None:\n with self._conn.begin():\n Base.metadata.create_all(self._conn)\n[docs] def drop_tables(self) -> None:\n with self._conn.begin():\n Base.metadata.drop_all(self._conn)\n[docs] def create_collection(self) -> None:\n if self.pre_delete_collection:\n self.delete_collection()\n with Session(self._conn) as session:\n CollectionStore.get_or_create(\n session, self.collection_name, cmetadata=self.collection_metadata\n )\n[docs] def delete_collection(self) -> None:\n self.logger.debug(\"Trying to delete collection\")\n with Session(self._conn) as session:\n collection = self.get_collection(session)\n if not collection:\n self.logger.warning(\"Collection not found\")\n return\n session.delete(collection)\n session.commit()\n[docs] def get_collection(self, session: Session) -> Optional[\"CollectionStore\"]:\n return CollectionStore.get_by_name(session, self.collection_name)\n @classmethod\n def __from(\n cls,\n texts: List[str],\n embeddings: List[List[float]],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n distance_strategy: DistanceStrategy = DEFAULT_DISTANCE_STRATEGY,\n pre_delete_collection: bool = False,\n **kwargs: Any,\n ) -> PGVector:\n connection_string = cls.get_connection_string(kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pgvector.html"} {"id": "f731db9a5d89-4", "text": ") -> PGVector:\n connection_string = cls.get_connection_string(kwargs)\n store = cls(\n connection_string=connection_string,\n collection_name=collection_name,\n embedding_function=embedding,\n distance_strategy=distance_strategy,\n pre_delete_collection=pre_delete_collection,\n )\n store.add_embeddings(\n texts=texts, embeddings=embeddings, metadatas=metadatas, ids=ids, **kwargs\n )\n return store\n[docs] def add_embeddings(\n self,\n texts: Iterable[str],\n embeddings: List[List[float]],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Add embeddings to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n embeddings: List of list of embedding vectors.\n metadatas: List of metadatas associated with the texts.\n kwargs: vectorstore specific parameters\n \"\"\"\n if ids is None:\n ids = [str(uuid.uuid1()) for _ in texts]\n if not metadatas:\n metadatas = [{} for _ in texts]\n with Session(self._conn) as session:\n collection = self.get_collection(session)\n if not collection:\n raise ValueError(\"Collection not found\")\n for text, metadata, embedding, id in zip(texts, metadatas, embeddings, ids):\n embedding_store = self.EmbeddingStore(\n embedding=embedding,\n document=text,\n cmetadata=metadata,\n custom_id=id,\n collection_id=collection.uuid,\n )\n session.add(embedding_store)\n session.commit()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pgvector.html"} {"id": "f731db9a5d89-5", "text": ")\n session.add(embedding_store)\n session.commit()\n return ids\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: 
Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n kwargs: vectorstore specific parameters\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n embeddings = self.embedding_function.embed_documents(list(texts))\n return self.add_embeddings(\n texts=texts, embeddings=embeddings, metadatas=metadatas, ids=ids, **kwargs\n )\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n filter: Optional[dict] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Run similarity search with PGVector with distance.\n Args:\n query (str): Query text to search for.\n k (int): Number of results to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n embedding = self.embedding_function.embed_query(text=query)\n return self.similarity_search_by_vector(\n embedding=embedding,\n k=k,\n filter=filter,\n )\n[docs] def similarity_search_with_score(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pgvector.html"} {"id": "f731db9a5d89-6", "text": "filter=filter,\n )\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 4,\n filter: Optional[dict] = None,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n embedding = self.embedding_function.embed_query(query)\n docs = self.similarity_search_with_score_by_vector(\n embedding=embedding, k=k, filter=filter\n )\n return docs\n @property\n def distance_strategy(self) -> Any:\n if self._distance_strategy == \"l2\":\n return self.EmbeddingStore.embedding.l2_distance\n elif self._distance_strategy == \"cosine\":\n return self.EmbeddingStore.embedding.cosine_distance\n elif self._distance_strategy == \"inner\":\n return self.EmbeddingStore.embedding.max_inner_product\n else:\n raise ValueError(\n f\"Got unexpected value for distance: {self._distance_strategy}. 
\"\n f\"Should be one of `l2`, `cosine`, `inner`.\"\n )\n[docs] def similarity_search_with_score_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n filter: Optional[dict] = None,\n ) -> List[Tuple[Document, float]]:\n with Session(self._conn) as session:\n collection = self.get_collection(session)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pgvector.html"} {"id": "f731db9a5d89-7", "text": "with Session(self._conn) as session:\n collection = self.get_collection(session)\n if not collection:\n raise ValueError(\"Collection not found\")\n filter_by = self.EmbeddingStore.collection_id == collection.uuid\n if filter is not None:\n filter_clauses = []\n for key, value in filter.items():\n IN = \"in\"\n if isinstance(value, dict) and IN in map(str.lower, value):\n value_case_insensitive = {\n k.lower(): v for k, v in value.items()\n }\n filter_by_metadata = self.EmbeddingStore.cmetadata[\n key\n ].astext.in_(value_case_insensitive[IN])\n filter_clauses.append(filter_by_metadata)\n else:\n filter_by_metadata = self.EmbeddingStore.cmetadata[\n key\n ].astext == str(value)\n filter_clauses.append(filter_by_metadata)\n filter_by = sqlalchemy.and_(filter_by, *filter_clauses)\n _type = self.EmbeddingStore\n results: List[Any] = (\n session.query(\n self.EmbeddingStore,\n self.distance_strategy(embedding).label(\"distance\"), # type: ignore\n )\n .filter(filter_by)\n .order_by(sqlalchemy.asc(\"distance\"))\n .join(\n CollectionStore,\n self.EmbeddingStore.collection_id == CollectionStore.uuid,\n )\n .limit(k)\n .all()\n )\n docs = [\n (\n Document(\n page_content=result.EmbeddingStore.document,\n metadata=result.EmbeddingStore.cmetadata,\n ),\n result.distance if self.embedding_function is not None else None,\n )\n for result in results\n ]\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pgvector.html"} {"id": "f731db9a5d89-8", "text": ")\n for result in results\n ]\n return docs\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n filter: Optional[dict] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. 
Defaults to None.\n Returns:\n List of Documents most similar to the query vector.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score_by_vector(\n embedding=embedding, k=k, filter=filter\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] @classmethod\n def from_texts(\n cls: Type[PGVector],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n distance_strategy: DistanceStrategy = DEFAULT_DISTANCE_STRATEGY,\n ids: Optional[List[str]] = None,\n pre_delete_collection: bool = False,\n **kwargs: Any,\n ) -> PGVector:\n \"\"\"\n Return VectorStore initialized from texts and embeddings.\n Postgres connection string is required.\n Either pass it as a parameter\n or set the PGVECTOR_CONNECTION_STRING environment variable.\n \"\"\"\n embeddings = embedding.embed_documents(list(texts))\n return cls.__from(\n texts,\n embeddings,\n embedding,\n metadatas=metadatas,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pgvector.html"} {"id": "f731db9a5d89-9", "text": "embeddings,\n embedding,\n metadatas=metadatas,\n ids=ids,\n collection_name=collection_name,\n distance_strategy=distance_strategy,\n pre_delete_collection=pre_delete_collection,\n **kwargs,\n )\n[docs] @classmethod\n def from_embeddings(\n cls,\n text_embeddings: List[Tuple[str, List[float]]],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n distance_strategy: DistanceStrategy = DEFAULT_DISTANCE_STRATEGY,\n ids: Optional[List[str]] = None,\n pre_delete_collection: bool = False,\n **kwargs: Any,\n ) -> PGVector:\n \"\"\"Construct PGVector wrapper from raw documents and\n pre-generated embeddings.\n Return VectorStore initialized from documents and embeddings.\n Postgres connection string is required.\n Either pass it as a parameter\n or set the PGVECTOR_CONNECTION_STRING environment variable.\n Example:\n .. 
code-block:: python\n from langchain.vectorstores.pgvector import PGVector\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n text_embeddings = embeddings.embed_documents(texts)\n text_embedding_pairs = list(zip(texts, text_embeddings))\n store = PGVector.from_embeddings(text_embedding_pairs, embeddings)\n \"\"\"\n texts = [t[0] for t in text_embeddings]\n embeddings = [t[1] for t in text_embeddings]\n return cls.__from(\n texts,\n embeddings,\n embedding,\n metadatas=metadatas,\n ids=ids,\n collection_name=collection_name,\n distance_strategy=distance_strategy,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pgvector.html"} {"id": "f731db9a5d89-10", "text": "collection_name=collection_name,\n distance_strategy=distance_strategy,\n pre_delete_collection=pre_delete_collection,\n **kwargs,\n )\n[docs] @classmethod\n def from_existing_index(\n cls: Type[PGVector],\n embedding: Embeddings,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n distance_strategy: DistanceStrategy = DEFAULT_DISTANCE_STRATEGY,\n pre_delete_collection: bool = False,\n **kwargs: Any,\n ) -> PGVector:\n \"\"\"\n Get an instance of an existing PGVector store. This method will\n return the instance of the store without inserting any new\n embeddings.\n \"\"\"\n connection_string = cls.get_connection_string(kwargs)\n store = cls(\n connection_string=connection_string,\n collection_name=collection_name,\n embedding_function=embedding,\n distance_strategy=distance_strategy,\n pre_delete_collection=pre_delete_collection,\n )\n return store\n[docs] @classmethod\n def get_connection_string(cls, kwargs: Dict[str, Any]) -> str:\n connection_string: str = get_from_dict_or_env(\n data=kwargs,\n key=\"connection_string\",\n env_key=\"PGVECTOR_CONNECTION_STRING\",\n )\n if not connection_string:\n raise ValueError(\n \"Postgres connection string is required. \"\n \"Either pass it as a parameter \"\n \"or set the PGVECTOR_CONNECTION_STRING environment variable.\"\n )\n return connection_string\n[docs] @classmethod\n def from_documents(\n cls: Type[PGVector],\n documents: List[Document],\n embedding: Embeddings,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n distance_strategy: DistanceStrategy = DEFAULT_DISTANCE_STRATEGY,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pgvector.html"} {"id": "f731db9a5d89-11", "text": "distance_strategy: DistanceStrategy = DEFAULT_DISTANCE_STRATEGY,\n ids: Optional[List[str]] = None,\n pre_delete_collection: bool = False,\n **kwargs: Any,\n ) -> PGVector:\n \"\"\"\n Return VectorStore initialized from documents and embeddings.\n Postgres connection string is required.\n Either pass it as a parameter\n or set the PGVECTOR_CONNECTION_STRING environment variable.\n \"\"\"\n texts = [d.page_content for d in documents]\n metadatas = [d.metadata for d in documents]\n connection_string = cls.get_connection_string(kwargs)\n kwargs[\"connection_string\"] = connection_string\n return cls.from_texts(\n texts=texts,\n pre_delete_collection=pre_delete_collection,\n embedding=embedding,\n distance_strategy=distance_strategy,\n metadatas=metadatas,\n ids=ids,\n collection_name=collection_name,\n **kwargs,\n )\n[docs] @classmethod\n def connection_string_from_db_params(\n cls,\n driver: str,\n host: str,\n port: int,\n database: str,\n user: str,\n password: str,\n ) -> str:\n \"\"\"Return connection string from database parameters.\"\"\"\n return 
f\"postgresql+{driver}://{user}:{password}@{host}:{port}/{database}\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pgvector.html"} {"id": "678604c83636-0", "text": "Source code for langchain.vectorstores.analyticdb\n\"\"\"VectorStore wrapper around a Postgres/PGVector database.\"\"\"\nfrom __future__ import annotations\nimport logging\nimport uuid\nfrom typing import Any, Dict, Iterable, List, Optional, Sequence, Tuple, Type\nfrom sqlalchemy import REAL, Column, String, Table, create_engine, insert, text\nfrom sqlalchemy.dialects.postgresql import ARRAY, JSON, TEXT\ntry:\n from sqlalchemy.orm import declarative_base\nexcept ImportError:\n from sqlalchemy.ext.declarative import declarative_base\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nfrom langchain.vectorstores.base import VectorStore\n_LANGCHAIN_DEFAULT_EMBEDDING_DIM = 1536\n_LANGCHAIN_DEFAULT_COLLECTION_NAME = \"langchain_document\"\nBase = declarative_base() # type: Any\n[docs]class AnalyticDB(VectorStore):\n \"\"\"VectorStore implementation using AnalyticDB.\n AnalyticDB is a distributed full PostgresSQL syntax cloud-native database.\n - `connection_string` is a postgres connection string.\n - `embedding_function` any embedding function implementing\n `langchain.embeddings.base.Embeddings` interface.\n - `collection_name` is the name of the collection to use. (default: langchain)\n - NOTE: This is not the name of the table, but the name of the collection.\n The tables will be created when initializing the store (if not exists)\n So, make sure the user has the right permissions to create tables.\n - `pre_delete_collection` if True, will delete the collection if it exists.\n (default: False)\n - Useful for testing.\n \"\"\"\n def __init__(\n self,\n connection_string: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"} {"id": "678604c83636-1", "text": "\"\"\"\n def __init__(\n self,\n connection_string: str,\n embedding_function: Embeddings,\n embedding_dimension: int = _LANGCHAIN_DEFAULT_EMBEDDING_DIM,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n pre_delete_collection: bool = False,\n logger: Optional[logging.Logger] = None,\n engine_args: Optional[dict] = None,\n ) -> None:\n self.connection_string = connection_string\n self.embedding_function = embedding_function\n self.embedding_dimension = embedding_dimension\n self.collection_name = collection_name\n self.pre_delete_collection = pre_delete_collection\n self.logger = logger or logging.getLogger(__name__)\n self.__post_init__(engine_args)\n def __post_init__(\n self,\n engine_args: Optional[dict] = None,\n ) -> None:\n \"\"\"\n Initialize the store.\n \"\"\"\n _engine_args = engine_args or {}\n if (\n \"pool_recycle\" not in _engine_args\n ): # Check if pool_recycle is not in _engine_args\n _engine_args[\n \"pool_recycle\"\n ] = 3600 # Set pool_recycle to 3600s if not present\n self.engine = create_engine(self.connection_string, **_engine_args)\n self.create_collection()\n[docs] def create_table_if_not_exists(self) -> None:\n # Define the dynamic table\n Table(\n self.collection_name,\n Base.metadata,\n Column(\"id\", TEXT, primary_key=True, default=uuid.uuid4),\n Column(\"embedding\", ARRAY(REAL)),\n Column(\"document\", String, nullable=True),\n Column(\"metadata\", JSON, nullable=True),\n extend_existing=True,\n )", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"} {"id": "678604c83636-2", "text": "Column(\"metadata\", JSON, nullable=True),\n extend_existing=True,\n )\n with self.engine.connect() as conn:\n with conn.begin():\n # Create the table\n Base.metadata.create_all(conn)\n # Check if the index exists\n index_name = f\"{self.collection_name}_embedding_idx\"\n index_query = text(\n f\"\"\"\n SELECT 1\n FROM pg_indexes\n WHERE indexname = '{index_name}';\n \"\"\"\n )\n result = conn.execute(index_query).scalar()\n # Create the index if it doesn't exist\n if not result:\n index_statement = text(\n f\"\"\"\n CREATE INDEX {index_name}\n ON {self.collection_name} USING ann(embedding)\n WITH (\n \"dim\" = {self.embedding_dimension},\n \"hnsw_m\" = 100\n );\n \"\"\"\n )\n conn.execute(index_statement)\n[docs] def create_collection(self) -> None:\n if self.pre_delete_collection:\n self.delete_collection()\n self.create_table_if_not_exists()\n[docs] def delete_collection(self) -> None:\n self.logger.debug(\"Trying to delete collection\")\n drop_statement = text(f\"DROP TABLE IF EXISTS {self.collection_name};\")\n with self.engine.connect() as conn:\n with conn.begin():\n conn.execute(drop_statement)\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n batch_size: int = 500,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"} {"id": "678604c83636-3", "text": "\"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n kwargs: vectorstore specific parameters\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n if ids is None:\n ids = [str(uuid.uuid1()) for _ in texts]\n embeddings = self.embedding_function.embed_documents(list(texts))\n if not metadatas:\n metadatas = [{} for _ in texts]\n # Define the table schema\n chunks_table = Table(\n self.collection_name,\n Base.metadata,\n Column(\"id\", TEXT, primary_key=True),\n Column(\"embedding\", ARRAY(REAL)),\n Column(\"document\", String, nullable=True),\n Column(\"metadata\", JSON, nullable=True),\n extend_existing=True,\n )\n chunks_table_data = []\n with self.engine.connect() as conn:\n with conn.begin():\n for document, metadata, chunk_id, embedding in zip(\n texts, metadatas, ids, embeddings\n ):\n chunks_table_data.append(\n {\n \"id\": chunk_id,\n \"embedding\": embedding,\n \"document\": document,\n \"metadata\": metadata,\n }\n )\n # Execute the batch insert when the batch size is reached\n if len(chunks_table_data) == batch_size:\n conn.execute(insert(chunks_table).values(chunks_table_data))\n # Clear the chunks_table_data list for the next batch\n chunks_table_data.clear()\n # Insert any remaining records that didn't make up a full batch\n if chunks_table_data:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"} {"id": "678604c83636-4", "text": "if chunks_table_data:\n conn.execute(insert(chunks_table).values(chunks_table_data))\n return ids\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n filter: Optional[dict] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Run similarity search with AnalyticDB with distance.\n 
Args:\n query (str): Query text to search for.\n k (int): Number of results to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n embedding = self.embedding_function.embed_query(text=query)\n return self.similarity_search_by_vector(\n embedding=embedding,\n k=k,\n filter=filter,\n )\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 4,\n filter: Optional[dict] = None,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n embedding = self.embedding_function.embed_query(query)\n docs = self.similarity_search_with_score_by_vector(\n embedding=embedding, k=k, filter=filter\n )\n return docs\n def _similarity_search_with_relevance_scores(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"} {"id": "678604c83636-5", "text": ")\n return docs\n def _similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and relevance scores in the range [0, 1].\n 0 is dissimilar, 1 is most similar.\n Args:\n query: input text\n k: Number of Documents to return. Defaults to 4.\n **kwargs: kwargs to be passed to similarity search. Should include:\n score_threshold: Optional, a floating point value between 0 to 1 to\n filter the resulting set of retrieved docs\n Returns:\n List of Tuples of (doc, similarity_score)\n \"\"\"\n return self.similarity_search_with_score(query, k, **kwargs)\n[docs] def similarity_search_with_score_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n filter: Optional[dict] = None,\n ) -> List[Tuple[Document, float]]:\n # Add the filter if provided\n try:\n from sqlalchemy.engine import Row\n except ImportError:\n raise ImportError(\n \"Could not import Row from sqlalchemy.engine. 
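
Note that `_similarity_search_with_relevance_scores` above simply forwards to `similarity_search_with_score`, so the scores it yields are raw L2 distances (smaller is closer) rather than values normalized to [0, 1] as its docstring suggests. A sketch of thresholding on those distances, with the cutoff purely illustrative and to be calibrated per embedding model:

.. code-block:: python

    docs_and_scores = store.similarity_search_with_score("quarterly revenue", k=8)
    # Scores are distances, not similarities; keep only close matches.
    # The 0.5 cutoff is illustrative only.
    close = [(d, s) for d, s in docs_and_scores if s is not None and s < 0.5]
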
\"\n \"Please 'pip install sqlalchemy>=1.4'.\"\n )\n filter_condition = \"\"\n if filter is not None:\n conditions = [\n f\"metadata->>{key!r} = {value!r}\" for key, value in filter.items()\n ]\n filter_condition = f\"WHERE {' AND '.join(conditions)}\"\n # Define the base query\n sql_query = f\"\"\"\n SELECT *, l2_distance(embedding, :embedding) as distance", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"} {"id": "678604c83636-6", "text": "SELECT *, l2_distance(embedding, :embedding) as distance\n FROM {self.collection_name}\n {filter_condition}\n ORDER BY embedding <-> :embedding\n LIMIT :k\n \"\"\"\n # Set up the query parameters\n params = {\"embedding\": embedding, \"k\": k}\n # Execute the query and fetch the results\n with self.engine.connect() as conn:\n results: Sequence[Row] = conn.execute(text(sql_query), params).fetchall()\n documents_with_scores = [\n (\n Document(\n page_content=result.document,\n metadata=result.metadata,\n ),\n result.distance if self.embedding_function is not None else None,\n )\n for result in results\n ]\n return documents_with_scores\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n filter: Optional[dict] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n Returns:\n List of Documents most similar to the query vector.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score_by_vector(\n embedding=embedding, k=k, filter=filter\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] def delete(self, ids: Optional[List[str]] = None, **kwargs: Any) -> Optional[bool]:\n \"\"\"Delete by vector IDs.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"} {"id": "678604c83636-7", "text": "\"\"\"Delete by vector IDs.\n Args:\n ids: List of ids to delete.\n \"\"\"\n if ids is None:\n raise ValueError(\"No ids provided to delete.\")\n # Define the table schema\n chunks_table = Table(\n self.collection_name,\n Base.metadata,\n Column(\"id\", TEXT, primary_key=True),\n Column(\"embedding\", ARRAY(REAL)),\n Column(\"document\", String, nullable=True),\n Column(\"metadata\", JSON, nullable=True),\n extend_existing=True,\n )\n try:\n with self.engine.connect() as conn:\n with conn.begin():\n delete_condition = chunks_table.c.id.in_(ids)\n conn.execute(chunks_table.delete().where(delete_condition))\n return True\n except Exception as e:\n print(\"Delete operation failed:\", str(e))\n return False\n[docs] @classmethod\n def from_texts(\n cls: Type[AnalyticDB],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n embedding_dimension: int = _LANGCHAIN_DEFAULT_EMBEDDING_DIM,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n ids: Optional[List[str]] = None,\n pre_delete_collection: bool = False,\n engine_args: Optional[dict] = None,\n **kwargs: Any,\n ) -> AnalyticDB:\n \"\"\"\n Return VectorStore initialized from texts and embeddings.\n Postgres Connection string is required\n Either pass it as a parameter\n or set the PG_CONNECTION_STRING environment variable.\n \"\"\"\n connection_string = cls.get_connection_string(kwargs)\n store = cls(\n connection_string=connection_string,\n collection_name=collection_name,", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"} {"id": "678604c83636-8", "text": "connection_string=connection_string,\n collection_name=collection_name,\n embedding_function=embedding,\n embedding_dimension=embedding_dimension,\n pre_delete_collection=pre_delete_collection,\n engine_args=engine_args,\n )\n store.add_texts(texts=texts, metadatas=metadatas, ids=ids, **kwargs)\n return store\n[docs] @classmethod\n def get_connection_string(cls, kwargs: Dict[str, Any]) -> str:\n connection_string: str = get_from_dict_or_env(\n data=kwargs,\n key=\"connection_string\",\n env_key=\"PG_CONNECTION_STRING\",\n )\n if not connection_string:\n raise ValueError(\n \"Postgres connection string is required\"\n \"Either pass it as a parameter\"\n \"or set the PG_CONNECTION_STRING environment variable.\"\n )\n return connection_string\n[docs] @classmethod\n def from_documents(\n cls: Type[AnalyticDB],\n documents: List[Document],\n embedding: Embeddings,\n embedding_dimension: int = _LANGCHAIN_DEFAULT_EMBEDDING_DIM,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n ids: Optional[List[str]] = None,\n pre_delete_collection: bool = False,\n engine_args: Optional[dict] = None,\n **kwargs: Any,\n ) -> AnalyticDB:\n \"\"\"\n Return VectorStore initialized from documents and embeddings.\n Postgres Connection string is required\n Either pass it as a parameter\n or set the PG_CONNECTION_STRING environment variable.\n \"\"\"\n texts = [d.page_content for d in documents]\n metadatas = [d.metadata for d in documents]\n connection_string = cls.get_connection_string(kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"} {"id": "678604c83636-9", "text": "connection_string = cls.get_connection_string(kwargs)\n kwargs[\"connection_string\"] = connection_string\n return cls.from_texts(\n texts=texts,\n pre_delete_collection=pre_delete_collection,\n embedding=embedding,\n embedding_dimension=embedding_dimension,\n metadatas=metadatas,\n ids=ids,\n collection_name=collection_name,\n engine_args=engine_args,\n **kwargs,\n )\n[docs] @classmethod\n def connection_string_from_db_params(\n cls,\n driver: str,\n host: str,\n port: int,\n database: str,\n user: str,\n password: str,\n ) -> str:\n \"\"\"Return connection string from database parameters.\"\"\"\n return f\"postgresql+{driver}://{user}:{password}@{host}:{port}/{database}\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"} {"id": "65d318614e6f-0", "text": "Source code for langchain.vectorstores.docarray.hnsw\n\"\"\"Wrapper around Hnswlib store.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, List, Literal, Optional\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.docarray.base import (\n DocArrayIndex,\n _check_docarray_import,\n)\n[docs]class DocArrayHnswSearch(DocArrayIndex):\n \"\"\"Wrapper around HnswLib storage.\n To use it, you should have the ``docarray`` package with version >=0.32.0 installed.\n You can install it with `pip install \"langchain[docarray]\"`.\n \"\"\"\n[docs] @classmethod\n def from_params(\n cls,\n embedding: Embeddings,\n work_dir: str,\n n_dim: int,\n dist_metric: Literal[\"cosine\", \"ip\", \"l2\"] = \"cosine\",\n max_elements: int = 1024,\n index: bool = True,\n ef_construction: int = 200,\n ef: int = 10,\n M: int = 16,\n allow_replace_deleted: bool = True,\n num_threads: int = 1,\n **kwargs: Any,\n ) -> 
DocArrayHnswSearch:\n \"\"\"Initialize DocArrayHnswSearch store.\n Args:\n embedding (Embeddings): Embedding function.\n work_dir (str): path to the location where all the data will be stored.\n n_dim (int): dimension of an embedding.\n dist_metric (str): Distance metric for DocArrayHnswSearch. Can be one of:\n \"cosine\", \"ip\", and \"l2\". Defaults to \"cosine\".", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/docarray/hnsw.html"} {"id": "65d318614e6f-1", "text": "\"cosine\", \"ip\", and \"l2\". Defaults to \"cosine\".\n max_elements (int): Maximum number of vectors that can be stored.\n Defaults to 1024.\n index (bool): Whether an index should be built for this field.\n Defaults to True.\n ef_construction (int): defines a construction time/accuracy trade-off.\n Defaults to 200.\n ef (int): parameter controlling query time/accuracy trade-off.\n Defaults to 10.\n M (int): parameter that defines the maximum number of outgoing\n connections in the graph. Defaults to 16.\n allow_replace_deleted (bool): Enables replacing of deleted elements\n with new added ones. Defaults to True.\n num_threads (int): Sets the number of cpu threads to use. Defaults to 1.\n **kwargs: Other keyword arguments to be passed to the get_doc_cls method.\n \"\"\"\n _check_docarray_import()\n from docarray.index import HnswDocumentIndex\n doc_cls = cls._get_doc_cls(\n dim=n_dim,\n space=dist_metric,\n max_elements=max_elements,\n index=index,\n ef_construction=ef_construction,\n ef=ef,\n M=M,\n allow_replace_deleted=allow_replace_deleted,\n num_threads=num_threads,\n **kwargs,\n )\n doc_index = HnswDocumentIndex[doc_cls](work_dir=work_dir) # type: ignore\n return cls(doc_index, embedding)\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n work_dir: Optional[str] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/docarray/hnsw.html"} {"id": "65d318614e6f-2", "text": "work_dir: Optional[str] = None,\n n_dim: Optional[int] = None,\n **kwargs: Any,\n ) -> DocArrayHnswSearch:\n \"\"\"Create a DocArrayHnswSearch store and insert data.\n Args:\n texts (List[str]): Text data.\n embedding (Embeddings): Embedding function.\n metadatas (Optional[List[dict]]): Metadata for each text if it exists.\n Defaults to None.\n work_dir (str): path to the location where all the data will be stored.\n n_dim (int): dimension of an embedding.\n **kwargs: Other keyword arguments to be passed to the __init__ method.\n Returns:\n DocArrayHnswSearch Vector Store\n \"\"\"\n if work_dir is None:\n raise ValueError(\"`work_dir` parameter has not been set.\")\n if n_dim is None:\n raise ValueError(\"`n_dim` parameter has not been set.\")\n store = cls.from_params(embedding, work_dir, n_dim, **kwargs)\n store.add_texts(texts=texts, metadatas=metadatas)\n return store", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/docarray/hnsw.html"} {"id": "10d4f8e9ffaf-0", "text": "Source code for langchain.vectorstores.docarray.in_memory\n\"\"\"Wrapper around in-memory storage.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Literal, Optional\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.docarray.base import (\n DocArrayIndex,\n _check_docarray_import,\n)\n[docs]class DocArrayInMemorySearch(DocArrayIndex):\n \"\"\"Wrapper around in-memory storage for exact search.\n To use it, you should have the 
``docarray`` package with version >=0.32.0 installed.\n You can install it with `pip install \"langchain[docarray]\"`.\n \"\"\"\n[docs] @classmethod\n def from_params(\n cls,\n embedding: Embeddings,\n metric: Literal[\n \"cosine_sim\", \"euclidean_dist\", \"sqeuclidean_dist\"\n ] = \"cosine_sim\",\n **kwargs: Any,\n ) -> DocArrayInMemorySearch:\n \"\"\"Initialize DocArrayInMemorySearch store.\n Args:\n embedding (Embeddings): Embedding function.\n metric (str): metric for exact nearest-neighbor search.\n Can be one of: \"cosine_sim\", \"euclidean_dist\" and \"sqeuclidean_dist\".\n Defaults to \"cosine_sim\".\n **kwargs: Other keyword arguments to be passed to the get_doc_cls method.\n \"\"\"\n _check_docarray_import()\n from docarray.index import InMemoryExactNNIndex\n doc_cls = cls._get_doc_cls(space=metric, **kwargs)\n doc_index = InMemoryExactNNIndex[doc_cls]() # type: ignore\n return cls(doc_index, embedding)\n[docs] @classmethod\n def from_texts(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/docarray/in_memory.html"} {"id": "10d4f8e9ffaf-1", "text": "[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[Dict[Any, Any]]] = None,\n **kwargs: Any,\n ) -> DocArrayInMemorySearch:\n \"\"\"Create a DocArrayInMemorySearch store and insert data.\n Args:\n texts (List[str]): Text data.\n embedding (Embeddings): Embedding function.\n metadatas (Optional[List[Dict[Any, Any]]]): Metadata for each text\n if it exists. Defaults to None.\n metric (str): metric for exact nearest-neighbor search.\n Can be one of: \"cosine_sim\", \"euclidean_dist\" and \"sqeuclidean_dist\".\n Defaults to \"cosine_sim\".\n Returns:\n DocArrayInMemorySearch Vector Store\n \"\"\"\n store = cls.from_params(embedding, **kwargs)\n store.add_texts(texts=texts, metadatas=metadatas)\n return store", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/docarray/in_memory.html"} {"id": "313d646caec5-0", "text": "Source code for langchain.vectorstores.docarray.base\nfrom abc import ABC\nfrom typing import TYPE_CHECKING, Any, Iterable, List, Optional, Tuple, Type\nimport numpy as np\nfrom pydantic import Field\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import Document\nfrom langchain.vectorstores import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nif TYPE_CHECKING:\n from docarray import BaseDoc\n from docarray.index.abstract import BaseDocIndex\ndef _check_docarray_import() -> None:\n try:\n import docarray\n da_version = docarray.__version__.split(\".\")\n if int(da_version[0]) == 0 and int(da_version[1]) <= 31:\n raise ValueError(\n f\"To use the DocArrayHnswSearch VectorStore the docarray \"\n f\"version >=0.32.0 is expected, received: {docarray.__version__}. \"\n f\"To upgrade, please run: `pip install -U docarray`.\"\n )\n except ImportError:\n raise ImportError(\n \"Could not import docarray python package. 
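
Both DocArray-backed stores expose the same `from_texts` entry point; only the storage backend differs. A minimal sketch, assuming `docarray>=0.32.0` is installed and OpenAI credentials are configured (the work directory and texts are illustrative only):

.. code-block:: python

    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import DocArrayHnswSearch, DocArrayInMemorySearch

    texts = ["alpha release notes", "beta release notes"]
    embedding = OpenAIEmbeddings()

    # On-disk approximate (HNSW) index: work_dir and n_dim are required.
    hnsw_store = DocArrayHnswSearch.from_texts(
        texts, embedding, work_dir="/tmp/hnsw_index", n_dim=1536
    )

    # Exact in-memory search: no persistence, handy for small corpora and tests.
    mem_store = DocArrayInMemorySearch.from_texts(texts, embedding)
    docs = mem_store.similarity_search("alpha", k=1)
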
\"\n 'Please install it with `pip install \"langchain[docarray]\"`.'\n )\n[docs]class DocArrayIndex(VectorStore, ABC):\n def __init__(\n self,\n doc_index: \"BaseDocIndex\",\n embedding: Embeddings,\n ):\n \"\"\"Initialize a vector store from DocArray's DocIndex.\"\"\"\n self.doc_index = doc_index\n self.embedding = embedding\n @staticmethod\n def _get_doc_cls(**embeddings_params: Any) -> Type[\"BaseDoc\"]:\n \"\"\"Get docarray Document class describing the schema of DocIndex.\"\"\"\n from docarray import BaseDoc", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/docarray/base.html"} {"id": "313d646caec5-1", "text": "from docarray import BaseDoc\n from docarray.typing import NdArray\n class DocArrayDoc(BaseDoc):\n text: Optional[str]\n embedding: Optional[NdArray] = Field(**embeddings_params)\n metadata: Optional[dict]\n return DocArrayDoc\n @property\n def doc_cls(self) -> Type[\"BaseDoc\"]:\n if self.doc_index._schema is None:\n raise ValueError(\"doc_index expected to have non-null _schema attribute.\")\n return self.doc_index._schema\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n ids: List[str] = []\n embeddings = self.embedding.embed_documents(list(texts))\n for i, (t, e) in enumerate(zip(texts, embeddings)):\n m = metadatas[i] if metadatas else {}\n doc = self.doc_cls(text=t, embedding=e, metadata=m)\n self.doc_index.index([doc])\n ids.append(str(doc.id))\n return ids\n[docs] def similarity_search_with_score(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/docarray/base.html"} {"id": "313d646caec5-2", "text": "Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of documents most similar to the query text and\n cosine distance in float for each.\n Lower score represents more similarity.\n \"\"\"\n query_embedding = self.embedding.embed_query(query)\n query_doc = self.doc_cls(embedding=query_embedding) # type: ignore\n docs, scores = self.doc_index.find(query_doc, search_field=\"embedding\", limit=k)\n result = [\n (Document(page_content=doc.text, metadata=doc.metadata), score)\n for doc, score in zip(docs, scores)\n ]\n return result\n[docs] def similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. 
Defaults to 4.\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n results = self.similarity_search_with_score(query, k=k, **kwargs)\n return [doc for doc, _ in results]\n def _similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and relevance scores, normalized on a scale from 0 to 1.\n 0 is dissimilar, 1 is most similar.\n \"\"\"\n raise NotImplementedError\n[docs] def similarity_search_by_vector(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/docarray/base.html"} {"id": "313d646caec5-3", "text": "\"\"\"\n raise NotImplementedError\n[docs] def similarity_search_by_vector(\n self, embedding: List[float], k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query vector.\n \"\"\"\n query_doc = self.doc_cls(embedding=embedding) # type: ignore\n docs = self.doc_index.find(\n query_doc, search_field=\"embedding\", limit=k\n ).documents\n result = [\n Document(page_content=doc.text, metadata=doc.metadata) for doc in docs\n ]\n return result\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/docarray/base.html"} {"id": "313d646caec5-4", "text": "Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n query_embedding = self.embedding.embed_query(query)\n query_doc = self.doc_cls(embedding=query_embedding) # type: ignore\n docs = self.doc_index.find(\n query_doc, search_field=\"embedding\", limit=fetch_k\n ).documents\n mmr_selected = maximal_marginal_relevance(\n np.array(query_embedding), docs.embedding, k=k\n )\n results = [\n Document(page_content=docs[idx].text, metadata=docs[idx].metadata)\n for idx in mmr_selected\n ]\n return results", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/docarray/base.html"} {"id": "b463c118d644-0", "text": "Source code for langchain.callbacks.streaming_stdout_final_only\n\"\"\"Callback Handler streams to stdout on new llm token.\"\"\"\nimport sys\nfrom typing import Any, Dict, List, Optional\nfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\nDEFAULT_ANSWER_PREFIX_TOKENS = [\"Final\", \"Answer\", \":\"]\n[docs]class FinalStreamingStdOutCallbackHandler(StreamingStdOutCallbackHandler):\n \"\"\"Callback handler for streaming in agents.\n Only works with agents using LLMs that support streaming.\n Only the final output of the agent will be streamed.\n \"\"\"\n[docs] def append_to_last_tokens(self, token: 
str) -> None:\n self.last_tokens.append(token)\n self.last_tokens_stripped.append(token.strip())\n if len(self.last_tokens) > len(self.answer_prefix_tokens):\n self.last_tokens.pop(0)\n self.last_tokens_stripped.pop(0)\n[docs] def check_if_answer_reached(self) -> bool:\n if self.strip_tokens:\n return self.last_tokens_stripped == self.answer_prefix_tokens_stripped\n else:\n return self.last_tokens == self.answer_prefix_tokens\n def __init__(\n self,\n *,\n answer_prefix_tokens: Optional[List[str]] = None,\n strip_tokens: bool = True,\n stream_prefix: bool = False\n ) -> None:\n \"\"\"Instantiate FinalStreamingStdOutCallbackHandler.\n Args:\n answer_prefix_tokens: Token sequence that prefixes the answer.\n Default is [\"Final\", \"Answer\", \":\"]\n strip_tokens: Ignore white spaces and new lines when comparing\n answer_prefix_tokens to last tokens? (to determine if answer has been\n reached)\n stream_prefix: Should answer prefix itself also be streamed?\n \"\"\"\n super().__init__()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streaming_stdout_final_only.html"} {"id": "b463c118d644-1", "text": "\"\"\"\n super().__init__()\n if answer_prefix_tokens is None:\n self.answer_prefix_tokens = DEFAULT_ANSWER_PREFIX_TOKENS\n else:\n self.answer_prefix_tokens = answer_prefix_tokens\n if strip_tokens:\n self.answer_prefix_tokens_stripped = [\n token.strip() for token in self.answer_prefix_tokens\n ]\n else:\n self.answer_prefix_tokens_stripped = self.answer_prefix_tokens\n self.last_tokens = [\"\"] * len(self.answer_prefix_tokens)\n self.last_tokens_stripped = [\"\"] * len(self.answer_prefix_tokens)\n self.strip_tokens = strip_tokens\n self.stream_prefix = stream_prefix\n self.answer_reached = False\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM starts running.\"\"\"\n self.answer_reached = False\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Run on new LLM token. Only available when streaming is enabled.\"\"\"\n # Remember the last n tokens, where n = len(answer_prefix_tokens)\n self.append_to_last_tokens(token)\n # Check if the last n tokens match the answer_prefix_tokens list ...\n if self.check_if_answer_reached():\n self.answer_reached = True\n if self.stream_prefix:\n for t in self.last_tokens:\n sys.stdout.write(t)\n sys.stdout.flush()\n return\n # ... 
if yes, then print tokens from now on\n if self.answer_reached:\n sys.stdout.write(token)\n sys.stdout.flush()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streaming_stdout_final_only.html"} {"id": "7b096880f7b1-0", "text": "Source code for langchain.callbacks.arize_callback\nfrom datetime import datetime\nfrom typing import Any, Dict, List, Optional, Union\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.callbacks.utils import import_pandas\nfrom langchain.schema import AgentAction, AgentFinish, LLMResult\n[docs]class ArizeCallbackHandler(BaseCallbackHandler):\n \"\"\"Callback Handler that logs to Arize.\"\"\"\n def __init__(\n self,\n model_id: Optional[str] = None,\n model_version: Optional[str] = None,\n SPACE_KEY: Optional[str] = None,\n API_KEY: Optional[str] = None,\n ) -> None:\n \"\"\"Initialize callback handler.\"\"\"\n super().__init__()\n self.model_id = model_id\n self.model_version = model_version\n self.space_key = SPACE_KEY\n self.api_key = API_KEY\n self.prompt_records: List[str] = []\n self.response_records: List[str] = []\n self.prediction_ids: List[str] = []\n self.pred_timestamps: List[int] = []\n self.response_embeddings: List[float] = []\n self.prompt_embeddings: List[float] = []\n self.prompt_tokens = 0\n self.completion_tokens = 0\n self.total_tokens = 0\n self.step = 0\n from arize.pandas.embeddings import EmbeddingGenerator, UseCases\n from arize.pandas.logger import Client\n self.generator = EmbeddingGenerator.from_use_case(\n use_case=UseCases.NLP.SEQUENCE_CLASSIFICATION,\n model_name=\"distilbert-base-uncased\",\n tokenizer_max_length=512,\n batch_size=256,\n )\n self.arize_client = Client(space_key=SPACE_KEY, api_key=API_KEY)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/arize_callback.html"} {"id": "7b096880f7b1-1", "text": "self.arize_client = Client(space_key=SPACE_KEY, api_key=API_KEY)\n if SPACE_KEY == \"SPACE_KEY\" or API_KEY == \"API_KEY\":\n raise ValueError(\"\u274c CHANGE SPACE AND API KEYS\")\n else:\n print(\"\u2705 Arize client setup done! 
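
Wiring this handler up requires a streaming-capable LLM, since the detection logic runs entirely inside `on_llm_new_token`. A minimal sketch (the model settings are illustrative; the LLM would then typically be handed to an agent, and only text after the "Final Answer:" prefix reaches stdout):

.. code-block:: python

    from langchain.callbacks.streaming_stdout_final_only import (
        FinalStreamingStdOutCallbackHandler,
    )
    from langchain.llms import OpenAI

    # streaming=True is required: without it the handler never receives tokens.
    llm = OpenAI(
        streaming=True,
        temperature=0,
        callbacks=[FinalStreamingStdOutCallbackHandler()],
    )
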
Now you can start using Arize!\")\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n for prompt in prompts:\n self.prompt_records.append(prompt.replace(\"\\n\", \"\"))\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n pd = import_pandas()\n from arize.utils.types import (\n EmbeddingColumnNames,\n Environments,\n ModelTypes,\n Schema,\n )\n # Safe check if 'llm_output' and 'token_usage' exist\n if response.llm_output and \"token_usage\" in response.llm_output:\n self.prompt_tokens = response.llm_output[\"token_usage\"].get(\n \"prompt_tokens\", 0\n )\n self.total_tokens = response.llm_output[\"token_usage\"].get(\n \"total_tokens\", 0\n )\n self.completion_tokens = response.llm_output[\"token_usage\"].get(\n \"completion_tokens\", 0\n )\n else:\n # Assign default values when token usage is missing\n self.prompt_tokens = self.completion_tokens = self.total_tokens = 0\n for generations in response.generations:\n for generation in generations:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/arize_callback.html"} {"id": "7b096880f7b1-2", "text": "for generations in response.generations:\n for generation in generations:\n prompt = self.prompt_records[self.step]\n self.step = self.step + 1\n prompt_embedding = pd.Series(\n self.generator.generate_embeddings(\n text_col=pd.Series(prompt.replace(\"\\n\", \" \"))\n ).reset_index(drop=True)\n )\n # Assigning text to response_text instead of response\n response_text = generation.text.replace(\"\\n\", \" \")\n response_embedding = pd.Series(\n self.generator.generate_embeddings(\n text_col=pd.Series(generation.text.replace(\"\\n\", \" \"))\n ).reset_index(drop=True)\n )\n pred_timestamp = datetime.now().timestamp()\n # Define the columns and data\n columns = [\n \"prediction_ts\",\n \"response\",\n \"prompt\",\n \"response_vector\",\n \"prompt_vector\",\n \"prompt_token\",\n \"completion_token\",\n \"total_token\",\n ]\n # Row values must line up positionally with `columns` above\n data = [\n [\n pred_timestamp,\n response_text,\n prompt,\n response_embedding[0],\n prompt_embedding[0],\n self.prompt_tokens,\n self.completion_tokens,\n self.total_tokens,\n ]\n ]\n # Create the DataFrame\n df = pd.DataFrame(data, columns=columns)\n # Declare prompt and response columns\n prompt_columns = EmbeddingColumnNames(\n vector_column_name=\"prompt_vector\", data_column_name=\"prompt\"\n )\n response_columns = EmbeddingColumnNames(\n vector_column_name=\"response_vector\", data_column_name=\"response\"\n )\n schema = Schema(\n timestamp_column_name=\"prediction_ts\",\n tag_column_names=[\n \"prompt_token\",\n \"completion_token\",\n \"total_token\",\n ],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/arize_callback.html"} {"id": "7b096880f7b1-3", "text": "\"completion_token\",\n \"total_token\",\n ],\n prompt_column_names=prompt_columns,\n response_column_names=response_columns,\n )\n response_from_arize = self.arize_client.log(\n dataframe=df,\n schema=schema,\n model_id=self.model_id,\n model_version=self.model_version,\n model_type=ModelTypes.GENERATIVE_LLM,\n environment=Environments.PRODUCTION,\n )\n if response_from_arize.status_code == 200:\n print(\"\u2705 Successfully logged data to Arize!\")\n else:\n print(f'\u274c Logging failed \"{response_from_arize.text}\"')\n[docs] def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do 
nothing.\"\"\"\n pass\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n pass\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_tool_start(\n self,\n serialized: Dict[str, Any],\n input_str: str,\n **kwargs: Any,\n ) -> None:\n pass\n[docs] def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_tool_end(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/arize_callback.html"} {"id": "7b096880f7b1-4", "text": "pass\n[docs] def on_tool_end(\n self,\n output: str,\n observation_prefix: Optional[str] = None,\n llm_prefix: Optional[str] = None,\n **kwargs: Any,\n ) -> None:\n pass\n[docs] def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n pass\n[docs] def on_text(self, text: str, **kwargs: Any) -> None:\n pass\n[docs] def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:\n pass", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/arize_callback.html"} {"id": "653e2422d738-0", "text": "Source code for langchain.callbacks.streaming_aiter_final_only\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional\nfrom langchain.callbacks.streaming_aiter import AsyncIteratorCallbackHandler\nfrom langchain.schema import LLMResult\nDEFAULT_ANSWER_PREFIX_TOKENS = [\"Final\", \"Answer\", \":\"]\n[docs]class AsyncFinalIteratorCallbackHandler(AsyncIteratorCallbackHandler):\n \"\"\"Callback handler that returns an async iterator.\n Only the final output of the agent will be iterated.\n \"\"\"\n[docs] def append_to_last_tokens(self, token: str) -> None:\n self.last_tokens.append(token)\n self.last_tokens_stripped.append(token.strip())\n if len(self.last_tokens) > len(self.answer_prefix_tokens):\n self.last_tokens.pop(0)\n self.last_tokens_stripped.pop(0)\n[docs] def check_if_answer_reached(self) -> bool:\n if self.strip_tokens:\n return self.last_tokens_stripped == self.answer_prefix_tokens_stripped\n else:\n return self.last_tokens == self.answer_prefix_tokens\n def __init__(\n self,\n *,\n answer_prefix_tokens: Optional[List[str]] = None,\n strip_tokens: bool = True,\n stream_prefix: bool = False,\n ) -> None:\n \"\"\"Instantiate AsyncFinalIteratorCallbackHandler.\n Args:\n answer_prefix_tokens: Token sequence that prefixes the answer.\n Default is [\"Final\", \"Answer\", \":\"]\n strip_tokens: Ignore white spaces and new lines when comparing\n answer_prefix_tokens to last tokens? 
(to determine if answer has been\n reached)\n stream_prefix: Should answer prefix itself also be streamed?\n \"\"\"\n super().__init__()\n if answer_prefix_tokens is None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streaming_aiter_final_only.html"} {"id": "653e2422d738-1", "text": "\"\"\"\n super().__init__()\n if answer_prefix_tokens is None:\n self.answer_prefix_tokens = DEFAULT_ANSWER_PREFIX_TOKENS\n else:\n self.answer_prefix_tokens = answer_prefix_tokens\n if strip_tokens:\n self.answer_prefix_tokens_stripped = [\n token.strip() for token in self.answer_prefix_tokens\n ]\n else:\n self.answer_prefix_tokens_stripped = self.answer_prefix_tokens\n self.last_tokens = [\"\"] * len(self.answer_prefix_tokens)\n self.last_tokens_stripped = [\"\"] * len(self.answer_prefix_tokens)\n self.strip_tokens = strip_tokens\n self.stream_prefix = stream_prefix\n self.answer_reached = False\n[docs] async def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n # If two calls are made in a row, this resets the state\n self.done.clear()\n self.answer_reached = False\n[docs] async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n if self.answer_reached:\n self.done.set()\n[docs] async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n # Remember the last n tokens, where n = len(answer_prefix_tokens)\n self.append_to_last_tokens(token)\n # Check if the last n tokens match the answer_prefix_tokens list ...\n if self.check_if_answer_reached():\n self.answer_reached = True\n if self.stream_prefix:\n for t in self.last_tokens:\n self.queue.put_nowait(t)\n return\n # If yes, then put tokens from now on\n if self.answer_reached:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streaming_aiter_final_only.html"} {"id": "653e2422d738-2", "text": "# If yes, then put tokens from now on\n if self.answer_reached:\n self.queue.put_nowait(token)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streaming_aiter_final_only.html"} {"id": "39fdd077fd58-0", "text": "Source code for langchain.callbacks.clearml_callback\nimport tempfile\nfrom copy import deepcopy\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Optional, Sequence, Union\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.callbacks.utils import (\n BaseMetadataCallbackHandler,\n flatten_dict,\n hash_string,\n import_pandas,\n import_spacy,\n import_textstat,\n load_json,\n)\nfrom langchain.schema import AgentAction, AgentFinish, LLMResult\n[docs]def import_clearml() -> Any:\n \"\"\"Import the clearml python package and raise an error if it is not installed.\"\"\"\n try:\n import clearml # noqa: F401\n except ImportError:\n raise ImportError(\n \"To use the clearml callback manager you need to have the `clearml` python \"\n \"package installed. 
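
The sync and async final-only handlers share the same sliding-window prefix detection: remember the last `len(answer_prefix_tokens)` tokens and compare them, optionally whitespace-stripped, against the prefix sequence. A standalone sketch of that logic (not part of the library API):

.. code-block:: python

    def make_prefix_detector(prefix, strip=True):
        """Return a callable that fires once the token window matches prefix."""
        target = [t.strip() for t in prefix] if strip else list(prefix)
        window = [""] * len(target)

        def saw_prefix(token):
            window.append(token.strip() if strip else token)
            window.pop(0)  # keep only the last len(target) tokens
            return window == target

        return saw_prefix

    detect = make_prefix_detector(["Final", "Answer", ":"])
    stream = ["Thought", "Final", "Answer", ":", " 42"]
    fired_at = [tok for tok in stream if detect(tok)]  # fires on ":"
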
Please install it with `pip install clearml`\"\n )\n return clearml\n[docs]class ClearMLCallbackHandler(BaseMetadataCallbackHandler, BaseCallbackHandler):\n \"\"\"Callback Handler that logs to ClearML.\n Parameters:\n task_type (str): The type of clearml task such as \"inference\", \"testing\" or \"qc\"\n project_name (str): The clearml project name\n tags (list): Tags to add to the task\n task_name (str): Name of the clearml task\n visualize (bool): Whether to visualize the run.\n complexity_metrics (bool): Whether to log complexity metrics\n stream_logs (bool): Whether to stream callback actions to ClearML\n This handler utilizes the associated callback method, formats\n the input of each callback function with metadata regarding the state of the LLM run,\n and adds the response to the list of records for both the {method}_records and", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/clearml_callback.html"} {"id": "39fdd077fd58-1", "text": "and adds the response to the list of records for both the {method}_records and\n action records. It then logs the response to the ClearML console.\n \"\"\"\n def __init__(\n self,\n task_type: Optional[str] = \"inference\",\n project_name: Optional[str] = \"langchain_callback_demo\",\n tags: Optional[Sequence] = None,\n task_name: Optional[str] = None,\n visualize: bool = False,\n complexity_metrics: bool = False,\n stream_logs: bool = False,\n ) -> None:\n \"\"\"Initialize callback handler.\"\"\"\n clearml = import_clearml()\n spacy = import_spacy()\n super().__init__()\n self.task_type = task_type\n self.project_name = project_name\n self.tags = tags\n self.task_name = task_name\n self.visualize = visualize\n self.complexity_metrics = complexity_metrics\n self.stream_logs = stream_logs\n self.temp_dir = tempfile.TemporaryDirectory()\n # Check if ClearML task already exists (e.g. in pipeline)\n if clearml.Task.current_task():\n self.task = clearml.Task.current_task()\n else:\n self.task = clearml.Task.init( # type: ignore\n task_type=self.task_type,\n project_name=self.project_name,\n tags=self.tags,\n task_name=self.task_name,\n output_uri=True,\n )\n self.logger = self.task.get_logger()\n warning = (\n \"The clearml callback is currently in beta and is subject to change \"\n \"based on updates to `langchain`. 
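
A hypothetical setup of the ClearML handler (project and task names are illustrative; beyond `clearml` itself the handler also needs `pandas`, `textstat`, `spacy`, and the `en_core_web_sm` model it loads at init):

.. code-block:: python

    from langchain.callbacks.clearml_callback import ClearMLCallbackHandler
    from langchain.llms import OpenAI

    clearml_handler = ClearMLCallbackHandler(
        task_type="inference",
        project_name="langchain_callback_demo",
        task_name="llm-run",
        visualize=False,
        complexity_metrics=False,
        stream_logs=True,
    )
    # Each callback event is formatted, recorded, and reported to ClearML.
    llm = OpenAI(temperature=0, callbacks=[clearml_handler])
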
Please report any issues to \"\n \"https://github.com/allegroai/clearml/issues with the tag `langchain`.\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/clearml_callback.html"} {"id": "39fdd077fd58-2", "text": ")\n self.logger.report_text(warning, level=30, print_console=True)\n self.callback_columns: list = []\n self.action_records: list = []\n self.complexity_metrics = complexity_metrics\n self.visualize = visualize\n self.nlp = spacy.load(\"en_core_web_sm\")\n def _init_resp(self) -> Dict:\n return {k: None for k in self.callback_columns}\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM starts.\"\"\"\n self.step += 1\n self.llm_starts += 1\n self.starts += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_llm_start\"})\n resp.update(flatten_dict(serialized))\n resp.update(self.get_custom_callback_meta())\n for prompt in prompts:\n prompt_resp = deepcopy(resp)\n prompt_resp[\"prompts\"] = prompt\n self.on_llm_start_records.append(prompt_resp)\n self.action_records.append(prompt_resp)\n if self.stream_logs:\n self.logger.report_text(prompt_resp)\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Run when LLM generates a new token.\"\"\"\n self.step += 1\n self.llm_streams += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_llm_new_token\", \"token\": token})\n resp.update(self.get_custom_callback_meta())\n self.on_llm_token_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.logger.report_text(resp)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/clearml_callback.html"} {"id": "39fdd077fd58-3", "text": "if self.stream_logs:\n self.logger.report_text(resp)\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Run when LLM ends running.\"\"\"\n self.step += 1\n self.llm_ends += 1\n self.ends += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_llm_end\"})\n resp.update(flatten_dict(response.llm_output or {}))\n resp.update(self.get_custom_callback_meta())\n for generations in response.generations:\n for generation in generations:\n generation_resp = deepcopy(resp)\n generation_resp.update(flatten_dict(generation.dict()))\n generation_resp.update(self.analyze_text(generation.text))\n self.on_llm_end_records.append(generation_resp)\n self.action_records.append(generation_resp)\n if self.stream_logs:\n self.logger.report_text(generation_resp)\n[docs] def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain starts running.\"\"\"\n self.step += 1\n self.chain_starts += 1\n self.starts += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_chain_start\"})\n resp.update(flatten_dict(serialized))\n resp.update(self.get_custom_callback_meta())\n chain_input = inputs[\"input\"]\n if isinstance(chain_input, str):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/clearml_callback.html"} {"id": "39fdd077fd58-4", "text": "chain_input = inputs[\"input\"]\n if isinstance(chain_input, str):\n input_resp = deepcopy(resp)\n input_resp[\"input\"] = chain_input\n self.on_chain_start_records.append(input_resp)\n 
self.action_records.append(input_resp)\n if self.stream_logs:\n self.logger.report_text(input_resp)\n elif isinstance(chain_input, list):\n for inp in chain_input:\n input_resp = deepcopy(resp)\n input_resp.update(inp)\n self.on_chain_start_records.append(input_resp)\n self.action_records.append(input_resp)\n if self.stream_logs:\n self.logger.report_text(input_resp)\n else:\n raise ValueError(\"Unexpected data format provided!\")\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Run when chain ends running.\"\"\"\n self.step += 1\n self.chain_ends += 1\n self.ends += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_chain_end\", \"outputs\": outputs[\"output\"]})\n resp.update(self.get_custom_callback_meta())\n self.on_chain_end_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.logger.report_text(resp)\n[docs] def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_tool_start(\n self, serialized: Dict[str, Any], input_str: str, **kwargs: Any\n ) -> None:\n \"\"\"Run when tool starts running.\"\"\"\n self.step += 1\n self.tool_starts += 1", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/clearml_callback.html"} {"id": "39fdd077fd58-5", "text": "self.step += 1\n self.tool_starts += 1\n self.starts += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_tool_start\", \"input_str\": input_str})\n resp.update(flatten_dict(serialized))\n resp.update(self.get_custom_callback_meta())\n self.on_tool_start_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.logger.report_text(resp)\n[docs] def on_tool_end(self, output: str, **kwargs: Any) -> None:\n \"\"\"Run when tool ends running.\"\"\"\n self.step += 1\n self.tool_ends += 1\n self.ends += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_tool_end\", \"output\": output})\n resp.update(self.get_custom_callback_meta())\n self.on_tool_end_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.logger.report_text(resp)\n[docs] def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when tool errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_text(self, text: str, **kwargs: Any) -> None:\n \"\"\"\n Run when agent is ending.\n \"\"\"\n self.step += 1\n self.text_ctr += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_text\", \"text\": text})\n resp.update(self.get_custom_callback_meta())\n self.on_text_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.logger.report_text(resp)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/clearml_callback.html"} {"id": "39fdd077fd58-6", "text": "if self.stream_logs:\n self.logger.report_text(resp)\n[docs] def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:\n \"\"\"Run when agent ends running.\"\"\"\n self.step += 1\n self.agent_ends += 1\n self.ends += 1\n resp = self._init_resp()\n resp.update(\n {\n \"action\": \"on_agent_finish\",\n \"output\": finish.return_values[\"output\"],\n \"log\": finish.log,\n }\n )\n resp.update(self.get_custom_callback_meta())\n self.on_agent_finish_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.logger.report_text(resp)\n[docs] def on_agent_action(self, 
action: AgentAction, **kwargs: Any) -> Any:\n \"\"\"Run on agent action.\"\"\"\n self.step += 1\n self.tool_starts += 1\n self.starts += 1\n resp = self._init_resp()\n resp.update(\n {\n \"action\": \"on_agent_action\",\n \"tool\": action.tool,\n \"tool_input\": action.tool_input,\n \"log\": action.log,\n }\n )\n resp.update(self.get_custom_callback_meta())\n self.on_agent_action_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.logger.report_text(resp)\n[docs] def analyze_text(self, text: str) -> dict:\n \"\"\"Analyze text using textstat and spacy.\n Parameters:\n text (str): The text to analyze.\n Returns:\n (dict): A dictionary containing the complexity metrics.\n \"\"\"\n resp = {}\n textstat = import_textstat()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/clearml_callback.html"} {"id": "39fdd077fd58-7", "text": "\"\"\"\n resp = {}\n textstat = import_textstat()\n spacy = import_spacy()\n if self.complexity_metrics:\n text_complexity_metrics = {\n \"flesch_reading_ease\": textstat.flesch_reading_ease(text),\n \"flesch_kincaid_grade\": textstat.flesch_kincaid_grade(text),\n \"smog_index\": textstat.smog_index(text),\n \"coleman_liau_index\": textstat.coleman_liau_index(text),\n \"automated_readability_index\": textstat.automated_readability_index(\n text\n ),\n \"dale_chall_readability_score\": textstat.dale_chall_readability_score(\n text\n ),\n \"difficult_words\": textstat.difficult_words(text),\n \"linsear_write_formula\": textstat.linsear_write_formula(text),\n \"gunning_fog\": textstat.gunning_fog(text),\n \"text_standard\": textstat.text_standard(text),\n \"fernandez_huerta\": textstat.fernandez_huerta(text),\n \"szigriszt_pazos\": textstat.szigriszt_pazos(text),\n \"gutierrez_polini\": textstat.gutierrez_polini(text),\n \"crawford\": textstat.crawford(text),\n \"gulpease_index\": textstat.gulpease_index(text),\n \"osman\": textstat.osman(text),\n }\n resp.update(text_complexity_metrics)\n if self.visualize and self.nlp and self.temp_dir.name is not None:\n doc = self.nlp(text)\n dep_out = spacy.displacy.render( # type: ignore", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/clearml_callback.html"} {"id": "39fdd077fd58-8", "text": "dep_out = spacy.displacy.render( # type: ignore\n doc, style=\"dep\", jupyter=False, page=True\n )\n dep_output_path = Path(\n self.temp_dir.name, hash_string(f\"dep-{text}\") + \".html\"\n )\n dep_output_path.open(\"w\", encoding=\"utf-8\").write(dep_out)\n ent_out = spacy.displacy.render( # type: ignore\n doc, style=\"ent\", jupyter=False, page=True\n )\n ent_output_path = Path(\n self.temp_dir.name, hash_string(f\"ent-{text}\") + \".html\"\n )\n ent_output_path.open(\"w\", encoding=\"utf-8\").write(ent_out)\n self.logger.report_media(\n \"Dependencies Plot\", text, local_path=dep_output_path\n )\n self.logger.report_media(\"Entities Plot\", text, local_path=ent_output_path)\n return resp\n def _create_session_analysis_df(self) -> Any:\n \"\"\"Create a dataframe with all the information from the session.\"\"\"\n pd = import_pandas()\n on_llm_start_records_df = pd.DataFrame(self.on_llm_start_records)\n on_llm_end_records_df = pd.DataFrame(self.on_llm_end_records)\n llm_input_prompts_df = (\n on_llm_start_records_df[[\"step\", \"prompts\", \"name\"]]\n .dropna(axis=1)\n .rename({\"step\": \"prompt_step\"}, axis=1)\n )\n complexity_metrics_columns = []\n visualizations_columns: List = []\n if self.complexity_metrics:\n 
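        # The columns assembled below mirror textstat's readability scorers
        # used by analyze_text above; each maps raw text to a single scalar per
        # generation. A quick illustration (assumes `pip install textstat`):
        #   >>> import textstat
        #   >>> textstat.flesch_reading_ease("The cat sat.")  # higher = easier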
complexity_metrics_columns = [\n \"flesch_reading_ease\",\n \"flesch_kincaid_grade\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/clearml_callback.html"} {"id": "39fdd077fd58-9", "text": "\"flesch_kincaid_grade\",\n \"smog_index\",\n \"coleman_liau_index\",\n \"automated_readability_index\",\n \"dale_chall_readability_score\",\n \"difficult_words\",\n \"linsear_write_formula\",\n \"gunning_fog\",\n \"text_standard\",\n \"fernandez_huerta\",\n \"szigriszt_pazos\",\n \"gutierrez_polini\",\n \"crawford\",\n \"gulpease_index\",\n \"osman\",\n ]\n llm_outputs_df = (\n on_llm_end_records_df[\n [\n \"step\",\n \"text\",\n \"token_usage_total_tokens\",\n \"token_usage_prompt_tokens\",\n \"token_usage_completion_tokens\",\n ]\n + complexity_metrics_columns\n + visualizations_columns\n ]\n .dropna(axis=1)\n .rename({\"step\": \"output_step\", \"text\": \"output\"}, axis=1)\n )\n session_analysis_df = pd.concat([llm_input_prompts_df, llm_outputs_df], axis=1)\n # session_analysis_df[\"chat_html\"] = session_analysis_df[\n # [\"prompts\", \"output\"]\n # ].apply(\n # lambda row: construct_html_from_prompt_and_generation(\n # row[\"prompts\"], row[\"output\"]\n # ),\n # axis=1,\n # )\n return session_analysis_df\n[docs] def flush_tracker(\n self,\n name: Optional[str] = None,\n langchain_asset: Any = None,\n finish: bool = False,\n ) -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/clearml_callback.html"} {"id": "39fdd077fd58-10", "text": "finish: bool = False,\n ) -> None:\n \"\"\"Flush the tracker and setup the session.\n Everything after this will be a new table.\n Args:\n name: Name of the preformed session so far so it is identifyable\n langchain_asset: The langchain asset to save.\n finish: Whether to finish the run.\n Returns:\n None\n \"\"\"\n pd = import_pandas()\n clearml = import_clearml()\n # Log the action records\n self.logger.report_table(\n \"Action Records\", name, table_plot=pd.DataFrame(self.action_records)\n )\n # Session analysis\n session_analysis_df = self._create_session_analysis_df()\n self.logger.report_table(\n \"Session Analysis\", name, table_plot=session_analysis_df\n )\n if self.stream_logs:\n self.logger.report_text(\n {\n \"action_records\": pd.DataFrame(self.action_records),\n \"session_analysis\": session_analysis_df,\n }\n )\n if langchain_asset:\n langchain_asset_path = Path(self.temp_dir.name, \"model.json\")\n try:\n langchain_asset.save(langchain_asset_path)\n # Create output model and connect it to the task\n output_model = clearml.OutputModel(\n task=self.task, config_text=load_json(langchain_asset_path)\n )\n output_model.update_weights(\n weights_filename=str(langchain_asset_path),\n auto_delete_file=False,\n target_filename=name,\n )\n except ValueError:\n langchain_asset.save_agent(langchain_asset_path)\n output_model = clearml.OutputModel(\n task=self.task, config_text=load_json(langchain_asset_path)\n )\n output_model.update_weights(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/clearml_callback.html"} {"id": "39fdd077fd58-11", "text": ")\n output_model.update_weights(\n weights_filename=str(langchain_asset_path),\n auto_delete_file=False,\n target_filename=name,\n )\n except NotImplementedError as e:\n print(\"Could not save model.\")\n print(repr(e))\n pass\n # Cleanup after adding everything to ClearML\n self.task.flush(wait_for_uploads=True)\n self.temp_dir.cleanup()\n self.temp_dir = tempfile.TemporaryDirectory()\n 
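        # flush_tracker may be called repeatedly: each call logs the
        # accumulated "Action Records" and "Session Analysis" tables, recycles
        # the temp directory, and (just below) resets the callback counters so
        # the next table starts fresh. Illustrative cadence, reusing the
        # hypothetical `handler` from above:
        #   >>> handler.flush_tracker(name="batch-1")             # task stays open
        #   >>> handler.flush_tracker(name="final", finish=True)  # close the task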
self.reset_callback_meta()\n if finish:\n self.task.close()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/clearml_callback.html"} {"id": "8613609d2e5a-0", "text": "Source code for langchain.callbacks.promptlayer_callback\n\"\"\"Callback handler for promptlayer.\"\"\"\nfrom __future__ import annotations\nimport datetime\nfrom typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Tuple\nfrom uuid import UUID\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.schema import (\n ChatGeneration,\n LLMResult,\n)\nfrom langchain.schema.messages import (\n AIMessage,\n BaseMessage,\n ChatMessage,\n HumanMessage,\n SystemMessage,\n)\nif TYPE_CHECKING:\n import promptlayer\ndef _lazy_import_promptlayer() -> promptlayer:\n \"\"\"Lazy import promptlayer to avoid circular imports.\"\"\"\n try:\n import promptlayer\n except ImportError:\n raise ImportError(\n \"The PromptLayerCallbackHandler requires the promptlayer package. \"\n \" Please install it with `pip install promptlayer`.\"\n )\n return promptlayer\n[docs]class PromptLayerCallbackHandler(BaseCallbackHandler):\n \"\"\"Callback handler for promptlayer.\"\"\"\n def __init__(\n self,\n pl_id_callback: Optional[Callable[..., Any]] = None,\n pl_tags: Optional[List[str]] = [],\n ) -> None:\n \"\"\"Initialize the PromptLayerCallbackHandler.\"\"\"\n _lazy_import_promptlayer()\n self.pl_id_callback = pl_id_callback\n self.pl_tags = pl_tags\n self.runs: Dict[UUID, Dict[str, Any]] = {}\n[docs] def on_chat_model_start(\n self,\n serialized: Dict[str, Any],\n messages: List[List[BaseMessage]],\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n tags: Optional[List[str]] = None,\n **kwargs: Any,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/promptlayer_callback.html"} {"id": "8613609d2e5a-1", "text": "tags: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> Any:\n self.runs[run_id] = {\n \"messages\": [self._create_message_dicts(m)[0] for m in messages],\n \"invocation_params\": kwargs.get(\"invocation_params\", {}),\n \"name\": \".\".join(serialized[\"id\"]),\n \"request_start_time\": datetime.datetime.now().timestamp(),\n \"tags\": tags,\n }\n[docs] def on_llm_start(\n self,\n serialized: Dict[str, Any],\n prompts: List[str],\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n tags: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> Any:\n self.runs[run_id] = {\n \"prompts\": prompts,\n \"invocation_params\": kwargs.get(\"invocation_params\", {}),\n \"name\": \".\".join(serialized[\"id\"]),\n \"request_start_time\": datetime.datetime.now().timestamp(),\n \"tags\": tags,\n }\n[docs] def on_llm_end(\n self,\n response: LLMResult,\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> None:\n from promptlayer.utils import get_api_key, promptlayer_api_request\n run_info = self.runs.get(run_id, {})\n if not run_info:\n return\n run_info[\"request_end_time\"] = datetime.datetime.now().timestamp()\n for i in range(len(response.generations)):\n generation = response.generations[i][0]\n resp = {", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/promptlayer_callback.html"} {"id": "8613609d2e5a-2", "text": "generation = response.generations[i][0]\n resp = {\n \"text\": generation.text,\n \"llm_output\": response.llm_output,\n }\n model_params = run_info.get(\"invocation_params\", {})\n is_chat_model = run_info.get(\"messages\", None) is not None\n model_input = (\n 
run_info.get(\"messages\", [])[i]\n if is_chat_model\n else [run_info.get(\"prompts\", [])[i]]\n )\n model_response = (\n [self._convert_message_to_dict(generation.message)]\n if is_chat_model and isinstance(generation, ChatGeneration)\n else resp\n )\n pl_request_id = promptlayer_api_request(\n run_info.get(\"name\"),\n \"langchain\",\n model_input,\n model_params,\n self.pl_tags,\n model_response,\n run_info.get(\"request_start_time\"),\n run_info.get(\"request_end_time\"),\n get_api_key(),\n return_pl_id=bool(self.pl_id_callback is not None),\n metadata={\n \"_langchain_run_id\": str(run_id),\n \"_langchain_parent_run_id\": str(parent_run_id),\n \"_langchain_tags\": str(run_info.get(\"tags\", [])),\n },\n )\n if self.pl_id_callback:\n self.pl_id_callback(pl_request_id)\n def _convert_message_to_dict(self, message: BaseMessage) -> Dict[str, Any]:\n if isinstance(message, HumanMessage):\n message_dict = {\"role\": \"user\", \"content\": message.content}\n elif isinstance(message, AIMessage):\n message_dict = {\"role\": \"assistant\", \"content\": message.content}\n elif isinstance(message, SystemMessage):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/promptlayer_callback.html"} {"id": "8613609d2e5a-3", "text": "elif isinstance(message, SystemMessage):\n message_dict = {\"role\": \"system\", \"content\": message.content}\n elif isinstance(message, ChatMessage):\n message_dict = {\"role\": message.role, \"content\": message.content}\n else:\n raise ValueError(f\"Got unknown type {message}\")\n if \"name\" in message.additional_kwargs:\n message_dict[\"name\"] = message.additional_kwargs[\"name\"]\n return message_dict\n def _create_message_dicts(\n self, messages: List[BaseMessage]\n ) -> Tuple[List[Dict[str, Any]], Dict[str, Any]]:\n params: Dict[str, Any] = {}\n message_dicts = [self._convert_message_to_dict(m) for m in messages]\n return message_dicts, params", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/promptlayer_callback.html"} {"id": "1d9a2450897f-0", "text": "Source code for langchain.callbacks.wandb_callback\nimport json\nimport tempfile\nfrom copy import deepcopy\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Optional, Sequence, Union\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.callbacks.utils import (\n BaseMetadataCallbackHandler,\n flatten_dict,\n hash_string,\n import_pandas,\n import_spacy,\n import_textstat,\n)\nfrom langchain.schema import AgentAction, AgentFinish, LLMResult\n[docs]def import_wandb() -> Any:\n \"\"\"Import the wandb python package and raise an error if it is not installed.\"\"\"\n try:\n import wandb # noqa: F401\n except ImportError:\n raise ImportError(\n \"To use the wandb callback manager you need to have the `wandb` python \"\n \"package installed. 
Please install it with `pip install wandb`\"\n )\n return wandb\n[docs]def load_json_to_dict(json_path: Union[str, Path]) -> dict:\n \"\"\"Load json file to a dictionary.\n Parameters:\n json_path (str): The path to the json file.\n Returns:\n (dict): The dictionary representation of the json file.\n \"\"\"\n with open(json_path, \"r\") as f:\n data = json.load(f)\n return data\n[docs]def analyze_text(\n text: str,\n complexity_metrics: bool = True,\n visualize: bool = True,\n nlp: Any = None,\n output_dir: Optional[Union[str, Path]] = None,\n) -> dict:\n \"\"\"Analyze text using textstat and spacy.\n Parameters:\n text (str): The text to analyze.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/wandb_callback.html"} {"id": "1d9a2450897f-1", "text": "Parameters:\n text (str): The text to analyze.\n complexity_metrics (bool): Whether to compute complexity metrics.\n visualize (bool): Whether to visualize the text.\n nlp (spacy.lang): The spacy language model to use for visualization.\n output_dir (str): The directory to save the visualization files to.\n Returns:\n (dict): A dictionary containing the complexity metrics and visualization\n files serialized in a wandb.Html element.\n \"\"\"\n resp = {}\n textstat = import_textstat()\n wandb = import_wandb()\n spacy = import_spacy()\n if complexity_metrics:\n text_complexity_metrics = {\n \"flesch_reading_ease\": textstat.flesch_reading_ease(text),\n \"flesch_kincaid_grade\": textstat.flesch_kincaid_grade(text),\n \"smog_index\": textstat.smog_index(text),\n \"coleman_liau_index\": textstat.coleman_liau_index(text),\n \"automated_readability_index\": textstat.automated_readability_index(text),\n \"dale_chall_readability_score\": textstat.dale_chall_readability_score(text),\n \"difficult_words\": textstat.difficult_words(text),\n \"linsear_write_formula\": textstat.linsear_write_formula(text),\n \"gunning_fog\": textstat.gunning_fog(text),\n \"text_standard\": textstat.text_standard(text),\n \"fernandez_huerta\": textstat.fernandez_huerta(text),\n \"szigriszt_pazos\": textstat.szigriszt_pazos(text),\n \"gutierrez_polini\": textstat.gutierrez_polini(text),", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/wandb_callback.html"} {"id": "1d9a2450897f-2", "text": "\"gutierrez_polini\": textstat.gutierrez_polini(text),\n \"crawford\": textstat.crawford(text),\n \"gulpease_index\": textstat.gulpease_index(text),\n \"osman\": textstat.osman(text),\n }\n resp.update(text_complexity_metrics)\n if visualize and nlp and output_dir is not None:\n doc = nlp(text)\n dep_out = spacy.displacy.render( # type: ignore\n doc, style=\"dep\", jupyter=False, page=True\n )\n dep_output_path = Path(output_dir, hash_string(f\"dep-{text}\") + \".html\")\n dep_output_path.open(\"w\", encoding=\"utf-8\").write(dep_out)\n ent_out = spacy.displacy.render( # type: ignore\n doc, style=\"ent\", jupyter=False, page=True\n )\n ent_output_path = Path(output_dir, hash_string(f\"ent-{text}\") + \".html\")\n ent_output_path.open(\"w\", encoding=\"utf-8\").write(ent_out)\n text_visualizations = {\n \"dependency_tree\": wandb.Html(str(dep_output_path)),\n \"entities\": wandb.Html(str(ent_output_path)),\n }\n resp.update(text_visualizations)\n return resp\n[docs]def construct_html_from_prompt_and_generation(prompt: str, generation: str) -> Any:\n \"\"\"Construct an html element from a prompt and a generation.\n Parameters:\n prompt (str): The prompt.\n generation (str): The generation.\n Returns:\n (wandb.Html): The 
html element."""
    wandb = import_wandb()
    formatted_prompt = prompt.replace("\n", "<br>")
    formatted_generation = generation.replace("\n", "<br>")
    return wandb.Html(
        f"""
    <p style="color:black;">{formatted_prompt}:</p>
    <p style="color:green;">
    {formatted_generation}
    </p>
\n \"\"\",\n inject=False,\n )\n[docs]class WandbCallbackHandler(BaseMetadataCallbackHandler, BaseCallbackHandler):\n \"\"\"Callback Handler that logs to Weights and Biases.\n Parameters:\n job_type (str): The type of job.\n project (str): The project to log to.\n entity (str): The entity to log to.\n tags (list): The tags to log.\n group (str): The group to log to.\n name (str): The name of the run.\n notes (str): The notes to log.\n visualize (bool): Whether to visualize the run.\n complexity_metrics (bool): Whether to log complexity metrics.\n stream_logs (bool): Whether to stream callback actions to W&B\n This handler will utilize the associated callback method called and formats\n the input of each callback function with metadata regarding the state of LLM run,\n and adds the response to the list of records for both the {method}_records and\n action. It then logs the response using the run.log() method to Weights and Biases.\n \"\"\"\n def __init__(\n self,\n job_type: Optional[str] = None,\n project: Optional[str] = \"langchain_callback_demo\",\n entity: Optional[str] = None,\n tags: Optional[Sequence] = None,\n group: Optional[str] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/wandb_callback.html"} {"id": "1d9a2450897f-4", "text": "group: Optional[str] = None,\n name: Optional[str] = None,\n notes: Optional[str] = None,\n visualize: bool = False,\n complexity_metrics: bool = False,\n stream_logs: bool = False,\n ) -> None:\n \"\"\"Initialize callback handler.\"\"\"\n wandb = import_wandb()\n import_pandas()\n import_textstat()\n spacy = import_spacy()\n super().__init__()\n self.job_type = job_type\n self.project = project\n self.entity = entity\n self.tags = tags\n self.group = group\n self.name = name\n self.notes = notes\n self.visualize = visualize\n self.complexity_metrics = complexity_metrics\n self.stream_logs = stream_logs\n self.temp_dir = tempfile.TemporaryDirectory()\n self.run: wandb.sdk.wandb_run.Run = wandb.init( # type: ignore\n job_type=self.job_type,\n project=self.project,\n entity=self.entity,\n tags=self.tags,\n group=self.group,\n name=self.name,\n notes=self.notes,\n )\n warning = (\n \"DEPRECATION: The `WandbCallbackHandler` will soon be deprecated in favor \"\n \"of the `WandbTracer`. 
Please update your code to use the `WandbTracer` \"\n \"instead.\"\n )\n wandb.termwarn(\n warning,\n repeat=False,\n )\n self.callback_columns: list = []\n self.action_records: list = []\n self.complexity_metrics = complexity_metrics\n self.visualize = visualize\n self.nlp = spacy.load(\"en_core_web_sm\")\n def _init_resp(self) -> Dict:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/wandb_callback.html"} {"id": "1d9a2450897f-5", "text": "def _init_resp(self) -> Dict:\n return {k: None for k in self.callback_columns}\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM starts.\"\"\"\n self.step += 1\n self.llm_starts += 1\n self.starts += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_llm_start\"})\n resp.update(flatten_dict(serialized))\n resp.update(self.get_custom_callback_meta())\n for prompt in prompts:\n prompt_resp = deepcopy(resp)\n prompt_resp[\"prompts\"] = prompt\n self.on_llm_start_records.append(prompt_resp)\n self.action_records.append(prompt_resp)\n if self.stream_logs:\n self.run.log(prompt_resp)\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Run when LLM generates a new token.\"\"\"\n self.step += 1\n self.llm_streams += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_llm_new_token\", \"token\": token})\n resp.update(self.get_custom_callback_meta())\n self.on_llm_token_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.run.log(resp)\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Run when LLM ends running.\"\"\"\n self.step += 1\n self.llm_ends += 1\n self.ends += 1\n resp = self._init_resp()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/wandb_callback.html"} {"id": "1d9a2450897f-6", "text": "self.ends += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_llm_end\"})\n resp.update(flatten_dict(response.llm_output or {}))\n resp.update(self.get_custom_callback_meta())\n for generations in response.generations:\n for generation in generations:\n generation_resp = deepcopy(resp)\n generation_resp.update(flatten_dict(generation.dict()))\n generation_resp.update(\n analyze_text(\n generation.text,\n complexity_metrics=self.complexity_metrics,\n visualize=self.visualize,\n nlp=self.nlp,\n output_dir=self.temp_dir.name,\n )\n )\n self.on_llm_end_records.append(generation_resp)\n self.action_records.append(generation_resp)\n if self.stream_logs:\n self.run.log(generation_resp)\n[docs] def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain starts running.\"\"\"\n self.step += 1\n self.chain_starts += 1\n self.starts += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_chain_start\"})\n resp.update(flatten_dict(serialized))\n resp.update(self.get_custom_callback_meta())\n chain_input = inputs[\"input\"]\n if isinstance(chain_input, str):\n input_resp = deepcopy(resp)\n input_resp[\"input\"] = chain_input\n self.on_chain_start_records.append(input_resp)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/wandb_callback.html"} {"id": "1d9a2450897f-7", "text": "self.on_chain_start_records.append(input_resp)\n 
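        # Same bookkeeping pattern as the ClearML handler above, except that
        # streamed payloads go through wandb's run.log(). A minimal sketch with
        # illustrative names (assumes `wandb login` has been run):
        #   >>> from langchain.callbacks import WandbCallbackHandler
        #   >>> handler = WandbCallbackHandler(project="demo", stream_logs=True)
        #   >>> # each callback payload then appears as a logged row in the run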
self.action_records.append(input_resp)\n if self.stream_logs:\n self.run.log(input_resp)\n elif isinstance(chain_input, list):\n for inp in chain_input:\n input_resp = deepcopy(resp)\n input_resp.update(inp)\n self.on_chain_start_records.append(input_resp)\n self.action_records.append(input_resp)\n if self.stream_logs:\n self.run.log(input_resp)\n else:\n raise ValueError(\"Unexpected data format provided!\")\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Run when chain ends running.\"\"\"\n self.step += 1\n self.chain_ends += 1\n self.ends += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_chain_end\", \"outputs\": outputs[\"output\"]})\n resp.update(self.get_custom_callback_meta())\n self.on_chain_end_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.run.log(resp)\n[docs] def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_tool_start(\n self, serialized: Dict[str, Any], input_str: str, **kwargs: Any\n ) -> None:\n \"\"\"Run when tool starts running.\"\"\"\n self.step += 1\n self.tool_starts += 1\n self.starts += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_tool_start\", \"input_str\": input_str})", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/wandb_callback.html"} {"id": "1d9a2450897f-8", "text": "resp.update({\"action\": \"on_tool_start\", \"input_str\": input_str})\n resp.update(flatten_dict(serialized))\n resp.update(self.get_custom_callback_meta())\n self.on_tool_start_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.run.log(resp)\n[docs] def on_tool_end(self, output: str, **kwargs: Any) -> None:\n \"\"\"Run when tool ends running.\"\"\"\n self.step += 1\n self.tool_ends += 1\n self.ends += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_tool_end\", \"output\": output})\n resp.update(self.get_custom_callback_meta())\n self.on_tool_end_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.run.log(resp)\n[docs] def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when tool errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_text(self, text: str, **kwargs: Any) -> None:\n \"\"\"\n Run when agent is ending.\n \"\"\"\n self.step += 1\n self.text_ctr += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_text\", \"text\": text})\n resp.update(self.get_custom_callback_meta())\n self.on_text_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.run.log(resp)\n[docs] def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:\n \"\"\"Run when agent ends running.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/wandb_callback.html"} {"id": "1d9a2450897f-9", "text": "\"\"\"Run when agent ends running.\"\"\"\n self.step += 1\n self.agent_ends += 1\n self.ends += 1\n resp = self._init_resp()\n resp.update(\n {\n \"action\": \"on_agent_finish\",\n \"output\": finish.return_values[\"output\"],\n \"log\": finish.log,\n }\n )\n resp.update(self.get_custom_callback_meta())\n self.on_agent_finish_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.run.log(resp)\n[docs] def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:\n \"\"\"Run on 
agent action.\"\"\"\n self.step += 1\n self.tool_starts += 1\n self.starts += 1\n resp = self._init_resp()\n resp.update(\n {\n \"action\": \"on_agent_action\",\n \"tool\": action.tool,\n \"tool_input\": action.tool_input,\n \"log\": action.log,\n }\n )\n resp.update(self.get_custom_callback_meta())\n self.on_agent_action_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.run.log(resp)\n def _create_session_analysis_df(self) -> Any:\n \"\"\"Create a dataframe with all the information from the session.\"\"\"\n pd = import_pandas()\n on_llm_start_records_df = pd.DataFrame(self.on_llm_start_records)\n on_llm_end_records_df = pd.DataFrame(self.on_llm_end_records)\n llm_input_prompts_df = (\n on_llm_start_records_df[[\"step\", \"prompts\", \"name\"]]\n .dropna(axis=1)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/wandb_callback.html"} {"id": "1d9a2450897f-10", "text": ".dropna(axis=1)\n .rename({\"step\": \"prompt_step\"}, axis=1)\n )\n complexity_metrics_columns = []\n visualizations_columns = []\n if self.complexity_metrics:\n complexity_metrics_columns = [\n \"flesch_reading_ease\",\n \"flesch_kincaid_grade\",\n \"smog_index\",\n \"coleman_liau_index\",\n \"automated_readability_index\",\n \"dale_chall_readability_score\",\n \"difficult_words\",\n \"linsear_write_formula\",\n \"gunning_fog\",\n \"text_standard\",\n \"fernandez_huerta\",\n \"szigriszt_pazos\",\n \"gutierrez_polini\",\n \"crawford\",\n \"gulpease_index\",\n \"osman\",\n ]\n if self.visualize:\n visualizations_columns = [\"dependency_tree\", \"entities\"]\n llm_outputs_df = (\n on_llm_end_records_df[\n [\n \"step\",\n \"text\",\n \"token_usage_total_tokens\",\n \"token_usage_prompt_tokens\",\n \"token_usage_completion_tokens\",\n ]\n + complexity_metrics_columns\n + visualizations_columns\n ]\n .dropna(axis=1)\n .rename({\"step\": \"output_step\", \"text\": \"output\"}, axis=1)\n )\n session_analysis_df = pd.concat([llm_input_prompts_df, llm_outputs_df], axis=1)\n session_analysis_df[\"chat_html\"] = session_analysis_df[\n [\"prompts\", \"output\"]\n ].apply(\n lambda row: construct_html_from_prompt_and_generation(\n row[\"prompts\"], row[\"output\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/wandb_callback.html"} {"id": "1d9a2450897f-11", "text": "row[\"prompts\"], row[\"output\"]\n ),\n axis=1,\n )\n return session_analysis_df\n[docs] def flush_tracker(\n self,\n langchain_asset: Any = None,\n reset: bool = True,\n finish: bool = False,\n job_type: Optional[str] = None,\n project: Optional[str] = None,\n entity: Optional[str] = None,\n tags: Optional[Sequence] = None,\n group: Optional[str] = None,\n name: Optional[str] = None,\n notes: Optional[str] = None,\n visualize: Optional[bool] = None,\n complexity_metrics: Optional[bool] = None,\n ) -> None:\n \"\"\"Flush the tracker and reset the session.\n Args:\n langchain_asset: The langchain asset to save.\n reset: Whether to reset the session.\n finish: Whether to finish the run.\n job_type: The job type.\n project: The project.\n entity: The entity.\n tags: The tags.\n group: The group.\n name: The name.\n notes: The notes.\n visualize: Whether to visualize.\n complexity_metrics: Whether to compute complexity metrics.\n Returns:\n None\n \"\"\"\n pd = import_pandas()\n wandb = import_wandb()\n action_records_table = wandb.Table(dataframe=pd.DataFrame(self.action_records))\n session_analysis_table = wandb.Table(\n dataframe=self._create_session_analysis_df()\n )\n 
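        # Both dataframes are wrapped in wandb.Table so they render as
        # interactive tables in the W&B UI; passing `langchain_asset` below
        # additionally serializes the chain/agent to model.json and versions it
        # as a wandb.Artifact. Sketch (hypothetical `chain`):
        #   >>> handler.flush_tracker(langchain_asset=chain, reset=True)   # new run
        #   >>> handler.flush_tracker(langchain_asset=chain, finish=True)  # end run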
self.run.log(\n {\n \"action_records\": action_records_table,\n \"session_analysis\": session_analysis_table,\n }\n )\n if langchain_asset:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/wandb_callback.html"} {"id": "1d9a2450897f-12", "text": "}\n )\n if langchain_asset:\n langchain_asset_path = Path(self.temp_dir.name, \"model.json\")\n model_artifact = wandb.Artifact(name=\"model\", type=\"model\")\n model_artifact.add(action_records_table, name=\"action_records\")\n model_artifact.add(session_analysis_table, name=\"session_analysis\")\n try:\n langchain_asset.save(langchain_asset_path)\n model_artifact.add_file(str(langchain_asset_path))\n model_artifact.metadata = load_json_to_dict(langchain_asset_path)\n except ValueError:\n langchain_asset.save_agent(langchain_asset_path)\n model_artifact.add_file(str(langchain_asset_path))\n model_artifact.metadata = load_json_to_dict(langchain_asset_path)\n except NotImplementedError as e:\n print(\"Could not save model.\")\n print(repr(e))\n pass\n self.run.log_artifact(model_artifact)\n if finish or reset:\n self.run.finish()\n self.temp_dir.cleanup()\n self.reset_callback_meta()\n if reset:\n self.__init__( # type: ignore\n job_type=job_type if job_type else self.job_type,\n project=project if project else self.project,\n entity=entity if entity else self.entity,\n tags=tags if tags else self.tags,\n group=group if group else self.group,\n name=name if name else self.name,\n notes=notes if notes else self.notes,\n visualize=visualize if visualize else self.visualize,\n complexity_metrics=complexity_metrics\n if complexity_metrics\n else self.complexity_metrics,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/wandb_callback.html"} {"id": "4fbac879b224-0", "text": "Source code for langchain.callbacks.mlflow_callback\nimport random\nimport string\nimport tempfile\nimport traceback\nfrom copy import deepcopy\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Optional, Union\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.callbacks.utils import (\n BaseMetadataCallbackHandler,\n flatten_dict,\n hash_string,\n import_pandas,\n import_spacy,\n import_textstat,\n)\nfrom langchain.schema import AgentAction, AgentFinish, LLMResult\nfrom langchain.utils import get_from_dict_or_env\n[docs]def import_mlflow() -> Any:\n \"\"\"Import the mlflow python package and raise an error if it is not installed.\"\"\"\n try:\n import mlflow\n except ImportError:\n raise ImportError(\n \"To use the mlflow callback manager you need to have the `mlflow` python \"\n \"package installed. 
Please install it with `pip install mlflow>=2.3.0`\"\n )\n return mlflow\n[docs]def analyze_text(\n text: str,\n nlp: Any = None,\n) -> dict:\n \"\"\"Analyze text using textstat and spacy.\n Parameters:\n text (str): The text to analyze.\n nlp (spacy.lang): The spacy language model to use for visualization.\n Returns:\n (dict): A dictionary containing the complexity metrics and visualization\n files serialized to HTML string.\n \"\"\"\n resp: Dict[str, Any] = {}\n textstat = import_textstat()\n spacy = import_spacy()\n text_complexity_metrics = {\n \"flesch_reading_ease\": textstat.flesch_reading_ease(text),", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} {"id": "4fbac879b224-1", "text": "\"flesch_reading_ease\": textstat.flesch_reading_ease(text),\n \"flesch_kincaid_grade\": textstat.flesch_kincaid_grade(text),\n \"smog_index\": textstat.smog_index(text),\n \"coleman_liau_index\": textstat.coleman_liau_index(text),\n \"automated_readability_index\": textstat.automated_readability_index(text),\n \"dale_chall_readability_score\": textstat.dale_chall_readability_score(text),\n \"difficult_words\": textstat.difficult_words(text),\n \"linsear_write_formula\": textstat.linsear_write_formula(text),\n \"gunning_fog\": textstat.gunning_fog(text),\n # \"text_standard\": textstat.text_standard(text),\n \"fernandez_huerta\": textstat.fernandez_huerta(text),\n \"szigriszt_pazos\": textstat.szigriszt_pazos(text),\n \"gutierrez_polini\": textstat.gutierrez_polini(text),\n \"crawford\": textstat.crawford(text),\n \"gulpease_index\": textstat.gulpease_index(text),\n \"osman\": textstat.osman(text),\n }\n resp.update({\"text_complexity_metrics\": text_complexity_metrics})\n resp.update(text_complexity_metrics)\n if nlp is not None:\n doc = nlp(text)\n dep_out = spacy.displacy.render( # type: ignore\n doc, style=\"dep\", jupyter=False, page=True\n )\n ent_out = spacy.displacy.render( # type: ignore\n doc, style=\"ent\", jupyter=False, page=True\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} {"id": "4fbac879b224-2", "text": "doc, style=\"ent\", jupyter=False, page=True\n )\n text_visualizations = {\n \"dependency_tree\": dep_out,\n \"entities\": ent_out,\n }\n resp.update(text_visualizations)\n return resp\n[docs]def construct_html_from_prompt_and_generation(prompt: str, generation: str) -> Any:\n \"\"\"Construct an html element from a prompt and a generation.\n Parameters:\n prompt (str): The prompt.\n generation (str): The generation.\n Returns:\n (str): The html string.\"\"\"\n formatted_prompt = prompt.replace(\"\\n\", \"
<br>")
    formatted_generation = generation.replace("\n", "<br>")
    return f"""
    <p style="color:black;">{formatted_prompt}:</p>
    <p style="color:green;">
    {formatted_generation}
    </p>
\n \"\"\"\nclass MlflowLogger:\n \"\"\"Callback Handler that logs metrics and artifacts to mlflow server.\n Parameters:\n name (str): Name of the run.\n experiment (str): Name of the experiment.\n tags (dict): Tags to be attached for the run.\n tracking_uri (str): MLflow tracking server uri.\n This handler implements the helper functions to initialize,\n log metrics and artifacts to the mlflow server.\n \"\"\"\n def __init__(self, **kwargs: Any):\n self.mlflow = import_mlflow()\n tracking_uri = get_from_dict_or_env(\n kwargs, \"tracking_uri\", \"MLFLOW_TRACKING_URI\", \"\"\n )\n self.mlflow.set_tracking_uri(tracking_uri)\n # User can set other env variables described here", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} {"id": "4fbac879b224-3", "text": "# User can set other env variables described here\n # > https://www.mlflow.org/docs/latest/tracking.html#logging-to-a-tracking-server\n experiment_name = get_from_dict_or_env(\n kwargs, \"experiment_name\", \"MLFLOW_EXPERIMENT_NAME\"\n )\n self.mlf_exp = self.mlflow.get_experiment_by_name(experiment_name)\n if self.mlf_exp is not None:\n self.mlf_expid = self.mlf_exp.experiment_id\n else:\n self.mlf_expid = self.mlflow.create_experiment(experiment_name)\n self.start_run(kwargs[\"run_name\"], kwargs[\"run_tags\"])\n def start_run(self, name: str, tags: Dict[str, str]) -> None:\n \"\"\"To start a new run, auto generates the random suffix for name\"\"\"\n if name.endswith(\"-%\"):\n rname = \"\".join(random.choices(string.ascii_uppercase + string.digits, k=7))\n name = name.replace(\"%\", rname)\n self.run = self.mlflow.MlflowClient().create_run(\n self.mlf_expid, run_name=name, tags=tags\n )\n def finish_run(self) -> None:\n \"\"\"To finish the run.\"\"\"\n with self.mlflow.start_run(\n run_id=self.run.info.run_id, experiment_id=self.mlf_expid\n ):\n self.mlflow.end_run()\n def metric(self, key: str, value: float) -> None:\n \"\"\"To log metric to mlflow server.\"\"\"\n with self.mlflow.start_run(\n run_id=self.run.info.run_id, experiment_id=self.mlf_expid\n ):\n self.mlflow.log_metric(key, value)\n def metrics(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} {"id": "4fbac879b224-4", "text": "):\n self.mlflow.log_metric(key, value)\n def metrics(\n self, data: Union[Dict[str, float], Dict[str, int]], step: Optional[int] = 0\n ) -> None:\n \"\"\"To log all metrics in the input dict.\"\"\"\n with self.mlflow.start_run(\n run_id=self.run.info.run_id, experiment_id=self.mlf_expid\n ):\n self.mlflow.log_metrics(data)\n def jsonf(self, data: Dict[str, Any], filename: str) -> None:\n \"\"\"To log the input data as json file artifact.\"\"\"\n with self.mlflow.start_run(\n run_id=self.run.info.run_id, experiment_id=self.mlf_expid\n ):\n self.mlflow.log_dict(data, f\"{filename}.json\")\n def table(self, name: str, dataframe) -> None: # type: ignore\n \"\"\"To log the input pandas dataframe as a html table\"\"\"\n self.html(dataframe.to_html(), f\"table_{name}\")\n def html(self, html: str, filename: str) -> None:\n \"\"\"To log the input html string as html file artifact.\"\"\"\n with self.mlflow.start_run(\n run_id=self.run.info.run_id, experiment_id=self.mlf_expid\n ):\n self.mlflow.log_text(html, f\"{filename}.html\")\n def text(self, text: str, filename: str) -> None:\n \"\"\"To log the input text as text file artifact.\"\"\"\n with self.mlflow.start_run(\n run_id=self.run.info.run_id, experiment_id=self.mlf_expid\n ):\n 
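            # Note the pattern shared by every MlflowLogger helper: each call
            # re-enters the existing run via mlflow.start_run(run_id=...)
            # instead of holding a run open between callbacks, so all metrics
            # and artifacts attach to the same run id. For example,
            # text("hello", "greeting") writes the artifact greeting.txt.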
self.mlflow.log_text(text, f\"{filename}.txt\")\n def artifact(self, path: str) -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} {"id": "4fbac879b224-5", "text": "def artifact(self, path: str) -> None:\n \"\"\"To upload the file from given path as artifact.\"\"\"\n with self.mlflow.start_run(\n run_id=self.run.info.run_id, experiment_id=self.mlf_expid\n ):\n self.mlflow.log_artifact(path)\n def langchain_artifact(self, chain: Any) -> None:\n with self.mlflow.start_run(\n run_id=self.run.info.run_id, experiment_id=self.mlf_expid\n ):\n self.mlflow.langchain.log_model(chain, \"langchain-model\")\n[docs]class MlflowCallbackHandler(BaseMetadataCallbackHandler, BaseCallbackHandler):\n \"\"\"Callback Handler that logs metrics and artifacts to mlflow server.\n Parameters:\n name (str): Name of the run.\n experiment (str): Name of the experiment.\n tags (dict): Tags to be attached for the run.\n tracking_uri (str): MLflow tracking server uri.\n This handler will utilize the associated callback method called and formats\n the input of each callback function with metadata regarding the state of LLM run,\n and adds the response to the list of records for both the {method}_records and\n action. It then logs the response to mlflow server.\n \"\"\"\n def __init__(\n self,\n name: Optional[str] = \"langchainrun-%\",\n experiment: Optional[str] = \"langchain\",\n tags: Optional[Dict] = {},\n tracking_uri: Optional[str] = None,\n ) -> None:\n \"\"\"Initialize callback handler.\"\"\"\n import_pandas()\n import_textstat()\n import_mlflow()\n spacy = import_spacy()\n super().__init__()\n self.name = name\n self.experiment = experiment", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} {"id": "4fbac879b224-6", "text": "super().__init__()\n self.name = name\n self.experiment = experiment\n self.tags = tags\n self.tracking_uri = tracking_uri\n self.temp_dir = tempfile.TemporaryDirectory()\n self.mlflg = MlflowLogger(\n tracking_uri=self.tracking_uri,\n experiment_name=self.experiment,\n run_name=self.name,\n run_tags=self.tags,\n )\n self.action_records: list = []\n self.nlp = spacy.load(\"en_core_web_sm\")\n self.metrics = {\n \"step\": 0,\n \"starts\": 0,\n \"ends\": 0,\n \"errors\": 0,\n \"text_ctr\": 0,\n \"chain_starts\": 0,\n \"chain_ends\": 0,\n \"llm_starts\": 0,\n \"llm_ends\": 0,\n \"llm_streams\": 0,\n \"tool_starts\": 0,\n \"tool_ends\": 0,\n \"agent_ends\": 0,\n }\n self.records: Dict[str, Any] = {\n \"on_llm_start_records\": [],\n \"on_llm_token_records\": [],\n \"on_llm_end_records\": [],\n \"on_chain_start_records\": [],\n \"on_chain_end_records\": [],\n \"on_tool_start_records\": [],\n \"on_tool_end_records\": [],\n \"on_text_records\": [],\n \"on_agent_finish_records\": [],\n \"on_agent_action_records\": [],\n \"action_records\": [],\n }\n def _reset(self) -> None:\n for k, v in self.metrics.items():\n self.metrics[k] = 0\n for k, v in self.records.items():", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} {"id": "4fbac879b224-7", "text": "self.metrics[k] = 0\n for k, v in self.records.items():\n self.records[k] = []\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM starts.\"\"\"\n self.metrics[\"step\"] += 1\n self.metrics[\"llm_starts\"] += 1\n self.metrics[\"starts\"] += 1\n llm_starts = self.metrics[\"llm_starts\"]\n resp: Dict[str, 
Any] = {}\n resp.update({\"action\": \"on_llm_start\"})\n resp.update(flatten_dict(serialized))\n resp.update(self.metrics)\n self.mlflg.metrics(self.metrics, step=self.metrics[\"step\"])\n for idx, prompt in enumerate(prompts):\n prompt_resp = deepcopy(resp)\n prompt_resp[\"prompt\"] = prompt\n self.records[\"on_llm_start_records\"].append(prompt_resp)\n self.records[\"action_records\"].append(prompt_resp)\n self.mlflg.jsonf(prompt_resp, f\"llm_start_{llm_starts}_prompt_{idx}\")\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Run when LLM generates a new token.\"\"\"\n self.metrics[\"step\"] += 1\n self.metrics[\"llm_streams\"] += 1\n llm_streams = self.metrics[\"llm_streams\"]\n resp: Dict[str, Any] = {}\n resp.update({\"action\": \"on_llm_new_token\", \"token\": token})\n resp.update(self.metrics)\n self.mlflg.metrics(self.metrics, step=self.metrics[\"step\"])\n self.records[\"on_llm_token_records\"].append(resp)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} {"id": "4fbac879b224-8", "text": "self.records[\"on_llm_token_records\"].append(resp)\n self.records[\"action_records\"].append(resp)\n self.mlflg.jsonf(resp, f\"llm_new_tokens_{llm_streams}\")\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Run when LLM ends running.\"\"\"\n self.metrics[\"step\"] += 1\n self.metrics[\"llm_ends\"] += 1\n self.metrics[\"ends\"] += 1\n llm_ends = self.metrics[\"llm_ends\"]\n resp: Dict[str, Any] = {}\n resp.update({\"action\": \"on_llm_end\"})\n resp.update(flatten_dict(response.llm_output or {}))\n resp.update(self.metrics)\n self.mlflg.metrics(self.metrics, step=self.metrics[\"step\"])\n for generations in response.generations:\n for idx, generation in enumerate(generations):\n generation_resp = deepcopy(resp)\n generation_resp.update(flatten_dict(generation.dict()))\n generation_resp.update(\n analyze_text(\n generation.text,\n nlp=self.nlp,\n )\n )\n complexity_metrics: Dict[str, float] = generation_resp.pop(\"text_complexity_metrics\") # type: ignore # noqa: E501\n self.mlflg.metrics(\n complexity_metrics,\n step=self.metrics[\"step\"],\n )\n self.records[\"on_llm_end_records\"].append(generation_resp)\n self.records[\"action_records\"].append(generation_resp)\n self.mlflg.jsonf(resp, f\"llm_end_{llm_ends}_generation_{idx}\")\n dependency_tree = generation_resp[\"dependency_tree\"]\n entities = generation_resp[\"entities\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} {"id": "4fbac879b224-9", "text": "dependency_tree = generation_resp[\"dependency_tree\"]\n entities = generation_resp[\"entities\"]\n self.mlflg.html(dependency_tree, \"dep-\" + hash_string(generation.text))\n self.mlflg.html(entities, \"ent-\" + hash_string(generation.text))\n[docs] def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM errors.\"\"\"\n self.metrics[\"step\"] += 1\n self.metrics[\"errors\"] += 1\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain starts running.\"\"\"\n self.metrics[\"step\"] += 1\n self.metrics[\"chain_starts\"] += 1\n self.metrics[\"starts\"] += 1\n chain_starts = self.metrics[\"chain_starts\"]\n resp: Dict[str, Any] = {}\n resp.update({\"action\": \"on_chain_start\"})\n resp.update(flatten_dict(serialized))\n resp.update(self.metrics)\n 
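        # Unlike the W&B/ClearML handlers, the counters here live in the
        # self.metrics dict and are pushed (next line) as MLflow metric series
        # keyed by "step"; each record is also persisted as a JSON artifact
        # through self.mlflg.jsonf().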
self.mlflg.metrics(self.metrics, step=self.metrics[\"step\"])\n chain_input = \",\".join([f\"{k}={v}\" for k, v in inputs.items()])\n input_resp = deepcopy(resp)\n input_resp[\"inputs\"] = chain_input\n self.records[\"on_chain_start_records\"].append(input_resp)\n self.records[\"action_records\"].append(input_resp)\n self.mlflg.jsonf(input_resp, f\"chain_start_{chain_starts}\")\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Run when chain ends running.\"\"\"\n self.metrics[\"step\"] += 1", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} {"id": "4fbac879b224-10", "text": "\"\"\"Run when chain ends running.\"\"\"\n self.metrics[\"step\"] += 1\n self.metrics[\"chain_ends\"] += 1\n self.metrics[\"ends\"] += 1\n chain_ends = self.metrics[\"chain_ends\"]\n resp: Dict[str, Any] = {}\n chain_output = \",\".join([f\"{k}={v}\" for k, v in outputs.items()])\n resp.update({\"action\": \"on_chain_end\", \"outputs\": chain_output})\n resp.update(self.metrics)\n self.mlflg.metrics(self.metrics, step=self.metrics[\"step\"])\n self.records[\"on_chain_end_records\"].append(resp)\n self.records[\"action_records\"].append(resp)\n self.mlflg.jsonf(resp, f\"chain_end_{chain_ends}\")\n[docs] def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain errors.\"\"\"\n self.metrics[\"step\"] += 1\n self.metrics[\"errors\"] += 1\n[docs] def on_tool_start(\n self, serialized: Dict[str, Any], input_str: str, **kwargs: Any\n ) -> None:\n \"\"\"Run when tool starts running.\"\"\"\n self.metrics[\"step\"] += 1\n self.metrics[\"tool_starts\"] += 1\n self.metrics[\"starts\"] += 1\n tool_starts = self.metrics[\"tool_starts\"]\n resp: Dict[str, Any] = {}\n resp.update({\"action\": \"on_tool_start\", \"input_str\": input_str})\n resp.update(flatten_dict(serialized))\n resp.update(self.metrics)\n self.mlflg.metrics(self.metrics, step=self.metrics[\"step\"])\n self.records[\"on_tool_start_records\"].append(resp)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} {"id": "4fbac879b224-11", "text": "self.records[\"on_tool_start_records\"].append(resp)\n self.records[\"action_records\"].append(resp)\n self.mlflg.jsonf(resp, f\"tool_start_{tool_starts}\")\n[docs] def on_tool_end(self, output: str, **kwargs: Any) -> None:\n \"\"\"Run when tool ends running.\"\"\"\n self.metrics[\"step\"] += 1\n self.metrics[\"tool_ends\"] += 1\n self.metrics[\"ends\"] += 1\n tool_ends = self.metrics[\"tool_ends\"]\n resp: Dict[str, Any] = {}\n resp.update({\"action\": \"on_tool_end\", \"output\": output})\n resp.update(self.metrics)\n self.mlflg.metrics(self.metrics, step=self.metrics[\"step\"])\n self.records[\"on_tool_end_records\"].append(resp)\n self.records[\"action_records\"].append(resp)\n self.mlflg.jsonf(resp, f\"tool_end_{tool_ends}\")\n[docs] def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when tool errors.\"\"\"\n self.metrics[\"step\"] += 1\n self.metrics[\"errors\"] += 1\n[docs] def on_text(self, text: str, **kwargs: Any) -> None:\n \"\"\"\n Run when agent is ending.\n \"\"\"\n self.metrics[\"step\"] += 1\n self.metrics[\"text_ctr\"] += 1\n text_ctr = self.metrics[\"text_ctr\"]\n resp: Dict[str, Any] = {}\n resp.update({\"action\": \"on_text\", \"text\": text})\n resp.update(self.metrics)\n self.mlflg.metrics(self.metrics, step=self.metrics[\"step\"])\n 
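        # A minimal end-to-end sketch for this handler, with illustrative names
        # (assumes a reachable tracking server or a local ./mlruns directory):
        #   >>> from langchain.callbacks import MlflowCallbackHandler
        #   >>> handler = MlflowCallbackHandler(experiment="langchain", name="run-%")
        #   >>> llm.callbacks = [handler]  # hypothetical wiring
        #   >>> handler.flush_tracker(langchain_asset=llm, finish=True)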
self.records[\"on_text_records\"].append(resp)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} {"id": "4fbac879b224-12", "text": "self.records[\"on_text_records\"].append(resp)\n self.records[\"action_records\"].append(resp)\n self.mlflg.jsonf(resp, f\"on_text_{text_ctr}\")\n[docs] def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:\n \"\"\"Run when agent ends running.\"\"\"\n self.metrics[\"step\"] += 1\n self.metrics[\"agent_ends\"] += 1\n self.metrics[\"ends\"] += 1\n agent_ends = self.metrics[\"agent_ends\"]\n resp: Dict[str, Any] = {}\n resp.update(\n {\n \"action\": \"on_agent_finish\",\n \"output\": finish.return_values[\"output\"],\n \"log\": finish.log,\n }\n )\n resp.update(self.metrics)\n self.mlflg.metrics(self.metrics, step=self.metrics[\"step\"])\n self.records[\"on_agent_finish_records\"].append(resp)\n self.records[\"action_records\"].append(resp)\n self.mlflg.jsonf(resp, f\"agent_finish_{agent_ends}\")\n[docs] def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:\n \"\"\"Run on agent action.\"\"\"\n self.metrics[\"step\"] += 1\n self.metrics[\"tool_starts\"] += 1\n self.metrics[\"starts\"] += 1\n tool_starts = self.metrics[\"tool_starts\"]\n resp: Dict[str, Any] = {}\n resp.update(\n {\n \"action\": \"on_agent_action\",\n \"tool\": action.tool,\n \"tool_input\": action.tool_input,\n \"log\": action.log,\n }\n )\n resp.update(self.metrics)\n self.mlflg.metrics(self.metrics, step=self.metrics[\"step\"])", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} {"id": "4fbac879b224-13", "text": "self.mlflg.metrics(self.metrics, step=self.metrics[\"step\"])\n self.records[\"on_agent_action_records\"].append(resp)\n self.records[\"action_records\"].append(resp)\n self.mlflg.jsonf(resp, f\"agent_action_{tool_starts}\")\n def _create_session_analysis_df(self) -> Any:\n \"\"\"Create a dataframe with all the information from the session.\"\"\"\n pd = import_pandas()\n on_llm_start_records_df = pd.DataFrame(self.records[\"on_llm_start_records\"])\n on_llm_end_records_df = pd.DataFrame(self.records[\"on_llm_end_records\"])\n llm_input_columns = [\"step\", \"prompt\"]\n if \"name\" in on_llm_start_records_df.columns:\n llm_input_columns.append(\"name\")\n elif \"id\" in on_llm_start_records_df.columns:\n # id is llm class's full import path. 
For example:\n # [\"langchain\", \"llms\", \"openai\", \"AzureOpenAI\"]\n on_llm_start_records_df[\"name\"] = on_llm_start_records_df[\"id\"].apply(\n lambda id_: id_[-1]\n )\n llm_input_columns.append(\"name\")\n llm_input_prompts_df = (\n on_llm_start_records_df[llm_input_columns]\n .dropna(axis=1)\n .rename({\"step\": \"prompt_step\"}, axis=1)\n )\n complexity_metrics_columns = []\n visualizations_columns = []\n complexity_metrics_columns = [\n \"flesch_reading_ease\",\n \"flesch_kincaid_grade\",\n \"smog_index\",\n \"coleman_liau_index\",\n \"automated_readability_index\",\n \"dale_chall_readability_score\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} {"id": "4fbac879b224-14", "text": "\"automated_readability_index\",\n \"dale_chall_readability_score\",\n \"difficult_words\",\n \"linsear_write_formula\",\n \"gunning_fog\",\n # \"text_standard\",\n \"fernandez_huerta\",\n \"szigriszt_pazos\",\n \"gutierrez_polini\",\n \"crawford\",\n \"gulpease_index\",\n \"osman\",\n ]\n visualizations_columns = [\"dependency_tree\", \"entities\"]\n llm_outputs_df = (\n on_llm_end_records_df[\n [\n \"step\",\n \"text\",\n \"token_usage_total_tokens\",\n \"token_usage_prompt_tokens\",\n \"token_usage_completion_tokens\",\n ]\n + complexity_metrics_columns\n + visualizations_columns\n ]\n .dropna(axis=1)\n .rename({\"step\": \"output_step\", \"text\": \"output\"}, axis=1)\n )\n session_analysis_df = pd.concat([llm_input_prompts_df, llm_outputs_df], axis=1)\n session_analysis_df[\"chat_html\"] = session_analysis_df[\n [\"prompt\", \"output\"]\n ].apply(\n lambda row: construct_html_from_prompt_and_generation(\n row[\"prompt\"], row[\"output\"]\n ),\n axis=1,\n )\n return session_analysis_df\n[docs] def flush_tracker(self, langchain_asset: Any = None, finish: bool = False) -> None:\n pd = import_pandas()\n self.mlflg.table(\"action_records\", pd.DataFrame(self.records[\"action_records\"]))\n session_analysis_df = self._create_session_analysis_df()\n chat_html = session_analysis_df.pop(\"chat_html\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} {"id": "4fbac879b224-15", "text": "chat_html = session_analysis_df.pop(\"chat_html\")\n chat_html = chat_html.replace(\"\\n\", \"\", regex=True)\n self.mlflg.table(\"session_analysis\", pd.DataFrame(session_analysis_df))\n self.mlflg.html(\"\".join(chat_html.tolist()), \"chat_html\")\n if langchain_asset:\n # To avoid circular import error\n # mlflow only supports LLMChain asset\n if \"langchain.chains.llm.LLMChain\" in str(type(langchain_asset)):\n self.mlflg.langchain_artifact(langchain_asset)\n else:\n langchain_asset_path = str(Path(self.temp_dir.name, \"model.json\"))\n try:\n langchain_asset.save(langchain_asset_path)\n self.mlflg.artifact(langchain_asset_path)\n except ValueError:\n try:\n langchain_asset.save_agent(langchain_asset_path)\n self.mlflg.artifact(langchain_asset_path)\n except AttributeError:\n print(\"Could not save model.\")\n traceback.print_exc()\n pass\n except NotImplementedError:\n print(\"Could not save model.\")\n traceback.print_exc()\n pass\n except NotImplementedError:\n print(\"Could not save model.\")\n traceback.print_exc()\n pass\n if finish:\n self.mlflg.finish_run()\n self._reset()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} {"id": "0d0d17d0956a-0", "text": "Source code for langchain.callbacks.aim_callback\nfrom copy import deepcopy\nfrom 
typing import Any, Dict, List, Optional, Union\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.schema import AgentAction, AgentFinish, LLMResult\n[docs]def import_aim() -> Any:\n \"\"\"Import the aim python package and raise an error if it is not installed.\"\"\"\n try:\n import aim\n except ImportError:\n raise ImportError(\n \"To use the Aim callback manager you need to have the\"\n \" `aim` python package installed.\"\n \" Please install it with `pip install aim`\"\n )\n return aim\nclass BaseMetadataCallbackHandler:\n \"\"\"This class handles the metadata and associated function states for callbacks.\n Attributes:\n step (int): The current step.\n starts (int): The number of times the start method has been called.\n ends (int): The number of times the end method has been called.\n errors (int): The number of times the error method has been called.\n text_ctr (int): The number of times the text method has been called.\n ignore_llm_ (bool): Whether to ignore llm callbacks.\n ignore_chain_ (bool): Whether to ignore chain callbacks.\n ignore_agent_ (bool): Whether to ignore agent callbacks.\n ignore_retriever_ (bool): Whether to ignore retriever callbacks.\n always_verbose_ (bool): Whether to always be verbose.\n chain_starts (int): The number of times the chain start method has been called.\n chain_ends (int): The number of times the chain end method has been called.\n llm_starts (int): The number of times the llm start method has been called.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/aim_callback.html"} {"id": "0d0d17d0956a-1", "text": "llm_ends (int): The number of times the llm end method has been called.\n llm_streams (int): The number of times the text method has been called.\n tool_starts (int): The number of times the tool start method has been called.\n tool_ends (int): The number of times the tool end method has been called.\n agent_ends (int): The number of times the agent end method has been called.\n \"\"\"\n def __init__(self) -> None:\n self.step = 0\n self.starts = 0\n self.ends = 0\n self.errors = 0\n self.text_ctr = 0\n self.ignore_llm_ = False\n self.ignore_chain_ = False\n self.ignore_agent_ = False\n self.ignore_retriever_ = False\n self.always_verbose_ = False\n self.chain_starts = 0\n self.chain_ends = 0\n self.llm_starts = 0\n self.llm_ends = 0\n self.llm_streams = 0\n self.tool_starts = 0\n self.tool_ends = 0\n self.agent_ends = 0\n @property\n def always_verbose(self) -> bool:\n \"\"\"Whether to call verbose callbacks even if verbose is False.\"\"\"\n return self.always_verbose_\n @property\n def ignore_llm(self) -> bool:\n \"\"\"Whether to ignore LLM callbacks.\"\"\"\n return self.ignore_llm_\n @property\n def ignore_chain(self) -> bool:\n \"\"\"Whether to ignore chain callbacks.\"\"\"\n return self.ignore_chain_\n @property\n def ignore_agent(self) -> bool:\n \"\"\"Whether to ignore agent callbacks.\"\"\"\n return self.ignore_agent_\n @property", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/aim_callback.html"} {"id": "0d0d17d0956a-2", "text": "\"\"\"Whether to ignore agent callbacks.\"\"\"\n return self.ignore_agent_\n @property\n def ignore_retriever(self) -> bool:\n \"\"\"Whether to ignore retriever callbacks.\"\"\"\n return self.ignore_retriever_\n def get_custom_callback_meta(self) -> Dict[str, Any]:\n return {\n \"step\": self.step,\n \"starts\": self.starts,\n \"ends\": self.ends,\n \"errors\": self.errors,\n \"text_ctr\": self.text_ctr,\n
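As a rough illustration (not from the scraped source), a handler bumps these counters on each callback and stamps the resulting snapshot onto every record it emits:

from langchain.callbacks.aim_callback import BaseMetadataCallbackHandler

handler = BaseMetadataCallbackHandler()
handler.step += 1        # a callback fired
handler.llm_starts += 1  # it was an LLM start
handler.starts += 1
meta = handler.get_custom_callback_meta()
assert meta["step"] == 1 and meta["llm_starts"] == 1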
\"chain_starts\": self.chain_starts,\n \"chain_ends\": self.chain_ends,\n \"llm_starts\": self.llm_starts,\n \"llm_ends\": self.llm_ends,\n \"llm_streams\": self.llm_streams,\n \"tool_starts\": self.tool_starts,\n \"tool_ends\": self.tool_ends,\n \"agent_ends\": self.agent_ends,\n }\n def reset_callback_meta(self) -> None:\n \"\"\"Reset the callback metadata.\"\"\"\n self.step = 0\n self.starts = 0\n self.ends = 0\n self.errors = 0\n self.text_ctr = 0\n self.ignore_llm_ = False\n self.ignore_chain_ = False\n self.ignore_agent_ = False\n self.always_verbose_ = False\n self.chain_starts = 0\n self.chain_ends = 0\n self.llm_starts = 0\n self.llm_ends = 0\n self.llm_streams = 0\n self.tool_starts = 0\n self.tool_ends = 0\n self.agent_ends = 0\n return None\n[docs]class AimCallbackHandler(BaseMetadataCallbackHandler, BaseCallbackHandler):\n \"\"\"Callback Handler that logs to Aim.\n Parameters:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/aim_callback.html"} {"id": "0d0d17d0956a-3", "text": "\"\"\"Callback Handler that logs to Aim.\n Parameters:\n repo (:obj:`str`, optional): Aim repository path or Repo object to which\n Run object is bound. If skipped, default Repo is used.\n experiment_name (:obj:`str`, optional): Sets Run's `experiment` property.\n 'default' if not specified. Can be used later to query runs/sequences.\n system_tracking_interval (:obj:`int`, optional): Sets the tracking interval\n in seconds for system usage metrics (CPU, Memory, etc.). Set to `None`\n to disable system metrics tracking.\n log_system_params (:obj:`bool`, optional): Enable/Disable logging of system\n params such as installed packages, git info, environment variables, etc.\n This handler will utilize the associated callback method called and formats\n the input of each callback function with metadata regarding the state of LLM run\n and then logs the response to Aim.\n \"\"\"\n def __init__(\n self,\n repo: Optional[str] = None,\n experiment_name: Optional[str] = None,\n system_tracking_interval: Optional[int] = 10,\n log_system_params: bool = True,\n ) -> None:\n \"\"\"Initialize callback handler.\"\"\"\n super().__init__()\n aim = import_aim()\n self.repo = repo\n self.experiment_name = experiment_name\n self.system_tracking_interval = system_tracking_interval\n self.log_system_params = log_system_params\n self._run = aim.Run(\n repo=self.repo,\n experiment=self.experiment_name,\n system_tracking_interval=self.system_tracking_interval,\n log_system_params=self.log_system_params,\n )\n self._run_hash = self._run.hash\n self.action_records: list = []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/aim_callback.html"} {"id": "0d0d17d0956a-4", "text": "self._run_hash = self._run.hash\n self.action_records: list = []\n[docs] def setup(self, **kwargs: Any) -> None:\n aim = import_aim()\n if not self._run:\n if self._run_hash:\n self._run = aim.Run(\n self._run_hash,\n repo=self.repo,\n system_tracking_interval=self.system_tracking_interval,\n )\n else:\n self._run = aim.Run(\n repo=self.repo,\n experiment=self.experiment_name,\n system_tracking_interval=self.system_tracking_interval,\n log_system_params=self.log_system_params,\n )\n self._run_hash = self._run.hash\n if kwargs:\n for key, value in kwargs.items():\n self._run.set(key, value, strict=False)\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM starts.\"\"\"\n aim = import_aim()\n self.step += 1\n 
self.llm_starts += 1\n self.starts += 1\n resp = {\"action\": \"on_llm_start\"}\n resp.update(self.get_custom_callback_meta())\n prompts_res = deepcopy(prompts)\n self._run.track(\n [aim.Text(prompt) for prompt in prompts_res],\n name=\"on_llm_start\",\n context=resp,\n )\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Run when LLM ends running.\"\"\"\n aim = import_aim()\n self.step += 1\n self.llm_ends += 1\n self.ends += 1", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/aim_callback.html"} {"id": "0d0d17d0956a-5", "text": "self.llm_ends += 1\n self.ends += 1\n resp = {\"action\": \"on_llm_end\"}\n resp.update(self.get_custom_callback_meta())\n response_res = deepcopy(response)\n generated = [\n aim.Text(generation.text)\n for generations in response_res.generations\n for generation in generations\n ]\n self._run.track(\n generated,\n name=\"on_llm_end\",\n context=resp,\n )\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Run when LLM generates a new token.\"\"\"\n self.step += 1\n self.llm_streams += 1\n[docs] def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain starts running.\"\"\"\n aim = import_aim()\n self.step += 1\n self.chain_starts += 1\n self.starts += 1\n resp = {\"action\": \"on_chain_start\"}\n resp.update(self.get_custom_callback_meta())\n inputs_res = deepcopy(inputs)\n self._run.track(\n aim.Text(inputs_res[\"input\"]), name=\"on_chain_start\", context=resp\n )\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Run when chain ends running.\"\"\"\n aim = import_aim()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/aim_callback.html"} {"id": "0d0d17d0956a-6", "text": "\"\"\"Run when chain ends running.\"\"\"\n aim = import_aim()\n self.step += 1\n self.chain_ends += 1\n self.ends += 1\n resp = {\"action\": \"on_chain_end\"}\n resp.update(self.get_custom_callback_meta())\n outputs_res = deepcopy(outputs)\n self._run.track(\n aim.Text(outputs_res[\"output\"]), name=\"on_chain_end\", context=resp\n )\n[docs] def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_tool_start(\n self, serialized: Dict[str, Any], input_str: str, **kwargs: Any\n ) -> None:\n \"\"\"Run when tool starts running.\"\"\"\n aim = import_aim()\n self.step += 1\n self.tool_starts += 1\n self.starts += 1\n resp = {\"action\": \"on_tool_start\"}\n resp.update(self.get_custom_callback_meta())\n self._run.track(aim.Text(input_str), name=\"on_tool_start\", context=resp)\n[docs] def on_tool_end(self, output: str, **kwargs: Any) -> None:\n \"\"\"Run when tool ends running.\"\"\"\n aim = import_aim()\n self.step += 1\n self.tool_ends += 1\n self.ends += 1\n resp = {\"action\": \"on_tool_end\"}\n resp.update(self.get_custom_callback_meta())\n self._run.track(aim.Text(output), name=\"on_tool_end\", context=resp)\n[docs] def on_tool_error(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/aim_callback.html"} {"id": "0d0d17d0956a-7", "text": "[docs] def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: 
Any\n ) -> None:\n \"\"\"Run when tool errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_text(self, text: str, **kwargs: Any) -> None:\n \"\"\"\n Run when agent is ending.\n \"\"\"\n self.step += 1\n self.text_ctr += 1\n[docs] def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:\n \"\"\"Run when agent ends running.\"\"\"\n aim = import_aim()\n self.step += 1\n self.agent_ends += 1\n self.ends += 1\n resp = {\"action\": \"on_agent_finish\"}\n resp.update(self.get_custom_callback_meta())\n finish_res = deepcopy(finish)\n text = \"OUTPUT:\\n{}\\n\\nLOG:\\n{}\".format(\n finish_res.return_values[\"output\"], finish_res.log\n )\n self._run.track(aim.Text(text), name=\"on_agent_finish\", context=resp)\n[docs] def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:\n \"\"\"Run on agent action.\"\"\"\n aim = import_aim()\n self.step += 1\n self.tool_starts += 1\n self.starts += 1\n resp = {\n \"action\": \"on_agent_action\",\n \"tool\": action.tool,\n }\n resp.update(self.get_custom_callback_meta())\n action_res = deepcopy(action)\n text = \"TOOL INPUT:\\n{}\\n\\nLOG:\\n{}\".format(\n action_res.tool_input, action_res.log\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/aim_callback.html"} {"id": "0d0d17d0956a-8", "text": "action_res.tool_input, action_res.log\n )\n self._run.track(aim.Text(text), name=\"on_agent_action\", context=resp)\n[docs] def flush_tracker(\n self,\n repo: Optional[str] = None,\n experiment_name: Optional[str] = None,\n system_tracking_interval: Optional[int] = 10,\n log_system_params: bool = True,\n langchain_asset: Any = None,\n reset: bool = True,\n finish: bool = False,\n ) -> None:\n \"\"\"Flush the tracker and reset the session.\n Args:\n repo (:obj:`str`, optional): Aim repository path or Repo object to which\n Run object is bound. If skipped, default Repo is used.\n experiment_name (:obj:`str`, optional): Sets Run's `experiment` property.\n 'default' if not specified. Can be used later to query runs/sequences.\n system_tracking_interval (:obj:`int`, optional): Sets the tracking interval\n in seconds for system usage metrics (CPU, Memory, etc.). 
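A minimal end-to-end sketch of this handler (repo path, model, and prompt are illustrative; assumes the OpenAI LLM wrapper and an API key are configured):

from langchain.callbacks.aim_callback import AimCallbackHandler
from langchain.llms import OpenAI

aim_cb = AimCallbackHandler(repo=".", experiment_name="scenario-1")
llm = OpenAI(temperature=0, callbacks=[aim_cb])
llm("Tell me a joke")                                   # records stream to Aim via the hooks above
aim_cb.flush_tracker(langchain_asset=llm, finish=True)  # close out the run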
Set to `None`\n to disable system metrics tracking.\n log_system_params (:obj:`bool`, optional): Enable/Disable logging of system\n params such as installed packages, git info, environment variables, etc.\n langchain_asset: The langchain asset to save.\n reset: Whether to reset the session.\n finish: Whether to finish the run.\n Returns:\n None\n \"\"\"\n if langchain_asset:\n try:\n for key, value in langchain_asset.dict().items():\n self._run.set(key, value, strict=False)\n except Exception:\n pass\n if finish or reset:\n self._run.close()\n self.reset_callback_meta()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/aim_callback.html"} {"id": "0d0d17d0956a-9", "text": "self._run.close()\n self.reset_callback_meta()\n if reset:\n self.__init__( # type: ignore\n repo=repo if repo else self.repo,\n experiment_name=experiment_name\n if experiment_name\n else self.experiment_name,\n system_tracking_interval=system_tracking_interval\n if system_tracking_interval\n else self.system_tracking_interval,\n log_system_params=log_system_params\n if log_system_params\n else self.log_system_params,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/aim_callback.html"} {"id": "d64cc1a5830d-0", "text": "Source code for langchain.callbacks.base\n\"\"\"Base callback handler that can be used to handle callbacks in langchain.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional, Sequence, Union\nfrom uuid import UUID\nfrom langchain.schema.agent import AgentAction, AgentFinish\nfrom langchain.schema.document import Document\nfrom langchain.schema.messages import BaseMessage\nfrom langchain.schema.output import LLMResult\nclass RetrieverManagerMixin:\n \"\"\"Mixin for Retriever callbacks.\"\"\"\n def on_retriever_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run when Retriever errors.\"\"\"\n def on_retriever_end(\n self,\n documents: Sequence[Document],\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run when Retriever ends running.\"\"\"\nclass LLMManagerMixin:\n \"\"\"Mixin for LLM callbacks.\"\"\"\n def on_llm_new_token(\n self,\n token: str,\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run on new LLM token. 
Only available when streaming is enabled.\"\"\"\n def on_llm_end(\n self,\n response: LLMResult,\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run when LLM ends running.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/base.html"} {"id": "d64cc1a5830d-1", "text": ") -> Any:\n \"\"\"Run when LLM ends running.\"\"\"\n def on_llm_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run when LLM errors.\"\"\"\nclass ChainManagerMixin:\n \"\"\"Mixin for chain callbacks.\"\"\"\n def on_chain_end(\n self,\n outputs: Dict[str, Any],\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run when chain ends running.\"\"\"\n def on_chain_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run when chain errors.\"\"\"\n def on_agent_action(\n self,\n action: AgentAction,\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run on agent action.\"\"\"\n def on_agent_finish(\n self,\n finish: AgentFinish,\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run on agent end.\"\"\"\nclass ToolManagerMixin:\n \"\"\"Mixin for tool callbacks.\"\"\"\n def on_tool_end(\n self,\n output: str,\n *,\n run_id: UUID,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/base.html"} {"id": "d64cc1a5830d-2", "text": "self,\n output: str,\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run when tool ends running.\"\"\"\n def on_tool_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run when tool errors.\"\"\"\nclass CallbackManagerMixin:\n \"\"\"Mixin for callback manager.\"\"\"\n def on_llm_start(\n self,\n serialized: Dict[str, Any],\n prompts: List[str],\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run when LLM starts running.\"\"\"\n def on_chat_model_start(\n self,\n serialized: Dict[str, Any],\n messages: List[List[BaseMessage]],\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run when a chat model starts running.\"\"\"\n raise NotImplementedError(\n f\"{self.__class__.__name__} does not implement `on_chat_model_start`\"\n )\n def on_retriever_start(\n self,\n serialized: Dict[str, Any],\n query: str,\n *,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/base.html"} {"id": "d64cc1a5830d-3", "text": "serialized: Dict[str, Any],\n query: str,\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run when Retriever starts running.\"\"\"\n def on_chain_start(\n self,\n serialized: Dict[str, Any],\n inputs: Dict[str, Any],\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n 
) -> Any:\n \"\"\"Run when chain starts running.\"\"\"\n def on_tool_start(\n self,\n serialized: Dict[str, Any],\n input_str: str,\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run when tool starts running.\"\"\"\nclass RunManagerMixin:\n \"\"\"Mixin for run manager.\"\"\"\n def on_text(\n self,\n text: str,\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run on arbitrary text.\"\"\"\n[docs]class BaseCallbackHandler(\n LLMManagerMixin,\n ChainManagerMixin,\n ToolManagerMixin,\n RetrieverManagerMixin,\n CallbackManagerMixin,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/base.html"} {"id": "d64cc1a5830d-4", "text": "ToolManagerMixin,\n RetrieverManagerMixin,\n CallbackManagerMixin,\n RunManagerMixin,\n):\n \"\"\"Base callback handler that can be used to handle callbacks from langchain.\"\"\"\n raise_error: bool = False\n run_inline: bool = False\n @property\n def ignore_llm(self) -> bool:\n \"\"\"Whether to ignore LLM callbacks.\"\"\"\n return False\n @property\n def ignore_chain(self) -> bool:\n \"\"\"Whether to ignore chain callbacks.\"\"\"\n return False\n @property\n def ignore_agent(self) -> bool:\n \"\"\"Whether to ignore agent callbacks.\"\"\"\n return False\n @property\n def ignore_retriever(self) -> bool:\n \"\"\"Whether to ignore retriever callbacks.\"\"\"\n return False\n @property\n def ignore_chat_model(self) -> bool:\n \"\"\"Whether to ignore chat model callbacks.\"\"\"\n return False\n[docs]class AsyncCallbackHandler(BaseCallbackHandler):\n \"\"\"Async callback handler that can be used to handle callbacks from langchain.\"\"\"\n[docs] async def on_llm_start(\n self,\n serialized: Dict[str, Any],\n prompts: List[str],\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when LLM starts running.\"\"\"\n[docs] async def on_chat_model_start(\n self,\n serialized: Dict[str, Any],\n messages: List[List[BaseMessage]],\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/base.html"} {"id": "d64cc1a5830d-5", "text": "run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run when a chat model starts running.\"\"\"\n raise NotImplementedError(\n f\"{self.__class__.__name__} does not implement `on_chat_model_start`\"\n )\n[docs] async def on_llm_new_token(\n self,\n token: str,\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n tags: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Run on new LLM token. 
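These hooks are all no-ops meant to be overridden. A sketch of a custom streaming handler (the printing behavior is illustrative):

from typing import Any

from langchain.callbacks.base import AsyncCallbackHandler

class TokenPrinter(AsyncCallbackHandler):
    async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        # Fires once per generated token when streaming is enabled on the LLM.
        print(token, end="", flush=True)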
Only available when streaming is enabled.\"\"\"\n[docs] async def on_llm_end(\n self,\n response: LLMResult,\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n tags: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when LLM ends running.\"\"\"\n[docs] async def on_llm_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n tags: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when LLM errors.\"\"\"\n[docs] async def on_chain_start(\n self,\n serialized: Dict[str, Any],\n inputs: Dict[str, Any],\n *,\n run_id: UUID,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/base.html"} {"id": "d64cc1a5830d-6", "text": "inputs: Dict[str, Any],\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when chain starts running.\"\"\"\n[docs] async def on_chain_end(\n self,\n outputs: Dict[str, Any],\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n tags: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when chain ends running.\"\"\"\n[docs] async def on_chain_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n tags: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when chain errors.\"\"\"\n[docs] async def on_tool_start(\n self,\n serialized: Dict[str, Any],\n input_str: str,\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when tool starts running.\"\"\"\n[docs] async def on_tool_end(\n self,\n output: str,\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n tags: Optional[List[str]] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/base.html"} {"id": "d64cc1a5830d-7", "text": "tags: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when tool ends running.\"\"\"\n[docs] async def on_tool_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n tags: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when tool errors.\"\"\"\n[docs] async def on_text(\n self,\n text: str,\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n tags: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Run on arbitrary text.\"\"\"\n[docs] async def on_agent_action(\n self,\n action: AgentAction,\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n tags: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Run on agent action.\"\"\"\n[docs] async def on_agent_finish(\n self,\n finish: AgentFinish,\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n tags: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Run on agent end.\"\"\"\n[docs] async def on_retriever_start(\n self,\n serialized: Dict[str, Any],\n query: str,\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/base.html"} {"id": "d64cc1a5830d-8", "text": "run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> 
None:\n \"\"\"Run on retriever start.\"\"\"\n[docs] async def on_retriever_end(\n self,\n documents: Sequence[Document],\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n tags: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Run on retriever end.\"\"\"\n[docs] async def on_retriever_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n tags: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Run on retriever error.\"\"\"\n[docs]class BaseCallbackManager(CallbackManagerMixin):\n \"\"\"Base callback manager that can be used to handle callbacks from LangChain.\"\"\"\n def __init__(\n self,\n handlers: List[BaseCallbackHandler],\n inheritable_handlers: Optional[List[BaseCallbackHandler]] = None,\n parent_run_id: Optional[UUID] = None,\n *,\n tags: Optional[List[str]] = None,\n inheritable_tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n inheritable_metadata: Optional[Dict[str, Any]] = None,\n ) -> None:\n \"\"\"Initialize callback manager.\"\"\"\n self.handlers: List[BaseCallbackHandler] = handlers", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/base.html"} {"id": "d64cc1a5830d-9", "text": "\"\"\"Initialize callback manager.\"\"\"\n self.handlers: List[BaseCallbackHandler] = handlers\n self.inheritable_handlers: List[BaseCallbackHandler] = (\n inheritable_handlers or []\n )\n self.parent_run_id: Optional[UUID] = parent_run_id\n self.tags = tags or []\n self.inheritable_tags = inheritable_tags or []\n self.metadata = metadata or {}\n self.inheritable_metadata = inheritable_metadata or {}\n @property\n def is_async(self) -> bool:\n \"\"\"Whether the callback manager is async.\"\"\"\n return False\n[docs] def add_handler(self, handler: BaseCallbackHandler, inherit: bool = True) -> None:\n \"\"\"Add a handler to the callback manager.\"\"\"\n self.handlers.append(handler)\n if inherit:\n self.inheritable_handlers.append(handler)\n[docs] def remove_handler(self, handler: BaseCallbackHandler) -> None:\n \"\"\"Remove a handler from the callback manager.\"\"\"\n self.handlers.remove(handler)\n self.inheritable_handlers.remove(handler)\n[docs] def set_handlers(\n self, handlers: List[BaseCallbackHandler], inherit: bool = True\n ) -> None:\n \"\"\"Set handlers as the only handlers on the callback manager.\"\"\"\n self.handlers = []\n self.inheritable_handlers = []\n for handler in handlers:\n self.add_handler(handler, inherit=inherit)\n[docs] def set_handler(self, handler: BaseCallbackHandler, inherit: bool = True) -> None:\n \"\"\"Set handler as the only handler on the callback manager.\"\"\"\n self.set_handlers([handler], inherit=inherit)\n[docs] def add_tags(self, tags: List[str], inherit: bool = True) -> None:\n for tag in tags:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/base.html"} {"id": "d64cc1a5830d-10", "text": "for tag in tags:\n if tag in self.tags:\n self.remove_tags([tag])\n self.tags.extend(tags)\n if inherit:\n self.inheritable_tags.extend(tags)\n[docs] def remove_tags(self, tags: List[str]) -> None:\n for tag in tags:\n self.tags.remove(tag)\n self.inheritable_tags.remove(tag)\n[docs] def add_metadata(self, metadata: Dict[str, Any], inherit: bool = True) -> None:\n self.metadata.update(metadata)\n if inherit:\n self.inheritable_metadata.update(metadata)\n[docs] def remove_metadata(self, keys: List[str]) -> None:\n for key in keys:\n self.metadata.pop(key)\n 
self.inheritable_metadata.pop(key)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/base.html"} {"id": "77ba2a0ca697-0", "text": "Source code for langchain.callbacks.file\n\"\"\"Callback Handler that writes to a file.\"\"\"\nfrom typing import Any, Dict, Optional, TextIO, cast\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.input import print_text\nfrom langchain.schema import AgentAction, AgentFinish\n[docs]class FileCallbackHandler(BaseCallbackHandler):\n \"\"\"Callback Handler that writes to a file.\"\"\"\n def __init__(\n self, filename: str, mode: str = \"a\", color: Optional[str] = None\n ) -> None:\n \"\"\"Initialize callback handler.\"\"\"\n self.file = cast(TextIO, open(filename, mode))\n self.color = color\n def __del__(self) -> None:\n \"\"\"Destructor to cleanup when done.\"\"\"\n self.file.close()\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n \"\"\"Print out that we are entering a chain.\"\"\"\n class_name = serialized[\"name\"]\n print_text(\n f\"\\n\\n\\033[1m> Entering new {class_name} chain...\\033[0m\",\n end=\"\\n\",\n file=self.file,\n )\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Print out that we finished a chain.\"\"\"\n print_text(\"\\n\\033[1m> Finished chain.\\033[0m\", end=\"\\n\", file=self.file)\n[docs] def on_agent_action(\n self, action: AgentAction, color: Optional[str] = None, **kwargs: Any\n ) -> Any:\n \"\"\"Run on agent action.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/file.html"} {"id": "77ba2a0ca697-1", "text": ") -> Any:\n \"\"\"Run on agent action.\"\"\"\n print_text(action.log, color=color or self.color, file=self.file)\n[docs] def on_tool_end(\n self,\n output: str,\n color: Optional[str] = None,\n observation_prefix: Optional[str] = None,\n llm_prefix: Optional[str] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"If not the final action, print out observation.\"\"\"\n if observation_prefix is not None:\n print_text(f\"\\n{observation_prefix}\", file=self.file)\n print_text(output, color=color or self.color, file=self.file)\n if llm_prefix is not None:\n print_text(f\"\\n{llm_prefix}\", file=self.file)\n[docs] def on_text(\n self, text: str, color: Optional[str] = None, end: str = \"\", **kwargs: Any\n ) -> None:\n \"\"\"Run when agent ends.\"\"\"\n print_text(text, color=color or self.color, end=end, file=self.file)\n[docs] def on_agent_finish(\n self, finish: AgentFinish, color: Optional[str] = None, **kwargs: Any\n ) -> None:\n \"\"\"Run on agent end.\"\"\"\n print_text(finish.log, color=color or self.color, end=\"\\n\", file=self.file)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/file.html"} {"id": "14dbac311efa-0", "text": "Source code for langchain.callbacks.stdout\n\"\"\"Callback Handler that prints to std out.\"\"\"\nfrom typing import Any, Dict, List, Optional, Union\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.input import print_text\nfrom langchain.schema import AgentAction, AgentFinish, LLMResult\n[docs]class StdOutCallbackHandler(BaseCallbackHandler):\n \"\"\"Callback Handler that prints to std out.\"\"\"\n def __init__(self, color: Optional[str] = None) -> None:\n \"\"\"Initialize callback handler.\"\"\"\n self.color = color\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n \"\"\"Print 
out the prompts.\"\"\"\n pass\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n \"\"\"Print out that we are entering a chain.\"\"\"\n class_name = serialized.get(\"name\", \"\")\n print(f\"\\n\\n\\033[1m> Entering new {class_name} chain...\\033[0m\")\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/stdout.html"} {"id": "14dbac311efa-1", "text": "\"\"\"Print out that we finished a chain.\"\"\"\n print(\"\\n\\033[1m> Finished chain.\\033[0m\")\n[docs] def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_tool_start(\n self,\n serialized: Dict[str, Any],\n input_str: str,\n **kwargs: Any,\n ) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_agent_action(\n self, action: AgentAction, color: Optional[str] = None, **kwargs: Any\n ) -> Any:\n \"\"\"Run on agent action.\"\"\"\n print_text(action.log, color=color or self.color)\n[docs] def on_tool_end(\n self,\n output: str,\n color: Optional[str] = None,\n observation_prefix: Optional[str] = None,\n llm_prefix: Optional[str] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"If not the final action, print out observation.\"\"\"\n if observation_prefix is not None:\n print_text(f\"\\n{observation_prefix}\")\n print_text(output, color=color or self.color)\n if llm_prefix is not None:\n print_text(f\"\\n{llm_prefix}\")\n[docs] def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_text(\n self,\n text: str,\n color: Optional[str] = None,\n end: str = \"\",\n **kwargs: Any,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/stdout.html"} {"id": "14dbac311efa-2", "text": "end: str = \"\",\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when agent ends.\"\"\"\n print_text(text, color=color or self.color, end=end)\n[docs] def on_agent_finish(\n self, finish: AgentFinish, color: Optional[str] = None, **kwargs: Any\n ) -> None:\n \"\"\"Run on agent end.\"\"\"\n print_text(finish.log, color=color or self.color, end=\"\\n\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/stdout.html"} {"id": "d6add4907a9b-0", "text": "Source code for langchain.callbacks.comet_ml_callback\nimport tempfile\nfrom copy import deepcopy\nfrom pathlib import Path\nfrom typing import Any, Callable, Dict, List, Optional, Sequence, Union\nimport langchain\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.callbacks.utils import (\n BaseMetadataCallbackHandler,\n flatten_dict,\n import_pandas,\n import_spacy,\n import_textstat,\n)\nfrom langchain.schema import AgentAction, AgentFinish, Generation, LLMResult\nLANGCHAIN_MODEL_NAME = \"langchain-model\"\n[docs]def import_comet_ml() -> Any:\n \"\"\"Import comet_ml and raise an error if it is not installed.\"\"\"\n try:\n import comet_ml # noqa: F401\n except ImportError:\n raise ImportError(\n \"To use the comet_ml callback manager you need to 
have the \"\n \"`comet_ml` python package installed. Please install it with\"\n \" `pip install comet_ml`\"\n )\n return comet_ml\ndef _get_experiment(\n workspace: Optional[str] = None, project_name: Optional[str] = None\n) -> Any:\n comet_ml = import_comet_ml()\n experiment = comet_ml.Experiment( # type: ignore\n workspace=workspace,\n project_name=project_name,\n )\n return experiment\ndef _fetch_text_complexity_metrics(text: str) -> dict:\n textstat = import_textstat()\n text_complexity_metrics = {\n \"flesch_reading_ease\": textstat.flesch_reading_ease(text),\n \"flesch_kincaid_grade\": textstat.flesch_kincaid_grade(text),\n \"smog_index\": textstat.smog_index(text),", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} {"id": "d6add4907a9b-1", "text": "\"smog_index\": textstat.smog_index(text),\n \"coleman_liau_index\": textstat.coleman_liau_index(text),\n \"automated_readability_index\": textstat.automated_readability_index(text),\n \"dale_chall_readability_score\": textstat.dale_chall_readability_score(text),\n \"difficult_words\": textstat.difficult_words(text),\n \"linsear_write_formula\": textstat.linsear_write_formula(text),\n \"gunning_fog\": textstat.gunning_fog(text),\n \"text_standard\": textstat.text_standard(text),\n \"fernandez_huerta\": textstat.fernandez_huerta(text),\n \"szigriszt_pazos\": textstat.szigriszt_pazos(text),\n \"gutierrez_polini\": textstat.gutierrez_polini(text),\n \"crawford\": textstat.crawford(text),\n \"gulpease_index\": textstat.gulpease_index(text),\n \"osman\": textstat.osman(text),\n }\n return text_complexity_metrics\ndef _summarize_metrics_for_generated_outputs(metrics: Sequence) -> dict:\n pd = import_pandas()\n metrics_df = pd.DataFrame(metrics)\n metrics_summary = metrics_df.describe()\n return metrics_summary.to_dict()\n[docs]class CometCallbackHandler(BaseMetadataCallbackHandler, BaseCallbackHandler):\n \"\"\"Callback Handler that logs to Comet.\n Parameters:\n job_type (str): The type of comet_ml task such as \"inference\",\n \"testing\" or \"qc\"\n project_name (str): The comet_ml project name\n tags (list): Tags to add to the task\n task_name (str): Name of the comet_ml task", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} {"id": "d6add4907a9b-2", "text": "task_name (str): Name of the comet_ml task\n visualize (bool): Whether to visualize the run.\n complexity_metrics (bool): Whether to log complexity metrics\n stream_logs (bool): Whether to stream callback actions to Comet\n This handler will utilize the associated callback method and formats\n the input of each callback function with metadata regarding the state of LLM run,\n and adds the response to the list of records for both the {method}_records and\n action. 
It then logs the response to Comet.\n \"\"\"\n def __init__(\n self,\n task_type: Optional[str] = \"inference\",\n workspace: Optional[str] = None,\n project_name: Optional[str] = None,\n tags: Optional[Sequence] = None,\n name: Optional[str] = None,\n visualizations: Optional[List[str]] = None,\n complexity_metrics: bool = False,\n custom_metrics: Optional[Callable] = None,\n stream_logs: bool = True,\n ) -> None:\n \"\"\"Initialize callback handler.\"\"\"\n self.comet_ml = import_comet_ml()\n super().__init__()\n self.task_type = task_type\n self.workspace = workspace\n self.project_name = project_name\n self.tags = tags\n self.visualizations = visualizations\n self.complexity_metrics = complexity_metrics\n self.custom_metrics = custom_metrics\n self.stream_logs = stream_logs\n self.temp_dir = tempfile.TemporaryDirectory()\n self.experiment = _get_experiment(workspace, project_name)\n self.experiment.log_other(\"Created from\", \"langchain\")\n if tags:\n self.experiment.add_tags(tags)\n self.name = name\n if self.name:\n self.experiment.set_name(self.name)\n warning = (", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} {"id": "d6add4907a9b-3", "text": "self.experiment.set_name(self.name)\n warning = (\n \"The comet_ml callback is currently in beta and is subject to change \"\n \"based on updates to `langchain`. Please report any issues to \"\n \"https://github.com/comet-ml/issue-tracking/issues with the tag \"\n \"`langchain`.\"\n )\n self.comet_ml.LOGGER.warning(warning)\n self.callback_columns: list = []\n self.action_records: list = []\n self.complexity_metrics = complexity_metrics\n if self.visualizations:\n spacy = import_spacy()\n self.nlp = spacy.load(\"en_core_web_sm\")\n else:\n self.nlp = None\n def _init_resp(self) -> Dict:\n return {k: None for k in self.callback_columns}\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM starts.\"\"\"\n self.step += 1\n self.llm_starts += 1\n self.starts += 1\n metadata = self._init_resp()\n metadata.update({\"action\": \"on_llm_start\"})\n metadata.update(flatten_dict(serialized))\n metadata.update(self.get_custom_callback_meta())\n for prompt in prompts:\n prompt_resp = deepcopy(metadata)\n prompt_resp[\"prompts\"] = prompt\n self.on_llm_start_records.append(prompt_resp)\n self.action_records.append(prompt_resp)\n if self.stream_logs:\n self._log_stream(prompt, metadata, self.step)\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Run when LLM generates a new token.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} {"id": "d6add4907a9b-4", "text": "\"\"\"Run when LLM generates a new token.\"\"\"\n self.step += 1\n self.llm_streams += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_llm_new_token\", \"token\": token})\n resp.update(self.get_custom_callback_meta())\n self.action_records.append(resp)\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Run when LLM ends running.\"\"\"\n self.step += 1\n self.llm_ends += 1\n self.ends += 1\n metadata = self._init_resp()\n metadata.update({\"action\": \"on_llm_end\"})\n metadata.update(flatten_dict(response.llm_output or {}))\n metadata.update(self.get_custom_callback_meta())\n output_complexity_metrics = []\n output_custom_metrics = []\n for prompt_idx, generations in enumerate(response.generations):\n for gen_idx, 
generation in enumerate(generations):\n text = generation.text\n generation_resp = deepcopy(metadata)\n generation_resp.update(flatten_dict(generation.dict()))\n complexity_metrics = self._get_complexity_metrics(text)\n if complexity_metrics:\n output_complexity_metrics.append(complexity_metrics)\n generation_resp.update(complexity_metrics)\n custom_metrics = self._get_custom_metrics(\n generation, prompt_idx, gen_idx\n )\n if custom_metrics:\n output_custom_metrics.append(custom_metrics)\n generation_resp.update(custom_metrics)\n if self.stream_logs:\n self._log_stream(text, metadata, self.step)\n self.action_records.append(generation_resp)\n self.on_llm_end_records.append(generation_resp)\n self._log_text_metrics(output_complexity_metrics, step=self.step)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} {"id": "d6add4907a9b-5", "text": "self._log_text_metrics(output_complexity_metrics, step=self.step)\n self._log_text_metrics(output_custom_metrics, step=self.step)\n[docs] def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain starts running.\"\"\"\n self.step += 1\n self.chain_starts += 1\n self.starts += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_chain_start\"})\n resp.update(flatten_dict(serialized))\n resp.update(self.get_custom_callback_meta())\n for chain_input_key, chain_input_val in inputs.items():\n if isinstance(chain_input_val, str):\n input_resp = deepcopy(resp)\n if self.stream_logs:\n self._log_stream(chain_input_val, resp, self.step)\n input_resp.update({chain_input_key: chain_input_val})\n self.action_records.append(input_resp)\n else:\n self.comet_ml.LOGGER.warning(\n f\"Unexpected data format provided! \"\n f\"Input Value for {chain_input_key} will not be logged\"\n )\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Run when chain ends running.\"\"\"\n self.step += 1\n self.chain_ends += 1\n self.ends += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_chain_end\"})", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} {"id": "d6add4907a9b-6", "text": "resp.update({\"action\": \"on_chain_end\"})\n resp.update(self.get_custom_callback_meta())\n for chain_output_key, chain_output_val in outputs.items():\n if isinstance(chain_output_val, str):\n output_resp = deepcopy(resp)\n if self.stream_logs:\n self._log_stream(chain_output_val, resp, self.step)\n output_resp.update({chain_output_key: chain_output_val})\n self.action_records.append(output_resp)\n else:\n self.comet_ml.LOGGER.warning(\n f\"Unexpected data format provided! 
\"\n f\"Output Value for {chain_output_key} will not be logged\"\n )\n[docs] def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_tool_start(\n self, serialized: Dict[str, Any], input_str: str, **kwargs: Any\n ) -> None:\n \"\"\"Run when tool starts running.\"\"\"\n self.step += 1\n self.tool_starts += 1\n self.starts += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_tool_start\"})\n resp.update(flatten_dict(serialized))\n resp.update(self.get_custom_callback_meta())\n if self.stream_logs:\n self._log_stream(input_str, resp, self.step)\n resp.update({\"input_str\": input_str})\n self.action_records.append(resp)\n[docs] def on_tool_end(self, output: str, **kwargs: Any) -> None:\n \"\"\"Run when tool ends running.\"\"\"\n self.step += 1\n self.tool_ends += 1\n self.ends += 1", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} {"id": "d6add4907a9b-7", "text": "self.tool_ends += 1\n self.ends += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_tool_end\"})\n resp.update(self.get_custom_callback_meta())\n if self.stream_logs:\n self._log_stream(output, resp, self.step)\n resp.update({\"output\": output})\n self.action_records.append(resp)\n[docs] def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when tool errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_text(self, text: str, **kwargs: Any) -> None:\n \"\"\"\n Run when agent is ending.\n \"\"\"\n self.step += 1\n self.text_ctr += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_text\"})\n resp.update(self.get_custom_callback_meta())\n if self.stream_logs:\n self._log_stream(text, resp, self.step)\n resp.update({\"text\": text})\n self.action_records.append(resp)\n[docs] def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:\n \"\"\"Run when agent ends running.\"\"\"\n self.step += 1\n self.agent_ends += 1\n self.ends += 1\n resp = self._init_resp()\n output = finish.return_values[\"output\"]\n log = finish.log\n resp.update({\"action\": \"on_agent_finish\", \"log\": log})\n resp.update(self.get_custom_callback_meta())\n if self.stream_logs:\n self._log_stream(output, resp, self.step)\n resp.update({\"output\": output})\n self.action_records.append(resp)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} {"id": "d6add4907a9b-8", "text": "resp.update({\"output\": output})\n self.action_records.append(resp)\n[docs] def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:\n \"\"\"Run on agent action.\"\"\"\n self.step += 1\n self.tool_starts += 1\n self.starts += 1\n tool = action.tool\n tool_input = str(action.tool_input)\n log = action.log\n resp = self._init_resp()\n resp.update({\"action\": \"on_agent_action\", \"log\": log, \"tool\": tool})\n resp.update(self.get_custom_callback_meta())\n if self.stream_logs:\n self._log_stream(tool_input, resp, self.step)\n resp.update({\"tool_input\": tool_input})\n self.action_records.append(resp)\n def _get_complexity_metrics(self, text: str) -> dict:\n \"\"\"Compute text complexity metrics using textstat.\n Parameters:\n text (str): The text to analyze.\n Returns:\n (dict): A dictionary containing the complexity metrics.\n \"\"\"\n resp = {}\n if self.complexity_metrics:\n text_complexity_metrics = 
_fetch_text_complexity_metrics(text)\n resp.update(text_complexity_metrics)\n return resp\n def _get_custom_metrics(\n self, generation: Generation, prompt_idx: int, gen_idx: int\n ) -> dict:\n \"\"\"Compute Custom Metrics for an LLM Generated Output\n Args:\n generation (Generation): Output generation from an LLM\n prompt_idx (int): List index of the input prompt\n gen_idx (int): List index of the generated output\n Returns:\n dict: A dictionary containing the custom metrics.\n \"\"\"\n resp = {}\n if self.custom_metrics:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} {"id": "d6add4907a9b-9", "text": "\"\"\"\n resp = {}\n if self.custom_metrics:\n custom_metrics = self.custom_metrics(generation, prompt_idx, gen_idx)\n resp.update(custom_metrics)\n return resp\n[docs] def flush_tracker(\n self,\n langchain_asset: Any = None,\n task_type: Optional[str] = \"inference\",\n workspace: Optional[str] = None,\n project_name: Optional[str] = \"comet-langchain-demo\",\n tags: Optional[Sequence] = None,\n name: Optional[str] = None,\n visualizations: Optional[List[str]] = None,\n complexity_metrics: bool = False,\n custom_metrics: Optional[Callable] = None,\n finish: bool = False,\n reset: bool = False,\n ) -> None:\n \"\"\"Flush the tracker and set up the session.\n Everything after this will be a new table.\n Args:\n name: Name of the performed session so far, so it is identifiable\n langchain_asset: The langchain asset to save.\n finish: Whether to finish the run.\n Returns:\n None\n \"\"\"\n self._log_session(langchain_asset)\n if langchain_asset:\n try:\n self._log_model(langchain_asset)\n except Exception:\n self.comet_ml.LOGGER.error(\n \"Failed to export agent or LLM to Comet\",\n exc_info=True,\n extra={\"show_traceback\": True},\n )\n if finish:\n self.experiment.end()\n if reset:\n self._reset(\n task_type,\n workspace,\n project_name,\n tags,\n name,\n visualizations,\n complexity_metrics,\n custom_metrics,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} {"id": "d6add4907a9b-10", "text": "visualizations,\n complexity_metrics,\n custom_metrics,\n )\n def _log_stream(self, prompt: str, metadata: dict, step: int) -> None:\n self.experiment.log_text(prompt, metadata=metadata, step=step)\n def _log_model(self, langchain_asset: Any) -> None:\n model_parameters = self._get_llm_parameters(langchain_asset)\n self.experiment.log_parameters(model_parameters, prefix=\"model\")\n langchain_asset_path = Path(self.temp_dir.name, \"model.json\")\n model_name = self.name if self.name else LANGCHAIN_MODEL_NAME\n try:\n if hasattr(langchain_asset, \"save\"):\n langchain_asset.save(langchain_asset_path)\n self.experiment.log_model(model_name, str(langchain_asset_path))\n except (ValueError, AttributeError, NotImplementedError) as e:\n if hasattr(langchain_asset, \"save_agent\"):\n langchain_asset.save_agent(langchain_asset_path)\n self.experiment.log_model(model_name, str(langchain_asset_path))\n else:\n self.comet_ml.LOGGER.error(\n f\"{e}\"\n \" Could not save Langchain Asset \"\n f\"for {langchain_asset.__class__.__name__}\"\n )\n def _log_session(self, langchain_asset: Optional[Any] = None) -> None:\n try:\n llm_session_df = self._create_session_analysis_dataframe(langchain_asset)\n # Log the cleaned dataframe as a table\n self.experiment.log_table(\"langchain-llm-session.csv\", llm_session_df)\n except Exception:\n self.comet_ml.LOGGER.warning(\n \"Failed to log session data to 
Comet\",\n exc_info=True,\n extra={\"show_traceback\": True},\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} {"id": "d6add4907a9b-11", "text": "exc_info=True,\n extra={\"show_traceback\": True},\n )\n try:\n metadata = {\"langchain_version\": str(langchain.__version__)}\n # Log the langchain low-level records as a JSON file directly\n self.experiment.log_asset_data(\n self.action_records, \"langchain-action_records.json\", metadata=metadata\n )\n except Exception:\n self.comet_ml.LOGGER.warning(\n \"Failed to log session data to Comet\",\n exc_info=True,\n extra={\"show_traceback\": True},\n )\n try:\n self._log_visualizations(llm_session_df)\n except Exception:\n self.comet_ml.LOGGER.warning(\n \"Failed to log visualizations to Comet\",\n exc_info=True,\n extra={\"show_traceback\": True},\n )\n def _log_text_metrics(self, metrics: Sequence[dict], step: int) -> None:\n if not metrics:\n return\n metrics_summary = _summarize_metrics_for_generated_outputs(metrics)\n for key, value in metrics_summary.items():\n self.experiment.log_metrics(value, prefix=key, step=step)\n def _log_visualizations(self, session_df: Any) -> None:\n if not (self.visualizations and self.nlp):\n return\n spacy = import_spacy()\n prompts = session_df[\"prompts\"].tolist()\n outputs = session_df[\"text\"].tolist()\n for idx, (prompt, output) in enumerate(zip(prompts, outputs)):\n doc = self.nlp(output)\n sentence_spans = list(doc.sents)\n for visualization in self.visualizations:\n try:\n html = spacy.displacy.render(\n sentence_spans,\n style=visualization,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} {"id": "d6add4907a9b-12", "text": "sentence_spans,\n style=visualization,\n options={\"compact\": True},\n jupyter=False,\n page=True,\n )\n self.experiment.log_asset_data(\n html,\n name=f\"langchain-viz-{visualization}-{idx}.html\",\n metadata={\"prompt\": prompt},\n step=idx,\n )\n except Exception as e:\n self.comet_ml.LOGGER.warning(\n e, exc_info=True, extra={\"show_traceback\": True}\n )\n return\n def _reset(\n self,\n task_type: Optional[str] = None,\n workspace: Optional[str] = None,\n project_name: Optional[str] = None,\n tags: Optional[Sequence] = None,\n name: Optional[str] = None,\n visualizations: Optional[List[str]] = None,\n complexity_metrics: bool = False,\n custom_metrics: Optional[Callable] = None,\n ) -> None:\n _task_type = task_type if task_type else self.task_type\n _workspace = workspace if workspace else self.workspace\n _project_name = project_name if project_name else self.project_name\n _tags = tags if tags else self.tags\n _name = name if name else self.name\n _visualizations = visualizations if visualizations else self.visualizations\n _complexity_metrics = (\n complexity_metrics if complexity_metrics else self.complexity_metrics\n )\n _custom_metrics = custom_metrics if custom_metrics else self.custom_metrics\n self.__init__( # type: ignore\n task_type=_task_type,\n workspace=_workspace,\n project_name=_project_name,\n tags=_tags,\n name=_name,\n visualizations=_visualizations,\n complexity_metrics=_complexity_metrics,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} {"id": "d6add4907a9b-13", "text": "visualizations=_visualizations,\n complexity_metrics=_complexity_metrics,\n custom_metrics=_custom_metrics,\n )\n self.reset_callback_meta()\n self.temp_dir = tempfile.TemporaryDirectory()\n def 
_create_session_analysis_dataframe(self, langchain_asset: Any = None) -> dict:\n pd = import_pandas()\n llm_parameters = self._get_llm_parameters(langchain_asset)\n num_generations_per_prompt = llm_parameters.get(\"n\", 1)\n llm_start_records_df = pd.DataFrame(self.on_llm_start_records)\n # Repeat each input row based on the number of outputs generated per prompt\n llm_start_records_df = llm_start_records_df.loc[\n llm_start_records_df.index.repeat(num_generations_per_prompt)\n ].reset_index(drop=True)\n llm_end_records_df = pd.DataFrame(self.on_llm_end_records)\n llm_session_df = pd.merge(\n llm_start_records_df,\n llm_end_records_df,\n left_index=True,\n right_index=True,\n suffixes=[\"_llm_start\", \"_llm_end\"],\n )\n return llm_session_df\n def _get_llm_parameters(self, langchain_asset: Any = None) -> dict:\n if not langchain_asset:\n return {}\n try:\n if hasattr(langchain_asset, \"agent\"):\n llm_parameters = langchain_asset.agent.llm_chain.llm.dict()\n elif hasattr(langchain_asset, \"llm_chain\"):\n llm_parameters = langchain_asset.llm_chain.llm.dict()\n elif hasattr(langchain_asset, \"llm\"):\n llm_parameters = langchain_asset.llm.dict()\n else:\n llm_parameters = langchain_asset.dict()\n except Exception:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} {"id": "d6add4907a9b-14", "text": "else:\n llm_parameters = langchain_asset.dict()\n except Exception:\n return {}\n return llm_parameters", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} {"id": "4b05729c46e1-0", "text": "Source code for langchain.callbacks.utils\nimport hashlib\nfrom pathlib import Path\nfrom typing import Any, Dict, Iterable, Tuple, Union\n[docs]def import_spacy() -> Any:\n \"\"\"Import the spacy python package and raise an error if it is not installed.\"\"\"\n try:\n import spacy\n except ImportError:\n raise ImportError(\n \"This callback manager requires the `spacy` python \"\n \"package installed. Please install it with `pip install spacy`\"\n )\n return spacy\n[docs]def import_pandas() -> Any:\n \"\"\"Import the pandas python package and raise an error if it is not installed.\"\"\"\n try:\n import pandas\n except ImportError:\n raise ImportError(\n \"This callback manager requires the `pandas` python \"\n \"package installed. Please install it with `pip install pandas`\"\n )\n return pandas\n[docs]def import_textstat() -> Any:\n \"\"\"Import the textstat python package and raise an error if it is not installed.\"\"\"\n try:\n import textstat\n except ImportError:\n raise ImportError(\n \"This callback manager requires the `textstat` python \"\n \"package installed. 
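A sketch of these helpers in use, including the flatten_dict utility defined just below (values illustrative):

from langchain.callbacks.utils import flatten_dict, import_pandas

pd = import_pandas()  # raises a descriptive ImportError when pandas is missing
flat = flatten_dict({"llm": {"model_name": "x", "temperature": 0.7}})
# -> {'llm_model_name': 'x', 'llm_temperature': 0.7}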
Please install it with `pip install textstat`\"\n )\n return textstat\ndef _flatten_dict(\n nested_dict: Dict[str, Any], parent_key: str = \"\", sep: str = \"_\"\n) -> Iterable[Tuple[str, Any]]:\n \"\"\"\n Generator that yields flattened items from a nested dictionary for a flat dict.\n Parameters:\n nested_dict (dict): The nested dictionary to flatten.\n parent_key (str): The prefix to prepend to the keys of the flattened dict.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/utils.html"} {"id": "4b05729c46e1-1", "text": "parent_key (str): The prefix to prepend to the keys of the flattened dict.\n sep (str): The separator to use between the parent key and the key of the\n flattened dictionary.\n Yields:\n (str, any): A key-value pair from the flattened dictionary.\n \"\"\"\n for key, value in nested_dict.items():\n new_key = parent_key + sep + key if parent_key else key\n if isinstance(value, dict):\n yield from _flatten_dict(value, new_key, sep)\n else:\n yield new_key, value\n[docs]def flatten_dict(\n nested_dict: Dict[str, Any], parent_key: str = \"\", sep: str = \"_\"\n) -> Dict[str, Any]:\n \"\"\"Flattens a nested dictionary into a flat dictionary.\n Parameters:\n nested_dict (dict): The nested dictionary to flatten.\n parent_key (str): The prefix to prepend to the keys of the flattened dict.\n sep (str): The separator to use between the parent key and the key of the\n flattened dictionary.\n Returns:\n (dict): A flat dictionary.\n \"\"\"\n flat_dict = {k: v for k, v in _flatten_dict(nested_dict, parent_key, sep)}\n return flat_dict\n[docs]def hash_string(s: str) -> str:\n \"\"\"Hash a string using sha1.\n Parameters:\n s (str): The string to hash.\n Returns:\n (str): The hashed string.\n \"\"\"\n return hashlib.sha1(s.encode(\"utf-8\")).hexdigest()\n[docs]def load_json(json_path: Union[str, Path]) -> str:\n \"\"\"Load json file to a string.\n Parameters:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/utils.html"} {"id": "4b05729c46e1-2", "text": "\"\"\"Load json file to a string.\n Parameters:\n json_path (str): The path to the json file.\n Returns:\n (str): The string representation of the json file.\n \"\"\"\n with open(json_path, \"r\") as f:\n data = f.read()\n return data\nclass BaseMetadataCallbackHandler:\n \"\"\"This class handles the metadata and associated function states for callbacks.\n Attributes:\n step (int): The current step.\n starts (int): The number of times the start method has been called.\n ends (int): The number of times the end method has been called.\n errors (int): The number of times the error method has been called.\n text_ctr (int): The number of times the text method has been called.\n ignore_llm_ (bool): Whether to ignore llm callbacks.\n ignore_chain_ (bool): Whether to ignore chain callbacks.\n ignore_agent_ (bool): Whether to ignore agent callbacks.\n ignore_retriever_ (bool): Whether to ignore retriever callbacks.\n always_verbose_ (bool): Whether to always be verbose.\n chain_starts (int): The number of times the chain start method has been called.\n chain_ends (int): The number of times the chain end method has been called.\n llm_starts (int): The number of times the llm start method has been called.\n llm_ends (int): The number of times the llm end method has been called.\n llm_streams (int): The number of times the text method has been called.\n tool_starts (int): The number of times the tool start method has been called.\n tool_ends (int): The number of times the tool 
end method has been called.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/utils.html"} {"id": "4b05729c46e1-3", "text": "tool_ends (int): The number of times the tool end method has been called.\n agent_ends (int): The number of times the agent end method has been called.\n on_llm_start_records (list): A list of records of the on_llm_start method.\n on_llm_token_records (list): A list of records of the on_llm_token method.\n on_llm_end_records (list): A list of records of the on_llm_end method.\n on_chain_start_records (list): A list of records of the on_chain_start method.\n on_chain_end_records (list): A list of records of the on_chain_end method.\n on_tool_start_records (list): A list of records of the on_tool_start method.\n on_tool_end_records (list): A list of records of the on_tool_end method.\n on_agent_finish_records (list): A list of records of the on_agent_end method.\n \"\"\"\n def __init__(self) -> None:\n self.step = 0\n self.starts = 0\n self.ends = 0\n self.errors = 0\n self.text_ctr = 0\n self.ignore_llm_ = False\n self.ignore_chain_ = False\n self.ignore_agent_ = False\n self.ignore_retriever_ = False\n self.always_verbose_ = False\n self.chain_starts = 0\n self.chain_ends = 0\n self.llm_starts = 0\n self.llm_ends = 0\n self.llm_streams = 0\n self.tool_starts = 0\n self.tool_ends = 0\n self.agent_ends = 0\n self.on_llm_start_records: list = []\n self.on_llm_token_records: list = []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/utils.html"} {"id": "4b05729c46e1-4", "text": "self.on_llm_token_records: list = []\n self.on_llm_end_records: list = []\n self.on_chain_start_records: list = []\n self.on_chain_end_records: list = []\n self.on_tool_start_records: list = []\n self.on_tool_end_records: list = []\n self.on_text_records: list = []\n self.on_agent_finish_records: list = []\n self.on_agent_action_records: list = []\n @property\n def always_verbose(self) -> bool:\n \"\"\"Whether to call verbose callbacks even if verbose is False.\"\"\"\n return self.always_verbose_\n @property\n def ignore_llm(self) -> bool:\n \"\"\"Whether to ignore LLM callbacks.\"\"\"\n return self.ignore_llm_\n @property\n def ignore_chain(self) -> bool:\n \"\"\"Whether to ignore chain callbacks.\"\"\"\n return self.ignore_chain_\n @property\n def ignore_agent(self) -> bool:\n \"\"\"Whether to ignore agent callbacks.\"\"\"\n return self.ignore_agent_\n def get_custom_callback_meta(self) -> Dict[str, Any]:\n return {\n \"step\": self.step,\n \"starts\": self.starts,\n \"ends\": self.ends,\n \"errors\": self.errors,\n \"text_ctr\": self.text_ctr,\n \"chain_starts\": self.chain_starts,\n \"chain_ends\": self.chain_ends,\n \"llm_starts\": self.llm_starts,\n \"llm_ends\": self.llm_ends,\n \"llm_streams\": self.llm_streams,\n \"tool_starts\": self.tool_starts,\n \"tool_ends\": self.tool_ends,\n \"agent_ends\": self.agent_ends,\n }\n def reset_callback_meta(self) -> None:\n \"\"\"Reset the callback metadata.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/utils.html"} {"id": "4b05729c46e1-5", "text": "def reset_callback_meta(self) -> None:\n \"\"\"Reset the callback metadata.\"\"\"\n self.step = 0\n self.starts = 0\n self.ends = 0\n self.errors = 0\n self.text_ctr = 0\n self.ignore_llm_ = False\n self.ignore_chain_ = False\n self.ignore_agent_ = False\n self.always_verbose_ = False\n self.chain_starts = 0\n self.chain_ends = 0\n self.llm_starts = 0\n self.llm_ends = 0\n self.llm_streams = 
0\n self.tool_starts = 0\n self.tool_ends = 0\n self.agent_ends = 0\n self.on_llm_start_records = []\n self.on_llm_token_records = []\n self.on_llm_end_records = []\n self.on_chain_start_records = []\n self.on_chain_end_records = []\n self.on_tool_start_records = []\n self.on_tool_end_records = []\n self.on_text_records = []\n self.on_agent_finish_records = []\n self.on_agent_action_records = []\n return None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/utils.html"} {"id": "5e4fd7719be7-0", "text": "Source code for langchain.callbacks.context_callback\n\"\"\"Callback handler for Context AI\"\"\"\nimport os\nfrom typing import Any, Dict, List\nfrom uuid import UUID\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.schema import (\n BaseMessage,\n LLMResult,\n)\n[docs]def import_context() -> Any:\n try:\n import getcontext # noqa: F401\n from getcontext.generated.models import (\n Conversation,\n Message,\n MessageRole,\n Rating,\n )\n from getcontext.token import Credential # noqa: F401\n except ImportError:\n raise ImportError(\n \"To use the context callback manager you need to have the \"\n \"`getcontext` python package installed (version >=0.3.0). \"\n \"Please install it with `pip install --upgrade python-context`\"\n )\n return getcontext, Credential, Conversation, Message, MessageRole, Rating\n[docs]class ContextCallbackHandler(BaseCallbackHandler):\n \"\"\"Callback Handler that records transcripts to Context (https://getcontext.ai).\n Keyword Args:\n token (optional): The token with which to authenticate requests to Context.\n Visit https://go.getcontext.ai/settings to generate a token.\n If not provided, the value of the `CONTEXT_TOKEN` environment\n variable will be used.\n Raises:\n ImportError: if the `context-python` package is not installed.\n Chat Example:\n >>> from langchain.llms import ChatOpenAI\n >>> from langchain.callbacks import ContextCallbackHandler\n >>> context_callback = ContextCallbackHandler(\n ... token=\"\",\n ... )\n >>> chat = ChatOpenAI(\n ... temperature=0,\n ... headers={\"user_id\": \"123\"},", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/context_callback.html"} {"id": "5e4fd7719be7-1", "text": "... temperature=0,\n ... headers={\"user_id\": \"123\"},\n ... callbacks=[context_callback],\n ... openai_api_key=\"API_KEY_HERE\",\n ... )\n >>> messages = [\n ... SystemMessage(content=\"You translate English to French.\"),\n ... HumanMessage(content=\"I love programming with LangChain.\"),\n ... ]\n >>> chat(messages)\n Chain Example:\n >>> from langchain import LLMChain\n >>> from langchain.llms import ChatOpenAI\n >>> from langchain.callbacks import ContextCallbackHandler\n >>> context_callback = ContextCallbackHandler(\n ... token=\"\",\n ... )\n >>> human_message_prompt = HumanMessagePromptTemplate(\n ... prompt=PromptTemplate(\n ... template=\"What is a good name for a company that makes {product}?\",\n ... input_variables=[\"product\"],\n ... ),\n ... )\n >>> chat_prompt_template = ChatPromptTemplate.from_messages(\n ... [human_message_prompt]\n ... )\n >>> callback = ContextCallbackHandler(token)\n >>> # Note: the same callback object must be shared between the\n ... LLM and the chain.\n >>> chat = ChatOpenAI(temperature=0.9, callbacks=[callback])\n >>> chain = LLMChain(\n ... llm=chat,\n ... prompt=chat_prompt_template,\n ... callbacks=[callback]\n ... 
)\n >>> chain.run(\"colorful socks\")\n \"\"\"\n def __init__(self, token: str = \"\", verbose: bool = False, **kwargs: Any) -> None:\n (\n self.context,\n self.credential,\n self.conversation_model,\n self.message_model,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/context_callback.html"} {"id": "5e4fd7719be7-2", "text": "self.credential,\n self.conversation_model,\n self.message_model,\n self.message_role_model,\n self.rating_model,\n ) = import_context()\n token = token or os.environ.get(\"CONTEXT_TOKEN\") or \"\"\n self.client = self.context.ContextAPI(credential=self.credential(token))\n self.chain_run_id = None\n self.llm_model = None\n self.messages: List[Any] = []\n self.metadata: Dict[str, str] = {}\n[docs] def on_chat_model_start(\n self,\n serialized: Dict[str, Any],\n messages: List[List[BaseMessage]],\n *,\n run_id: UUID,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run when the chat model is started.\"\"\"\n llm_model = kwargs.get(\"invocation_params\", {}).get(\"model\", None)\n if llm_model is not None:\n self.metadata[\"llm_model\"] = llm_model\n if len(messages) == 0:\n return\n for message in messages[0]:\n role = self.message_role_model.SYSTEM\n if message.type == \"human\":\n role = self.message_role_model.USER\n elif message.type == \"system\":\n role = self.message_role_model.SYSTEM\n elif message.type == \"ai\":\n role = self.message_role_model.ASSISTANT\n self.messages.append(\n self.message_model(\n message=message.content,\n role=role,\n )\n )\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Run when LLM ends.\"\"\"\n if len(response.generations) == 0 or len(response.generations[0]) == 0:\n return", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/context_callback.html"} {"id": "5e4fd7719be7-3", "text": "return\n if not self.chain_run_id:\n generation = response.generations[0][0]\n self.messages.append(\n self.message_model(\n message=generation.text,\n role=self.message_role_model.ASSISTANT,\n )\n )\n self._log_conversation()\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain starts.\"\"\"\n self.chain_run_id = kwargs.get(\"run_id\", None)\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Run when chain ends.\"\"\"\n self.messages.append(\n self.message_model(\n message=outputs[\"text\"],\n role=self.message_role_model.ASSISTANT,\n )\n )\n self._log_conversation()\n self.chain_run_id = None\n def _log_conversation(self) -> None:\n \"\"\"Log the conversation to the context API.\"\"\"\n if len(self.messages) == 0:\n return\n self.client.log.conversation_upsert(\n body={\n \"conversation\": self.conversation_model(\n messages=self.messages,\n metadata=self.metadata,\n )\n }\n )\n self.messages = []\n self.metadata = {}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/context_callback.html"} {"id": "defccdfe5e48-0", "text": "Source code for langchain.callbacks.streaming_stdout\n\"\"\"Callback Handler streams to stdout on new llm token.\"\"\"\nimport sys\nfrom typing import Any, Dict, List, Union\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.schema import AgentAction, AgentFinish, LLMResult\n[docs]class StreamingStdOutCallbackHandler(BaseCallbackHandler):\n \"\"\"Callback handler for streaming. 
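A short usage sketch for `StreamingStdOutCallbackHandler` may help here. The wiring below is an assumed typical pattern rather than code taken from this file, and it presumes an `OPENAI_API_KEY` is configured in the environment.

```python
# Assumed typical usage (not from this module): the LLM must be created
# with streaming=True so on_llm_new_token fires for every token.
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import OpenAI

llm = OpenAI(
    streaming=True,                                # tokens arrive one at a time
    callbacks=[StreamingStdOutCallbackHandler()],  # each token -> sys.stdout
    temperature=0,
)
llm("Tell me a one-line joke.")  # prints incrementally while generating
```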
Only works with LLMs that support streaming.\"\"\"\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM starts running.\"\"\"\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Run on new LLM token. Only available when streaming is enabled.\"\"\"\n sys.stdout.write(token)\n sys.stdout.flush()\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Run when LLM ends running.\"\"\"\n[docs] def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM errors.\"\"\"\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain starts running.\"\"\"\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Run when chain ends running.\"\"\"\n[docs] def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain errors.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streaming_stdout.html"} {"id": "defccdfe5e48-1", "text": ") -> None:\n \"\"\"Run when chain errors.\"\"\"\n[docs] def on_tool_start(\n self, serialized: Dict[str, Any], input_str: str, **kwargs: Any\n ) -> None:\n \"\"\"Run when tool starts running.\"\"\"\n[docs] def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:\n \"\"\"Run on agent action.\"\"\"\n pass\n[docs] def on_tool_end(self, output: str, **kwargs: Any) -> None:\n \"\"\"Run when tool ends running.\"\"\"\n[docs] def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when tool errors.\"\"\"\n[docs] def on_text(self, text: str, **kwargs: Any) -> None:\n \"\"\"Run on arbitrary text.\"\"\"\n[docs] def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:\n \"\"\"Run on agent end.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streaming_stdout.html"} {"id": "91a9aa1a5740-0", "text": "Source code for langchain.callbacks.arthur_callback\n\"\"\"ArthurAI's Callback Handler.\"\"\"\nfrom __future__ import annotations\nimport os\nimport uuid\nfrom collections import defaultdict\nfrom datetime import datetime\nfrom time import time\nfrom typing import TYPE_CHECKING, Any, DefaultDict, Dict, List, Optional, Union\nimport numpy as np\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.schema import AgentAction, AgentFinish, LLMResult\nif TYPE_CHECKING:\n import arthurai\n from arthurai.core.models import ArthurModel\nPROMPT_TOKENS = \"prompt_tokens\"\nCOMPLETION_TOKENS = \"completion_tokens\"\nTOKEN_USAGE = \"token_usage\"\nFINISH_REASON = \"finish_reason\"\nDURATION = \"duration\"\ndef _lazy_load_arthur() -> arthurai:\n \"\"\"Lazy load Arthur.\"\"\"\n try:\n import arthurai\n except ImportError as e:\n raise ImportError(\n \"To use the ArthurCallbackHandler you need the\"\n \" `arthurai` package. Please install it with\"\n \" `pip install arthurai`.\",\n e,\n )\n return arthurai\n[docs]class ArthurCallbackHandler(BaseCallbackHandler):\n \"\"\"Callback Handler that logs to Arthur platform.\n Arthur helps enterprise teams optimize model operations\n and performance at scale. The Arthur API tracks model\n performance, explainability, and fairness across tabular,\n NLP, and CV models. 
Our API is model- and platform-agnostic,\n and continuously scales with complex and dynamic enterprise needs.\n To learn more about Arthur, visit our website at\n https://www.arthur.ai/ or read the Arthur docs at\n https://docs.arthur.ai/\n \"\"\"\n def __init__(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/arthur_callback.html"} {"id": "91a9aa1a5740-1", "text": "\"\"\"\n def __init__(\n self,\n arthur_model: ArthurModel,\n ) -> None:\n \"\"\"Initialize callback handler.\"\"\"\n super().__init__()\n arthurai = _lazy_load_arthur()\n Stage = arthurai.common.constants.Stage\n ValueType = arthurai.common.constants.ValueType\n self.arthur_model = arthur_model\n # save the attributes of this model to be used when preparing\n # inferences to log to Arthur in on_llm_end()\n self.attr_names = set([a.name for a in self.arthur_model.get_attributes()])\n self.input_attr = [\n x\n for x in self.arthur_model.get_attributes()\n if x.stage == Stage.ModelPipelineInput\n and x.value_type == ValueType.Unstructured_Text\n ][0].name\n self.output_attr = [\n x\n for x in self.arthur_model.get_attributes()\n if x.stage == Stage.PredictedValue\n and x.value_type == ValueType.Unstructured_Text\n ][0].name\n self.token_likelihood_attr = None\n if (\n len(\n [\n x\n for x in self.arthur_model.get_attributes()\n if x.value_type == ValueType.TokenLikelihoods\n ]\n )\n > 0\n ):\n self.token_likelihood_attr = [\n x\n for x in self.arthur_model.get_attributes()\n if x.value_type == ValueType.TokenLikelihoods\n ][0].name\n self.run_map: DefaultDict[str, Any] = defaultdict(dict)\n[docs] @classmethod\n def from_credentials(\n cls,\n model_id: str,\n arthur_url: Optional[str] = \"https://app.arthur.ai\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/arthur_callback.html"} {"id": "91a9aa1a5740-2", "text": "arthur_url: Optional[str] = \"https://app.arthur.ai\",\n arthur_login: Optional[str] = None,\n arthur_password: Optional[str] = None,\n ) -> ArthurCallbackHandler:\n \"\"\"Initialize callback handler from Arthur credentials.\n Args:\n model_id (str): The ID of the arthur model to log to.\n arthur_url (str, optional): The URL of the Arthur instance to log to.\n Defaults to \"https://app.arthur.ai\".\n arthur_login (str, optional): The login to use to connect to Arthur.\n Defaults to None.\n arthur_password (str, optional): The password to use to connect to\n Arthur. Defaults to None.\n Returns:\n ArthurCallbackHandler: The initialized callback handler.\n \"\"\"\n arthurai = _lazy_load_arthur()\n ArthurAI = arthurai.ArthurAI\n ResponseClientError = arthurai.common.exceptions.ResponseClientError\n # connect to Arthur\n if arthur_login is None:\n try:\n arthur_api_key = os.environ[\"ARTHUR_API_KEY\"]\n except KeyError:\n raise ValueError(\n \"No Arthur authentication provided. 
Either give\"\n \" a login to the ArthurCallbackHandler\"\n \" or set an ARTHUR_API_KEY as an environment variable.\"\n )\n arthur = ArthurAI(url=arthur_url, access_key=arthur_api_key)\n else:\n if arthur_password is None:\n arthur = ArthurAI(url=arthur_url, login=arthur_login)\n else:\n arthur = ArthurAI(\n url=arthur_url, login=arthur_login, password=arthur_password\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/arthur_callback.html"} {"id": "91a9aa1a5740-3", "text": ")\n # get model from Arthur by the provided model ID\n try:\n arthur_model = arthur.get_model(model_id)\n except ResponseClientError:\n raise ValueError(\n f\"Was unable to retrieve model with id {model_id} from Arthur.\"\n \" Make sure the ID corresponds to a model that is currently\"\n \" registered with your Arthur account.\"\n )\n return cls(arthur_model)\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n \"\"\"On LLM start, save the input prompts\"\"\"\n run_id = kwargs[\"run_id\"]\n self.run_map[run_id][\"input_texts\"] = prompts\n self.run_map[run_id][\"start_time\"] = time()\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"On LLM end, send data to Arthur.\"\"\"\n try:\n import pytz # type: ignore[import]\n except ImportError as e:\n raise ImportError(\n \"Could not import pytz. Please install it with 'pip install pytz'.\"\n ) from e\n run_id = kwargs[\"run_id\"]\n # get the run params from this run ID,\n # or raise an error if this run ID has no corresponding metadata in self.run_map\n try:\n run_map_data = self.run_map[run_id]\n except KeyError as e:\n raise KeyError(\n \"This function has been called with a run_id\"\n \" that was never registered in on_llm_start().\"\n \" Restart and try running the LLM again\"\n ) from e", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/arthur_callback.html"} {"id": "91a9aa1a5740-4", "text": "\" Restart and try running the LLM again\"\n ) from e\n # mark the duration time between on_llm_start() and on_llm_end()\n time_from_start_to_end = time() - run_map_data[\"start_time\"]\n # create inferences to log to Arthur\n inferences = []\n for i, generations in enumerate(response.generations):\n for generation in generations:\n inference = {\n \"partner_inference_id\": str(uuid.uuid4()),\n \"inference_timestamp\": datetime.now(tz=pytz.UTC),\n self.input_attr: run_map_data[\"input_texts\"][i],\n self.output_attr: generation.text,\n }\n if generation.generation_info is not None:\n # add finish reason to the inference\n # if generation info contains a finish reason and\n # if the ArthurModel was registered to monitor finish_reason\n if (\n FINISH_REASON in generation.generation_info\n and FINISH_REASON in self.attr_names\n ):\n inference[FINISH_REASON] = generation.generation_info[\n FINISH_REASON\n ]\n # add token likelihoods data to the inference if the ArthurModel\n # was registered to monitor token likelihoods\n logprobs_data = generation.generation_info[\"logprobs\"]\n if (\n logprobs_data is not None\n and self.token_likelihood_attr is not None\n ):\n logprobs = logprobs_data[\"top_logprobs\"]\n likelihoods = [\n {k: np.exp(v) for k, v in logprobs[i].items()}\n for i in range(len(logprobs))\n ]\n inference[self.token_likelihood_attr] = likelihoods\n # add token usage counts to the inference if the", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/arthur_callback.html"} {"id": 
"91a9aa1a5740-5", "text": "# add token usage counts to the inference if the\n # ArthurModel was registered to monitor token usage\n if (\n isinstance(response.llm_output, dict)\n and TOKEN_USAGE in response.llm_output\n ):\n token_usage = response.llm_output[TOKEN_USAGE]\n if (\n PROMPT_TOKENS in token_usage\n and PROMPT_TOKENS in self.attr_names\n ):\n inference[PROMPT_TOKENS] = token_usage[PROMPT_TOKENS]\n if (\n COMPLETION_TOKENS in token_usage\n and COMPLETION_TOKENS in self.attr_names\n ):\n inference[COMPLETION_TOKENS] = token_usage[COMPLETION_TOKENS]\n # add inference duration to the inference if the ArthurModel\n # was registered to monitor inference duration\n if DURATION in self.attr_names:\n inference[DURATION] = time_from_start_to_end\n inferences.append(inference)\n # send inferences to arthur\n self.arthur_model.send_inferences(inferences)\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n \"\"\"On chain start, do nothing.\"\"\"\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"On chain end, do nothing.\"\"\"\n[docs] def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing when LLM outputs an error.\"\"\"\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"On new token, pass.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/arthur_callback.html"} {"id": "91a9aa1a5740-6", "text": "\"\"\"On new token, pass.\"\"\"\n[docs] def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing when LLM chain outputs an error.\"\"\"\n[docs] def on_tool_start(\n self,\n serialized: Dict[str, Any],\n input_str: str,\n **kwargs: Any,\n ) -> None:\n \"\"\"Do nothing when tool starts.\"\"\"\n[docs] def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:\n \"\"\"Do nothing when agent takes a specific action.\"\"\"\n[docs] def on_tool_end(\n self,\n output: str,\n observation_prefix: Optional[str] = None,\n llm_prefix: Optional[str] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Do nothing when tool ends.\"\"\"\n[docs] def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing when tool outputs an error.\"\"\"\n[docs] def on_text(self, text: str, **kwargs: Any) -> None:\n \"\"\"Do nothing\"\"\"\n[docs] def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:\n \"\"\"Do nothing\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/arthur_callback.html"} {"id": "d8b81af2614e-0", "text": "Source code for langchain.callbacks.argilla_callback\nimport os\nimport warnings\nfrom typing import Any, Dict, List, Optional, Union\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.schema import AgentAction, AgentFinish, LLMResult\n[docs]class ArgillaCallbackHandler(BaseCallbackHandler):\n \"\"\"Callback Handler that logs into Argilla.\n Args:\n dataset_name: name of the `FeedbackDataset` in Argilla. Note that it must\n exist in advance. If you need help on how to create a `FeedbackDataset` in\n Argilla, please visit\n https://docs.argilla.io/en/latest/guides/llms/practical_guides/use_argilla_callback_in_langchain.html.\n workspace_name: name of the workspace in Argilla where the specified\n `FeedbackDataset` lives in. 
Defaults to `None`, which means that the\n default workspace will be used.\n api_url: URL of the Argilla Server that we want to use, and where the\n `FeedbackDataset` lives in. Defaults to `None`, which means that either\n `ARGILLA_API_URL` environment variable or the default http://localhost:6900\n will be used.\n api_key: API Key to connect to the Argilla Server. Defaults to `None`, which\n means that either `ARGILLA_API_KEY` environment variable or the default\n `argilla.apikey` will be used.\n Raises:\n ImportError: if the `argilla` package is not installed.\n ConnectionError: if the connection to Argilla fails.\n FileNotFoundError: if the `FeedbackDataset` retrieval from Argilla fails.\n Examples:\n >>> from langchain.llms import OpenAI\n >>> from langchain.callbacks import ArgillaCallbackHandler\n >>> argilla_callback = ArgillaCallbackHandler(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/argilla_callback.html"} {"id": "d8b81af2614e-1", "text": ">>> argilla_callback = ArgillaCallbackHandler(\n ... dataset_name=\"my-dataset\",\n ... workspace_name=\"my-workspace\",\n ... api_url=\"http://localhost:6900\",\n ... api_key=\"argilla.apikey\",\n ... )\n >>> llm = OpenAI(\n ... temperature=0,\n ... callbacks=[argilla_callback],\n ... verbose=True,\n ... openai_api_key=\"API_KEY_HERE\",\n ... )\n >>> llm.generate([\n ... \"What is the best NLP-annotation tool out there? (no bias at all)\",\n ... ])\n \"Argilla, no doubt about it.\"\n \"\"\"\n def __init__(\n self,\n dataset_name: str,\n workspace_name: Optional[str] = None,\n api_url: Optional[str] = None,\n api_key: Optional[str] = None,\n ) -> None:\n \"\"\"Initializes the `ArgillaCallbackHandler`.\n Args:\n dataset_name: name of the `FeedbackDataset` in Argilla. Note that it must\n exist in advance. If you need help on how to create a `FeedbackDataset`\n in Argilla, please visit\n https://docs.argilla.io/en/latest/guides/llms/practical_guides/use_argilla_callback_in_langchain.html.\n workspace_name: name of the workspace in Argilla where the specified\n `FeedbackDataset` lives in. Defaults to `None`, which means that the\n default workspace will be used.\n api_url: URL of the Argilla Server that we want to use, and where the\n `FeedbackDataset` lives in. Defaults to `None`, which means that either", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/argilla_callback.html"} {"id": "d8b81af2614e-2", "text": "`FeedbackDataset` lives in. Defaults to `None`, which means that either\n `ARGILLA_API_URL` environment variable or the default\n http://localhost:6900 will be used.\n api_key: API Key to connect to the Argilla Server. Defaults to `None`, which\n means that either `ARGILLA_API_KEY` environment variable or the default\n `argilla.apikey` will be used.\n Raises:\n ImportError: if the `argilla` package is not installed.\n ConnectionError: if the connection to Argilla fails.\n FileNotFoundError: if the `FeedbackDataset` retrieval from Argilla fails.\n \"\"\"\n super().__init__()\n # Import Argilla (not via `import_argilla` to keep hints in IDEs)\n try:\n import argilla as rg # noqa: F401\n except ImportError:\n raise ImportError(\n \"To use the Argilla callback manager you need to have the `argilla` \"\n \"Python package installed. 
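As the `on_llm_end` and `on_chain_end` implementations further down show, the handler pushes one record per generation, restricted to the two supported fields. The record shape, on toy data:

```python
# Record shape expected by dataset.add_records (toy prompt/response data):
# one record per generation, fields restricted to "prompt" and "response".
prompts = ["What is the best NLP-annotation tool out there?"]
generations = [["Argilla, no doubt about it."]]

records = [
    {"fields": {"prompt": prompt, "response": text.strip()}}
    for prompt, texts in zip(prompts, generations)
    for text in texts
]
# [{"fields": {"prompt": "...", "response": "Argilla, no doubt about it."}}]
```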
Please install it with `pip install argilla`\"\n )\n # Show a warning message if Argilla will assume the default values will be used\n if api_url is None and os.getenv(\"ARGILLA_API_URL\") is None:\n warnings.warn(\n (\n \"Since `api_url` is None, and the env var `ARGILLA_API_URL` is not\"\n \" set, it will default to `http://localhost:6900`.\"\n ),\n )\n if api_key is None and os.getenv(\"ARGILLA_API_KEY\") is None:\n warnings.warn(\n (\n \"Since `api_key` is None, and the env var `ARGILLA_API_KEY` is not\"\n \" set, it will default to `argilla.apikey`.\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/argilla_callback.html"} {"id": "d8b81af2614e-3", "text": "\" set, it will default to `argilla.apikey`.\"\n ),\n )\n # Connect to Argilla with the provided credentials, if applicable\n try:\n rg.init(\n api_key=api_key,\n api_url=api_url,\n )\n except Exception as e:\n raise ConnectionError(\n f\"Could not connect to Argilla with exception: '{e}'.\\n\"\n \"Please check your `api_key` and `api_url`, and make sure that \"\n \"the Argilla server is up and running. If the problem persists \"\n \"please report it to https://github.com/argilla-io/argilla/issues \"\n \"with the label `langchain`.\"\n ) from e\n # Set the Argilla variables\n self.dataset_name = dataset_name\n self.workspace_name = workspace_name or rg.get_workspace()\n # Retrieve the `FeedbackDataset` from Argilla (without existing records)\n try:\n self.dataset = rg.FeedbackDataset.from_argilla(\n name=self.dataset_name,\n workspace=self.workspace_name,\n with_records=False,\n )\n except Exception as e:\n raise FileNotFoundError(\n \"`FeedbackDataset` retrieval from Argilla failed with exception:\"\n f\" '{e}'.\\nPlease check that the dataset with\"\n f\" name={self.dataset_name} in the\"\n f\" workspace={self.workspace_name} exists in advance. If you need help\"\n \" on how to create a `langchain`-compatible `FeedbackDataset` in\"\n \" Argilla, please visit\"\n \" https://docs.argilla.io/en/latest/guides/llms/practical_guides/use_argilla_callback_in_langchain.html.\" # noqa: E501", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/argilla_callback.html"} {"id": "d8b81af2614e-4", "text": "\" If the problem persists please report it to\"\n \" https://github.com/argilla-io/argilla/issues with the label\"\n \" `langchain`.\"\n ) from e\n supported_fields = [\"prompt\", \"response\"]\n if supported_fields != [field.name for field in self.dataset.fields]:\n raise ValueError(\n f\"`FeedbackDataset` with name={self.dataset_name} in the\"\n f\" workspace={self.workspace_name} \"\n \"had fields that are not supported yet for the `langchain` integration.\"\n \" Supported fields are: \"\n f\"{supported_fields}, and the current `FeedbackDataset` fields are\"\n f\" {[field.name for field in self.dataset.fields]}. \"\n \"For more information on how to create a `langchain`-compatible\"\n \" `FeedbackDataset` in Argilla, please visit\"\n \" https://docs.argilla.io/en/latest/guides/llms/practical_guides/use_argilla_callback_in_langchain.html.\" # noqa: E501\n )\n self.prompts: Dict[str, List[str]] = {}\n warnings.warn(\n (\n \"The `ArgillaCallbackHandler` is currently in beta and is subject to \"\n \"change based on updates to `langchain`. 
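The field check above is strict about both names and order. Restated on its own, with a stand-in list instead of a live dataset:

```python
# Standalone restatement of the supported-fields check: the integration only
# accepts a FeedbackDataset whose fields are exactly ["prompt", "response"].
def validate_fields(dataset_field_names: list) -> None:
    supported_fields = ["prompt", "response"]
    if supported_fields != dataset_field_names:
        raise ValueError(
            f"Expected fields {supported_fields}, got {dataset_field_names}."
        )

validate_fields(["prompt", "response"])    # passes
# validate_fields(["response", "prompt"])  # would raise: order matters too
```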
Please report any issues to \"\n \"https://github.com/argilla-io/argilla/issues with the tag `langchain`.\"\n ),\n )\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n \"\"\"Save the prompts in memory when an LLM starts.\"\"\"\n self.prompts.update({str(kwargs[\"parent_run_id\"] or kwargs[\"run_id\"]): prompts})", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/argilla_callback.html"} {"id": "d8b81af2614e-5", "text": "[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Do nothing when a new token is generated.\"\"\"\n pass\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Log records to Argilla when an LLM ends.\"\"\"\n # Do nothing if there's a parent_run_id, since we will log the records when\n # the chain ends\n if kwargs[\"parent_run_id\"]:\n return\n # Creates the records and adds them to the `FeedbackDataset`\n prompts = self.prompts[str(kwargs[\"run_id\"])]\n for prompt, generations in zip(prompts, response.generations):\n self.dataset.add_records(\n records=[\n {\n \"fields\": {\n \"prompt\": prompt,\n \"response\": generation.text.strip(),\n },\n }\n for generation in generations\n ]\n )\n # Push the records to Argilla\n self.dataset.push_to_argilla()\n # Pop current run from `self.runs`\n self.prompts.pop(str(kwargs[\"run_id\"]))\n[docs] def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing when LLM outputs an error.\"\"\"\n pass\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n \"\"\"If the key `input` is in `inputs`, then save it in `self.prompts` using\n either the `parent_run_id` or the `run_id` as the key. This is done so that", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/argilla_callback.html"} {"id": "d8b81af2614e-6", "text": "we don't log the same input prompt twice, once when the LLM starts and once\n when the chain starts.\n \"\"\"\n if \"input\" in inputs:\n self.prompts.update(\n {\n str(kwargs[\"parent_run_id\"] or kwargs[\"run_id\"]): (\n inputs[\"input\"]\n if isinstance(inputs[\"input\"], list)\n else [inputs[\"input\"]]\n )\n }\n )\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"If either the `parent_run_id` or the `run_id` is in `self.prompts`, then\n log the outputs to Argilla, and pop the run from `self.prompts`. 
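The keying convention in `on_llm_start` above (parent run id if present, else run id) is what lets a chain and the LLM it wraps share one prompt entry, so nothing is logged twice. A small self-contained demonstration with made-up run ids:

```python
# parent_run_id-or-run_id keying, as in on_llm_start/on_chain_start.
from typing import Dict, List, Optional
from uuid import UUID, uuid4

prompts_by_run: Dict[str, List[str]] = {}

def remember(prompts: List[str], run_id: UUID,
             parent_run_id: Optional[UUID] = None) -> None:
    prompts_by_run[str(parent_run_id or run_id)] = prompts

chain_run = uuid4()
remember(["Hi"], run_id=uuid4(), parent_run_id=chain_run)  # keyed by the chain
remember(["Hi"], run_id=chain_run)                         # same key, no duplicate
assert list(prompts_by_run) == [str(chain_run)]
```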
The behavior\n differs if the output is a list or not.\n \"\"\"\n if not any(\n key in self.prompts\n for key in [str(kwargs[\"parent_run_id\"]), str(kwargs[\"run_id\"])]\n ):\n return\n prompts = self.prompts.get(str(kwargs[\"parent_run_id\"])) or self.prompts.get(\n str(kwargs[\"run_id\"])\n )\n for chain_output_key, chain_output_val in outputs.items():\n if isinstance(chain_output_val, list):\n # Creates the records and adds them to the `FeedbackDataset`\n self.dataset.add_records(\n records=[\n {\n \"fields\": {\n \"prompt\": prompt,\n \"response\": output[\"text\"].strip(),\n },\n }\n for prompt, output in zip(\n prompts, chain_output_val # type: ignore\n )\n ]\n )\n else:\n # Creates the records and adds them to the `FeedbackDataset`\n self.dataset.add_records(\n records=[", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/argilla_callback.html"} {"id": "d8b81af2614e-7", "text": "self.dataset.add_records(\n records=[\n {\n \"fields\": {\n \"prompt\": \" \".join(prompts), # type: ignore\n \"response\": chain_output_val.strip(),\n },\n }\n ]\n )\n # Push the records to Argilla\n self.dataset.push_to_argilla()\n # Pop current run from `self.runs`\n if str(kwargs[\"parent_run_id\"]) in self.prompts:\n self.prompts.pop(str(kwargs[\"parent_run_id\"]))\n if str(kwargs[\"run_id\"]) in self.prompts:\n self.prompts.pop(str(kwargs[\"run_id\"]))\n[docs] def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing when LLM chain outputs an error.\"\"\"\n pass\n[docs] def on_tool_start(\n self,\n serialized: Dict[str, Any],\n input_str: str,\n **kwargs: Any,\n ) -> None:\n \"\"\"Do nothing when tool starts.\"\"\"\n pass\n[docs] def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:\n \"\"\"Do nothing when agent takes a specific action.\"\"\"\n pass\n[docs] def on_tool_end(\n self,\n output: str,\n observation_prefix: Optional[str] = None,\n llm_prefix: Optional[str] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Do nothing when tool ends.\"\"\"\n pass\n[docs] def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing when tool outputs an error.\"\"\"\n pass", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/argilla_callback.html"} {"id": "d8b81af2614e-8", "text": ") -> None:\n \"\"\"Do nothing when tool outputs an error.\"\"\"\n pass\n[docs] def on_text(self, text: str, **kwargs: Any) -> None:\n \"\"\"Do nothing\"\"\"\n pass\n[docs] def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:\n \"\"\"Do nothing\"\"\"\n pass", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/argilla_callback.html"} {"id": "89503bfe3b61-0", "text": "Source code for langchain.callbacks.manager\nfrom __future__ import annotations\nimport asyncio\nimport functools\nimport logging\nimport os\nimport warnings\nfrom contextlib import asynccontextmanager, contextmanager\nfrom contextvars import ContextVar\nfrom typing import (\n Any,\n AsyncGenerator,\n Dict,\n Generator,\n List,\n Optional,\n Sequence,\n Type,\n TypeVar,\n Union,\n cast,\n)\nfrom uuid import UUID, uuid4\nimport langchain\nfrom langchain.callbacks.base import (\n BaseCallbackHandler,\n BaseCallbackManager,\n ChainManagerMixin,\n LLMManagerMixin,\n RetrieverManagerMixin,\n RunManagerMixin,\n ToolManagerMixin,\n)\nfrom langchain.callbacks.openai_info import OpenAICallbackHandler\nfrom langchain.callbacks.stdout import 
StdOutCallbackHandler\nfrom langchain.callbacks.tracers.langchain import LangChainTracer\nfrom langchain.callbacks.tracers.langchain_v1 import LangChainTracerV1, TracerSessionV1\nfrom langchain.callbacks.tracers.stdout import ConsoleCallbackHandler\nfrom langchain.callbacks.tracers.wandb import WandbTracer\nfrom langchain.schema import (\n AgentAction,\n AgentFinish,\n Document,\n LLMResult,\n)\nfrom langchain.schema.messages import BaseMessage, get_buffer_string\nlogger = logging.getLogger(__name__)\nCallbacks = Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]\nopenai_callback_var: ContextVar[Optional[OpenAICallbackHandler]] = ContextVar(\n \"openai_callback\", default=None\n)\ntracing_callback_var: ContextVar[\n Optional[LangChainTracerV1]\n] = ContextVar( # noqa: E501\n \"tracing_callback\", default=None\n)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-1", "text": "\"tracing_callback\", default=None\n)\nwandb_tracing_callback_var: ContextVar[\n Optional[WandbTracer]\n] = ContextVar( # noqa: E501\n \"tracing_wandb_callback\", default=None\n)\ntracing_v2_callback_var: ContextVar[\n Optional[LangChainTracer]\n] = ContextVar( # noqa: E501\n \"tracing_callback_v2\", default=None\n)\ndef _get_debug() -> bool:\n return langchain.debug\n[docs]@contextmanager\ndef get_openai_callback() -> Generator[OpenAICallbackHandler, None, None]:\n \"\"\"Get the OpenAI callback handler in a context manager.\n which conveniently exposes token and cost information.\n Returns:\n OpenAICallbackHandler: The OpenAI callback handler.\n Example:\n >>> with get_openai_callback() as cb:\n ... # Use the OpenAI callback handler\n \"\"\"\n cb = OpenAICallbackHandler()\n openai_callback_var.set(cb)\n yield cb\n openai_callback_var.set(None)\n[docs]@contextmanager\ndef tracing_enabled(\n session_name: str = \"default\",\n) -> Generator[TracerSessionV1, None, None]:\n \"\"\"Get the Deprecated LangChainTracer in a context manager.\n Args:\n session_name (str, optional): The name of the session.\n Defaults to \"default\".\n Returns:\n TracerSessionV1: The LangChainTracer session.\n Example:\n >>> with tracing_enabled() as session:\n ... # Use the LangChainTracer session\n \"\"\"\n cb = LangChainTracerV1()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-2", "text": "\"\"\"\n cb = LangChainTracerV1()\n session = cast(TracerSessionV1, cb.load_session(session_name))\n tracing_callback_var.set(cb)\n yield session\n tracing_callback_var.set(None)\n[docs]@contextmanager\ndef wandb_tracing_enabled(\n session_name: str = \"default\",\n) -> Generator[None, None, None]:\n \"\"\"Get the WandbTracer in a context manager.\n Args:\n session_name (str, optional): The name of the session.\n Defaults to \"default\".\n Returns:\n None\n Example:\n >>> with wandb_tracing_enabled() as session:\n ... 
# Use the WandbTracer session\n \"\"\"\n cb = WandbTracer()\n wandb_tracing_callback_var.set(cb)\n yield None\n wandb_tracing_callback_var.set(None)\n[docs]@contextmanager\ndef tracing_v2_enabled(\n project_name: Optional[str] = None,\n *,\n example_id: Optional[Union[str, UUID]] = None,\n tags: Optional[List[str]] = None,\n) -> Generator[None, None, None]:\n \"\"\"Instruct LangChain to log all runs in context to LangSmith.\n Args:\n project_name (str, optional): The name of the project.\n Defaults to \"default\".\n example_id (str or UUID, optional): The ID of the example.\n Defaults to None.\n tags (List[str], optional): The tags to add to the run.\n Defaults to None.\n Returns:\n None\n Example:\n >>> with tracing_v2_enabled():\n ... # LangChain code will automatically be traced\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-3", "text": "... # LangChain code will automatically be traced\n \"\"\"\n # Issue a warning that this is experimental\n warnings.warn(\n \"The tracing v2 API is in development. \"\n \"This is not yet stable and may change in the future.\"\n )\n if isinstance(example_id, str):\n example_id = UUID(example_id)\n cb = LangChainTracer(\n example_id=example_id,\n project_name=project_name,\n tags=tags,\n )\n tracing_v2_callback_var.set(cb)\n yield\n tracing_v2_callback_var.set(None)\n[docs]@contextmanager\ndef trace_as_chain_group(\n group_name: str,\n *,\n project_name: Optional[str] = None,\n example_id: Optional[Union[str, UUID]] = None,\n tags: Optional[List[str]] = None,\n) -> Generator[CallbackManager, None, None]:\n \"\"\"Get a callback manager for a chain group in a context manager.\n Useful for grouping different calls together as a single run even if\n they aren't composed in a single chain.\n Args:\n group_name (str): The name of the chain group.\n project_name (str, optional): The name of the project.\n Defaults to None.\n example_id (str or UUID, optional): The ID of the example.\n Defaults to None.\n tags (List[str], optional): The inheritable tags to apply to all runs.\n Defaults to None.\n Returns:\n CallbackManager: The callback manager for the chain group.\n Example:\n >>> with trace_as_chain_group(\"group_name\") as manager:\n ... # Use the callback manager for the chain group\n ... llm.predict(\"Foo\", callbacks=manager)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-4", "text": "... 
llm.predict(\"Foo\", callbacks=manager)\n \"\"\"\n cb = LangChainTracer(\n project_name=project_name,\n example_id=example_id,\n )\n cm = CallbackManager.configure(\n inheritable_callbacks=[cb],\n inheritable_tags=tags,\n )\n run_manager = cm.on_chain_start({\"name\": group_name}, {})\n yield run_manager.get_child()\n run_manager.on_chain_end({})\n@asynccontextmanager\nasync def atrace_as_chain_group(\n group_name: str,\n *,\n project_name: Optional[str] = None,\n example_id: Optional[Union[str, UUID]] = None,\n tags: Optional[List[str]] = None,\n) -> AsyncGenerator[AsyncCallbackManager, None]:\n \"\"\"Get an async callback manager for a chain group in a context manager.\n Useful for grouping different async calls together as a single run even if\n they aren't composed in a single chain.\n Args:\n group_name (str): The name of the chain group.\n project_name (str, optional): The name of the project.\n Defaults to None.\n example_id (str or UUID, optional): The ID of the example.\n Defaults to None.\n tags (List[str], optional): The inheritable tags to apply to all runs.\n Defaults to None.\n Returns:\n AsyncCallbackManager: The async callback manager for the chain group.\n Example:\n >>> async with atrace_as_chain_group(\"group_name\") as manager:\n ... # Use the async callback manager for the chain group\n ... await llm.apredict(\"Foo\", callbacks=manager)\n \"\"\"\n cb = LangChainTracer(\n project_name=project_name,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-5", "text": "\"\"\"\n cb = LangChainTracer(\n project_name=project_name,\n example_id=example_id,\n )\n cm = AsyncCallbackManager.configure(\n inheritable_callbacks=[cb], inheritable_tags=tags\n )\n run_manager = await cm.on_chain_start({\"name\": group_name}, {})\n try:\n yield run_manager.get_child()\n finally:\n await run_manager.on_chain_end({})\ndef _handle_event(\n handlers: List[BaseCallbackHandler],\n event_name: str,\n ignore_condition_name: Optional[str],\n *args: Any,\n **kwargs: Any,\n) -> None:\n \"\"\"Generic event handler for CallbackManager.\"\"\"\n message_strings: Optional[List[str]] = None\n for handler in handlers:\n try:\n if ignore_condition_name is None or not getattr(\n handler, ignore_condition_name\n ):\n getattr(handler, event_name)(*args, **kwargs)\n except NotImplementedError as e:\n if event_name == \"on_chat_model_start\":\n if message_strings is None:\n message_strings = [get_buffer_string(m) for m in args[1]]\n _handle_event(\n [handler],\n \"on_llm_start\",\n \"ignore_llm\",\n args[0],\n message_strings,\n *args[2:],\n **kwargs,\n )\n else:\n logger.warning(\n f\"Error in {handler.__class__.__name__}.{event_name} callback: {e}\"\n )\n except Exception as e:\n logger.warning(\n f\"Error in {handler.__class__.__name__}.{event_name} callback: {e}\"\n )\n if handler.raise_error:\n raise e", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-6", "text": ")\n if handler.raise_error:\n raise e\nasync def _ahandle_event_for_handler(\n handler: BaseCallbackHandler,\n event_name: str,\n ignore_condition_name: Optional[str],\n *args: Any,\n **kwargs: Any,\n) -> None:\n try:\n if ignore_condition_name is None or not getattr(handler, ignore_condition_name):\n event = getattr(handler, event_name)\n if asyncio.iscoroutinefunction(event):\n await event(*args, **kwargs)\n else:\n if handler.run_inline:\n event(*args, **kwargs)\n else:\n await 
asyncio.get_event_loop().run_in_executor(\n None, functools.partial(event, *args, **kwargs)\n )\n except NotImplementedError as e:\n if event_name == \"on_chat_model_start\":\n message_strings = [get_buffer_string(m) for m in args[1]]\n await _ahandle_event_for_handler(\n handler,\n \"on_llm_start\",\n \"ignore_llm\",\n args[0],\n message_strings,\n *args[2:],\n **kwargs,\n )\n else:\n logger.warning(\n f\"Error in {handler.__class__.__name__}.{event_name} callback: {e}\"\n )\n except Exception as e:\n logger.warning(\n f\"Error in {handler.__class__.__name__}.{event_name} callback: {e}\"\n )\n if handler.raise_error:\n raise e\nasync def _ahandle_event(\n handlers: List[BaseCallbackHandler],\n event_name: str,\n ignore_condition_name: Optional[str],\n *args: Any,\n **kwargs: Any,\n) -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-7", "text": "*args: Any,\n **kwargs: Any,\n) -> None:\n \"\"\"Generic event handler for AsyncCallbackManager.\"\"\"\n for handler in [h for h in handlers if h.run_inline]:\n await _ahandle_event_for_handler(\n handler, event_name, ignore_condition_name, *args, **kwargs\n )\n await asyncio.gather(\n *(\n _ahandle_event_for_handler(\n handler, event_name, ignore_condition_name, *args, **kwargs\n )\n for handler in handlers\n if not handler.run_inline\n )\n )\nBRM = TypeVar(\"BRM\", bound=\"BaseRunManager\")\n[docs]class BaseRunManager(RunManagerMixin):\n \"\"\"Base class for run manager (a bound callback manager).\"\"\"\n def __init__(\n self,\n *,\n run_id: UUID,\n handlers: List[BaseCallbackHandler],\n inheritable_handlers: List[BaseCallbackHandler],\n parent_run_id: Optional[UUID] = None,\n tags: Optional[List[str]] = None,\n inheritable_tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n inheritable_metadata: Optional[Dict[str, Any]] = None,\n ) -> None:\n \"\"\"Initialize the run manager.\n Args:\n run_id (UUID): The ID of the run.\n handlers (List[BaseCallbackHandler]): The list of handlers.\n inheritable_handlers (List[BaseCallbackHandler]):\n The list of inheritable handlers.\n parent_run_id (UUID, optional): The ID of the parent run.\n Defaults to None.\n tags (Optional[List[str]]): The list of tags.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-8", "text": "Defaults to None.\n tags (Optional[List[str]]): The list of tags.\n inheritable_tags (Optional[List[str]]): The list of inheritable tags.\n metadata (Optional[Dict[str, Any]]): The metadata.\n inheritable_metadata (Optional[Dict[str, Any]]): The inheritable metadata.\n \"\"\"\n self.run_id = run_id\n self.handlers = handlers\n self.inheritable_handlers = inheritable_handlers\n self.parent_run_id = parent_run_id\n self.tags = tags or []\n self.inheritable_tags = inheritable_tags or []\n self.metadata = metadata or {}\n self.inheritable_metadata = inheritable_metadata or {}\n[docs] @classmethod\n def get_noop_manager(cls: Type[BRM]) -> BRM:\n \"\"\"Return a manager that doesn't perform any operations.\n Returns:\n BaseRunManager: The noop manager.\n \"\"\"\n return cls(\n run_id=uuid4(),\n handlers=[],\n inheritable_handlers=[],\n tags=[],\n inheritable_tags=[],\n metadata={},\n inheritable_metadata={},\n )\n[docs]class RunManager(BaseRunManager):\n \"\"\"Sync Run Manager.\"\"\"\n[docs] def on_text(\n self,\n text: str,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run when text is received.\n Args:\n text (str): The received 
text.\n Returns:\n Any: The result of the callback.\n \"\"\"\n _handle_event(\n self.handlers,\n \"on_text\",\n None,\n text,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-9", "text": "tags=self.tags,\n **kwargs,\n )\n[docs]class ParentRunManager(RunManager):\n \"\"\"Sync Parent Run Manager.\"\"\"\n[docs] def get_child(self, tag: Optional[str] = None) -> CallbackManager:\n \"\"\"Get a child callback manager.\n Args:\n tag (str, optional): The tag for the child callback manager.\n Defaults to None.\n Returns:\n CallbackManager: The child callback manager.\n \"\"\"\n manager = CallbackManager(handlers=[], parent_run_id=self.run_id)\n manager.set_handlers(self.inheritable_handlers)\n manager.add_tags(self.inheritable_tags)\n manager.add_metadata(self.inheritable_metadata)\n if tag is not None:\n manager.add_tags([tag], False)\n return manager\n[docs]class AsyncRunManager(BaseRunManager):\n \"\"\"Async Run Manager.\"\"\"\n[docs] async def on_text(\n self,\n text: str,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run when text is received.\n Args:\n text (str): The received text.\n Returns:\n Any: The result of the callback.\n \"\"\"\n await _ahandle_event(\n self.handlers,\n \"on_text\",\n None,\n text,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n[docs]class AsyncParentRunManager(AsyncRunManager):\n \"\"\"Async Parent Run Manager.\"\"\"\n[docs] def get_child(self, tag: Optional[str] = None) -> AsyncCallbackManager:\n \"\"\"Get a child callback manager.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-10", "text": "\"\"\"Get a child callback manager.\n Args:\n tag (str, optional): The tag for the child callback manager.\n Defaults to None.\n Returns:\n AsyncCallbackManager: The child callback manager.\n \"\"\"\n manager = AsyncCallbackManager(handlers=[], parent_run_id=self.run_id)\n manager.set_handlers(self.inheritable_handlers)\n manager.add_tags(self.inheritable_tags)\n manager.add_metadata(self.inheritable_metadata)\n if tag is not None:\n manager.add_tags([tag], False)\n return manager\n[docs]class CallbackManagerForLLMRun(RunManager, LLMManagerMixin):\n \"\"\"Callback manager for LLM run.\"\"\"\n[docs] def on_llm_new_token(\n self,\n token: str,\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when LLM generates a new token.\n Args:\n token (str): The new token.\n \"\"\"\n _handle_event(\n self.handlers,\n \"on_llm_new_token\",\n \"ignore_llm\",\n token=token,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Run when LLM ends running.\n Args:\n response (LLMResult): The LLM result.\n \"\"\"\n _handle_event(\n self.handlers,\n \"on_llm_end\",\n \"ignore_llm\",\n response,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-11", "text": "tags=self.tags,\n **kwargs,\n )\n[docs] def on_llm_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when LLM errors.\n Args:\n error (Exception or KeyboardInterrupt): The error.\n \"\"\"\n _handle_event(\n self.handlers,\n \"on_llm_error\",\n 
\"ignore_llm\",\n error,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n[docs]class AsyncCallbackManagerForLLMRun(AsyncRunManager, LLMManagerMixin):\n \"\"\"Async callback manager for LLM run.\"\"\"\n[docs] async def on_llm_new_token(\n self,\n token: str,\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when LLM generates a new token.\n Args:\n token (str): The new token.\n \"\"\"\n await _ahandle_event(\n self.handlers,\n \"on_llm_new_token\",\n \"ignore_llm\",\n token,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n[docs] async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Run when LLM ends running.\n Args:\n response (LLMResult): The LLM result.\n \"\"\"\n await _ahandle_event(\n self.handlers,\n \"on_llm_end\",\n \"ignore_llm\",\n response,\n run_id=self.run_id,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-12", "text": "\"ignore_llm\",\n response,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n[docs] async def on_llm_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when LLM errors.\n Args:\n error (Exception or KeyboardInterrupt): The error.\n \"\"\"\n await _ahandle_event(\n self.handlers,\n \"on_llm_error\",\n \"ignore_llm\",\n error,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n[docs]class CallbackManagerForChainRun(ParentRunManager, ChainManagerMixin):\n \"\"\"Callback manager for chain run.\"\"\"\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Run when chain ends running.\n Args:\n outputs (Dict[str, Any]): The outputs of the chain.\n \"\"\"\n _handle_event(\n self.handlers,\n \"on_chain_end\",\n \"ignore_chain\",\n outputs,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n[docs] def on_chain_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when chain errors.\n Args:\n error (Exception or KeyboardInterrupt): The error.\n \"\"\"\n _handle_event(\n self.handlers,\n \"on_chain_error\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-13", "text": "_handle_event(\n self.handlers,\n \"on_chain_error\",\n \"ignore_chain\",\n error,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n[docs] def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:\n \"\"\"Run when agent action is received.\n Args:\n action (AgentAction): The agent action.\n Returns:\n Any: The result of the callback.\n \"\"\"\n _handle_event(\n self.handlers,\n \"on_agent_action\",\n \"ignore_agent\",\n action,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n[docs] def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> Any:\n \"\"\"Run when agent finish is received.\n Args:\n finish (AgentFinish): The agent finish.\n Returns:\n Any: The result of the callback.\n \"\"\"\n _handle_event(\n self.handlers,\n \"on_agent_finish\",\n \"ignore_agent\",\n finish,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n[docs]class AsyncCallbackManagerForChainRun(AsyncParentRunManager, ChainManagerMixin):\n \"\"\"Async callback manager for chain 
run.\"\"\"\n[docs] async def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Run when chain ends running.\n Args:\n outputs (Dict[str, Any]): The outputs of the chain.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-14", "text": "outputs (Dict[str, Any]): The outputs of the chain.\n \"\"\"\n await _ahandle_event(\n self.handlers,\n \"on_chain_end\",\n \"ignore_chain\",\n outputs,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n[docs] async def on_chain_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when chain errors.\n Args:\n error (Exception or KeyboardInterrupt): The error.\n \"\"\"\n await _ahandle_event(\n self.handlers,\n \"on_chain_error\",\n \"ignore_chain\",\n error,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n[docs] async def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:\n \"\"\"Run when agent action is received.\n Args:\n action (AgentAction): The agent action.\n Returns:\n Any: The result of the callback.\n \"\"\"\n await _ahandle_event(\n self.handlers,\n \"on_agent_action\",\n \"ignore_agent\",\n action,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n[docs] async def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> Any:\n \"\"\"Run when agent finish is received.\n Args:\n finish (AgentFinish): The agent finish.\n Returns:\n Any: The result of the callback.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-15", "text": "Returns:\n Any: The result of the callback.\n \"\"\"\n await _ahandle_event(\n self.handlers,\n \"on_agent_finish\",\n \"ignore_agent\",\n finish,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n[docs]class CallbackManagerForToolRun(ParentRunManager, ToolManagerMixin):\n \"\"\"Callback manager for tool run.\"\"\"\n[docs] def on_tool_end(\n self,\n output: str,\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when tool ends running.\n Args:\n output (str): The output of the tool.\n \"\"\"\n _handle_event(\n self.handlers,\n \"on_tool_end\",\n \"ignore_agent\",\n output,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n[docs] def on_tool_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when tool errors.\n Args:\n error (Exception or KeyboardInterrupt): The error.\n \"\"\"\n _handle_event(\n self.handlers,\n \"on_tool_error\",\n \"ignore_agent\",\n error,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n[docs]class AsyncCallbackManagerForToolRun(AsyncParentRunManager, ToolManagerMixin):\n \"\"\"Async callback manager for tool run.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-16", "text": "\"\"\"Async callback manager for tool run.\"\"\"\n[docs] async def on_tool_end(self, output: str, **kwargs: Any) -> None:\n \"\"\"Run when tool ends running.\n Args:\n output (str): The output of the tool.\n \"\"\"\n await _ahandle_event(\n self.handlers,\n \"on_tool_end\",\n \"ignore_agent\",\n output,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n[docs] async def 
on_tool_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when tool errors.\n Args:\n error (Exception or KeyboardInterrupt): The error.\n \"\"\"\n await _ahandle_event(\n self.handlers,\n \"on_tool_error\",\n \"ignore_agent\",\n error,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n[docs]class CallbackManagerForRetrieverRun(ParentRunManager, RetrieverManagerMixin):\n \"\"\"Callback manager for retriever run.\"\"\"\n[docs] def on_retriever_end(\n self,\n documents: Sequence[Document],\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when retriever ends running.\"\"\"\n _handle_event(\n self.handlers,\n \"on_retriever_end\",\n \"ignore_retriever\",\n documents,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-17", "text": "tags=self.tags,\n **kwargs,\n )\n[docs] def on_retriever_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when retriever errors.\"\"\"\n _handle_event(\n self.handlers,\n \"on_retriever_error\",\n \"ignore_retriever\",\n error,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n[docs]class AsyncCallbackManagerForRetrieverRun(\n AsyncParentRunManager,\n RetrieverManagerMixin,\n):\n \"\"\"Async callback manager for retriever run.\"\"\"\n[docs] async def on_retriever_end(\n self, documents: Sequence[Document], **kwargs: Any\n ) -> None:\n \"\"\"Run when retriever ends running.\"\"\"\n await _ahandle_event(\n self.handlers,\n \"on_retriever_end\",\n \"ignore_retriever\",\n documents,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n[docs] async def on_retriever_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when retriever errors.\"\"\"\n await _ahandle_event(\n self.handlers,\n \"on_retriever_error\",\n \"ignore_retriever\",\n error,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-18", "text": "tags=self.tags,\n **kwargs,\n )\n[docs]class CallbackManager(BaseCallbackManager):\n \"\"\"Callback manager that can be used to handle callbacks from langchain.\"\"\"\n[docs] def on_llm_start(\n self,\n serialized: Dict[str, Any],\n prompts: List[str],\n **kwargs: Any,\n ) -> List[CallbackManagerForLLMRun]:\n \"\"\"Run when LLM starts running.\n Args:\n serialized (Dict[str, Any]): The serialized LLM.\n prompts (List[str]): The list of prompts.\n run_id (UUID, optional): The ID of the run. 
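The retriever pair mirrors the LLM/tool pairs: `on_retriever_start` (defined on `CallbackManager` below) opens the run, and the returned manager closes it with either documents or an error. A sketch with an illustrative query and document:

```python
from langchain.callbacks.manager import CallbackManager
from langchain.schema import Document

cm = CallbackManager(handlers=[])
run = cm.on_retriever_start({"name": "my_retriever"}, "what is a callback?")
docs = [Document(page_content="Callbacks let you hook into run lifecycles.")]
run.on_retriever_end(docs)  # on failure, call run.on_retriever_error(exc) instead
```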
Defaults to None.\n Returns:\n List[CallbackManagerForLLMRun]: A callback manager for each\n prompt as an LLM run.\n \"\"\"\n managers = []\n for prompt in prompts:\n run_id_ = uuid4()\n _handle_event(\n self.handlers,\n \"on_llm_start\",\n \"ignore_llm\",\n serialized,\n [prompt],\n run_id=run_id_,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n metadata=self.metadata,\n **kwargs,\n )\n managers.append(\n CallbackManagerForLLMRun(\n run_id=run_id_,\n handlers=self.handlers,\n inheritable_handlers=self.inheritable_handlers,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n inheritable_tags=self.inheritable_tags,\n metadata=self.metadata,\n inheritable_metadata=self.inheritable_metadata,\n )\n )\n return managers\n[docs] def on_chat_model_start(\n self,\n serialized: Dict[str, Any],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-19", "text": "self,\n serialized: Dict[str, Any],\n messages: List[List[BaseMessage]],\n **kwargs: Any,\n ) -> List[CallbackManagerForLLMRun]:\n \"\"\"Run when LLM starts running.\n Args:\n serialized (Dict[str, Any]): The serialized LLM.\n messages (List[List[BaseMessage]]): The list of messages.\n run_id (UUID, optional): The ID of the run. Defaults to None.\n Returns:\n List[CallbackManagerForLLMRun]: A callback manager for each\n list of messages as an LLM run.\n \"\"\"\n managers = []\n for message_list in messages:\n run_id_ = uuid4()\n _handle_event(\n self.handlers,\n \"on_chat_model_start\",\n \"ignore_chat_model\",\n serialized,\n [message_list],\n run_id=run_id_,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n metadata=self.metadata,\n **kwargs,\n )\n managers.append(\n CallbackManagerForLLMRun(\n run_id=run_id_,\n handlers=self.handlers,\n inheritable_handlers=self.inheritable_handlers,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n inheritable_tags=self.inheritable_tags,\n metadata=self.metadata,\n inheritable_metadata=self.inheritable_metadata,\n )\n )\n return managers\n[docs] def on_chain_start(\n self,\n serialized: Dict[str, Any],\n inputs: Dict[str, Any],\n run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> CallbackManagerForChainRun:\n \"\"\"Run when chain starts running.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-20", "text": ") -> CallbackManagerForChainRun:\n \"\"\"Run when chain starts running.\n Args:\n serialized (Dict[str, Any]): The serialized chain.\n inputs (Dict[str, Any]): The inputs to the chain.\n run_id (UUID, optional): The ID of the run. 
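Note the fan-out: `on_llm_start` and `on_chat_model_start` mint a fresh `run_id` and a separate `CallbackManagerForLLMRun` per prompt (or per message list), so a batched call yields one child manager per generation. For example, with illustrative payloads:

```python
from langchain.callbacks.manager import CallbackManager

cm = CallbackManager(handlers=[])
runs = cm.on_llm_start({"name": "fake_llm"}, ["prompt A", "prompt B"])
assert len(runs) == 2                    # one run manager per prompt
assert runs[0].run_id != runs[1].run_id  # each with its own run_id
```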
Defaults to None.\n Returns:\n CallbackManagerForChainRun: The callback manager for the chain run.\n \"\"\"\n if run_id is None:\n run_id = uuid4()\n _handle_event(\n self.handlers,\n \"on_chain_start\",\n \"ignore_chain\",\n serialized,\n inputs,\n run_id=run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n metadata=self.metadata,\n **kwargs,\n )\n return CallbackManagerForChainRun(\n run_id=run_id,\n handlers=self.handlers,\n inheritable_handlers=self.inheritable_handlers,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n inheritable_tags=self.inheritable_tags,\n metadata=self.metadata,\n inheritable_metadata=self.inheritable_metadata,\n )\n[docs] def on_tool_start(\n self,\n serialized: Dict[str, Any],\n input_str: str,\n run_id: Optional[UUID] = None,\n parent_run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> CallbackManagerForToolRun:\n \"\"\"Run when tool starts running.\n Args:\n serialized (Dict[str, Any]): The serialized tool.\n input_str (str): The input to the tool.\n run_id (UUID, optional): The ID of the run. Defaults to None.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-21", "text": "run_id (UUID, optional): The ID of the run. Defaults to None.\n parent_run_id (UUID, optional): The ID of the parent run. Defaults to None.\n Returns:\n CallbackManagerForToolRun: The callback manager for the tool run.\n \"\"\"\n if run_id is None:\n run_id = uuid4()\n _handle_event(\n self.handlers,\n \"on_tool_start\",\n \"ignore_agent\",\n serialized,\n input_str,\n run_id=run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n metadata=self.metadata,\n **kwargs,\n )\n return CallbackManagerForToolRun(\n run_id=run_id,\n handlers=self.handlers,\n inheritable_handlers=self.inheritable_handlers,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n inheritable_tags=self.inheritable_tags,\n metadata=self.metadata,\n inheritable_metadata=self.inheritable_metadata,\n )\n[docs] def on_retriever_start(\n self,\n serialized: Dict[str, Any],\n query: str,\n run_id: Optional[UUID] = None,\n parent_run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> CallbackManagerForRetrieverRun:\n \"\"\"Run when retriever starts running.\"\"\"\n if run_id is None:\n run_id = uuid4()\n _handle_event(\n self.handlers,\n \"on_retriever_start\",\n \"ignore_retriever\",\n serialized,\n query,\n run_id=run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n metadata=self.metadata,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-22", "text": "tags=self.tags,\n metadata=self.metadata,\n **kwargs,\n )\n return CallbackManagerForRetrieverRun(\n run_id=run_id,\n handlers=self.handlers,\n inheritable_handlers=self.inheritable_handlers,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n inheritable_tags=self.inheritable_tags,\n metadata=self.metadata,\n inheritable_metadata=self.inheritable_metadata,\n )\n[docs] @classmethod\n def configure(\n cls,\n inheritable_callbacks: Callbacks = None,\n local_callbacks: Callbacks = None,\n verbose: bool = False,\n inheritable_tags: Optional[List[str]] = None,\n local_tags: Optional[List[str]] = None,\n inheritable_metadata: Optional[Dict[str, Any]] = None,\n local_metadata: Optional[Dict[str, Any]] = None,\n ) -> CallbackManager:\n \"\"\"Configure the callback manager.\n Args:\n inheritable_callbacks (Optional[Callbacks], optional): The inheritable\n callbacks. 
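These start methods all follow the same contract: open a run, then report exactly one of end or error on the returned manager. A sketch of the pattern a tool wrapper follows; the tool body here is a stand-in, not the library's actual `Tool.run`:

```python
from langchain.callbacks.manager import CallbackManager

cm = CallbackManager(handlers=[])
run = cm.on_tool_start({"name": "calculator"}, "2 + 2")
try:
    output = str(2 + 2)  # stand-in for the real tool body
except (Exception, KeyboardInterrupt) as e:
    run.on_tool_error(e)
    raise
else:
    run.on_tool_end(output)
```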
Defaults to None.\n local_callbacks (Optional[Callbacks], optional): The local callbacks.\n Defaults to None.\n verbose (bool, optional): Whether to enable verbose mode. Defaults to False.\n inheritable_tags (Optional[List[str]], optional): The inheritable tags.\n Defaults to None.\n local_tags (Optional[List[str]], optional): The local tags.\n Defaults to None.\n inheritable_metadata (Optional[Dict[str, Any]], optional): The inheritable\n metadata. Defaults to None.\n local_metadata (Optional[Dict[str, Any]], optional): The local metadata.\n Defaults to None.\n Returns:\n CallbackManager: The configured callback manager.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-23", "text": "Returns:\n CallbackManager: The configured callback manager.\n \"\"\"\n return _configure(\n cls,\n inheritable_callbacks,\n local_callbacks,\n verbose,\n inheritable_tags,\n local_tags,\n inheritable_metadata,\n local_metadata,\n )\n[docs]class AsyncCallbackManager(BaseCallbackManager):\n \"\"\"Async callback manager that can be used to handle callbacks from LangChain.\"\"\"\n @property\n def is_async(self) -> bool:\n \"\"\"Return whether the handler is async.\"\"\"\n return True\n[docs] async def on_llm_start(\n self,\n serialized: Dict[str, Any],\n prompts: List[str],\n **kwargs: Any,\n ) -> List[AsyncCallbackManagerForLLMRun]:\n \"\"\"Run when LLM starts running.\n Args:\n serialized (Dict[str, Any]): The serialized LLM.\n prompts (List[str]): The list of prompts.\n run_id (UUID, optional): The ID of the run. Defaults to None.\n Returns:\n List[AsyncCallbackManagerForLLMRun]: The list of async\n callback managers, one for each LLM Run corresponding\n to each prompt.\n \"\"\"\n tasks = []\n managers = []\n for prompt in prompts:\n run_id_ = uuid4()\n tasks.append(\n _ahandle_event(\n self.handlers,\n \"on_llm_start\",\n \"ignore_llm\",\n serialized,\n [prompt],\n run_id=run_id_,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n metadata=self.metadata,\n **kwargs,\n )\n )\n managers.append(\n AsyncCallbackManagerForLLMRun(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-24", "text": ")\n )\n managers.append(\n AsyncCallbackManagerForLLMRun(\n run_id=run_id_,\n handlers=self.handlers,\n inheritable_handlers=self.inheritable_handlers,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n inheritable_tags=self.inheritable_tags,\n metadata=self.metadata,\n inheritable_metadata=self.inheritable_metadata,\n )\n )\n await asyncio.gather(*tasks)\n return managers\n[docs] async def on_chat_model_start(\n self,\n serialized: Dict[str, Any],\n messages: List[List[BaseMessage]],\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run when LLM starts running.\n Args:\n serialized (Dict[str, Any]): The serialized LLM.\n messages (List[List[BaseMessage]]): The list of messages.\n run_id (UUID, optional): The ID of the run. 
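`configure` is how components merge callbacks handed down from a parent (inheritable) with callbacks passed directly to them (local); the merge itself lives in the module-level `_configure` shown further down. A sketch of the resulting handler lists, assuming two illustrative stdout handlers and a clean environment (no tracing variables set):

```python
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.stdout import StdOutCallbackHandler

inherited = [StdOutCallbackHandler()]  # e.g. handed down by a parent chain
local = [StdOutCallbackHandler()]      # e.g. passed to this component only

cm = CallbackManager.configure(
    inheritable_callbacks=inherited,
    local_callbacks=local,
    verbose=False,  # True would also append a StdOutCallbackHandler (see _configure)
)
assert len(cm.handlers) == 2              # inherited + local both fire here
assert len(cm.inheritable_handlers) == 1  # only inherited propagate to children
```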
Defaults to None.\n Returns:\n List[AsyncCallbackManagerForLLMRun]: The list of\n async callback managers, one for each LLM Run\n corresponding to each inner message list.\n \"\"\"\n tasks = []\n managers = []\n for message_list in messages:\n run_id_ = uuid4()\n tasks.append(\n _ahandle_event(\n self.handlers,\n \"on_chat_model_start\",\n \"ignore_chat_model\",\n serialized,\n [message_list],\n run_id=run_id_,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n metadata=self.metadata,\n **kwargs,\n )\n )\n managers.append(\n AsyncCallbackManagerForLLMRun(\n run_id=run_id_,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-25", "text": "AsyncCallbackManagerForLLMRun(\n run_id=run_id_,\n handlers=self.handlers,\n inheritable_handlers=self.inheritable_handlers,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n inheritable_tags=self.inheritable_tags,\n metadata=self.metadata,\n inheritable_metadata=self.inheritable_metadata,\n )\n )\n await asyncio.gather(*tasks)\n return managers\n[docs] async def on_chain_start(\n self,\n serialized: Dict[str, Any],\n inputs: Dict[str, Any],\n run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> AsyncCallbackManagerForChainRun:\n \"\"\"Run when chain starts running.\n Args:\n serialized (Dict[str, Any]): The serialized chain.\n inputs (Dict[str, Any]): The inputs to the chain.\n run_id (UUID, optional): The ID of the run. Defaults to None.\n Returns:\n AsyncCallbackManagerForChainRun: The async callback manager\n for the chain run.\n \"\"\"\n if run_id is None:\n run_id = uuid4()\n await _ahandle_event(\n self.handlers,\n \"on_chain_start\",\n \"ignore_chain\",\n serialized,\n inputs,\n run_id=run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n metadata=self.metadata,\n **kwargs,\n )\n return AsyncCallbackManagerForChainRun(\n run_id=run_id,\n handlers=self.handlers,\n inheritable_handlers=self.inheritable_handlers,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n inheritable_tags=self.inheritable_tags,\n metadata=self.metadata,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-26", "text": "inheritable_tags=self.inheritable_tags,\n metadata=self.metadata,\n inheritable_metadata=self.inheritable_metadata,\n )\n[docs] async def on_tool_start(\n self,\n serialized: Dict[str, Any],\n input_str: str,\n run_id: Optional[UUID] = None,\n parent_run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> AsyncCallbackManagerForToolRun:\n \"\"\"Run when tool starts running.\n Args:\n serialized (Dict[str, Any]): The serialized tool.\n input_str (str): The input to the tool.\n run_id (UUID, optional): The ID of the run. 
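The async manager mirrors the sync one, but dispatches every handler's coroutine concurrently via `asyncio.gather`. A sketch of driving it directly; `Echo` and the payloads are illustrative:

```python
import asyncio
from typing import Any

from langchain.callbacks.base import AsyncCallbackHandler
from langchain.callbacks.manager import AsyncCallbackManager
from langchain.schema import LLMResult

class Echo(AsyncCallbackHandler):
    async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        print(token, end="", flush=True)

async def main() -> None:
    cm = AsyncCallbackManager(handlers=[Echo()])
    (run,) = await cm.on_llm_start({"name": "fake_llm"}, ["one prompt"])
    await run.on_llm_new_token("streamed")
    await run.on_llm_end(LLMResult(generations=[[]]))

asyncio.run(main())
```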
Defaults to None.\n parent_run_id (UUID, optional): The ID of the parent run.\n Defaults to None.\n Returns:\n AsyncCallbackManagerForToolRun: The async callback manager\n for the tool run.\n \"\"\"\n if run_id is None:\n run_id = uuid4()\n await _ahandle_event(\n self.handlers,\n \"on_tool_start\",\n \"ignore_agent\",\n serialized,\n input_str,\n run_id=run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n metadata=self.metadata,\n **kwargs,\n )\n return AsyncCallbackManagerForToolRun(\n run_id=run_id,\n handlers=self.handlers,\n inheritable_handlers=self.inheritable_handlers,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n inheritable_tags=self.inheritable_tags,\n metadata=self.metadata,\n inheritable_metadata=self.inheritable_metadata,\n )\n[docs] async def on_retriever_start(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-27", "text": ")\n[docs] async def on_retriever_start(\n self,\n serialized: Dict[str, Any],\n query: str,\n run_id: Optional[UUID] = None,\n parent_run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> AsyncCallbackManagerForRetrieverRun:\n \"\"\"Run when retriever starts running.\"\"\"\n if run_id is None:\n run_id = uuid4()\n await _ahandle_event(\n self.handlers,\n \"on_retriever_start\",\n \"ignore_retriever\",\n serialized,\n query,\n run_id=run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n metadata=self.metadata,\n **kwargs,\n )\n return AsyncCallbackManagerForRetrieverRun(\n run_id=run_id,\n handlers=self.handlers,\n inheritable_handlers=self.inheritable_handlers,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n inheritable_tags=self.inheritable_tags,\n metadata=self.metadata,\n inheritable_metadata=self.inheritable_metadata,\n )\n[docs] @classmethod\n def configure(\n cls,\n inheritable_callbacks: Callbacks = None,\n local_callbacks: Callbacks = None,\n verbose: bool = False,\n inheritable_tags: Optional[List[str]] = None,\n local_tags: Optional[List[str]] = None,\n inheritable_metadata: Optional[Dict[str, Any]] = None,\n local_metadata: Optional[Dict[str, Any]] = None,\n ) -> AsyncCallbackManager:\n \"\"\"Configure the async callback manager.\n Args:\n inheritable_callbacks (Optional[Callbacks], optional): The inheritable", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-28", "text": "Args:\n inheritable_callbacks (Optional[Callbacks], optional): The inheritable\n callbacks. Defaults to None.\n local_callbacks (Optional[Callbacks], optional): The local callbacks.\n Defaults to None.\n verbose (bool, optional): Whether to enable verbose mode. Defaults to False.\n inheritable_tags (Optional[List[str]], optional): The inheritable tags.\n Defaults to None.\n local_tags (Optional[List[str]], optional): The local tags.\n Defaults to None.\n inheritable_metadata (Optional[Dict[str, Any]], optional): The inheritable\n metadata. 
Defaults to None.\n local_metadata (Optional[Dict[str, Any]], optional): The local metadata.\n Defaults to None.\n Returns:\n AsyncCallbackManager: The configured async callback manager.\n \"\"\"\n return _configure(\n cls,\n inheritable_callbacks,\n local_callbacks,\n verbose,\n inheritable_tags,\n local_tags,\n inheritable_metadata,\n local_metadata,\n )\nT = TypeVar(\"T\", CallbackManager, AsyncCallbackManager)\n[docs]def env_var_is_set(env_var: str) -> bool:\n \"\"\"Check if an environment variable is set.\n Args:\n env_var (str): The name of the environment variable.\n Returns:\n bool: True if the environment variable is set, False otherwise.\n \"\"\"\n return env_var in os.environ and os.environ[env_var] not in (\n \"\",\n \"0\",\n \"false\",\n \"False\",\n )\ndef _configure(\n callback_manager_cls: Type[T],\n inheritable_callbacks: Callbacks = None,\n local_callbacks: Callbacks = None,\n verbose: bool = False,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-29", "text": "local_callbacks: Callbacks = None,\n verbose: bool = False,\n inheritable_tags: Optional[List[str]] = None,\n local_tags: Optional[List[str]] = None,\n inheritable_metadata: Optional[Dict[str, Any]] = None,\n local_metadata: Optional[Dict[str, Any]] = None,\n) -> T:\n \"\"\"Configure the callback manager.\n Args:\n callback_manager_cls (Type[T]): The callback manager class.\n inheritable_callbacks (Optional[Callbacks], optional): The inheritable\n callbacks. Defaults to None.\n local_callbacks (Optional[Callbacks], optional): The local callbacks.\n Defaults to None.\n verbose (bool, optional): Whether to enable verbose mode. Defaults to False.\n inheritable_tags (Optional[List[str]], optional): The inheritable tags.\n Defaults to None.\n local_tags (Optional[List[str]], optional): The local tags. Defaults to None.\n inheritable_metadata (Optional[Dict[str, Any]], optional): The inheritable\n metadata. 
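`env_var_is_set` deliberately treats empty strings and the literals "0"/"false"/"False" as unset, so exporting `LANGCHAIN_TRACING_V2=false` disables tracing just like unsetting the variable. For example:

```python
import os

from langchain.callbacks.manager import env_var_is_set

os.environ["LANGCHAIN_TRACING_V2"] = "false"
assert env_var_is_set("LANGCHAIN_TRACING_V2") is False  # "false" counts as unset

os.environ["LANGCHAIN_TRACING_V2"] = "true"
assert env_var_is_set("LANGCHAIN_TRACING_V2") is True
```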
Defaults to None.\n local_metadata (Optional[Dict[str, Any]], optional): The local metadata.\n Defaults to None.\n Returns:\n T: The configured callback manager.\n \"\"\"\n callback_manager = callback_manager_cls(handlers=[])\n if inheritable_callbacks or local_callbacks:\n if isinstance(inheritable_callbacks, list) or inheritable_callbacks is None:\n inheritable_callbacks_ = inheritable_callbacks or []\n callback_manager = callback_manager_cls(\n handlers=inheritable_callbacks_.copy(),\n inheritable_handlers=inheritable_callbacks_.copy(),\n )\n else:\n callback_manager = callback_manager_cls(\n handlers=inheritable_callbacks.handlers,\n inheritable_handlers=inheritable_callbacks.inheritable_handlers,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-30", "text": "inheritable_handlers=inheritable_callbacks.inheritable_handlers,\n parent_run_id=inheritable_callbacks.parent_run_id,\n tags=inheritable_callbacks.tags,\n inheritable_tags=inheritable_callbacks.inheritable_tags,\n metadata=inheritable_callbacks.metadata,\n inheritable_metadata=inheritable_callbacks.inheritable_metadata,\n )\n local_handlers_ = (\n local_callbacks\n if isinstance(local_callbacks, list)\n else (local_callbacks.handlers if local_callbacks else [])\n )\n for handler in local_handlers_:\n callback_manager.add_handler(handler, False)\n if inheritable_tags or local_tags:\n callback_manager.add_tags(inheritable_tags or [])\n callback_manager.add_tags(local_tags or [], False)\n if inheritable_metadata or local_metadata:\n callback_manager.add_metadata(inheritable_metadata or {})\n callback_manager.add_metadata(local_metadata or {}, False)\n tracer = tracing_callback_var.get()\n wandb_tracer = wandb_tracing_callback_var.get()\n open_ai = openai_callback_var.get()\n tracing_enabled_ = (\n env_var_is_set(\"LANGCHAIN_TRACING\")\n or tracer is not None\n or env_var_is_set(\"LANGCHAIN_HANDLER\")\n )\n wandb_tracing_enabled_ = (\n env_var_is_set(\"LANGCHAIN_WANDB_TRACING\") or wandb_tracer is not None\n )\n tracer_v2 = tracing_v2_callback_var.get()\n tracing_v2_enabled_ = (\n env_var_is_set(\"LANGCHAIN_TRACING_V2\") or tracer_v2 is not None\n )\n tracer_project = os.environ.get(\n \"LANGCHAIN_PROJECT\", os.environ.get(\"LANGCHAIN_SESSION\", \"default\")\n )\n debug = _get_debug()\n if (\n verbose", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-31", "text": ")\n debug = _get_debug()\n if (\n verbose\n or debug\n or tracing_enabled_\n or tracing_v2_enabled_\n or wandb_tracing_enabled_\n or open_ai is not None\n ):\n if verbose and not any(\n isinstance(handler, StdOutCallbackHandler)\n for handler in callback_manager.handlers\n ):\n if debug:\n pass\n else:\n callback_manager.add_handler(StdOutCallbackHandler(), False)\n if debug and not any(\n isinstance(handler, ConsoleCallbackHandler)\n for handler in callback_manager.handlers\n ):\n callback_manager.add_handler(ConsoleCallbackHandler(), True)\n if tracing_enabled_ and not any(\n isinstance(handler, LangChainTracerV1)\n for handler in callback_manager.handlers\n ):\n if tracer:\n callback_manager.add_handler(tracer, True)\n else:\n handler = LangChainTracerV1()\n handler.load_session(tracer_project)\n callback_manager.add_handler(handler, True)\n if wandb_tracing_enabled_ and not any(\n isinstance(handler, WandbTracer) for handler in callback_manager.handlers\n ):\n if wandb_tracer:\n callback_manager.add_handler(wandb_tracer, True)\n 
else:\n handler = WandbTracer()\n callback_manager.add_handler(handler, True)\n if tracing_v2_enabled_ and not any(\n isinstance(handler, LangChainTracer)\n for handler in callback_manager.handlers\n ):\n if tracer_v2:\n callback_manager.add_handler(tracer_v2, True)\n else:\n try:\n handler = LangChainTracer(project_name=tracer_project)\n callback_manager.add_handler(handler, True)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "89503bfe3b61-32", "text": "callback_manager.add_handler(handler, True)\n except Exception as e:\n logger.warning(\n \"Unable to load requested LangChainTracer.\"\n \" To disable this warning,\"\n \" unset the LANGCHAIN_TRACING_V2 environment variables.\",\n e,\n )\n if open_ai is not None and not any(\n isinstance(handler, OpenAICallbackHandler)\n for handler in callback_manager.handlers\n ):\n callback_manager.add_handler(open_ai, True)\n return callback_manager", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} {"id": "48cc84ce2baf-0", "text": "Source code for langchain.callbacks.streaming_aiter\nfrom __future__ import annotations\nimport asyncio\nfrom typing import Any, AsyncIterator, Dict, List, Literal, Union, cast\nfrom langchain.callbacks.base import AsyncCallbackHandler\nfrom langchain.schema import LLMResult\n# TODO If used by two LLM runs in parallel this won't work as expected\n[docs]class AsyncIteratorCallbackHandler(AsyncCallbackHandler):\n \"\"\"Callback handler that returns an async iterator.\"\"\"\n queue: asyncio.Queue[str]\n done: asyncio.Event\n @property\n def always_verbose(self) -> bool:\n return True\n def __init__(self) -> None:\n self.queue = asyncio.Queue()\n self.done = asyncio.Event()\n[docs] async def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n # If two calls are made in a row, this resets the state\n self.done.clear()\n[docs] async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n if token is not None and token != \"\":\n self.queue.put_nowait(token)\n[docs] async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n self.done.set()\n[docs] async def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n self.done.set()\n # TODO implement the other methods\n[docs] async def aiter(self) -> AsyncIterator[str]:\n while not self.queue.empty() or not self.done.is_set():\n # Wait for the next token in the queue,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streaming_aiter.html"} {"id": "48cc84ce2baf-1", "text": "# Wait for the next token in the queue,\n # but stop waiting if the done event is set\n done, other = await asyncio.wait(\n [\n # NOTE: If you add other tasks here, update the code below,\n # which assumes each set has exactly one task each\n asyncio.ensure_future(self.queue.get()),\n asyncio.ensure_future(self.done.wait()),\n ],\n return_when=asyncio.FIRST_COMPLETED,\n )\n # Cancel the other task\n if other:\n other.pop().cancel()\n # Extract the value of the first completed task\n token_or_done = cast(Union[str, Literal[True]], done.pop().result())\n # If the extracted value is the boolean True, the done event was set\n if token_or_done is True:\n break\n # Otherwise, the extracted value is a token, which we yield\n yield token_or_done", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streaming_aiter.html"} {"id": 
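A typical use of `AsyncIteratorCallbackHandler` is to launch the LLM call as a background task and consume `aiter()` concurrently. The sketch below substitutes a fake producer for the LLM so it runs standalone; in real use the handler is passed as `callbacks=[handler]` to a streaming model:

```python
import asyncio

from langchain.callbacks.streaming_aiter import AsyncIteratorCallbackHandler
from langchain.schema import LLMResult

async def fake_llm(handler: AsyncIteratorCallbackHandler) -> None:
    # Stand-in for a streaming LLM call that received callbacks=[handler].
    await handler.on_llm_start({"name": "fake_llm"}, ["prompt"])
    for token in ["Hello", ", ", "world"]:
        await handler.on_llm_new_token(token)
        await asyncio.sleep(0.01)  # yield so the consumer can drain the queue
    await handler.on_llm_end(LLMResult(generations=[[]]))

async def main() -> None:
    handler = AsyncIteratorCallbackHandler()
    task = asyncio.create_task(fake_llm(handler))
    async for token in handler.aiter():  # yields until on_llm_end sets `done`
        print(token, end="", flush=True)
    await task

asyncio.run(main())
```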
"c69dab2a2e7c-0", "text": "Source code for langchain.callbacks.openai_info\n\"\"\"Callback Handler that prints to std out.\"\"\"\nfrom typing import Any, Dict, List\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.schema import LLMResult\nMODEL_COST_PER_1K_TOKENS = {\n # GPT-4 input\n \"gpt-4\": 0.03,\n \"gpt-4-0314\": 0.03,\n \"gpt-4-0613\": 0.03,\n \"gpt-4-32k\": 0.06,\n \"gpt-4-32k-0314\": 0.06,\n \"gpt-4-32k-0613\": 0.06,\n # GPT-4 output\n \"gpt-4-completion\": 0.06,\n \"gpt-4-0314-completion\": 0.06,\n \"gpt-4-0613-completion\": 0.06,\n \"gpt-4-32k-completion\": 0.12,\n \"gpt-4-32k-0314-completion\": 0.12,\n \"gpt-4-32k-0613-completion\": 0.12,\n # GPT-3.5 input\n \"gpt-3.5-turbo\": 0.0015,\n \"gpt-3.5-turbo-0301\": 0.0015,\n \"gpt-3.5-turbo-0613\": 0.0015,\n \"gpt-3.5-turbo-16k\": 0.003,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/openai_info.html"} {"id": "c69dab2a2e7c-1", "text": "\"gpt-3.5-turbo-16k-0613\": 0.003,\n # GPT-3.5 output\n \"gpt-3.5-turbo-completion\": 0.002,\n \"gpt-3.5-turbo-0301-completion\": 0.002,\n \"gpt-3.5-turbo-0613-completion\": 0.002,\n \"gpt-3.5-turbo-16k-completion\": 0.004,\n \"gpt-3.5-turbo-16k-0613-completion\": 0.004,\n # Others\n \"gpt-35-turbo\": 0.002, # Azure OpenAI version of ChatGPT\n \"text-ada-001\": 0.0004,\n \"ada\": 0.0004,\n \"text-babbage-001\": 0.0005,\n \"babbage\": 0.0005,\n \"text-curie-001\": 0.002,\n \"curie\": 0.002,\n \"text-davinci-003\": 0.02,\n \"text-davinci-002\": 0.02,\n \"code-davinci-002\": 0.02,\n \"ada-finetuned\": 0.0016,\n \"babbage-finetuned\": 0.0024,\n \"curie-finetuned\": 0.012,\n \"davinci-finetuned\": 0.12,\n}\n[docs]def standardize_model_name(\n model_name: str,\n is_completion: bool = False,\n) -> str:\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/openai_info.html"} {"id": "c69dab2a2e7c-2", "text": "is_completion: bool = False,\n) -> str:\n \"\"\"\n Standardize the model name to a format that can be used in the OpenAI API.\n Args:\n model_name: Model name to standardize.\n is_completion: Whether the model is used for completion or not.\n Defaults to False.\n Returns:\n Standardized model name.\n \"\"\"\n model_name = model_name.lower()\n if \"ft-\" in model_name:\n return model_name.split(\":\")[0] + \"-finetuned\"\n elif is_completion and (\n model_name.startswith(\"gpt-4\") or model_name.startswith(\"gpt-3.5\")\n ):\n return model_name + \"-completion\"\n else:\n return model_name\n[docs]def get_openai_token_cost_for_model(\n model_name: str, num_tokens: int, is_completion: bool = False\n) -> float:\n \"\"\"\n Get the cost in USD for a given model and number of tokens.\n Args:\n model_name: Name of the model\n num_tokens: Number of tokens.\n is_completion: Whether the model is used for completion or not.\n Defaults to False.\n Returns:\n Cost in USD.\n \"\"\"\n model_name = standardize_model_name(model_name, is_completion=is_completion)\n if model_name not in MODEL_COST_PER_1K_TOKENS:\n raise ValueError(\n f\"Unknown model: {model_name}. 
Please provide a valid OpenAI model name.\"\n \"Known models are: \" + \", \".join(MODEL_COST_PER_1K_TOKENS.keys())\n )\n return MODEL_COST_PER_1K_TOKENS[model_name] * (num_tokens / 1000)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/openai_info.html"} {"id": "c69dab2a2e7c-3", "text": "[docs]class OpenAICallbackHandler(BaseCallbackHandler):\n \"\"\"Callback Handler that tracks OpenAI info.\"\"\"\n total_tokens: int = 0\n prompt_tokens: int = 0\n completion_tokens: int = 0\n successful_requests: int = 0\n total_cost: float = 0.0\n def __repr__(self) -> str:\n return (\n f\"Tokens Used: {self.total_tokens}\\n\"\n f\"\\tPrompt Tokens: {self.prompt_tokens}\\n\"\n f\"\\tCompletion Tokens: {self.completion_tokens}\\n\"\n f\"Successful Requests: {self.successful_requests}\\n\"\n f\"Total Cost (USD): ${self.total_cost}\"\n )\n @property\n def always_verbose(self) -> bool:\n \"\"\"Whether to call verbose callbacks even if verbose is False.\"\"\"\n return True\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n \"\"\"Print out the prompts.\"\"\"\n pass\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Print out the token.\"\"\"\n pass\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Collect token usage.\"\"\"\n if response.llm_output is None:\n return None\n self.successful_requests += 1\n if \"token_usage\" not in response.llm_output:\n return None\n token_usage = response.llm_output[\"token_usage\"]\n completion_tokens = token_usage.get(\"completion_tokens\", 0)\n prompt_tokens = token_usage.get(\"prompt_tokens\", 0)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/openai_info.html"} {"id": "c69dab2a2e7c-4", "text": "prompt_tokens = token_usage.get(\"prompt_tokens\", 0)\n model_name = standardize_model_name(response.llm_output.get(\"model_name\", \"\"))\n if model_name in MODEL_COST_PER_1K_TOKENS:\n completion_cost = get_openai_token_cost_for_model(\n model_name, completion_tokens, is_completion=True\n )\n prompt_cost = get_openai_token_cost_for_model(model_name, prompt_tokens)\n self.total_cost += prompt_cost + completion_cost\n self.total_tokens += token_usage.get(\"total_tokens\", 0)\n self.prompt_tokens += prompt_tokens\n self.completion_tokens += completion_tokens\n def __copy__(self) -> \"OpenAICallbackHandler\":\n \"\"\"Return a copy of the callback handler.\"\"\"\n return self\n def __deepcopy__(self, memo: Any) -> \"OpenAICallbackHandler\":\n \"\"\"Return a deep copy of the callback handler.\"\"\"\n return self", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/openai_info.html"} {"id": "47e804851afe-0", "text": "Source code for langchain.callbacks.whylabs_callback\nfrom __future__ import annotations\nimport logging\nfrom typing import TYPE_CHECKING, Any, Dict, List, Optional, Union\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.schema import AgentAction, AgentFinish, Generation, LLMResult\nfrom langchain.utils import get_from_env\nif TYPE_CHECKING:\n from whylogs.api.logger.logger import Logger\ndiagnostic_logger = logging.getLogger(__name__)\n[docs]def import_langkit(\n sentiment: bool = False,\n toxicity: bool = False,\n themes: bool = False,\n) -> Any:\n \"\"\"Import the langkit python package and raise an error if it is not installed.\n Args:\n sentiment: Whether to import the langkit.sentiment module. 
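Because `on_llm_end` accumulates across calls, one `OpenAICallbackHandler` instance can meter a whole session. It pairs with the `get_openai_callback` context manager (defined in `langchain.callbacks.manager`, which sets the `openai_callback_var` that `_configure` checks above); a sketch with the actual LLM calls elided:

```python
from langchain.callbacks import get_openai_callback

with get_openai_callback() as cb:
    # ... run chains / LLM calls here; usage accumulates on cb ...
    pass

print(cb)             # the __repr__ above: tokens, requests, total cost
print(cb.total_cost)  # running USD total across all calls in the block
```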
Defaults to False.\n toxicity: Whether to import the langkit.toxicity module. Defaults to False.\n themes: Whether to import the langkit.themes module. Defaults to False.\n Returns:\n The imported langkit module.\n \"\"\"\n try:\n import langkit # noqa: F401\n import langkit.regexes # noqa: F401\n import langkit.textstat # noqa: F401\n if sentiment:\n import langkit.sentiment # noqa: F401\n if toxicity:\n import langkit.toxicity # noqa: F401\n if themes:\n import langkit.themes # noqa: F401\n except ImportError:\n raise ImportError(\n \"To use the whylabs callback manager you need to have the `langkit` python \"\n \"package installed. Please install it with `pip install langkit`.\"\n )\n return langkit\n[docs]class WhyLabsCallbackHandler(BaseCallbackHandler):\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/whylabs_callback.html"} {"id": "47e804851afe-1", "text": "[docs]class WhyLabsCallbackHandler(BaseCallbackHandler):\n \"\"\"\n Callback Handler for logging to WhyLabs. This callback handler utilizes\n `langkit` to extract features from the prompts & responses when interacting with\n an LLM. These features can be used to guardrail, evaluate, and observe interactions\n over time to detect issues relating to hallucinations, prompt engineering,\n or output validation. LangKit is an LLM monitoring toolkit developed by WhyLabs.\n Here are some examples of what can be monitored with LangKit:\n * Text Quality\n - readability score\n - complexity and grade scores\n * Text Relevance\n - Similarity scores between prompt/responses\n - Similarity scores against user-defined themes\n - Topic classification\n * Security and Privacy\n - patterns - count of strings matching a user-defined regex pattern group\n - jailbreaks - similarity scores with respect to known jailbreak attempts\n - prompt injection - similarity scores with respect to known prompt attacks\n - refusals - similarity scores with respect to known LLM refusal responses\n * Sentiment and Toxicity\n - sentiment analysis\n - toxicity analysis\n For more information, see https://docs.whylabs.ai/docs/language-model-monitoring\n or check out the LangKit repo here: https://github.com/whylabs/langkit\n ---\n Args:\n api_key (Optional[str]): WhyLabs API key. Optional because the preferred\n way to specify the API key is with environment variable\n WHYLABS_API_KEY.\n org_id (Optional[str]): WhyLabs organization id to write profiles to.\n Optional because the preferred way to specify the organization id is\n with environment variable WHYLABS_DEFAULT_ORG_ID.\n dataset_id (Optional[str]): WhyLabs dataset id to write profiles to.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/whylabs_callback.html"} {"id": "47e804851afe-2", "text": "dataset_id (Optional[str]): WhyLabs dataset id to write profiles to.\n Optional because the preferred way to specify the dataset id is\n with environment variable WHYLABS_DEFAULT_DATASET_ID.\n sentiment (bool): Whether to enable sentiment analysis. Defaults to False.\n toxicity (bool): Whether to enable toxicity analysis. Defaults to False.\n themes (bool): Whether to enable theme analysis. 
Defaults to False.\n \"\"\"\n def __init__(self, logger: Logger):\n \"\"\"Initiate the rolling logger\"\"\"\n super().__init__()\n self.logger = logger\n diagnostic_logger.info(\n \"Initialized WhyLabs callback handler with configured whylogs Logger.\"\n )\n def _profile_generations(self, generations: List[Generation]) -> None:\n for gen in generations:\n self.logger.log({\"response\": gen.text})\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n \"\"\"Pass the input prompts to the logger\"\"\"\n for prompt in prompts:\n self.logger.log({\"prompt\": prompt})\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Pass the generated response to the logger.\"\"\"\n for generations in response.generations:\n self._profile_generations(generations)\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_chain_start(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/whylabs_callback.html"} {"id": "47e804851afe-3", "text": "\"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing.\"\"\"\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Do nothing.\"\"\"\n[docs] def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_tool_start(\n self,\n serialized: Dict[str, Any],\n input_str: str,\n **kwargs: Any,\n ) -> None:\n \"\"\"Do nothing.\"\"\"\n[docs] def on_agent_action(\n self, action: AgentAction, color: Optional[str] = None, **kwargs: Any\n ) -> Any:\n \"\"\"Do nothing.\"\"\"\n[docs] def on_tool_end(\n self,\n output: str,\n color: Optional[str] = None,\n observation_prefix: Optional[str] = None,\n llm_prefix: Optional[str] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Do nothing.\"\"\"\n[docs] def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_text(self, text: str, **kwargs: Any) -> None:\n \"\"\"Do nothing.\"\"\"\n[docs] def on_agent_finish(\n self, finish: AgentFinish, color: Optional[str] = None, **kwargs: Any\n ) -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/whylabs_callback.html"} {"id": "47e804851afe-4", "text": ") -> None:\n \"\"\"Run on agent end.\"\"\"\n pass\n[docs] def flush(self) -> None:\n self.logger._do_rollover()\n diagnostic_logger.info(\"Flushing WhyLabs logger, writing profile...\")\n[docs] def close(self) -> None:\n self.logger.close()\n diagnostic_logger.info(\"Closing WhyLabs logger, see you next time!\")\n def __enter__(self) -> WhyLabsCallbackHandler:\n return self\n def __exit__(\n self, exception_type: Any, exception_value: Any, traceback: Any\n ) -> None:\n self.close()\n[docs] @classmethod\n def from_params(\n cls,\n *,\n api_key: Optional[str] = None,\n org_id: Optional[str] = None,\n dataset_id: Optional[str] = None,\n sentiment: bool = False,\n toxicity: bool = False,\n themes: bool = False,\n ) -> Logger:\n \"\"\"Instantiate whylogs Logger from params.\n Args:\n api_key (Optional[str]): WhyLabs API key. 
Optional because the preferred\n way to specify the API key is with environment variable\n WHYLABS_API_KEY.\n org_id (Optional[str]): WhyLabs organization id to write profiles to.\n If not set must be specified in environment variable\n WHYLABS_DEFAULT_ORG_ID.\n dataset_id (Optional[str]): The model or dataset this callback is gathering\n telemetry for. If not set must be specified in environment variable\n WHYLABS_DEFAULT_DATASET_ID.\n sentiment (bool): If True will initialize a model to perform\n sentiment analysis compound score. Defaults to False and will not gather\n this metric.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/whylabs_callback.html"} {"id": "47e804851afe-5", "text": "sentiment analysis compound score. Defaults to False and will not gather\n this metric.\n toxicity (bool): If True will initialize a model to score\n toxicity. Defaults to False and will not gather this metric.\n themes (bool): If True will initialize a model to calculate\n distance to configured themes. Defaults to None and will not gather this\n metric.\n \"\"\"\n # langkit library will import necessary whylogs libraries\n import_langkit(sentiment=sentiment, toxicity=toxicity, themes=themes)\n import whylogs as why\n from whylogs.api.writer.whylabs import WhyLabsWriter\n from whylogs.core.schema import DeclarativeSchema\n from whylogs.experimental.core.metrics.udf_metric import generate_udf_schema\n api_key = api_key or get_from_env(\"api_key\", \"WHYLABS_API_KEY\")\n org_id = org_id or get_from_env(\"org_id\", \"WHYLABS_DEFAULT_ORG_ID\")\n dataset_id = dataset_id or get_from_env(\n \"dataset_id\", \"WHYLABS_DEFAULT_DATASET_ID\"\n )\n whylabs_writer = WhyLabsWriter(\n api_key=api_key, org_id=org_id, dataset_id=dataset_id\n )\n langkit_schema = DeclarativeSchema(generate_udf_schema())\n whylabs_logger = why.logger(\n mode=\"rolling\", interval=5, when=\"M\", schema=langkit_schema\n )\n whylabs_logger.append_writer(writer=whylabs_writer)\n diagnostic_logger.info(\n \"Started whylogs Logger with WhyLabsWriter and initialized LangKit. 
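`from_params` assembles the whole pipeline (a `WhyLabsWriter`, the LangKit UDF schema, and a rolling whylogs logger) and, despite its `-> Logger` annotation, returns the handler itself (`return cls(whylabs_logger)` below). A sketch of bootstrapping it, assuming `langkit` and `whylogs` are installed and the placeholder credentials are replaced:

```python
import os

from langchain.callbacks.whylabs_callback import WhyLabsCallbackHandler

# Placeholder credentials; the handler reads these env vars whenever the
# corresponding keyword arguments are omitted.
os.environ.setdefault("WHYLABS_API_KEY", "<api-key>")
os.environ.setdefault("WHYLABS_DEFAULT_ORG_ID", "<org-id>")
os.environ.setdefault("WHYLABS_DEFAULT_DATASET_ID", "<dataset-id>")

whylabs = WhyLabsCallbackHandler.from_params(sentiment=True)
# ... pass as callbacks=[whylabs] to an LLM or chain ...
whylabs.close()  # or use it as a context manager (__enter__/__exit__ above)
```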
\ud83d\udcdd\"\n )\n return cls(whylabs_logger)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/whylabs_callback.html"} {"id": "3f7ced99134e-0", "text": "Source code for langchain.callbacks.infino_callback\nimport time\nfrom typing import Any, Dict, List, Optional, Union\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.schema import AgentAction, AgentFinish, LLMResult\n[docs]def import_infino() -> Any:\n \"\"\"Import the infino client.\"\"\"\n try:\n from infinopy import InfinoClient\n except ImportError:\n raise ImportError(\n \"To use the Infino callbacks manager you need to have the\"\n \" `infinopy` python package installed.\"\n \"Please install it with `pip install infinopy`\"\n )\n return InfinoClient()\n[docs]class InfinoCallbackHandler(BaseCallbackHandler):\n \"\"\"Callback Handler that logs to Infino.\"\"\"\n def __init__(\n self,\n model_id: Optional[str] = None,\n model_version: Optional[str] = None,\n verbose: bool = False,\n ) -> None:\n # Set Infino client\n self.client = import_infino()\n self.model_id = model_id\n self.model_version = model_version\n self.verbose = verbose\n def _send_to_infino(\n self,\n key: str,\n value: Any,\n is_ts: bool = True,\n ) -> None:\n \"\"\"Send the key-value to Infino.\n Parameters:\n key (str): the key to send to Infino.\n value (Any): the value to send to Infino.\n is_ts (bool): if True, the value is part of a time series, else it\n is sent as a log message.\n \"\"\"\n payload = {\n \"date\": int(time.time()),\n key: value,\n \"labels\": {", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/infino_callback.html"} {"id": "3f7ced99134e-1", "text": "key: value,\n \"labels\": {\n \"model_id\": self.model_id,\n \"model_version\": self.model_version,\n },\n }\n if self.verbose:\n print(f\"Tracking {key} with Infino: {payload}\")\n # Append to Infino time series only if is_ts is True, otherwise\n # append to Infino log.\n if is_ts:\n self.client.append_ts(payload)\n else:\n self.client.append_log(payload)\n[docs] def on_llm_start(\n self,\n serialized: Dict[str, Any],\n prompts: List[str],\n **kwargs: Any,\n ) -> None:\n \"\"\"Log the prompts to Infino, and set start time and error flag.\"\"\"\n for prompt in prompts:\n self._send_to_infino(\"prompt\", prompt, is_ts=False)\n # Set the error flag to indicate no error (this will get overridden\n # in on_llm_error if an error occurs).\n self.error = 0\n # Set the start time (so that we can calculate the request\n # duration in on_llm_end).\n self.start_time = time.time()\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Do nothing when a new token is generated.\"\"\"\n pass\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Log the latency, error, token usage, and response to Infino.\"\"\"\n # Calculate and track the request latency.\n self.end_time = time.time()\n duration = self.end_time - self.start_time\n self._send_to_infino(\"latency\", duration)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/infino_callback.html"} {"id": "3f7ced99134e-2", "text": "self._send_to_infino(\"latency\", duration)\n # Track success or error flag.\n self._send_to_infino(\"error\", self.error)\n # Track token usage.\n if (response.llm_output is not None) and isinstance(response.llm_output, Dict):\n token_usage = response.llm_output[\"token_usage\"]\n if token_usage is not None:\n prompt_tokens = token_usage[\"prompt_tokens\"]\n 
total_tokens = token_usage[\"total_tokens\"]\n completion_tokens = token_usage[\"completion_tokens\"]\n self._send_to_infino(\"prompt_tokens\", prompt_tokens)\n self._send_to_infino(\"total_tokens\", total_tokens)\n self._send_to_infino(\"completion_tokens\", completion_tokens)\n # Track prompt response.\n for generations in response.generations:\n for generation in generations:\n self._send_to_infino(\"prompt_response\", generation.text, is_ts=False)\n[docs] def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Set the error flag.\"\"\"\n self.error = 1\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing when LLM chain starts.\"\"\"\n pass\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Do nothing when LLM chain ends.\"\"\"\n pass\n[docs] def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Need to log the error.\"\"\"\n pass", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/infino_callback.html"} {"id": "3f7ced99134e-3", "text": ") -> None:\n \"\"\"Need to log the error.\"\"\"\n pass\n[docs] def on_tool_start(\n self,\n serialized: Dict[str, Any],\n input_str: str,\n **kwargs: Any,\n ) -> None:\n \"\"\"Do nothing when tool starts.\"\"\"\n pass\n[docs] def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:\n \"\"\"Do nothing when agent takes a specific action.\"\"\"\n pass\n[docs] def on_tool_end(\n self,\n output: str,\n observation_prefix: Optional[str] = None,\n llm_prefix: Optional[str] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Do nothing when tool ends.\"\"\"\n pass\n[docs] def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing when tool outputs an error.\"\"\"\n pass\n[docs] def on_text(self, text: str, **kwargs: Any) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:\n \"\"\"Do nothing.\"\"\"\n pass", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/infino_callback.html"} {"id": "013a98c7f456-0", "text": "Source code for langchain.callbacks.human\nfrom typing import Any, Callable, Dict, Optional\nfrom uuid import UUID\nfrom langchain.callbacks.base import BaseCallbackHandler\ndef _default_approve(_input: str) -> bool:\n msg = (\n \"Do you approve of the following input? 
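Note that the handler keeps per-request state (`self.error`, `self.start_time`) that `on_llm_start` sets and `on_llm_end` reads, so one instance should track one request at a time. A sketch of wiring it up, assuming `infinopy` is installed and an Infino server is reachable; the labels are illustrative:

```python
from langchain.callbacks.infino_callback import InfinoCallbackHandler

handler = InfinoCallbackHandler(
    model_id="gpt-3.5-turbo",  # illustrative labels attached to every payload
    model_version="0613",
    verbose=True,              # also print each payload as it is sent
)
# Passed as callbacks=[handler], it logs prompts/responses as Infino log
# entries, and latency, the error flag, and token counts as time series.
```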
\"\n \"Anything except 'Y'/'Yes' (case-insensitive) will be treated as a no.\"\n )\n msg += \"\\n\\n\" + _input + \"\\n\"\n resp = input(msg)\n return resp.lower() in (\"yes\", \"y\")\ndef _default_true(_: Dict[str, Any]) -> bool:\n return True\n[docs]class HumanRejectedException(Exception):\n \"\"\"Exception to raise when a person manually review and rejects a value.\"\"\"\n[docs]class HumanApprovalCallbackHandler(BaseCallbackHandler):\n \"\"\"Callback for manually validating values.\"\"\"\n raise_error: bool = True\n def __init__(\n self,\n approve: Callable[[Any], bool] = _default_approve,\n should_check: Callable[[Dict[str, Any]], bool] = _default_true,\n ):\n self._approve = approve\n self._should_check = should_check\n[docs] def on_tool_start(\n self,\n serialized: Dict[str, Any],\n input_str: str,\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> Any:\n if self._should_check(serialized) and not self._approve(input_str):\n raise HumanRejectedException(\n f\"Inputs {input_str} to tool {serialized} were rejected.\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/human.html"} {"id": "db607f48cb0e-0", "text": "Source code for langchain.callbacks.flyte_callback\n\"\"\"FlyteKit callback handler.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom copy import deepcopy\nfrom typing import TYPE_CHECKING, Any, Dict, List, Tuple, Union\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.callbacks.utils import (\n BaseMetadataCallbackHandler,\n flatten_dict,\n import_pandas,\n import_spacy,\n import_textstat,\n)\nfrom langchain.schema import AgentAction, AgentFinish, LLMResult\nif TYPE_CHECKING:\n import flytekit\n from flytekitplugins.deck import renderer\nlogger = logging.getLogger(__name__)\n[docs]def import_flytekit() -> Tuple[flytekit, renderer]:\n \"\"\"Import flytekit and flytekitplugins-deck-standard.\"\"\"\n try:\n import flytekit # noqa: F401\n from flytekitplugins.deck import renderer # noqa: F401\n except ImportError:\n raise ImportError(\n \"To use the flyte callback manager you need\"\n \"to have the `flytekit` and `flytekitplugins-deck-standard`\"\n \"packages installed. 
Please install them with `pip install flytekit`\"\n \"and `pip install flytekitplugins-deck-standard`.\"\n )\n return flytekit, renderer\n[docs]def analyze_text(\n text: str,\n nlp: Any = None,\n textstat: Any = None,\n) -> dict:\n \"\"\"Analyze text using textstat and spacy.\n Parameters:\n text (str): The text to analyze.\n nlp (spacy.lang): The spacy language model to use for visualization.\n Returns:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/flyte_callback.html"} {"id": "db607f48cb0e-1", "text": "Returns:\n (dict): A dictionary containing the complexity metrics and visualization\n files serialized to HTML string.\n \"\"\"\n resp: Dict[str, Any] = {}\n if textstat is not None:\n text_complexity_metrics = {\n \"flesch_reading_ease\": textstat.flesch_reading_ease(text),\n \"flesch_kincaid_grade\": textstat.flesch_kincaid_grade(text),\n \"smog_index\": textstat.smog_index(text),\n \"coleman_liau_index\": textstat.coleman_liau_index(text),\n \"automated_readability_index\": textstat.automated_readability_index(text),\n \"dale_chall_readability_score\": textstat.dale_chall_readability_score(text),\n \"difficult_words\": textstat.difficult_words(text),\n \"linsear_write_formula\": textstat.linsear_write_formula(text),\n \"gunning_fog\": textstat.gunning_fog(text),\n \"fernandez_huerta\": textstat.fernandez_huerta(text),\n \"szigriszt_pazos\": textstat.szigriszt_pazos(text),\n \"gutierrez_polini\": textstat.gutierrez_polini(text),\n \"crawford\": textstat.crawford(text),\n \"gulpease_index\": textstat.gulpease_index(text),\n \"osman\": textstat.osman(text),\n }\n resp.update({\"text_complexity_metrics\": text_complexity_metrics})\n resp.update(text_complexity_metrics)\n if nlp is not None:\n spacy = import_spacy()\n doc = nlp(text)\n dep_out = spacy.displacy.render( # type: ignore", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/flyte_callback.html"} {"id": "db607f48cb0e-2", "text": "dep_out = spacy.displacy.render( # type: ignore\n doc, style=\"dep\", jupyter=False, page=True\n )\n ent_out = spacy.displacy.render( # type: ignore\n doc, style=\"ent\", jupyter=False, page=True\n )\n text_visualizations = {\n \"dependency_tree\": dep_out,\n \"entities\": ent_out,\n }\n resp.update(text_visualizations)\n return resp\n[docs]class FlyteCallbackHandler(BaseMetadataCallbackHandler, BaseCallbackHandler):\n \"\"\"This callback handler is designed specifically for usage within a Flyte task.\"\"\"\n def __init__(self) -> None:\n \"\"\"Initialize callback handler.\"\"\"\n flytekit, renderer = import_flytekit()\n self.pandas = import_pandas()\n self.textstat = None\n try:\n self.textstat = import_textstat()\n except ImportError:\n logger.warning(\n \"Textstat library is not installed. \\\n It may result in the inability to log \\\n certain metrics that can be captured with Textstat.\"\n )\n spacy = None\n try:\n spacy = import_spacy()\n except ImportError:\n logger.warning(\n \"Spacy library is not installed. \\\n It may result in the inability to log \\\n certain metrics that can be captured with Spacy.\"\n )\n super().__init__()\n self.nlp = None\n if spacy:\n try:\n self.nlp = spacy.load(\"en_core_web_sm\")\n except OSError:\n logger.warning(\n \"FlyteCallbackHandler uses spacy's en_core_web_sm model\"\n \" for certain metrics. To download,\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/flyte_callback.html"} {"id": "db607f48cb0e-3", "text": "\" for certain metrics. 
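`analyze_text` degrades gracefully: pass only `textstat` for the readability metrics, only `nlp` for the displacy visualizations, or both. A sketch using just textstat (assumes `pip install textstat`; the sample sentence is illustrative):

```python
import textstat

from langchain.callbacks.flyte_callback import analyze_text

metrics = analyze_text(
    "The quick brown fox jumps over the lazy dog.", textstat=textstat
)
print(metrics["flesch_reading_ease"])  # higher score = easier to read
```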
To download,\"\n \" run the following command in your terminal:\"\n \" `python -m spacy download en_core_web_sm`\"\n )\n self.table_renderer = renderer.TableRenderer\n self.markdown_renderer = renderer.MarkdownRenderer\n self.deck = flytekit.Deck(\n \"LangChain Metrics\",\n self.markdown_renderer().to_html(\"## LangChain Metrics\"),\n )\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM starts.\"\"\"\n self.step += 1\n self.llm_starts += 1\n self.starts += 1\n resp: Dict[str, Any] = {}\n resp.update({\"action\": \"on_llm_start\"})\n resp.update(flatten_dict(serialized))\n resp.update(self.get_custom_callback_meta())\n prompt_responses = []\n for prompt in prompts:\n prompt_responses.append(prompt)\n resp.update({\"prompts\": prompt_responses})\n self.deck.append(self.markdown_renderer().to_html(\"### LLM Start\"))\n self.deck.append(\n self.table_renderer().to_html(self.pandas.DataFrame([resp])) + \"\\n\"\n )\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Run when LLM generates a new token.\"\"\"\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Run when LLM ends running.\"\"\"\n self.step += 1\n self.llm_ends += 1\n self.ends += 1\n resp: Dict[str, Any] = {}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/flyte_callback.html"} {"id": "db607f48cb0e-4", "text": "self.ends += 1\n resp: Dict[str, Any] = {}\n resp.update({\"action\": \"on_llm_end\"})\n resp.update(flatten_dict(response.llm_output or {}))\n resp.update(self.get_custom_callback_meta())\n self.deck.append(self.markdown_renderer().to_html(\"### LLM End\"))\n self.deck.append(self.table_renderer().to_html(self.pandas.DataFrame([resp])))\n for generations in response.generations:\n for generation in generations:\n generation_resp = deepcopy(resp)\n generation_resp.update(flatten_dict(generation.dict()))\n if self.nlp or self.textstat:\n generation_resp.update(\n analyze_text(\n generation.text, nlp=self.nlp, textstat=self.textstat\n )\n )\n complexity_metrics: Dict[str, float] = generation_resp.pop(\"text_complexity_metrics\") # type: ignore # noqa: E501\n self.deck.append(\n self.markdown_renderer().to_html(\"#### Text Complexity Metrics\")\n )\n self.deck.append(\n self.table_renderer().to_html(\n self.pandas.DataFrame([complexity_metrics])\n )\n + \"\\n\"\n )\n dependency_tree = generation_resp[\"dependency_tree\"]\n self.deck.append(\n self.markdown_renderer().to_html(\"#### Dependency Tree\")\n )\n self.deck.append(dependency_tree)\n entities = generation_resp[\"entities\"]\n self.deck.append(self.markdown_renderer().to_html(\"#### Entities\"))\n self.deck.append(entities)\n else:\n self.deck.append(\n self.markdown_renderer().to_html(\"#### Generated Response\")\n )\n self.deck.append(self.markdown_renderer().to_html(generation.text))\n[docs] def on_llm_error(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/flyte_callback.html"} {"id": "db607f48cb0e-5", "text": "[docs] def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain starts running.\"\"\"\n self.step += 1\n self.chain_starts += 1\n self.starts += 1\n resp: Dict[str, Any] = {}\n resp.update({\"action\": 
\"on_chain_start\"})\n resp.update(flatten_dict(serialized))\n resp.update(self.get_custom_callback_meta())\n chain_input = \",\".join([f\"{k}={v}\" for k, v in inputs.items()])\n input_resp = deepcopy(resp)\n input_resp[\"inputs\"] = chain_input\n self.deck.append(self.markdown_renderer().to_html(\"### Chain Start\"))\n self.deck.append(\n self.table_renderer().to_html(self.pandas.DataFrame([input_resp])) + \"\\n\"\n )\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Run when chain ends running.\"\"\"\n self.step += 1\n self.chain_ends += 1\n self.ends += 1\n resp: Dict[str, Any] = {}\n chain_output = \",\".join([f\"{k}={v}\" for k, v in outputs.items()])\n resp.update({\"action\": \"on_chain_end\", \"outputs\": chain_output})\n resp.update(self.get_custom_callback_meta())\n self.deck.append(self.markdown_renderer().to_html(\"### Chain End\"))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/flyte_callback.html"} {"id": "db607f48cb0e-6", "text": "self.deck.append(self.markdown_renderer().to_html(\"### Chain End\"))\n self.deck.append(\n self.table_renderer().to_html(self.pandas.DataFrame([resp])) + \"\\n\"\n )\n[docs] def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_tool_start(\n self, serialized: Dict[str, Any], input_str: str, **kwargs: Any\n ) -> None:\n \"\"\"Run when tool starts running.\"\"\"\n self.step += 1\n self.tool_starts += 1\n self.starts += 1\n resp: Dict[str, Any] = {}\n resp.update({\"action\": \"on_tool_start\", \"input_str\": input_str})\n resp.update(flatten_dict(serialized))\n resp.update(self.get_custom_callback_meta())\n self.deck.append(self.markdown_renderer().to_html(\"### Tool Start\"))\n self.deck.append(\n self.table_renderer().to_html(self.pandas.DataFrame([resp])) + \"\\n\"\n )\n[docs] def on_tool_end(self, output: str, **kwargs: Any) -> None:\n \"\"\"Run when tool ends running.\"\"\"\n self.step += 1\n self.tool_ends += 1\n self.ends += 1\n resp: Dict[str, Any] = {}\n resp.update({\"action\": \"on_tool_end\", \"output\": output})\n resp.update(self.get_custom_callback_meta())\n self.deck.append(self.markdown_renderer().to_html(\"### Tool End\"))\n self.deck.append(\n self.table_renderer().to_html(self.pandas.DataFrame([resp])) + \"\\n\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/flyte_callback.html"} {"id": "db607f48cb0e-7", "text": ")\n[docs] def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when tool errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_text(self, text: str, **kwargs: Any) -> None:\n \"\"\"\n Run when agent is ending.\n \"\"\"\n self.step += 1\n self.text_ctr += 1\n resp: Dict[str, Any] = {}\n resp.update({\"action\": \"on_text\", \"text\": text})\n resp.update(self.get_custom_callback_meta())\n self.deck.append(self.markdown_renderer().to_html(\"### On Text\"))\n self.deck.append(\n self.table_renderer().to_html(self.pandas.DataFrame([resp])) + \"\\n\"\n )\n[docs] def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:\n \"\"\"Run when agent ends running.\"\"\"\n self.step += 1\n self.agent_ends += 1\n self.ends += 1\n resp: Dict[str, Any] = {}\n resp.update(\n {\n \"action\": \"on_agent_finish\",\n \"output\": finish.return_values[\"output\"],\n \"log\": finish.log,\n }\n )\n 
resp.update(self.get_custom_callback_meta())\n self.deck.append(self.markdown_renderer().to_html(\"### Agent Finish\"))\n self.deck.append(\n self.table_renderer().to_html(self.pandas.DataFrame([resp])) + \"\\n\"\n )\n[docs] def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:\n \"\"\"Run on agent action.\"\"\"\n self.step += 1", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/flyte_callback.html"} {"id": "db607f48cb0e-8", "text": "\"\"\"Run on agent action.\"\"\"\n self.step += 1\n self.tool_starts += 1\n self.starts += 1\n resp: Dict[str, Any] = {}\n resp.update(\n {\n \"action\": \"on_agent_action\",\n \"tool\": action.tool,\n \"tool_input\": action.tool_input,\n \"log\": action.log,\n }\n )\n resp.update(self.get_custom_callback_meta())\n self.deck.append(self.markdown_renderer().to_html(\"### Agent Action\"))\n self.deck.append(\n self.table_renderer().to_html(self.pandas.DataFrame([resp])) + \"\\n\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/flyte_callback.html"} {"id": "1b9f6e9b813d-0", "text": "Source code for langchain.callbacks.streamlit.__init__\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, Optional\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.callbacks.streamlit.streamlit_callback_handler import (\n LLMThoughtLabeler as LLMThoughtLabeler,\n)\nfrom langchain.callbacks.streamlit.streamlit_callback_handler import (\n StreamlitCallbackHandler as _InternalStreamlitCallbackHandler,\n)\nif TYPE_CHECKING:\n from streamlit.delta_generator import DeltaGenerator\n[docs]def StreamlitCallbackHandler(\n parent_container: DeltaGenerator,\n *,\n max_thought_containers: int = 4,\n expand_new_thoughts: bool = True,\n collapse_completed_thoughts: bool = True,\n thought_labeler: Optional[LLMThoughtLabeler] = None,\n) -> BaseCallbackHandler:\n \"\"\"Construct a new StreamlitCallbackHandler. This CallbackHandler is geared towards\n use with a LangChain Agent; it displays the Agent's LLM and tool-usage \"thoughts\"\n inside a series of Streamlit expanders.\n Parameters\n ----------\n parent_container\n The `st.container` that will contain all the Streamlit elements that the\n Handler creates.\n max_thought_containers\n The max number of completed LLM thought containers to show at once. When this\n threshold is reached, a new thought will cause the oldest thoughts to be\n collapsed into a \"History\" expander. Defaults to 4.\n expand_new_thoughts\n Each LLM \"thought\" gets its own `st.expander`. This param controls whether that\n expander is expanded by default. Defaults to True.\n collapse_completed_thoughts\n If True, LLM thought expanders will be collapsed when completed.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/__init__.html"} {"id": "1b9f6e9b813d-1", "text": "If True, LLM thought expanders will be collapsed when completed.\n Defaults to True.\n thought_labeler\n An optional custom LLMThoughtLabeler instance. If unspecified, the handler\n will use the default thought labeling logic. Defaults to None.\n Returns\n -------\n A new StreamlitCallbackHandler instance.\n Note that this is an \"auto-updating\" API: if the installed version of Streamlit\n has a more recent StreamlitCallbackHandler implementation, an instance of that class\n will be used.\n \"\"\"\n # If we're using a version of Streamlit that implements StreamlitCallbackHandler,\n # delegate to it instead of using our built-in handler. 
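A minimal usage sketch for this factory, assuming the script is launched with `streamlit run app.py` and that `agent` was constructed elsewhere (e.g. via initialize_agent):

import streamlit as st
from langchain.callbacks import StreamlitCallbackHandler

prompt = st.text_input("Ask the agent something")
if prompt:
    # The agent's thoughts and tool calls render live into this container.
    st_callback = StreamlitCallbackHandler(st.container())
    response = agent.run(prompt, callbacks=[st_callback])  # `agent` is hypothetical
    st.write(response)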
The official handler is\n # guaranteed to support the same set of kwargs.\n try:\n from streamlit.external.langchain import (\n StreamlitCallbackHandler as OfficialStreamlitCallbackHandler, # type: ignore # noqa: 501\n )\n return OfficialStreamlitCallbackHandler(\n parent_container,\n max_thought_containers=max_thought_containers,\n expand_new_thoughts=expand_new_thoughts,\n collapse_completed_thoughts=collapse_completed_thoughts,\n thought_labeler=thought_labeler,\n )\n except ImportError:\n return _InternalStreamlitCallbackHandler(\n parent_container,\n max_thought_containers=max_thought_containers,\n expand_new_thoughts=expand_new_thoughts,\n collapse_completed_thoughts=collapse_completed_thoughts,\n thought_labeler=thought_labeler,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/__init__.html"} {"id": "c71f9c0ceebf-0", "text": "Source code for langchain.callbacks.streamlit.streamlit_callback_handler\n\"\"\"Callback Handler that prints to streamlit.\"\"\"\nfrom __future__ import annotations\nfrom enum import Enum\nfrom typing import TYPE_CHECKING, Any, Dict, List, NamedTuple, Optional, Union\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.callbacks.streamlit.mutable_expander import MutableExpander\nfrom langchain.schema import AgentAction, AgentFinish, LLMResult\nif TYPE_CHECKING:\n from streamlit.delta_generator import DeltaGenerator\ndef _convert_newlines(text: str) -> str:\n \"\"\"Convert newline characters to markdown newline sequences\n (space, space, newline).\n \"\"\"\n return text.replace(\"\\n\", \" \\n\")\nCHECKMARK_EMOJI = \"\u2705\"\nTHINKING_EMOJI = \":thinking_face:\"\nHISTORY_EMOJI = \":books:\"\nEXCEPTION_EMOJI = \"\u26a0\ufe0f\"\n[docs]class LLMThoughtState(Enum):\n \"\"\"Enumerator of the LLMThought state.\"\"\"\n # The LLM is thinking about what to do next. We don't know which tool we'll run.\n THINKING = \"THINKING\"\n # The LLM has decided to run a tool. We don't have results from the tool yet.\n RUNNING_TOOL = \"RUNNING_TOOL\"\n # We have results from the tool.\n COMPLETE = \"COMPLETE\"\n[docs]class ToolRecord(NamedTuple):\n \"\"\"The tool record as a NamedTuple.\"\"\"\n name: str\n input_str: str\nclass LLMThoughtLabeler:\n \"\"\"\n Generates markdown labels for LLMThought containers. 
Pass a custom\n subclass of this to StreamlitCallbackHandler to override its default\n labeling logic.\n \"\"\"\n def get_initial_label(self) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/streamlit_callback_handler.html"} {"id": "c71f9c0ceebf-1", "text": "labeling logic.\n \"\"\"\n def get_initial_label(self) -> str:\n \"\"\"Return the markdown label for a new LLMThought that doesn't have\n an associated tool yet.\n \"\"\"\n return f\"{THINKING_EMOJI} **Thinking...**\"\n def get_tool_label(self, tool: ToolRecord, is_complete: bool) -> str:\n \"\"\"Return the label for an LLMThought that has an associated\n tool.\n Parameters\n ----------\n tool\n The tool's ToolRecord\n is_complete\n True if the thought is complete; False if the thought\n is still receiving input.\n Returns\n -------\n The markdown label for the thought's container.\n \"\"\"\n input = tool.input_str\n name = tool.name\n emoji = CHECKMARK_EMOJI if is_complete else THINKING_EMOJI\n if name == \"_Exception\":\n emoji = EXCEPTION_EMOJI\n name = \"Parsing error\"\n idx = min([60, len(input)])\n input = input[0:idx]\n if len(tool.input_str) > idx:\n input = input + \"...\"\n input = input.replace(\"\\n\", \" \")\n label = f\"{emoji} **{name}:** {input}\"\n return label\n def get_history_label(self) -> str:\n \"\"\"Return a markdown label for the special 'history' container\n that contains overflow thoughts.\n \"\"\"\n return f\"{HISTORY_EMOJI} **History**\"\n def get_final_agent_thought_label(self) -> str:\n \"\"\"Return the markdown label for the agent's final thought -\n the \"Now I have the answer\" thought, that doesn't involve\n a tool.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/streamlit_callback_handler.html"} {"id": "c71f9c0ceebf-2", "text": "a tool.\n \"\"\"\n return f\"{CHECKMARK_EMOJI} **Complete!**\"\nclass LLMThought:\n \"\"\"A thought in the LLM's thought stream.\"\"\"\n def __init__(\n self,\n parent_container: DeltaGenerator,\n labeler: LLMThoughtLabeler,\n expanded: bool,\n collapse_on_complete: bool,\n ):\n \"\"\"Initialize the LLMThought.\n Args:\n parent_container: The container we're writing into.\n labeler: The labeler to use for this thought.\n expanded: Whether the thought should be expanded by default.\n collapse_on_complete: Whether the thought should be collapsed.\n \"\"\"\n self._container = MutableExpander(\n parent_container=parent_container,\n label=labeler.get_initial_label(),\n expanded=expanded,\n )\n self._state = LLMThoughtState.THINKING\n self._llm_token_stream = \"\"\n self._llm_token_writer_idx: Optional[int] = None\n self._last_tool: Optional[ToolRecord] = None\n self._collapse_on_complete = collapse_on_complete\n self._labeler = labeler\n @property\n def container(self) -> MutableExpander:\n \"\"\"The container we're writing into.\"\"\"\n return self._container\n @property\n def last_tool(self) -> Optional[ToolRecord]:\n \"\"\"The last tool executed by this thought\"\"\"\n return self._last_tool\n def _reset_llm_token_stream(self) -> None:\n self._llm_token_stream = \"\"\n self._llm_token_writer_idx = None\n def on_llm_start(self, serialized: Dict[str, Any], prompts: List[str]) -> None:\n self._reset_llm_token_stream()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/streamlit_callback_handler.html"} {"id": "c71f9c0ceebf-3", "text": "self._reset_llm_token_stream()\n def on_llm_new_token(self, token: str, **kwargs: Any) -> 
None:\n # This is only called when the LLM is initialized with `streaming=True`\n self._llm_token_stream += _convert_newlines(token)\n self._llm_token_writer_idx = self._container.markdown(\n self._llm_token_stream, index=self._llm_token_writer_idx\n )\n def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n # `response` is the concatenation of all the tokens received by the LLM.\n # If we're receiving streaming tokens from `on_llm_new_token`, this response\n # data is redundant\n self._reset_llm_token_stream()\n def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n self._container.markdown(\"**LLM encountered an error...**\")\n self._container.exception(error)\n def on_tool_start(\n self, serialized: Dict[str, Any], input_str: str, **kwargs: Any\n ) -> None:\n # Called with the name of the tool we're about to run (in `serialized[name]`),\n # and its input. We change our container's label to be the tool name.\n self._state = LLMThoughtState.RUNNING_TOOL\n tool_name = serialized[\"name\"]\n self._last_tool = ToolRecord(name=tool_name, input_str=input_str)\n self._container.update(\n new_label=self._labeler.get_tool_label(self._last_tool, is_complete=False)\n )\n def on_tool_end(\n self,\n output: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/streamlit_callback_handler.html"} {"id": "c71f9c0ceebf-4", "text": ")\n def on_tool_end(\n self,\n output: str,\n color: Optional[str] = None,\n observation_prefix: Optional[str] = None,\n llm_prefix: Optional[str] = None,\n **kwargs: Any,\n ) -> None:\n self._container.markdown(f\"**{output}**\")\n def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n self._container.markdown(\"**Tool encountered an error...**\")\n self._container.exception(error)\n def on_agent_action(\n self, action: AgentAction, color: Optional[str] = None, **kwargs: Any\n ) -> Any:\n # Called when we're about to kick off a new tool. The `action` data\n # tells us the tool we're about to use, and the input we'll give it.\n # We don't output anything here, because we'll receive this same data\n # when `on_tool_start` is called immediately after.\n pass\n def complete(self, final_label: Optional[str] = None) -> None:\n \"\"\"Finish the thought.\"\"\"\n if final_label is None and self._state == LLMThoughtState.RUNNING_TOOL:\n assert (\n self._last_tool is not None\n ), \"_last_tool should never be null when _state == RUNNING_TOOL\"\n final_label = self._labeler.get_tool_label(\n self._last_tool, is_complete=True\n )\n self._state = LLMThoughtState.COMPLETE\n if self._collapse_on_complete:\n self._container.update(new_label=final_label, new_expanded=False)\n else:\n self._container.update(new_label=final_label)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/streamlit_callback_handler.html"} {"id": "c71f9c0ceebf-5", "text": "else:\n self._container.update(new_label=final_label)\n def clear(self) -> None:\n \"\"\"Remove the thought from the screen. 
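LLMThoughtLabeler above can be subclassed to retitle the thought expanders. A small hypothetical variant, wired in through the handler's thought_labeler parameter:

import streamlit as st
from langchain.callbacks import StreamlitCallbackHandler
from langchain.callbacks.streamlit.streamlit_callback_handler import (
    LLMThoughtLabeler,
    ToolRecord,
)

class TerseThoughtLabeler(LLMThoughtLabeler):
    """Label each thought with just the tool name and a status word."""

    def get_tool_label(self, tool: ToolRecord, is_complete: bool) -> str:
        status = "done" if is_complete else "running"
        return f"**{tool.name}** ({status})"

handler = StreamlitCallbackHandler(
    st.container(), thought_labeler=TerseThoughtLabeler()
)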
A cleared thought can't be reused.\"\"\"\n self._container.clear()\n[docs]class StreamlitCallbackHandler(BaseCallbackHandler):\n \"\"\"A callback handler that writes to a Streamlit app.\"\"\"\n def __init__(\n self,\n parent_container: DeltaGenerator,\n *,\n max_thought_containers: int = 4,\n expand_new_thoughts: bool = True,\n collapse_completed_thoughts: bool = True,\n thought_labeler: Optional[LLMThoughtLabeler] = None,\n ):\n \"\"\"Create a StreamlitCallbackHandler instance.\n Parameters\n ----------\n parent_container\n The `st.container` that will contain all the Streamlit elements that the\n Handler creates.\n max_thought_containers\n The max number of completed LLM thought containers to show at once. When\n this threshold is reached, a new thought will cause the oldest thoughts to\n be collapsed into a \"History\" expander. Defaults to 4.\n expand_new_thoughts\n Each LLM \"thought\" gets its own `st.expander`. This param controls whether\n that expander is expanded by default. Defaults to True.\n collapse_completed_thoughts\n If True, LLM thought expanders will be collapsed when completed.\n Defaults to True.\n thought_labeler\n An optional custom LLMThoughtLabeler instance. If unspecified, the handler\n will use the default thought labeling logic. Defaults to None.\n \"\"\"\n self._parent_container = parent_container\n self._history_parent = parent_container.container()\n self._history_container: Optional[MutableExpander] = None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/streamlit_callback_handler.html"} {"id": "c71f9c0ceebf-6", "text": "self._history_container: Optional[MutableExpander] = None\n self._current_thought: Optional[LLMThought] = None\n self._completed_thoughts: List[LLMThought] = []\n self._max_thought_containers = max(max_thought_containers, 1)\n self._expand_new_thoughts = expand_new_thoughts\n self._collapse_completed_thoughts = collapse_completed_thoughts\n self._thought_labeler = thought_labeler or LLMThoughtLabeler()\n def _require_current_thought(self) -> LLMThought:\n \"\"\"Return our current LLMThought. 
Raise an error if we have no current\n thought.\n \"\"\"\n if self._current_thought is None:\n raise RuntimeError(\"Current LLMThought is unexpectedly None!\")\n return self._current_thought\n def _get_last_completed_thought(self) -> Optional[LLMThought]:\n \"\"\"Return our most recent completed LLMThought, or None if we don't have one.\"\"\"\n if len(self._completed_thoughts) > 0:\n return self._completed_thoughts[len(self._completed_thoughts) - 1]\n return None\n @property\n def _num_thought_containers(self) -> int:\n \"\"\"The number of 'thought containers' we're currently showing: the\n number of completed thought containers, the history container (if it exists),\n and the current thought container (if it exists).\n \"\"\"\n count = len(self._completed_thoughts)\n if self._history_container is not None:\n count += 1\n if self._current_thought is not None:\n count += 1\n return count", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/streamlit_callback_handler.html"} {"id": "c71f9c0ceebf-7", "text": "count += 1\n return count\n def _complete_current_thought(self, final_label: Optional[str] = None) -> None:\n \"\"\"Complete the current thought, optionally assigning it a new label.\n Add it to our _completed_thoughts list.\n \"\"\"\n thought = self._require_current_thought()\n thought.complete(final_label)\n self._completed_thoughts.append(thought)\n self._current_thought = None\n def _prune_old_thought_containers(self) -> None:\n \"\"\"If we have too many thoughts onscreen, move older thoughts to the\n 'history container.'\n \"\"\"\n while (\n self._num_thought_containers > self._max_thought_containers\n and len(self._completed_thoughts) > 0\n ):\n # Create our history container if it doesn't exist, and if\n # max_thought_containers is > 1. 
(if max_thought_containers is 1, we don't\n # have room to show history.)\n if self._history_container is None and self._max_thought_containers > 1:\n self._history_container = MutableExpander(\n self._history_parent,\n label=self._thought_labeler.get_history_label(),\n expanded=False,\n )\n oldest_thought = self._completed_thoughts.pop(0)\n if self._history_container is not None:\n self._history_container.markdown(oldest_thought.container.label)\n self._history_container.append_copy(oldest_thought.container)\n oldest_thought.clear()\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/streamlit_callback_handler.html"} {"id": "c71f9c0ceebf-8", "text": ") -> None:\n if self._current_thought is None:\n self._current_thought = LLMThought(\n parent_container=self._parent_container,\n expanded=self._expand_new_thoughts,\n collapse_on_complete=self._collapse_completed_thoughts,\n labeler=self._thought_labeler,\n )\n self._current_thought.on_llm_start(serialized, prompts)\n # We don't prune_old_thought_containers here, because our container won't\n # be visible until it has a child.\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n self._require_current_thought().on_llm_new_token(token, **kwargs)\n self._prune_old_thought_containers()\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n self._require_current_thought().on_llm_end(response, **kwargs)\n self._prune_old_thought_containers()\n[docs] def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n self._require_current_thought().on_llm_error(error, **kwargs)\n self._prune_old_thought_containers()\n[docs] def on_tool_start(\n self, serialized: Dict[str, Any], input_str: str, **kwargs: Any\n ) -> None:\n self._require_current_thought().on_tool_start(serialized, input_str, **kwargs)\n self._prune_old_thought_containers()\n[docs] def on_tool_end(\n self,\n output: str,\n color: Optional[str] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/streamlit_callback_handler.html"} {"id": "c71f9c0ceebf-9", "text": "self,\n output: str,\n color: Optional[str] = None,\n observation_prefix: Optional[str] = None,\n llm_prefix: Optional[str] = None,\n **kwargs: Any,\n ) -> None:\n self._require_current_thought().on_tool_end(\n output, color, observation_prefix, llm_prefix, **kwargs\n )\n self._complete_current_thought()\n[docs] def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n self._require_current_thought().on_tool_error(error, **kwargs)\n self._prune_old_thought_containers()\n[docs] def on_text(\n self,\n text: str,\n color: Optional[str] = None,\n end: str = \"\",\n **kwargs: Any,\n ) -> None:\n pass\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n pass\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n pass\n[docs] def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n pass\n[docs] def on_agent_action(\n self, action: AgentAction, color: Optional[str] = None, **kwargs: Any\n ) -> Any:\n self._require_current_thought().on_agent_action(action, color, **kwargs)\n self._prune_old_thought_containers()\n[docs] def on_agent_finish(", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/streamlit_callback_handler.html"} {"id": "c71f9c0ceebf-10", "text": "[docs] def on_agent_finish(\n self, finish: AgentFinish, color: Optional[str] = None, **kwargs: Any\n ) -> None:\n if self._current_thought is not None:\n self._current_thought.complete(\n self._thought_labeler.get_final_agent_thought_label()\n )\n self._current_thought = None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/streamlit_callback_handler.html"} {"id": "26ea6e1e4ab5-0", "text": "Source code for langchain.callbacks.streamlit.mutable_expander\nfrom __future__ import annotations\nfrom enum import Enum\nfrom typing import TYPE_CHECKING, Any, Dict, List, NamedTuple, Optional\nif TYPE_CHECKING:\n from streamlit.delta_generator import DeltaGenerator\n from streamlit.type_util import SupportsStr\n[docs]class ChildType(Enum):\n \"\"\"The enumerator of the child type.\"\"\"\n MARKDOWN = \"MARKDOWN\"\n EXCEPTION = \"EXCEPTION\"\n[docs]class ChildRecord(NamedTuple):\n \"\"\"The child record as a NamedTuple.\"\"\"\n type: ChildType\n kwargs: Dict[str, Any]\n dg: DeltaGenerator\nclass MutableExpander:\n \"\"\"A Streamlit expander that can be renamed and dynamically expanded/collapsed.\"\"\"\n def __init__(self, parent_container: DeltaGenerator, label: str, expanded: bool):\n \"\"\"Create a new MutableExpander.\n Parameters\n ----------\n parent_container\n The `st.container` that the expander will be created inside.\n The expander transparently deletes and recreates its underlying\n `st.expander` instance when its label changes, and it uses\n `parent_container` to ensure it recreates this underlying expander in the\n same location onscreen.\n label\n The expander's initial label.\n expanded\n The expander's initial `expanded` value.\n \"\"\"\n self._label = label\n self._expanded = expanded\n self._parent_cursor = parent_container.empty()\n self._container = self._parent_cursor.expander(label, expanded)\n self._child_records: List[ChildRecord] = []\n @property\n def label(self) -> str:\n \"\"\"The expander's label string.\"\"\"\n return self._label\n @property", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/mutable_expander.html"} {"id": "26ea6e1e4ab5-1", "text": "\"\"\"The expander's label string.\"\"\"\n return self._label\n @property\n def expanded(self) -> bool:\n \"\"\"True if the expander was created with `expanded=True`.\"\"\"\n return self._expanded\n def clear(self) -> None:\n \"\"\"Remove the container and its contents entirely. 
A cleared container can't\n be reused.\n \"\"\"\n self._container = self._parent_cursor.empty()\n self._child_records.clear()\n def append_copy(self, other: MutableExpander) -> None:\n \"\"\"Append a copy of another MutableExpander's children to this\n MutableExpander.\n \"\"\"\n other_records = other._child_records.copy()\n for record in other_records:\n self._create_child(record.type, record.kwargs)\n def update(\n self, *, new_label: Optional[str] = None, new_expanded: Optional[bool] = None\n ) -> None:\n \"\"\"Change the expander's label and expanded state\"\"\"\n if new_label is None:\n new_label = self._label\n if new_expanded is None:\n new_expanded = self._expanded\n if self._label == new_label and self._expanded == new_expanded:\n # No change!\n return\n self._label = new_label\n self._expanded = new_expanded\n self._container = self._parent_cursor.expander(new_label, new_expanded)\n prev_records = self._child_records\n self._child_records = []\n # Replay all children into the new container\n for record in prev_records:\n self._create_child(record.type, record.kwargs)\n def markdown(\n self,\n body: SupportsStr,\n unsafe_allow_html: bool = False,\n *,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/mutable_expander.html"} {"id": "26ea6e1e4ab5-2", "text": "body: SupportsStr,\n unsafe_allow_html: bool = False,\n *,\n help: Optional[str] = None,\n index: Optional[int] = None,\n ) -> int:\n \"\"\"Add a Markdown element to the container and return its index.\"\"\"\n kwargs = {\"body\": body, \"unsafe_allow_html\": unsafe_allow_html, \"help\": help}\n new_dg = self._get_dg(index).markdown(**kwargs) # type: ignore[arg-type]\n record = ChildRecord(ChildType.MARKDOWN, kwargs, new_dg)\n return self._add_record(record, index)\n def exception(\n self, exception: BaseException, *, index: Optional[int] = None\n ) -> int:\n \"\"\"Add an Exception element to the container and return its index.\"\"\"\n kwargs = {\"exception\": exception}\n new_dg = self._get_dg(index).exception(**kwargs)\n record = ChildRecord(ChildType.EXCEPTION, kwargs, new_dg)\n return self._add_record(record, index)\n def _create_child(self, type: ChildType, kwargs: Dict[str, Any]) -> None:\n \"\"\"Create a new child with the given params\"\"\"\n if type == ChildType.MARKDOWN:\n self.markdown(**kwargs)\n elif type == ChildType.EXCEPTION:\n self.exception(**kwargs)\n else:\n raise RuntimeError(f\"Unexpected child type {type}\")\n def _add_record(self, record: ChildRecord, index: Optional[int]) -> int:\n \"\"\"Add a ChildRecord to self._children. If `index` is specified, replace\n the existing record at that index. 
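A minimal standalone sketch of this class, assuming it runs inside a Streamlit app. The index returned by markdown() is what lets the token stream in on_llm_new_token above rewrite a single child in place:

import streamlit as st
from langchain.callbacks.streamlit.mutable_expander import MutableExpander

exp = MutableExpander(st.container(), label="Working...", expanded=True)
idx = exp.markdown("Step 1: fetching data")
exp.markdown("Step 1: done", index=idx)  # overwrite the first child in place
exp.update(new_label="Finished", new_expanded=False)  # rename and collapse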
Otherwise, append the record to the\n end of the list.\n Return the index of the added record.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/mutable_expander.html"} {"id": "26ea6e1e4ab5-3", "text": "end of the list.\n Return the index of the added record.\n \"\"\"\n if index is not None:\n # Replace existing child\n self._child_records[index] = record\n return index\n # Append new child\n self._child_records.append(record)\n return len(self._child_records) - 1\n def _get_dg(self, index: Optional[int]) -> DeltaGenerator:\n if index is not None:\n # Existing index: reuse child's DeltaGenerator\n assert 0 <= index < len(self._child_records), f\"Bad index: {index}\"\n return self._child_records[index].dg\n # No index: use container's DeltaGenerator\n return self._container", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/mutable_expander.html"} {"id": "d4e2dcbdad67-0", "text": "Source code for langchain.callbacks.tracers.run_collector\n\"\"\"A tracer that collects all nested runs in a list.\"\"\"\nfrom typing import Any, List, Optional, Union\nfrom uuid import UUID\nfrom langchain.callbacks.tracers.base import BaseTracer\nfrom langchain.callbacks.tracers.schemas import Run\n[docs]class RunCollectorCallbackHandler(BaseTracer):\n \"\"\"\n A tracer that collects all nested runs in a list.\n This tracer is useful for inspection and evaluation purposes.\n Parameters\n ----------\n example_id : Optional[Union[UUID, str]], default=None\n The ID of the example being traced. It can be either a UUID or a string.\n \"\"\"\n name = \"run-collector_callback_handler\"\n def __init__(\n self, example_id: Optional[Union[UUID, str]] = None, **kwargs: Any\n ) -> None:\n \"\"\"\n Initialize the RunCollectorCallbackHandler.\n Parameters\n ----------\n example_id : Optional[Union[UUID, str]], default=None\n The ID of the example being traced. 
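A minimal usage sketch for this collector. The `llm` object is hypothetical; any LangChain component that accepts a `callbacks` list will feed it:

from langchain.callbacks.tracers.run_collector import RunCollectorCallbackHandler

collector = RunCollectorCallbackHandler()
llm("What is 2 + 2?", callbacks=[collector])  # `llm` is built elsewhere
for run in collector.traced_runs:
    print(run.run_type, run.start_time, run.error)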
It can be either a UUID or a string.\n \"\"\"\n super().__init__(**kwargs)\n self.example_id = (\n UUID(example_id) if isinstance(example_id, str) else example_id\n )\n self.traced_runs: List[Run] = []\n def _persist_run(self, run: Run) -> None:\n \"\"\"\n Persist a run by adding it to the traced_runs list.\n Parameters\n ----------\n run : Run\n The run to be persisted.\n \"\"\"\n run_ = run.copy()\n run_.reference_example_id = self.example_id\n self.traced_runs.append(run_)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/run_collector.html"} {"id": "819ddbceaa50-0", "text": "Source code for langchain.callbacks.tracers.langchain_v1\nfrom __future__ import annotations\nimport logging\nimport os\nfrom typing import Any, Dict, Optional, Union\nimport requests\nfrom langchain.callbacks.tracers.base import BaseTracer\nfrom langchain.callbacks.tracers.schemas import (\n ChainRun,\n LLMRun,\n Run,\n ToolRun,\n TracerSession,\n TracerSessionV1,\n TracerSessionV1Base,\n)\nfrom langchain.schema.messages import get_buffer_string\nfrom langchain.utils import raise_for_status_with_text\n[docs]def get_headers() -> Dict[str, Any]:\n \"\"\"Get the headers for the LangChain API.\"\"\"\n headers: Dict[str, Any] = {\"Content-Type\": \"application/json\"}\n if os.getenv(\"LANGCHAIN_API_KEY\"):\n headers[\"x-api-key\"] = os.getenv(\"LANGCHAIN_API_KEY\")\n return headers\ndef _get_endpoint() -> str:\n return os.getenv(\"LANGCHAIN_ENDPOINT\", \"http://localhost:8000\")\n[docs]class LangChainTracerV1(BaseTracer):\n \"\"\"An implementation of the SharedTracer that POSTS to the langchain endpoint.\"\"\"\n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Initialize the LangChain tracer.\"\"\"\n super().__init__(**kwargs)\n self.session: Optional[TracerSessionV1] = None\n self._endpoint = _get_endpoint()\n self._headers = get_headers()\n def _convert_to_v1_run(self, run: Run) -> Union[LLMRun, ChainRun, ToolRun]:\n session = self.session or self.load_default_session()\n if not isinstance(session, TracerSessionV1):\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/langchain_v1.html"} {"id": "819ddbceaa50-1", "text": "if not isinstance(session, TracerSessionV1):\n raise ValueError(\n \"LangChainTracerV1 is not compatible with\"\n f\" session of type {type(session)}\"\n )\n if run.run_type == \"llm\":\n if \"prompts\" in run.inputs:\n prompts = run.inputs[\"prompts\"]\n elif \"messages\" in run.inputs:\n prompts = [get_buffer_string(batch) for batch in run.inputs[\"messages\"]]\n else:\n raise ValueError(\"No prompts found in LLM run inputs\")\n return LLMRun(\n uuid=str(run.id) if run.id else None,\n parent_uuid=str(run.parent_run_id) if run.parent_run_id else None,\n start_time=run.start_time,\n end_time=run.end_time,\n extra=run.extra,\n execution_order=run.execution_order,\n child_execution_order=run.child_execution_order,\n serialized=run.serialized,\n session_id=session.id,\n error=run.error,\n prompts=prompts,\n response=run.outputs if run.outputs else None,\n )\n if run.run_type == \"chain\":\n child_runs = [self._convert_to_v1_run(run) for run in run.child_runs]\n return ChainRun(\n uuid=str(run.id) if run.id else None,\n parent_uuid=str(run.parent_run_id) if run.parent_run_id else None,\n start_time=run.start_time,\n end_time=run.end_time,\n execution_order=run.execution_order,\n child_execution_order=run.child_execution_order,\n serialized=run.serialized,\n session_id=session.id,\n 
inputs=run.inputs,\n outputs=run.outputs,\n error=run.error,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/langchain_v1.html"} {"id": "819ddbceaa50-2", "text": "outputs=run.outputs,\n error=run.error,\n extra=run.extra,\n child_llm_runs=[run for run in child_runs if isinstance(run, LLMRun)],\n child_chain_runs=[\n run for run in child_runs if isinstance(run, ChainRun)\n ],\n child_tool_runs=[run for run in child_runs if isinstance(run, ToolRun)],\n )\n if run.run_type == \"tool\":\n child_runs = [self._convert_to_v1_run(run) for run in run.child_runs]\n return ToolRun(\n uuid=str(run.id) if run.id else None,\n parent_uuid=str(run.parent_run_id) if run.parent_run_id else None,\n start_time=run.start_time,\n end_time=run.end_time,\n execution_order=run.execution_order,\n child_execution_order=run.child_execution_order,\n serialized=run.serialized,\n session_id=session.id,\n action=str(run.serialized),\n tool_input=run.inputs.get(\"input\", \"\"),\n output=None if run.outputs is None else run.outputs.get(\"output\"),\n error=run.error,\n extra=run.extra,\n child_chain_runs=[\n run for run in child_runs if isinstance(run, ChainRun)\n ],\n child_tool_runs=[run for run in child_runs if isinstance(run, ToolRun)],\n child_llm_runs=[run for run in child_runs if isinstance(run, LLMRun)],\n )\n raise ValueError(f\"Unknown run type: {run.run_type}\")\n def _persist_run(self, run: Union[Run, LLMRun, ChainRun, ToolRun]) -> None:\n \"\"\"Persist a run.\"\"\"\n if isinstance(run, Run):\n v1_run = self._convert_to_v1_run(run)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/langchain_v1.html"} {"id": "819ddbceaa50-3", "text": "v1_run = self._convert_to_v1_run(run)\n else:\n v1_run = run\n if isinstance(v1_run, LLMRun):\n endpoint = f\"{self._endpoint}/llm-runs\"\n elif isinstance(v1_run, ChainRun):\n endpoint = f\"{self._endpoint}/chain-runs\"\n else:\n endpoint = f\"{self._endpoint}/tool-runs\"\n try:\n response = requests.post(\n endpoint,\n data=v1_run.json(),\n headers=self._headers,\n )\n raise_for_status_with_text(response)\n except Exception as e:\n logging.warning(f\"Failed to persist run: {e}\")\n def _persist_session(\n self, session_create: TracerSessionV1Base\n ) -> Union[TracerSessionV1, TracerSession]:\n \"\"\"Persist a session.\"\"\"\n try:\n r = requests.post(\n f\"{self._endpoint}/sessions\",\n data=session_create.json(),\n headers=self._headers,\n )\n session = TracerSessionV1(id=r.json()[\"id\"], **session_create.dict())\n except Exception as e:\n logging.warning(f\"Failed to create session, using default session: {e}\")\n session = TracerSessionV1(id=1, **session_create.dict())\n return session\n def _load_session(self, session_name: Optional[str] = None) -> TracerSessionV1:\n \"\"\"Load a session from the tracer.\"\"\"\n try:\n url = f\"{self._endpoint}/sessions\"\n if session_name:\n url += f\"?name={session_name}\"\n r = requests.get(url, headers=self._headers)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/langchain_v1.html"} {"id": "819ddbceaa50-4", "text": "r = requests.get(url, headers=self._headers)\n tracer_session = TracerSessionV1(**r.json()[0])\n except Exception as e:\n session_type = \"default\" if not session_name else session_name\n logging.warning(\n f\"Failed to load {session_type} session, using empty session: {e}\"\n )\n tracer_session = TracerSessionV1(id=1)\n self.session = tracer_session\n return tracer_session\n[docs] def 
load_session(self, session_name: str) -> Union[TracerSessionV1, TracerSession]:\n \"\"\"Load a session with the given name from the tracer.\"\"\"\n return self._load_session(session_name)\n[docs] def load_default_session(self) -> Union[TracerSessionV1, TracerSession]:\n \"\"\"Load the default tracing session and set it as the Tracer's session.\"\"\"\n return self._load_session(\"default\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/langchain_v1.html"} {"id": "ee4e2a6ddd20-0", "text": "Source code for langchain.callbacks.tracers.evaluation\n\"\"\"A tracer that runs evaluators over completed runs.\"\"\"\nimport logging\nfrom concurrent.futures import Future, ThreadPoolExecutor, wait\nfrom typing import Any, Optional, Sequence, Set, Union\nfrom uuid import UUID\nfrom langchainplus_sdk import LangChainPlusClient, RunEvaluator\nfrom langchain.callbacks.manager import tracing_v2_enabled\nfrom langchain.callbacks.tracers.base import BaseTracer\nfrom langchain.callbacks.tracers.schemas import Run\nlogger = logging.getLogger(__name__)\n[docs]class EvaluatorCallbackHandler(BaseTracer):\n \"\"\"A tracer that runs a run evaluator whenever a run is persisted.\n Parameters\n ----------\n evaluators : Sequence[RunEvaluator]\n The run evaluators to apply to all top level runs.\n max_workers : int, optional\n The maximum number of worker threads to use for running the evaluators.\n If not specified, it will default to the number of evaluators.\n client : LangChainPlusClient, optional\n The LangChainPlusClient instance to use for evaluating the runs.\n If not specified, a new instance will be created.\n example_id : Union[UUID, str], optional\n The example ID to be associated with the runs.\n project_name : str, optional\n The LangSmith project name to organize eval chain runs under.\n Attributes\n ----------\n example_id : Union[UUID, None]\n The example ID associated with the runs.\n client : LangChainPlusClient\n The LangChainPlusClient instance used for evaluating the runs.\n evaluators : Sequence[RunEvaluator]\n The sequence of run evaluators to be executed.\n executor : ThreadPoolExecutor\n The thread pool executor used for running the evaluators.\n futures : Set[Future]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/evaluation.html"} {"id": "ee4e2a6ddd20-1", "text": "futures : Set[Future]\n The set of futures representing the running evaluators.\n skip_unfinished : bool\n Whether to skip runs that are not finished or raised\n an error.\n project_name : Optional[str]\n The LangSmith project name to organize eval chain runs under.\n \"\"\"\n name = \"evaluator_callback_handler\"\n def __init__(\n self,\n evaluators: Sequence[RunEvaluator],\n max_workers: Optional[int] = None,\n client: Optional[LangChainPlusClient] = None,\n example_id: Optional[Union[UUID, str]] = None,\n skip_unfinished: bool = True,\n project_name: Optional[str] = None,\n **kwargs: Any,\n ) -> None:\n super().__init__(**kwargs)\n self.example_id = (\n UUID(example_id) if isinstance(example_id, str) else example_id\n )\n self.client = client or LangChainPlusClient()\n self.evaluators = evaluators\n self.executor = ThreadPoolExecutor(\n max_workers=max(max_workers or len(evaluators), 1)\n )\n self.futures: Set[Future] = set()\n self.skip_unfinished = skip_unfinished\n self.project_name = project_name\n def _evaluate_in_project(self, run: Run, evaluator: RunEvaluator) -> None:\n \"\"\"Evaluate the run in the project.\n Parameters\n 
----------\n run : Run\n The run to be evaluated.\n evaluator : RunEvaluator\n The evaluator to use for evaluating the run.\n \"\"\"\n try:\n if self.project_name is None:\n self.client.evaluate_run(run, evaluator)\n with tracing_v2_enabled(project_name=self.project_name, tags=[\"eval\"]):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/evaluation.html"} {"id": "ee4e2a6ddd20-2", "text": "with tracing_v2_enabled(project_name=self.project_name, tags=[\"eval\"]):\n self.client.evaluate_run(run, evaluator)\n except Exception as e:\n logger.error(\n f\"Error evaluating run {run.id} with \"\n f\"{evaluator.__class__.__name__}: {e}\",\n exc_info=True,\n )\n raise e\n def _persist_run(self, run: Run) -> None:\n \"\"\"Run the evaluator on the run.\n Parameters\n ----------\n run : Run\n The run to be evaluated.\n \"\"\"\n if self.skip_unfinished and not run.outputs:\n logger.debug(f\"Skipping unfinished run {run.id}\")\n return\n run_ = run.copy()\n run_.reference_example_id = self.example_id\n for evaluator in self.evaluators:\n self.futures.add(\n self.executor.submit(self._evaluate_in_project, run_, evaluator)\n )\n[docs] def wait_for_futures(self) -> None:\n \"\"\"Wait for all futures to complete.\"\"\"\n futures = list(self.futures)\n wait(futures)\n for future in futures:\n self.futures.remove(future)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/evaluation.html"} {"id": "6aa651eddb90-0", "text": "Source code for langchain.callbacks.tracers.base\n\"\"\"Base interfaces for tracing runs.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom abc import ABC, abstractmethod\nfrom datetime import datetime\nfrom typing import Any, Dict, List, Optional, Sequence, Union, cast\nfrom uuid import UUID\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.callbacks.tracers.schemas import Run, RunTypeEnum\nfrom langchain.load.dump import dumpd\nfrom langchain.schema.document import Document\nfrom langchain.schema.output import ChatGeneration, LLMResult\nlogger = logging.getLogger(__name__)\n[docs]class TracerException(Exception):\n \"\"\"Base class for exceptions in tracers module.\"\"\"\n[docs]class BaseTracer(BaseCallbackHandler, ABC):\n \"\"\"Base interface for tracers.\"\"\"\n def __init__(self, **kwargs: Any) -> None:\n super().__init__(**kwargs)\n self.run_map: Dict[str, Run] = {}\n @staticmethod\n def _add_child_run(\n parent_run: Run,\n child_run: Run,\n ) -> None:\n \"\"\"Add child run to a chain run or tool run.\"\"\"\n parent_run.child_runs.append(child_run)\n @abstractmethod\n def _persist_run(self, run: Run) -> None:\n \"\"\"Persist a run.\"\"\"\n def _start_trace(self, run: Run) -> None:\n \"\"\"Start a trace for a run.\"\"\"\n if run.parent_run_id:\n parent_run = self.run_map[str(run.parent_run_id)]\n if parent_run:\n self._add_child_run(parent_run, run)\n else:\n logger.warning(f\"Parent run with UUID {run.parent_run_id} not found.\")\n self.run_map[str(run.id)] = run", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/base.html"} {"id": "6aa651eddb90-1", "text": "self.run_map[str(run.id)] = run\n def _end_trace(self, run: Run) -> None:\n \"\"\"End a trace for a run.\"\"\"\n if not run.parent_run_id:\n self._persist_run(run)\n else:\n parent_run = self.run_map.get(str(run.parent_run_id))\n if parent_run is None:\n logger.warning(f\"Parent run with UUID {run.parent_run_id} not found.\")\n elif (\n run.child_execution_order is not 
None\n and parent_run.child_execution_order is not None\n and run.child_execution_order > parent_run.child_execution_order\n ):\n parent_run.child_execution_order = run.child_execution_order\n self.run_map.pop(str(run.id))\n def _get_execution_order(self, parent_run_id: Optional[str] = None) -> int:\n \"\"\"Get the execution order for a run.\"\"\"\n if parent_run_id is None:\n return 1\n parent_run = self.run_map.get(parent_run_id)\n if parent_run is None:\n logger.warning(f\"Parent run with UUID {parent_run_id} not found.\")\n return 1\n if parent_run.child_execution_order is None:\n raise TracerException(\n f\"Parent run with UUID {parent_run_id} has no child execution order.\"\n )\n return parent_run.child_execution_order + 1\n[docs] def on_llm_start(\n self,\n serialized: Dict[str, Any],\n prompts: List[str],\n *,\n run_id: UUID,\n tags: Optional[List[str]] = None,\n parent_run_id: Optional[UUID] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/base.html"} {"id": "6aa651eddb90-2", "text": "**kwargs: Any,\n ) -> None:\n \"\"\"Start a trace for an LLM run.\"\"\"\n parent_run_id_ = str(parent_run_id) if parent_run_id else None\n execution_order = self._get_execution_order(parent_run_id_)\n start_time = datetime.utcnow()\n if metadata:\n kwargs.update({\"metadata\": metadata})\n llm_run = Run(\n id=run_id,\n parent_run_id=parent_run_id,\n serialized=serialized,\n inputs={\"prompts\": prompts},\n extra=kwargs,\n events=[{\"name\": \"start\", \"time\": start_time}],\n start_time=start_time,\n execution_order=execution_order,\n child_execution_order=execution_order,\n run_type=RunTypeEnum.llm,\n tags=tags or [],\n )\n self._start_trace(llm_run)\n self._on_llm_start(llm_run)\n[docs] def on_llm_new_token(\n self,\n token: str,\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Run on new LLM token. 
Only available when streaming is enabled.\"\"\"\n if not run_id:\n raise TracerException(\"No run_id provided for on_llm_new_token callback.\")\n run_id_ = str(run_id)\n llm_run = self.run_map.get(run_id_)\n if llm_run is None or llm_run.run_type != RunTypeEnum.llm:\n raise TracerException(\"No LLM Run found to be traced\")\n llm_run.events.append(\n {\n \"name\": \"new_token\",\n \"time\": datetime.utcnow(),", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/base.html"} {"id": "6aa651eddb90-3", "text": "{\n \"name\": \"new_token\",\n \"time\": datetime.utcnow(),\n \"kwargs\": {\"token\": token},\n },\n )\n[docs] def on_llm_end(self, response: LLMResult, *, run_id: UUID, **kwargs: Any) -> None:\n \"\"\"End a trace for an LLM run.\"\"\"\n if not run_id:\n raise TracerException(\"No run_id provided for on_llm_end callback.\")\n run_id_ = str(run_id)\n llm_run = self.run_map.get(run_id_)\n if llm_run is None or llm_run.run_type != RunTypeEnum.llm:\n raise TracerException(\"No LLM Run found to be traced\")\n llm_run.outputs = response.dict()\n for i, generations in enumerate(response.generations):\n for j, generation in enumerate(generations):\n output_generation = llm_run.outputs[\"generations\"][i][j]\n if \"message\" in output_generation:\n output_generation[\"message\"] = dumpd(\n cast(ChatGeneration, generation).message\n )\n llm_run.end_time = datetime.utcnow()\n llm_run.events.append({\"name\": \"end\", \"time\": llm_run.end_time})\n self._end_trace(llm_run)\n self._on_llm_end(llm_run)\n[docs] def on_llm_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n *,\n run_id: UUID,\n **kwargs: Any,\n ) -> None:\n \"\"\"Handle an error for an LLM run.\"\"\"\n if not run_id:\n raise TracerException(\"No run_id provided for on_llm_error callback.\")\n run_id_ = str(run_id)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/base.html"} {"id": "6aa651eddb90-4", "text": "run_id_ = str(run_id)\n llm_run = self.run_map.get(run_id_)\n if llm_run is None or llm_run.run_type != RunTypeEnum.llm:\n raise TracerException(\"No LLM Run found to be traced\")\n llm_run.error = repr(error)\n llm_run.end_time = datetime.utcnow()\n llm_run.events.append({\"name\": \"error\", \"time\": llm_run.end_time})\n self._end_trace(llm_run)\n self._on_llm_error(llm_run)\n[docs] def on_chain_start(\n self,\n serialized: Dict[str, Any],\n inputs: Dict[str, Any],\n *,\n run_id: UUID,\n tags: Optional[List[str]] = None,\n parent_run_id: Optional[UUID] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Start a trace for a chain run.\"\"\"\n parent_run_id_ = str(parent_run_id) if parent_run_id else None\n execution_order = self._get_execution_order(parent_run_id_)\n start_time = datetime.utcnow()\n if metadata:\n kwargs.update({\"metadata\": metadata})\n chain_run = Run(\n id=run_id,\n parent_run_id=parent_run_id,\n serialized=serialized,\n inputs=inputs,\n extra=kwargs,\n events=[{\"name\": \"start\", \"time\": start_time}],\n start_time=start_time,\n execution_order=execution_order,\n child_execution_order=execution_order,\n child_runs=[],\n run_type=RunTypeEnum.chain,\n tags=tags or [],\n )\n self._start_trace(chain_run)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/base.html"} {"id": "6aa651eddb90-5", "text": "tags=tags or [],\n )\n self._start_trace(chain_run)\n self._on_chain_start(chain_run)\n[docs] def on_chain_end(\n self, outputs: Dict[str, 
Any], *, run_id: UUID, **kwargs: Any\n ) -> None:\n \"\"\"End a trace for a chain run.\"\"\"\n if not run_id:\n raise TracerException(\"No run_id provided for on_chain_end callback.\")\n chain_run = self.run_map.get(str(run_id))\n if chain_run is None or chain_run.run_type != RunTypeEnum.chain:\n raise TracerException(\"No chain Run found to be traced\")\n chain_run.outputs = outputs\n chain_run.end_time = datetime.utcnow()\n chain_run.events.append({\"name\": \"end\", \"time\": chain_run.end_time})\n self._end_trace(chain_run)\n self._on_chain_end(chain_run)\n[docs] def on_chain_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n *,\n run_id: UUID,\n **kwargs: Any,\n ) -> None:\n \"\"\"Handle an error for a chain run.\"\"\"\n if not run_id:\n raise TracerException(\"No run_id provided for on_chain_error callback.\")\n chain_run = self.run_map.get(str(run_id))\n if chain_run is None or chain_run.run_type != RunTypeEnum.chain:\n raise TracerException(\"No chain Run found to be traced\")\n chain_run.error = repr(error)\n chain_run.end_time = datetime.utcnow()\n chain_run.events.append({\"name\": \"error\", \"time\": chain_run.end_time})\n self._end_trace(chain_run)\n self._on_chain_error(chain_run)\n[docs] def on_tool_start(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/base.html"} {"id": "6aa651eddb90-6", "text": "self._on_chain_error(chain_run)\n[docs] def on_tool_start(\n self,\n serialized: Dict[str, Any],\n input_str: str,\n *,\n run_id: UUID,\n tags: Optional[List[str]] = None,\n parent_run_id: Optional[UUID] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Start a trace for a tool run.\"\"\"\n parent_run_id_ = str(parent_run_id) if parent_run_id else None\n execution_order = self._get_execution_order(parent_run_id_)\n start_time = datetime.utcnow()\n if metadata:\n kwargs.update({\"metadata\": metadata})\n tool_run = Run(\n id=run_id,\n parent_run_id=parent_run_id,\n serialized=serialized,\n inputs={\"input\": input_str},\n extra=kwargs,\n events=[{\"name\": \"start\", \"time\": start_time}],\n start_time=start_time,\n execution_order=execution_order,\n child_execution_order=execution_order,\n child_runs=[],\n run_type=RunTypeEnum.tool,\n tags=tags or [],\n )\n self._start_trace(tool_run)\n self._on_tool_start(tool_run)\n[docs] def on_tool_end(self, output: str, *, run_id: UUID, **kwargs: Any) -> None:\n \"\"\"End a trace for a tool run.\"\"\"\n if not run_id:\n raise TracerException(\"No run_id provided for on_tool_end callback.\")\n tool_run = self.run_map.get(str(run_id))\n if tool_run is None or tool_run.run_type != RunTypeEnum.tool:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/base.html"} {"id": "6aa651eddb90-7", "text": "if tool_run is None or tool_run.run_type != RunTypeEnum.tool:\n raise TracerException(\"No tool Run found to be traced\")\n tool_run.outputs = {\"output\": output}\n tool_run.end_time = datetime.utcnow()\n tool_run.events.append({\"name\": \"end\", \"time\": tool_run.end_time})\n self._end_trace(tool_run)\n self._on_tool_end(tool_run)\n[docs] def on_tool_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n *,\n run_id: UUID,\n **kwargs: Any,\n ) -> None:\n \"\"\"Handle an error for a tool run.\"\"\"\n if not run_id:\n raise TracerException(\"No run_id provided for on_tool_error callback.\")\n tool_run = self.run_map.get(str(run_id))\n if tool_run is None or tool_run.run_type != RunTypeEnum.tool:\n raise 
TracerException(\"No tool Run found to be traced\")\n tool_run.error = repr(error)\n tool_run.end_time = datetime.utcnow()\n tool_run.events.append({\"name\": \"error\", \"time\": tool_run.end_time})\n self._end_trace(tool_run)\n self._on_tool_error(tool_run)\n[docs] def on_retriever_start(\n self,\n serialized: Dict[str, Any],\n query: str,\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when Retriever starts running.\"\"\"\n parent_run_id_ = str(parent_run_id) if parent_run_id else None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/base.html"} {"id": "6aa651eddb90-8", "text": "parent_run_id_ = str(parent_run_id) if parent_run_id else None\n execution_order = self._get_execution_order(parent_run_id_)\n start_time = datetime.utcnow()\n if metadata:\n kwargs.update({\"metadata\": metadata})\n retrieval_run = Run(\n id=run_id,\n name=\"Retriever\",\n parent_run_id=parent_run_id,\n serialized=serialized,\n inputs={\"query\": query},\n extra=kwargs,\n events=[{\"name\": \"start\", \"time\": start_time}],\n start_time=start_time,\n execution_order=execution_order,\n child_execution_order=execution_order,\n child_runs=[],\n run_type=RunTypeEnum.retriever,\n )\n self._start_trace(retrieval_run)\n self._on_retriever_start(retrieval_run)\n[docs] def on_retriever_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n *,\n run_id: UUID,\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when Retriever errors.\"\"\"\n if not run_id:\n raise TracerException(\"No run_id provided for on_retriever_error callback.\")\n retrieval_run = self.run_map.get(str(run_id))\n if retrieval_run is None or retrieval_run.run_type != RunTypeEnum.retriever:\n raise TracerException(\"No retriever Run found to be traced\")\n retrieval_run.error = repr(error)\n retrieval_run.end_time = datetime.utcnow()\n retrieval_run.events.append({\"name\": \"error\", \"time\": retrieval_run.end_time})\n self._end_trace(retrieval_run)\n self._on_retriever_error(retrieval_run)\n[docs] def on_retriever_end(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/base.html"} {"id": "6aa651eddb90-9", "text": "[docs] def on_retriever_end(\n self, documents: Sequence[Document], *, run_id: UUID, **kwargs: Any\n ) -> None:\n \"\"\"Run when Retriever ends running.\"\"\"\n if not run_id:\n raise TracerException(\"No run_id provided for on_retriever_end callback.\")\n retrieval_run = self.run_map.get(str(run_id))\n if retrieval_run is None or retrieval_run.run_type != RunTypeEnum.retriever:\n raise TracerException(\"No retriever Run found to be traced\")\n retrieval_run.outputs = {\"documents\": documents}\n retrieval_run.end_time = datetime.utcnow()\n retrieval_run.events.append({\"name\": \"end\", \"time\": retrieval_run.end_time})\n self._end_trace(retrieval_run)\n self._on_retriever_end(retrieval_run)\n def __deepcopy__(self, memo: dict) -> BaseTracer:\n \"\"\"Deepcopy the tracer.\"\"\"\n return self\n def __copy__(self) -> BaseTracer:\n \"\"\"Copy the tracer.\"\"\"\n return self\n def _on_llm_start(self, run: Run) -> None:\n \"\"\"Process the LLM Run upon start.\"\"\"\n def _on_llm_end(self, run: Run) -> None:\n \"\"\"Process the LLM Run.\"\"\"\n def _on_llm_error(self, run: Run) -> None:\n \"\"\"Process the LLM Run upon error.\"\"\"\n def _on_chain_start(self, run: Run) -> None:\n \"\"\"Process the Chain Run upon start.\"\"\"\n def _on_chain_end(self, run: Run) -> 
None:\n \"\"\"Process the Chain Run.\"\"\"\n def _on_chain_error(self, run: Run) -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/base.html"} {"id": "6aa651eddb90-10", "text": "def _on_chain_error(self, run: Run) -> None:\n \"\"\"Process the Chain Run upon error.\"\"\"\n def _on_tool_start(self, run: Run) -> None:\n \"\"\"Process the Tool Run upon start.\"\"\"\n def _on_tool_end(self, run: Run) -> None:\n \"\"\"Process the Tool Run.\"\"\"\n def _on_tool_error(self, run: Run) -> None:\n \"\"\"Process the Tool Run upon error.\"\"\"\n def _on_chat_model_start(self, run: Run) -> None:\n \"\"\"Process the Chat Model Run upon start.\"\"\"\n def _on_retriever_start(self, run: Run) -> None:\n \"\"\"Process the Retriever Run upon start.\"\"\"\n def _on_retriever_end(self, run: Run) -> None:\n \"\"\"Process the Retriever Run.\"\"\"\n def _on_retriever_error(self, run: Run) -> None:\n \"\"\"Process the Retriever Run upon error.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/base.html"} {"id": "02e41e21cf1a-0", "text": "Source code for langchain.callbacks.tracers.wandb\n\"\"\"A Tracer Implementation that records activity to Weights & Biases.\"\"\"\nfrom __future__ import annotations\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Dict,\n List,\n Optional,\n Sequence,\n TypedDict,\n Union,\n)\nfrom langchain.callbacks.tracers.base import BaseTracer\nfrom langchain.callbacks.tracers.schemas import Run, RunTypeEnum\nif TYPE_CHECKING:\n from wandb import Settings as WBSettings\n from wandb.sdk.data_types import trace_tree\n from wandb.sdk.lib.paths import StrPath\n from wandb.wandb_run import Run as WBRun\nPRINT_WARNINGS = True\ndef _convert_lc_run_to_wb_span(trace_tree: Any, run: Run) -> trace_tree.Span:\n if run.run_type == RunTypeEnum.llm:\n return _convert_llm_run_to_wb_span(trace_tree, run)\n elif run.run_type == RunTypeEnum.chain:\n return _convert_chain_run_to_wb_span(trace_tree, run)\n elif run.run_type == RunTypeEnum.tool:\n return _convert_tool_run_to_wb_span(trace_tree, run)\n else:\n return _convert_run_to_wb_span(trace_tree, run)\ndef _convert_llm_run_to_wb_span(trace_tree: Any, run: Run) -> trace_tree.Span:\n base_span = _convert_run_to_wb_span(trace_tree, run)\n base_span.results = [\n trace_tree.Result(\n inputs={\"prompt\": prompt},\n outputs={\n f\"gen_{g_i}\": gen[\"text\"]\n for g_i, gen in enumerate(run.outputs[\"generations\"][ndx])\n }\n if (\n run.outputs is not None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/wandb.html"} {"id": "02e41e21cf1a-1", "text": "}\n if (\n run.outputs is not None\n and len(run.outputs[\"generations\"]) > ndx\n and len(run.outputs[\"generations\"][ndx]) > 0\n )\n else None,\n )\n for ndx, prompt in enumerate(run.inputs[\"prompts\"] or [])\n ]\n base_span.span_kind = trace_tree.SpanKind.LLM\n return base_span\ndef _serialize_inputs(run_inputs: dict) -> Union[dict, list]:\n if \"input_documents\" in run_inputs:\n docs = run_inputs[\"input_documents\"]\n return [doc.json() for doc in docs]\n else:\n return run_inputs\ndef _convert_chain_run_to_wb_span(trace_tree: Any, run: Run) -> trace_tree.Span:\n base_span = _convert_run_to_wb_span(trace_tree, run)\n base_span.results = [\n trace_tree.Result(inputs=_serialize_inputs(run.inputs), outputs=run.outputs)\n ]\n base_span.child_spans = [\n _convert_lc_run_to_wb_span(trace_tree, child_run)\n for child_run in run.child_runs\n ]\n base_span.span_kind = (\n 
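The `_on_*` hooks above are deliberate no-ops: concrete tracers override the ones they care about, plus `_persist_run`. A minimal sketch of a custom tracer under that contract (the class name and timing logic here are illustrative, not part of the library):

.. code-block:: python

    from langchain.callbacks.tracers.base import BaseTracer
    from langchain.callbacks.tracers.schemas import Run

    class TimingTracer(BaseTracer):
        """Hypothetical tracer that records wall-clock time per LLM run."""

        def __init__(self, **kwargs):
            super().__init__(**kwargs)
            self.timings = {}  # str(run id) -> elapsed seconds

        def _persist_run(self, run: Run) -> None:
            # Nothing to upload; runs are only kept in memory.
            pass

        def _on_llm_end(self, run: Run) -> None:
            # start_time/end_time were already set by the BaseTracer callbacks.
            self.timings[str(run.id)] = (run.end_time - run.start_time).total_seconds()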
trace_tree.SpanKind.AGENT\n if \"agent\" in run.serialized.get(\"name\", \"\").lower()\n else trace_tree.SpanKind.CHAIN\n )\n return base_span\ndef _convert_tool_run_to_wb_span(trace_tree: Any, run: Run) -> trace_tree.Span:\n base_span = _convert_run_to_wb_span(trace_tree, run)\n base_span.results = [\n trace_tree.Result(inputs=_serialize_inputs(run.inputs), outputs=run.outputs)\n ]\n base_span.child_spans = [\n _convert_lc_run_to_wb_span(trace_tree, child_run)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/wandb.html"} {"id": "02e41e21cf1a-2", "text": "_convert_lc_run_to_wb_span(trace_tree, child_run)\n for child_run in run.child_runs\n ]\n base_span.span_kind = trace_tree.SpanKind.TOOL\n return base_span\ndef _convert_run_to_wb_span(trace_tree: Any, run: Run) -> trace_tree.Span:\n attributes = {**run.extra} if run.extra else {}\n attributes[\"execution_order\"] = run.execution_order\n return trace_tree.Span(\n span_id=str(run.id) if run.id is not None else None,\n name=run.serialized.get(\"name\"),\n start_time_ms=int(run.start_time.timestamp() * 1000),\n end_time_ms=int(run.end_time.timestamp() * 1000),\n status_code=trace_tree.StatusCode.SUCCESS\n if run.error is None\n else trace_tree.StatusCode.ERROR,\n status_message=run.error,\n attributes=attributes,\n )\ndef _replace_type_with_kind(data: Any) -> Any:\n if isinstance(data, dict):\n # W&B TraceTree expects \"_kind\" instead of \"_type\" since `_type` is special\n # in W&B.\n if \"_type\" in data:\n _type = data.pop(\"_type\")\n data[\"_kind\"] = _type\n return {k: _replace_type_with_kind(v) for k, v in data.items()}\n elif isinstance(data, list):\n return [_replace_type_with_kind(v) for v in data]\n elif isinstance(data, tuple):\n return tuple(_replace_type_with_kind(v) for v in data)\n elif isinstance(data, set):\n return {_replace_type_with_kind(v) for v in data}\n else:\n return data\n[docs]class WandbRunArgs(TypedDict):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/wandb.html"} {"id": "02e41e21cf1a-3", "text": "return data\n[docs]class WandbRunArgs(TypedDict):\n \"\"\"Arguments for the WandbTracer.\"\"\"\n job_type: Optional[str]\n dir: Optional[StrPath]\n config: Union[Dict, str, None]\n project: Optional[str]\n entity: Optional[str]\n reinit: Optional[bool]\n tags: Optional[Sequence]\n group: Optional[str]\n name: Optional[str]\n notes: Optional[str]\n magic: Optional[Union[dict, str, bool]]\n config_exclude_keys: Optional[List[str]]\n config_include_keys: Optional[List[str]]\n anonymous: Optional[str]\n mode: Optional[str]\n allow_val_change: Optional[bool]\n resume: Optional[Union[bool, str]]\n force: Optional[bool]\n tensorboard: Optional[bool]\n sync_tensorboard: Optional[bool]\n monitor_gym: Optional[bool]\n save_code: Optional[bool]\n id: Optional[str]\n settings: Union[WBSettings, Dict[str, Any], None]\n[docs]class WandbTracer(BaseTracer):\n \"\"\"Callback Handler that logs to Weights and Biases.\n This handler will log the model architecture and run traces to Weights and Biases.\n This will ensure that all LangChain activity is logged to W&B.\n \"\"\"\n _run: Optional[WBRun] = None\n _run_args: Optional[WandbRunArgs] = None\n def __init__(self, run_args: Optional[WandbRunArgs] = None, **kwargs: Any) -> None:\n \"\"\"Initializes the WandbTracer.\n Parameters:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/wandb.html"} {"id": "02e41e21cf1a-4", "text": "\"\"\"Initializes the 
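For reference, `_replace_type_with_kind` walks nested containers and renames the key; note that `data.pop("_type")` also mutates the input dict in place. A quick illustration (a sketch; the function is module-private):

.. code-block:: python

    from langchain.callbacks.tracers.wandb import _replace_type_with_kind

    payload = {"_type": "llm_chain", "steps": ({"_type": "prompt"},)}
    converted = _replace_type_with_kind(payload)
    # converted == {"_kind": "llm_chain", "steps": ({"_kind": "prompt"},)}
    # (payload itself has also lost its "_type" key at this point)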
WandbTracer.\n Parameters:\n run_args: (dict, optional) Arguments to pass to `wandb.init()`. If not\n provided, `wandb.init()` will be called with no arguments. Please\n refer to the `wandb.init` documentation for more details.\n To use W&B to monitor all LangChain activity, add this tracer like any other\n LangChain callback:\n ```\n from wandb.integration.langchain import WandbTracer\n tracer = WandbTracer()\n chain = LLMChain(llm, callbacks=[tracer])\n # ...end of notebook / script:\n tracer.finish()\n ```\n \"\"\"\n super().__init__(**kwargs)\n try:\n import wandb\n from wandb.sdk.data_types import trace_tree\n except ImportError as e:\n raise ImportError(\n \"Could not import wandb python package. \"\n \"Please install it with `pip install wandb`.\"\n ) from e\n self._wandb = wandb\n self._trace_tree = trace_tree\n self._run_args = run_args\n self._ensure_run(should_print_url=(wandb.run is None))\n[docs] def finish(self) -> None:\n \"\"\"Waits for all asynchronous processes to finish and data to upload.\n Proxy for `wandb.finish()`.\n \"\"\"\n self._wandb.finish()\n def _log_trace_from_run(self, run: Run) -> None:\n \"\"\"Logs a LangChain Run to W&B as a W&B Trace.\"\"\"\n self._ensure_run()\n try:\n root_span = _convert_lc_run_to_wb_span(self._trace_tree, run)\n except Exception as e:\n if PRINT_WARNINGS:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/wandb.html"} {"id": "02e41e21cf1a-5", "text": "except Exception as e:\n if PRINT_WARNINGS:\n self._wandb.termwarn(\n f\"Skipping trace saving - unable to safely convert LangChain Run \"\n f\"into W&B Trace due to: {e}\"\n )\n return\n model_dict = None\n # TODO: Add something like this once we have a way to get the clean serialized\n # parent dict from a run:\n # serialized_parent = safely_get_span_producing_model(run)\n # if serialized_parent is not None:\n # model_dict = safely_convert_model_to_dict(serialized_parent)\n model_trace = self._trace_tree.WBTraceTree(\n root_span=root_span,\n model_dict=model_dict,\n )\n if self._wandb.run is not None:\n self._wandb.run.log({\"langchain_trace\": model_trace})\n def _ensure_run(self, should_print_url: bool = False) -> None:\n \"\"\"Ensures an active W&B run exists.\n If not, will start a new run with the provided run_args.\n \"\"\"\n if self._wandb.run is None:\n # Make a shallow copy of the run args, so we don't modify the original\n run_args = self._run_args or {} # type: ignore\n run_args: dict = {**run_args} # type: ignore\n # Prefer to run in silent mode since W&B has a lot of output\n # which can be undesirable when dealing with text-based models.\n if \"settings\" not in run_args: # type: ignore\n run_args[\"settings\"] = {\"silent\": True} # type: ignore\n # Start the run and add the stream table\n self._wandb.init(**run_args)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/wandb.html"} {"id": "02e41e21cf1a-6", "text": "self._wandb.init(**run_args)\n if self._wandb.run is not None:\n if should_print_url:\n run_url = self._wandb.run.settings.run_url\n self._wandb.termlog(\n f\"Streaming LangChain activity to W&B at {run_url}\\n\"\n \"`WandbTracer` is currently in beta.\\n\"\n \"Please report any issues to \"\n \"https://github.com/wandb/wandb/issues with the tag \"\n \"`langchain`.\"\n )\n self._wandb.run._label(repo=\"langchain\")\n def _persist_run(self, run: \"Run\") -> None:\n \"\"\"Persist a run.\"\"\"\n self._log_trace_from_run(run)", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/wandb.html"} {"id": "65dda8d7bcd7-0", "text": "Source code for langchain.callbacks.tracers.stdout\nimport json\nfrom typing import Any, List\nfrom langchain.callbacks.tracers.base import BaseTracer\nfrom langchain.callbacks.tracers.schemas import Run\nfrom langchain.input import get_bolded_text, get_colored_text\n[docs]def try_json_stringify(obj: Any, fallback: str) -> str:\n \"\"\"\n Try to stringify an object to JSON.\n Args:\n obj: Object to stringify.\n fallback: Fallback string to return if the object cannot be stringified.\n Returns:\n A JSON string if the object can be stringified, otherwise the fallback string.\n \"\"\"\n try:\n return json.dumps(obj, indent=2, ensure_ascii=False)\n except Exception:\n return fallback\n[docs]def elapsed(run: Any) -> str:\n \"\"\"Get the elapsed time of a run.\n Args:\n run: any object with a start_time and end_time attribute.\n Returns:\n A string with the elapsed time in seconds or\n milliseconds if time is less than a second.\n \"\"\"\n elapsed_time = run.end_time - run.start_time\n milliseconds = elapsed_time.total_seconds() * 1000\n if milliseconds < 1000:\n return f\"{milliseconds}ms\"\n return f\"{(milliseconds / 1000):.2f}s\"\n[docs]class ConsoleCallbackHandler(BaseTracer):\n \"\"\"Tracer that prints to the console.\"\"\"\n name = \"console_callback_handler\"\n def _persist_run(self, run: Run) -> None:\n pass\n[docs] def get_parents(self, run: Run) -> List[Run]:\n parents = []\n current_run = run\n while current_run.parent_run_id:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/stdout.html"} {"id": "65dda8d7bcd7-1", "text": "parents = []\n current_run = run\n while current_run.parent_run_id:\n parent = self.run_map.get(str(current_run.parent_run_id))\n if parent:\n parents.append(parent)\n current_run = parent\n else:\n break\n return parents\n[docs] def get_breadcrumbs(self, run: Run) -> str:\n parents = self.get_parents(run)[::-1]\n string = \" > \".join(\n f\"{parent.execution_order}:{parent.run_type}:{parent.name}\"\n if i != len(parents) - 1\n else f\"{parent.execution_order}:{parent.run_type}:{parent.name}\"\n for i, parent in enumerate(parents + [run])\n )\n return string\n # logging methods\n def _on_chain_start(self, run: Run) -> None:\n crumbs = self.get_breadcrumbs(run)\n print(\n f\"{get_colored_text('[chain/start]', color='green')} \"\n + get_bolded_text(f\"[{crumbs}] Entering Chain run with input:\\n\")\n + f\"{try_json_stringify(run.inputs, '[inputs]')}\"\n )\n def _on_chain_end(self, run: Run) -> None:\n crumbs = self.get_breadcrumbs(run)\n print(\n f\"{get_colored_text('[chain/end]', color='blue')} \"\n + get_bolded_text(\n f\"[{crumbs}] [{elapsed(run)}] Exiting Chain run with output:\\n\"\n )\n + f\"{try_json_stringify(run.outputs, '[outputs]')}\"\n )\n def _on_chain_error(self, run: Run) -> None:\n crumbs = self.get_breadcrumbs(run)\n print(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/stdout.html"} {"id": "65dda8d7bcd7-2", "text": "crumbs = self.get_breadcrumbs(run)\n print(\n f\"{get_colored_text('[chain/error]', color='red')} \"\n + get_bolded_text(\n f\"[{crumbs}] [{elapsed(run)}] Chain run errored with error:\\n\"\n )\n + f\"{try_json_stringify(run.error, '[error]')}\"\n )\n def _on_llm_start(self, run: Run) -> None:\n crumbs = self.get_breadcrumbs(run)\n inputs = (\n {\"prompts\": [p.strip() for p in run.inputs[\"prompts\"]]}\n if \"prompts\" in 
run.inputs\n else run.inputs\n )\n print(\n f\"{get_colored_text('[llm/start]', color='green')} \"\n + get_bolded_text(f\"[{crumbs}] Entering LLM run with input:\\n\")\n + f\"{try_json_stringify(inputs, '[inputs]')}\"\n )\n def _on_llm_end(self, run: Run) -> None:\n crumbs = self.get_breadcrumbs(run)\n print(\n f\"{get_colored_text('[llm/end]', color='blue')} \"\n + get_bolded_text(\n f\"[{crumbs}] [{elapsed(run)}] Exiting LLM run with output:\\n\"\n )\n + f\"{try_json_stringify(run.outputs, '[response]')}\"\n )\n def _on_llm_error(self, run: Run) -> None:\n crumbs = self.get_breadcrumbs(run)\n print(\n f\"{get_colored_text('[llm/error]', color='red')} \"\n + get_bolded_text(\n f\"[{crumbs}] [{elapsed(run)}] LLM run errored with error:\\n\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/stdout.html"} {"id": "65dda8d7bcd7-3", "text": ")\n + f\"{try_json_stringify(run.error, '[error]')}\"\n )\n def _on_tool_start(self, run: Run) -> None:\n crumbs = self.get_breadcrumbs(run)\n print(\n f'{get_colored_text(\"[tool/start]\", color=\"green\")} '\n + get_bolded_text(f\"[{crumbs}] Entering Tool run with input:\\n\")\n + f'\"{run.inputs[\"input\"].strip()}\"'\n )\n def _on_tool_end(self, run: Run) -> None:\n crumbs = self.get_breadcrumbs(run)\n if run.outputs:\n print(\n f'{get_colored_text(\"[tool/end]\", color=\"blue\")} '\n + get_bolded_text(\n f\"[{crumbs}] [{elapsed(run)}] Exiting Tool run with output:\\n\"\n )\n + f'\"{run.outputs[\"output\"].strip()}\"'\n )\n def _on_tool_error(self, run: Run) -> None:\n crumbs = self.get_breadcrumbs(run)\n print(\n f\"{get_colored_text('[tool/error]', color='red')} \"\n + get_bolded_text(f\"[{crumbs}] [{elapsed(run)}] \")\n + f\"Tool run errored with error:\\n\"\n f\"{run.error}\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/stdout.html"} {"id": "3be9618f4be8-0", "text": "Source code for langchain.callbacks.tracers.schemas\n\"\"\"Schemas for tracers.\"\"\"\nfrom __future__ import annotations\nimport datetime\nfrom typing import Any, Dict, List, Optional\nfrom uuid import UUID\nfrom langchainplus_sdk.schemas import RunBase as BaseRunV2\nfrom langchainplus_sdk.schemas import RunTypeEnum\nfrom pydantic import BaseModel, Field, root_validator\nfrom langchain.schema import LLMResult\n[docs]class TracerSessionV1Base(BaseModel):\n \"\"\"Base class for TracerSessionV1.\"\"\"\n start_time: datetime.datetime = Field(default_factory=datetime.datetime.utcnow)\n name: Optional[str] = None\n extra: Optional[Dict[str, Any]] = None\n[docs]class TracerSessionV1Create(TracerSessionV1Base):\n \"\"\"Create class for TracerSessionV1.\"\"\"\n[docs]class TracerSessionV1(TracerSessionV1Base):\n \"\"\"TracerSessionV1 schema.\"\"\"\n id: int\n[docs]class TracerSessionBase(TracerSessionV1Base):\n \"\"\"A creation class for TracerSession.\"\"\"\n tenant_id: UUID\n[docs]class TracerSession(TracerSessionBase):\n \"\"\"TracerSessionV1 schema for the V2 API.\"\"\"\n id: UUID\n[docs]class BaseRun(BaseModel):\n \"\"\"Base class for Run.\"\"\"\n uuid: str\n parent_uuid: Optional[str] = None\n start_time: datetime.datetime = Field(default_factory=datetime.datetime.utcnow)\n end_time: datetime.datetime = Field(default_factory=datetime.datetime.utcnow)\n extra: Optional[Dict[str, Any]] = None\n execution_order: int\n child_execution_order: int\n serialized: Dict[str, Any]\n session_id: int\n error: Optional[str] = None", "source": 
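In practice the console tracer is simply attached as a callback; output lines follow the `[component/event] [breadcrumbs]` pattern built above. A sketch (the chain object and printed text are illustrative):

.. code-block:: python

    from langchain.callbacks.tracers.stdout import ConsoleCallbackHandler

    handler = ConsoleCallbackHandler()
    # some_chain is any Chain; pass the handler per call:
    # some_chain.run("2 + 2", callbacks=[handler])
    #
    # Output is shaped like this (colors and bolding omitted):
    # [chain/start] [1:chain:LLMMathChain] Entering Chain run with input:
    # {
    #   "question": "2 + 2"
    # }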
"https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/schemas.html"} {"id": "3be9618f4be8-1", "text": "session_id: int\n error: Optional[str] = None\n[docs]class LLMRun(BaseRun):\n \"\"\"Class for LLMRun.\"\"\"\n prompts: List[str]\n response: Optional[LLMResult] = None\n[docs]class ChainRun(BaseRun):\n \"\"\"Class for ChainRun.\"\"\"\n inputs: Dict[str, Any]\n outputs: Optional[Dict[str, Any]] = None\n child_llm_runs: List[LLMRun] = Field(default_factory=list)\n child_chain_runs: List[ChainRun] = Field(default_factory=list)\n child_tool_runs: List[ToolRun] = Field(default_factory=list)\n[docs]class ToolRun(BaseRun):\n \"\"\"Class for ToolRun.\"\"\"\n tool_input: str\n output: Optional[str] = None\n action: str\n child_llm_runs: List[LLMRun] = Field(default_factory=list)\n child_chain_runs: List[ChainRun] = Field(default_factory=list)\n child_tool_runs: List[ToolRun] = Field(default_factory=list)\n# Begin V2 API Schemas\n[docs]class Run(BaseRunV2):\n \"\"\"Run schema for the V2 API in the Tracer.\"\"\"\n execution_order: int\n child_execution_order: int\n child_runs: List[Run] = Field(default_factory=list)\n tags: Optional[List[str]] = Field(default_factory=list)\n[docs] @root_validator(pre=True)\n def assign_name(cls, values: dict) -> dict:\n \"\"\"Assign name to the run.\"\"\"\n if values.get(\"name\") is None:\n if \"name\" in values[\"serialized\"]:\n values[\"name\"] = values[\"serialized\"][\"name\"]\n elif \"id\" in values[\"serialized\"]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/schemas.html"} {"id": "3be9618f4be8-2", "text": "elif \"id\" in values[\"serialized\"]:\n values[\"name\"] = values[\"serialized\"][\"id\"][-1]\n return values\nChainRun.update_forward_refs()\nToolRun.update_forward_refs()\n__all__ = [\n \"BaseRun\",\n \"ChainRun\",\n \"LLMRun\",\n \"Run\",\n \"RunTypeEnum\",\n \"ToolRun\",\n \"TracerSession\",\n \"TracerSessionBase\",\n \"TracerSessionV1\",\n \"TracerSessionV1Base\",\n \"TracerSessionV1Create\",\n]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/schemas.html"} {"id": "fbf363991e51-0", "text": "Source code for langchain.callbacks.tracers.langchain\n\"\"\"A Tracer implementation that records to LangChain endpoint.\"\"\"\nfrom __future__ import annotations\nimport logging\nimport os\nfrom concurrent.futures import Future, ThreadPoolExecutor, wait\nfrom datetime import datetime\nfrom typing import Any, Dict, List, Optional, Set, Union\nfrom uuid import UUID\nfrom langchainplus_sdk import LangChainPlusClient\nfrom langchain.callbacks.tracers.base import BaseTracer\nfrom langchain.callbacks.tracers.schemas import Run, RunTypeEnum, TracerSession\nfrom langchain.env import get_runtime_environment\nfrom langchain.load.dump import dumpd\nfrom langchain.schema.messages import BaseMessage\nlogger = logging.getLogger(__name__)\n_LOGGED = set()\n_TRACERS: List[LangChainTracer] = []\n[docs]def log_error_once(method: str, exception: Exception) -> None:\n \"\"\"Log an error once.\"\"\"\n global _LOGGED\n if (method, type(exception)) in _LOGGED:\n return\n _LOGGED.add((method, type(exception)))\n logger.error(exception)\n[docs]def wait_for_all_tracers() -> None:\n \"\"\"Wait for all tracers to finish.\"\"\"\n global _TRACERS\n for tracer in _TRACERS:\n tracer.wait_for_futures()\n[docs]class LangChainTracer(BaseTracer):\n \"\"\"An implementation of the SharedTracer that POSTS to the langchain endpoint.\"\"\"\n def __init__(\n self,\n example_id: 
Optional[Union[UUID, str]] = None,\n project_name: Optional[str] = None,\n client: Optional[LangChainPlusClient] = None,\n tags: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/langchain.html"} {"id": "fbf363991e51-1", "text": "**kwargs: Any,\n ) -> None:\n \"\"\"Initialize the LangChain tracer.\"\"\"\n super().__init__(**kwargs)\n self.session: Optional[TracerSession] = None\n self.example_id = (\n UUID(example_id) if isinstance(example_id, str) else example_id\n )\n self.project_name = project_name or os.getenv(\n \"LANGCHAIN_PROJECT\", os.getenv(\"LANGCHAIN_SESSION\", \"default\")\n )\n # set max_workers to 1 to process tasks in order\n self.executor = ThreadPoolExecutor(max_workers=1)\n self.client = client or LangChainPlusClient()\n self._futures: Set[Future] = set()\n self.tags = tags or []\n global _TRACERS\n _TRACERS.append(self)\n[docs] def on_chat_model_start(\n self,\n serialized: Dict[str, Any],\n messages: List[List[BaseMessage]],\n *,\n run_id: UUID,\n tags: Optional[List[str]] = None,\n parent_run_id: Optional[UUID] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Start a trace for an LLM run.\"\"\"\n parent_run_id_ = str(parent_run_id) if parent_run_id else None\n execution_order = self._get_execution_order(parent_run_id_)\n start_time = datetime.utcnow()\n if metadata:\n kwargs.update({\"metadata\": metadata})\n chat_model_run = Run(\n id=run_id,\n parent_run_id=parent_run_id,\n serialized=serialized,\n inputs={\"messages\": [[dumpd(msg) for msg in batch] for batch in messages]},\n extra=kwargs,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/langchain.html"} {"id": "fbf363991e51-2", "text": "extra=kwargs,\n events=[{\"name\": \"start\", \"time\": start_time}],\n start_time=start_time,\n execution_order=execution_order,\n child_execution_order=execution_order,\n run_type=RunTypeEnum.llm,\n tags=tags,\n )\n self._start_trace(chat_model_run)\n self._on_chat_model_start(chat_model_run)\n def _persist_run(self, run: Run) -> None:\n \"\"\"The Langchain Tracer uses Post/Patch rather than persist.\"\"\"\n def _get_tags(self, run: Run) -> List[str]:\n \"\"\"Get combined tags for a run.\"\"\"\n tags = set(run.tags or [])\n tags.update(self.tags or [])\n return list(tags)\n def _persist_run_single(self, run: Run) -> None:\n \"\"\"Persist a run.\"\"\"\n run_dict = run.dict(exclude={\"child_runs\"})\n run_dict[\"tags\"] = self._get_tags(run)\n extra = run_dict.get(\"extra\", {})\n extra[\"runtime\"] = get_runtime_environment()\n run_dict[\"extra\"] = extra\n try:\n self.client.create_run(**run_dict, project_name=self.project_name)\n except Exception as e:\n # Errors are swallowed by the thread executor so we need to log them here\n log_error_once(\"post\", e)\n raise\n def _update_run_single(self, run: Run) -> None:\n \"\"\"Update a run.\"\"\"\n try:\n run_dict = run.dict()\n run_dict[\"tags\"] = self._get_tags(run)\n self.client.update_run(run.id, **run_dict)\n except Exception as e:\n # Errors are swallowed by the thread executor so we need to log them here", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/langchain.html"} {"id": "fbf363991e51-3", "text": "# Errors are swallowed by the thread executor so we need to log them here\n log_error_once(\"patch\", e)\n raise\n def _on_llm_start(self, run: Run) -> None:\n \"\"\"Persist an LLM run.\"\"\"\n if 
run.parent_run_id is None:\n run.reference_example_id = self.example_id\n self._futures.add(\n self.executor.submit(self._persist_run_single, run.copy(deep=True))\n )\n def _on_chat_model_start(self, run: Run) -> None:\n \"\"\"Persist an LLM run.\"\"\"\n if run.parent_run_id is None:\n run.reference_example_id = self.example_id\n self._futures.add(\n self.executor.submit(self._persist_run_single, run.copy(deep=True))\n )\n def _on_llm_end(self, run: Run) -> None:\n \"\"\"Process the LLM Run.\"\"\"\n self._futures.add(\n self.executor.submit(self._update_run_single, run.copy(deep=True))\n )\n def _on_llm_error(self, run: Run) -> None:\n \"\"\"Process the LLM Run upon error.\"\"\"\n self._futures.add(\n self.executor.submit(self._update_run_single, run.copy(deep=True))\n )\n def _on_chain_start(self, run: Run) -> None:\n \"\"\"Process the Chain Run upon start.\"\"\"\n if run.parent_run_id is None:\n run.reference_example_id = self.example_id\n self._futures.add(\n self.executor.submit(self._persist_run_single, run.copy(deep=True))\n )\n def _on_chain_end(self, run: Run) -> None:\n \"\"\"Process the Chain Run.\"\"\"\n self._futures.add(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/langchain.html"} {"id": "fbf363991e51-4", "text": "\"\"\"Process the Chain Run.\"\"\"\n self._futures.add(\n self.executor.submit(self._update_run_single, run.copy(deep=True))\n )\n def _on_chain_error(self, run: Run) -> None:\n \"\"\"Process the Chain Run upon error.\"\"\"\n self._futures.add(\n self.executor.submit(self._update_run_single, run.copy(deep=True))\n )\n def _on_tool_start(self, run: Run) -> None:\n \"\"\"Process the Tool Run upon start.\"\"\"\n if run.parent_run_id is None:\n run.reference_example_id = self.example_id\n self._futures.add(\n self.executor.submit(self._persist_run_single, run.copy(deep=True))\n )\n def _on_tool_end(self, run: Run) -> None:\n \"\"\"Process the Tool Run.\"\"\"\n self._futures.add(\n self.executor.submit(self._update_run_single, run.copy(deep=True))\n )\n def _on_tool_error(self, run: Run) -> None:\n \"\"\"Process the Tool Run upon error.\"\"\"\n self._futures.add(\n self.executor.submit(self._update_run_single, run.copy(deep=True))\n )\n def _on_retriever_start(self, run: Run) -> None:\n \"\"\"Process the Retriever Run upon start.\"\"\"\n if run.parent_run_id is None:\n run.reference_example_id = self.example_id\n self._futures.add(\n self.executor.submit(self._persist_run_single, run.copy(deep=True))\n )\n def _on_retriever_end(self, run: Run) -> None:\n \"\"\"Process the Retriever Run.\"\"\"\n self._futures.add(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/langchain.html"} {"id": "fbf363991e51-5", "text": "\"\"\"Process the Retriever Run.\"\"\"\n self._futures.add(\n self.executor.submit(self._update_run_single, run.copy(deep=True))\n )\n def _on_retriever_error(self, run: Run) -> None:\n \"\"\"Process the Retriever Run upon error.\"\"\"\n self._futures.add(\n self.executor.submit(self._update_run_single, run.copy(deep=True))\n )\n[docs] def wait_for_futures(self) -> None:\n \"\"\"Wait for the given futures to complete.\"\"\"\n futures = list(self._futures)\n wait(futures)\n for future in futures:\n self._futures.remove(future)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/tracers/langchain.html"} {"id": "f767a146022c-0", "text": "Source code for langchain.prompts.prompt\n\"\"\"Prompt schema definition.\"\"\"\nfrom __future__ import 
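Because runs are posted and patched from a single-worker executor, short-lived scripts should flush before exiting. A usage sketch (the project name is illustrative):

.. code-block:: python

    from langchain.callbacks.tracers.langchain import (
        LangChainTracer,
        wait_for_all_tracers,
    )

    tracer = LangChainTracer(project_name="my-project")
    # ... pass callbacks=[tracer] into LLM/chain calls ...
    # Futures are processed in order (max_workers=1); block until uploaded:
    wait_for_all_tracers()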
annotations\nfrom pathlib import Path\nfrom string import Formatter\nfrom typing import Any, Dict, List, Union\nfrom pydantic import root_validator\nfrom langchain.prompts.base import (\n DEFAULT_FORMATTER_MAPPING,\n StringPromptTemplate,\n _get_jinja2_variables_from_template,\n check_valid_template,\n)\n[docs]class PromptTemplate(StringPromptTemplate):\n \"\"\"Schema to represent a prompt for an LLM.\n Example:\n .. code-block:: python\n from langchain import PromptTemplate\n prompt = PromptTemplate(input_variables=[\"foo\"], template=\"Say {foo}\")\n \"\"\"\n @property\n def lc_attributes(self) -> Dict[str, Any]:\n return {\n \"template_format\": self.template_format,\n }\n input_variables: List[str]\n \"\"\"A list of the names of the variables the prompt template expects.\"\"\"\n template: str\n \"\"\"The prompt template.\"\"\"\n template_format: str = \"f-string\"\n \"\"\"The format of the prompt template. Options are: 'f-string', 'jinja2'.\"\"\"\n validate_template: bool = True\n \"\"\"Whether or not to try validating the template.\"\"\"\n @property\n def _prompt_type(self) -> str:\n \"\"\"Return the prompt type key.\"\"\"\n return \"prompt\"\n[docs] def format(self, **kwargs: Any) -> str:\n \"\"\"Format the prompt with the inputs.\n Args:\n kwargs: Any arguments to be passed to the prompt template.\n Returns:\n A formatted string.\n Example:\n .. code-block:: python\n prompt.format(variable1=\"foo\")\n \"\"\"\n kwargs = self._merge_partial_and_user_variables(**kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/prompt.html"} {"id": "f767a146022c-1", "text": "\"\"\"\n kwargs = self._merge_partial_and_user_variables(**kwargs)\n return DEFAULT_FORMATTER_MAPPING[self.template_format](self.template, **kwargs)\n[docs] @root_validator()\n def template_is_valid(cls, values: Dict) -> Dict:\n \"\"\"Check that template and input variables are consistent.\"\"\"\n if values[\"validate_template\"]:\n all_inputs = values[\"input_variables\"] + list(values[\"partial_variables\"])\n check_valid_template(\n values[\"template\"], values[\"template_format\"], all_inputs\n )\n return values\n[docs] @classmethod\n def from_examples(\n cls,\n examples: List[str],\n suffix: str,\n input_variables: List[str],\n example_separator: str = \"\\n\\n\",\n prefix: str = \"\",\n **kwargs: Any,\n ) -> PromptTemplate:\n \"\"\"Take examples in list format with prefix and suffix to create a prompt.\n Intended to be used as a way to dynamically create a prompt from examples.\n Args:\n examples: List of examples to use in the prompt.\n suffix: String to go after the list of examples. Should generally\n set up the user's input.\n input_variables: A list of variable names the final prompt template\n will expect.\n example_separator: The separator to use in between examples. Defaults\n to two new line characters.\n prefix: String that should go before any examples. Generally includes\n examples. 
Default to an empty string.\n Returns:\n The final prompt generated.\n \"\"\"\n template = example_separator.join([prefix, *examples, suffix])\n return cls(input_variables=input_variables, template=template, **kwargs)\n[docs] @classmethod\n def from_file(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/prompt.html"} {"id": "f767a146022c-2", "text": "[docs] @classmethod\n def from_file(\n cls, template_file: Union[str, Path], input_variables: List[str], **kwargs: Any\n ) -> PromptTemplate:\n \"\"\"Load a prompt from a file.\n Args:\n template_file: The path to the file containing the prompt template.\n input_variables: A list of variable names the final prompt template\n will expect.\n Returns:\n The prompt loaded from the file.\n \"\"\"\n with open(str(template_file), \"r\") as f:\n template = f.read()\n return cls(input_variables=input_variables, template=template, **kwargs)\n[docs] @classmethod\n def from_template(cls, template: str, **kwargs: Any) -> PromptTemplate:\n \"\"\"Load a prompt template from a template.\"\"\"\n if \"template_format\" in kwargs and kwargs[\"template_format\"] == \"jinja2\":\n # Get the variables for the template\n input_variables = _get_jinja2_variables_from_template(template)\n else:\n input_variables = {\n v for _, v, _, _ in Formatter().parse(template) if v is not None\n }\n if \"partial_variables\" in kwargs:\n partial_variables = kwargs[\"partial_variables\"]\n input_variables = {\n var for var in input_variables if var not in partial_variables\n }\n return cls(\n input_variables=list(sorted(input_variables)), template=template, **kwargs\n )\n# For backwards compatibility.\nPrompt = PromptTemplate", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/prompt.html"} {"id": "adc7043bb81f-0", "text": "Source code for langchain.prompts.chat\n\"\"\"Chat prompt template.\"\"\"\nfrom __future__ import annotations\nfrom abc import ABC, abstractmethod\nfrom pathlib import Path\nfrom typing import Any, Callable, List, Sequence, Tuple, Type, TypeVar, Union\nfrom pydantic import Field, root_validator\nfrom langchain.load.serializable import Serializable\nfrom langchain.prompts.base import StringPromptTemplate\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.schema import (\n BasePromptTemplate,\n PromptValue,\n)\nfrom langchain.schema.messages import (\n AIMessage,\n BaseMessage,\n ChatMessage,\n HumanMessage,\n SystemMessage,\n get_buffer_string,\n)\n[docs]class BaseMessagePromptTemplate(Serializable, ABC):\n @property\n def lc_serializable(self) -> bool:\n return True\n[docs] @abstractmethod\n def format_messages(self, **kwargs: Any) -> List[BaseMessage]:\n \"\"\"To messages.\"\"\"\n @property\n @abstractmethod\n def input_variables(self) -> List[str]:\n \"\"\"Input variables for this prompt template.\"\"\"\n[docs]class MessagesPlaceholder(BaseMessagePromptTemplate):\n \"\"\"Prompt template that assumes variable is already list of messages.\"\"\"\n variable_name: str\n[docs] def format_messages(self, **kwargs: Any) -> List[BaseMessage]:\n \"\"\"To a BaseMessage.\"\"\"\n value = kwargs[self.variable_name]\n if not isinstance(value, list):\n raise ValueError(\n f\"variable {self.variable_name} should be a list of base messages, \"\n f\"got {value}\"\n )\n for v in value:\n if not isinstance(v, BaseMessage):\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/chat.html"} {"id": "adc7043bb81f-1", "text": "if not isinstance(v, 
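`from_template` infers `input_variables` by parsing the f-string fields and then sorting them, so a round trip looks like this (values are illustrative):

.. code-block:: python

    from langchain.prompts.prompt import PromptTemplate

    prompt = PromptTemplate.from_template("Tell me a {adjective} joke about {topic}.")
    prompt.input_variables                             # ['adjective', 'topic']
    prompt.format(adjective="dry", topic="compilers")
    # 'Tell me a dry joke about compilers.'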
BaseMessage):\n raise ValueError(\n f\"variable {self.variable_name} should be a list of base messages,\"\n f\" got {value}\"\n )\n return value\n @property\n def input_variables(self) -> List[str]:\n \"\"\"Input variables for this prompt template.\"\"\"\n return [self.variable_name]\nMessagePromptTemplateT = TypeVar(\n \"MessagePromptTemplateT\", bound=\"BaseStringMessagePromptTemplate\"\n)\n[docs]class BaseStringMessagePromptTemplate(BaseMessagePromptTemplate, ABC):\n prompt: StringPromptTemplate\n additional_kwargs: dict = Field(default_factory=dict)\n[docs] @classmethod\n def from_template(\n cls: Type[MessagePromptTemplateT],\n template: str,\n template_format: str = \"f-string\",\n **kwargs: Any,\n ) -> MessagePromptTemplateT:\n prompt = PromptTemplate.from_template(template, template_format=template_format)\n return cls(prompt=prompt, **kwargs)\n[docs] @classmethod\n def from_template_file(\n cls: Type[MessagePromptTemplateT],\n template_file: Union[str, Path],\n input_variables: List[str],\n **kwargs: Any,\n ) -> MessagePromptTemplateT:\n prompt = PromptTemplate.from_file(template_file, input_variables)\n return cls(prompt=prompt, **kwargs)\n[docs] @abstractmethod\n def format(self, **kwargs: Any) -> BaseMessage:\n \"\"\"To a BaseMessage.\"\"\"\n[docs] def format_messages(self, **kwargs: Any) -> List[BaseMessage]:\n return [self.format(**kwargs)]\n @property\n def input_variables(self) -> List[str]:\n return self.prompt.input_variables", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/chat.html"} {"id": "adc7043bb81f-2", "text": "def input_variables(self) -> List[str]:\n return self.prompt.input_variables\n[docs]class ChatMessagePromptTemplate(BaseStringMessagePromptTemplate):\n role: str\n[docs] def format(self, **kwargs: Any) -> BaseMessage:\n text = self.prompt.format(**kwargs)\n return ChatMessage(\n content=text, role=self.role, additional_kwargs=self.additional_kwargs\n )\n[docs]class HumanMessagePromptTemplate(BaseStringMessagePromptTemplate):\n[docs] def format(self, **kwargs: Any) -> BaseMessage:\n text = self.prompt.format(**kwargs)\n return HumanMessage(content=text, additional_kwargs=self.additional_kwargs)\n[docs]class AIMessagePromptTemplate(BaseStringMessagePromptTemplate):\n[docs] def format(self, **kwargs: Any) -> BaseMessage:\n text = self.prompt.format(**kwargs)\n return AIMessage(content=text, additional_kwargs=self.additional_kwargs)\n[docs]class SystemMessagePromptTemplate(BaseStringMessagePromptTemplate):\n[docs] def format(self, **kwargs: Any) -> BaseMessage:\n text = self.prompt.format(**kwargs)\n return SystemMessage(content=text, additional_kwargs=self.additional_kwargs)\n[docs]class ChatPromptValue(PromptValue):\n messages: List[BaseMessage]\n[docs] def to_string(self) -> str:\n \"\"\"Return prompt as string.\"\"\"\n return get_buffer_string(self.messages)\n[docs] def to_messages(self) -> List[BaseMessage]:\n \"\"\"Return prompt as messages.\"\"\"\n return self.messages\n[docs]class BaseChatPromptTemplate(BasePromptTemplate, ABC):\n[docs] def format(self, **kwargs: Any) -> str:\n return self.format_prompt(**kwargs).to_string()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/chat.html"} {"id": "adc7043bb81f-3", "text": "return self.format_prompt(**kwargs).to_string()\n[docs] def format_prompt(self, **kwargs: Any) -> PromptValue:\n messages = self.format_messages(**kwargs)\n return ChatPromptValue(messages=messages)\n[docs] @abstractmethod\n def format_messages(self, **kwargs: Any) -> 
List[BaseMessage]:\n \"\"\"Format kwargs into a list of messages.\"\"\"\n[docs]class ChatPromptTemplate(BaseChatPromptTemplate, ABC):\n input_variables: List[str]\n messages: List[Union[BaseMessagePromptTemplate, BaseMessage]]\n[docs] @root_validator(pre=True)\n def validate_input_variables(cls, values: dict) -> dict:\n messages = values[\"messages\"]\n input_vars = set()\n for message in messages:\n if isinstance(message, BaseMessagePromptTemplate):\n input_vars.update(message.input_variables)\n if \"partial_variables\" in values:\n input_vars = input_vars - set(values[\"partial_variables\"])\n if \"input_variables\" in values:\n if input_vars != set(values[\"input_variables\"]):\n raise ValueError(\n \"Got mismatched input_variables. \"\n f\"Expected: {input_vars}. \"\n f\"Got: {values['input_variables']}\"\n )\n else:\n values[\"input_variables\"] = list(input_vars)\n return values\n[docs] @classmethod\n def from_template(cls, template: str, **kwargs: Any) -> ChatPromptTemplate:\n prompt_template = PromptTemplate.from_template(template, **kwargs)\n message = HumanMessagePromptTemplate(prompt=prompt_template)\n return cls.from_messages([message])\n[docs] @classmethod\n def from_role_strings(\n cls, string_messages: List[Tuple[str, str]]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/chat.html"} {"id": "adc7043bb81f-4", "text": "cls, string_messages: List[Tuple[str, str]]\n ) -> ChatPromptTemplate:\n messages = [\n ChatMessagePromptTemplate(\n prompt=PromptTemplate.from_template(template), role=role\n )\n for role, template in string_messages\n ]\n return cls.from_messages(messages)\n[docs] @classmethod\n def from_strings(\n cls, string_messages: List[Tuple[Type[BaseMessagePromptTemplate], str]]\n ) -> ChatPromptTemplate:\n messages = [\n role(prompt=PromptTemplate.from_template(template))\n for role, template in string_messages\n ]\n return cls.from_messages(messages)\n[docs] @classmethod\n def from_messages(\n cls, messages: Sequence[Union[BaseMessagePromptTemplate, BaseMessage]]\n ) -> ChatPromptTemplate:\n input_vars = set()\n for message in messages:\n if isinstance(message, BaseMessagePromptTemplate):\n input_vars.update(message.input_variables)\n return cls(input_variables=list(input_vars), messages=messages)\n[docs] def format(self, **kwargs: Any) -> str:\n return self.format_prompt(**kwargs).to_string()\n[docs] def format_messages(self, **kwargs: Any) -> List[BaseMessage]:\n kwargs = self._merge_partial_and_user_variables(**kwargs)\n result = []\n for message_template in self.messages:\n if isinstance(message_template, BaseMessage):\n result.extend([message_template])\n elif isinstance(message_template, BaseMessagePromptTemplate):\n rel_params = {\n k: v\n for k, v in kwargs.items()\n if k in message_template.input_variables\n }\n message = message_template.format_messages(**rel_params)\n result.extend(message)\n else:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/chat.html"} {"id": "adc7043bb81f-5", "text": "result.extend(message)\n else:\n raise ValueError(f\"Unexpected input: {message_template}\")\n return result\n[docs] def partial(self, **kwargs: Union[str, Callable[[], str]]) -> BasePromptTemplate:\n raise NotImplementedError\n @property\n def _prompt_type(self) -> str:\n return \"chat\"\n[docs] def save(self, file_path: Union[Path, str]) -> None:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/chat.html"} {"id": "b1db43077771-0", "text": 
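Putting the pieces together, `from_messages` collects the input variables from each message template and `format_messages` routes only the relevant kwargs to each one; a short sketch:

.. code-block:: python

    from langchain.prompts.chat import (
        ChatPromptTemplate,
        HumanMessagePromptTemplate,
        SystemMessagePromptTemplate,
    )

    chat_prompt = ChatPromptTemplate.from_messages([
        SystemMessagePromptTemplate.from_template("You translate {src} to {dst}."),
        HumanMessagePromptTemplate.from_template("{text}"),
    ])
    msgs = chat_prompt.format_messages(src="English", dst="French", text="Hello!")
    # [SystemMessage(content='You translate English to French.'),
    #  HumanMessage(content='Hello!')]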
"Source code for langchain.prompts.base\n\"\"\"BasePrompt schema definition.\"\"\"\nfrom __future__ import annotations\nimport warnings\nfrom abc import ABC\nfrom typing import Any, Callable, Dict, List, Set\nfrom langchain.formatting import formatter\nfrom langchain.schema import BasePromptTemplate\nfrom langchain.schema.messages import BaseMessage, HumanMessage\nfrom langchain.schema.prompt import PromptValue\n[docs]def jinja2_formatter(template: str, **kwargs: Any) -> str:\n \"\"\"Format a template using jinja2.\"\"\"\n try:\n from jinja2 import Template\n except ImportError:\n raise ImportError(\n \"jinja2 not installed, which is needed to use the jinja2_formatter. \"\n \"Please install it with `pip install jinja2`.\"\n )\n return Template(template).render(**kwargs)\n[docs]def validate_jinja2(template: str, input_variables: List[str]) -> None:\n \"\"\"\n Validate that the input variables are valid for the template.\n Issues an warning if missing or extra variables are found.\n Args:\n template: The template string.\n input_variables: The input variables.\n \"\"\"\n input_variables_set = set(input_variables)\n valid_variables = _get_jinja2_variables_from_template(template)\n missing_variables = valid_variables - input_variables_set\n extra_variables = input_variables_set - valid_variables\n warning_message = \"\"\n if missing_variables:\n warning_message += f\"Missing variables: {missing_variables} \"\n if extra_variables:\n warning_message += f\"Extra variables: {extra_variables}\"\n if warning_message:\n warnings.warn(warning_message.strip())\ndef _get_jinja2_variables_from_template(template: str) -> Set[str]:\n try:\n from jinja2 import Environment, meta", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/base.html"} {"id": "b1db43077771-1", "text": "try:\n from jinja2 import Environment, meta\n except ImportError:\n raise ImportError(\n \"jinja2 not installed, which is needed to use the jinja2_formatter. \"\n \"Please install it with `pip install jinja2`.\"\n )\n env = Environment()\n ast = env.parse(template)\n variables = meta.find_undeclared_variables(ast)\n return variables\nDEFAULT_FORMATTER_MAPPING: Dict[str, Callable] = {\n \"f-string\": formatter.format,\n \"jinja2\": jinja2_formatter,\n}\nDEFAULT_VALIDATOR_MAPPING: Dict[str, Callable] = {\n \"f-string\": formatter.validate_input_variables,\n \"jinja2\": validate_jinja2,\n}\n[docs]def check_valid_template(\n template: str, template_format: str, input_variables: List[str]\n) -> None:\n \"\"\"Check that template string is valid.\"\"\"\n if template_format not in DEFAULT_FORMATTER_MAPPING:\n valid_formats = list(DEFAULT_FORMATTER_MAPPING)\n raise ValueError(\n f\"Invalid template format. Got `{template_format}`;\"\n f\" should be one of {valid_formats}\"\n )\n try:\n validator_func = DEFAULT_VALIDATOR_MAPPING[template_format]\n validator_func(template, input_variables)\n except KeyError as e:\n raise ValueError(\n \"Invalid prompt schema; check for mismatched or missing input parameters. 
\"\n + str(e)\n )\n[docs]class StringPromptValue(PromptValue):\n text: str\n[docs] def to_string(self) -> str:\n \"\"\"Return prompt as string.\"\"\"\n return self.text\n[docs] def to_messages(self) -> List[BaseMessage]:\n \"\"\"Return prompt as messages.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/base.html"} {"id": "b1db43077771-2", "text": "\"\"\"Return prompt as messages.\"\"\"\n return [HumanMessage(content=self.text)]\n[docs]class StringPromptTemplate(BasePromptTemplate, ABC):\n \"\"\"String prompt should expose the format method, returning a prompt.\"\"\"\n[docs] def format_prompt(self, **kwargs: Any) -> PromptValue:\n \"\"\"Create Chat Messages.\"\"\"\n return StringPromptValue(text=self.format(**kwargs))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/base.html"} {"id": "4eee7dc63dfc-0", "text": "Source code for langchain.prompts.pipeline\nfrom typing import Any, Dict, List, Tuple\nfrom pydantic import root_validator\nfrom langchain.prompts.chat import BaseChatPromptTemplate\nfrom langchain.schema import BasePromptTemplate, PromptValue\ndef _get_inputs(inputs: dict, input_variables: List[str]) -> dict:\n return {k: inputs[k] for k in input_variables}\n[docs]class PipelinePromptTemplate(BasePromptTemplate):\n \"\"\"A prompt template for composing multiple prompts together.\n This can be useful when you want to reuse parts of prompts.\n A PipelinePrompt consists of two main parts:\n - final_prompt: This is the final prompt that is returned\n - pipeline_prompts: This is a list of tuples, consisting\n of a string (`name`) and a Prompt Template.\n Each PromptTemplate will be formatted and then passed\n to future prompt templates as a variable with\n the same name as `name`\n \"\"\"\n final_prompt: BasePromptTemplate\n pipeline_prompts: List[Tuple[str, BasePromptTemplate]]\n[docs] @root_validator(pre=True)\n def get_input_variables(cls, values: Dict) -> Dict:\n \"\"\"Get input variables.\"\"\"\n created_variables = set()\n all_variables = set()\n for k, prompt in values[\"pipeline_prompts\"]:\n created_variables.add(k)\n all_variables.update(prompt.input_variables)\n values[\"input_variables\"] = list(all_variables.difference(created_variables))\n return values\n[docs] def format_prompt(self, **kwargs: Any) -> PromptValue:\n for k, prompt in self.pipeline_prompts:\n _inputs = _get_inputs(kwargs, prompt.input_variables)\n if isinstance(prompt, BaseChatPromptTemplate):\n kwargs[k] = prompt.format_messages(**_inputs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/pipeline.html"} {"id": "4eee7dc63dfc-1", "text": "kwargs[k] = prompt.format_messages(**_inputs)\n else:\n kwargs[k] = prompt.format(**_inputs)\n _inputs = _get_inputs(kwargs, self.final_prompt.input_variables)\n return self.final_prompt.format_prompt(**_inputs)\n[docs] def format(self, **kwargs: Any) -> str:\n return self.format_prompt(**kwargs).to_string()\n @property\n def _prompt_type(self) -> str:\n raise ValueError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/pipeline.html"} {"id": "2bce3f039d70-0", "text": "Source code for langchain.prompts.few_shot_with_templates\n\"\"\"Prompt template that contains few shot examples.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.prompts.base import DEFAULT_FORMATTER_MAPPING, StringPromptTemplate\nfrom langchain.prompts.example_selector.base import BaseExampleSelector\nfrom 
langchain.prompts.prompt import PromptTemplate\n[docs]class FewShotPromptWithTemplates(StringPromptTemplate):\n \"\"\"Prompt template that contains few shot examples.\"\"\"\n examples: Optional[List[dict]] = None\n \"\"\"Examples to format into the prompt.\n Either this or example_selector should be provided.\"\"\"\n example_selector: Optional[BaseExampleSelector] = None\n \"\"\"ExampleSelector to choose the examples to format into the prompt.\n Either this or examples should be provided.\"\"\"\n example_prompt: PromptTemplate\n \"\"\"PromptTemplate used to format an individual example.\"\"\"\n suffix: StringPromptTemplate\n \"\"\"A PromptTemplate to put after the examples.\"\"\"\n input_variables: List[str]\n \"\"\"A list of the names of the variables the prompt template expects.\"\"\"\n example_separator: str = \"\\n\\n\"\n \"\"\"String separator used to join the prefix, the examples, and suffix.\"\"\"\n prefix: Optional[StringPromptTemplate] = None\n \"\"\"A PromptTemplate to put before the examples.\"\"\"\n template_format: str = \"f-string\"\n \"\"\"The format of the prompt template. Options are: 'f-string', 'jinja2'.\"\"\"\n validate_template: bool = True\n \"\"\"Whether or not to try validating the template.\"\"\"\n[docs] @root_validator(pre=True)\n def check_examples_and_selector(cls, values: Dict) -> Dict:\n \"\"\"Check that one and only one of examples/example_selector are provided.\"\"\"\n examples = values.get(\"examples\", None)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/few_shot_with_templates.html"} {"id": "2bce3f039d70-1", "text": "examples = values.get(\"examples\", None)\n example_selector = values.get(\"example_selector\", None)\n if examples and example_selector:\n raise ValueError(\n \"Only one of 'examples' and 'example_selector' should be provided\"\n )\n if examples is None and example_selector is None:\n raise ValueError(\n \"One of 'examples' and 'example_selector' should be provided\"\n )\n return values\n[docs] @root_validator()\n def template_is_valid(cls, values: Dict) -> Dict:\n \"\"\"Check that prefix, suffix and input variables are consistent.\"\"\"\n if values[\"validate_template\"]:\n input_variables = values[\"input_variables\"]\n expected_input_variables = set(values[\"suffix\"].input_variables)\n expected_input_variables |= set(values[\"partial_variables\"])\n if values[\"prefix\"] is not None:\n expected_input_variables |= set(values[\"prefix\"].input_variables)\n missing_vars = expected_input_variables.difference(input_variables)\n if missing_vars:\n raise ValueError(\n f\"Got input_variables={input_variables}, but based on \"\n f\"prefix/suffix expected {expected_input_variables}\"\n )\n return values\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n def _get_examples(self, **kwargs: Any) -> List[dict]:\n if self.examples is not None:\n return self.examples\n elif self.example_selector is not None:\n return self.example_selector.select_examples(kwargs)\n else:\n raise ValueError\n[docs] def format(self, **kwargs: Any) -> str:\n \"\"\"Format the prompt with the inputs.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/few_shot_with_templates.html"} {"id": "2bce3f039d70-2", "text": "\"\"\"Format the prompt with the inputs.\n Args:\n kwargs: Any arguments to be passed to the prompt template.\n Returns:\n A formatted string.\n Example:\n .. 
code-block:: python\n prompt.format(variable1=\"foo\")\n \"\"\"\n kwargs = self._merge_partial_and_user_variables(**kwargs)\n # Get the examples to use.\n examples = self._get_examples(**kwargs)\n # Format the examples.\n example_strings = [\n self.example_prompt.format(**example) for example in examples\n ]\n # Create the overall prefix.\n if self.prefix is None:\n prefix = \"\"\n else:\n prefix_kwargs = {\n k: v for k, v in kwargs.items() if k in self.prefix.input_variables\n }\n for k in prefix_kwargs.keys():\n kwargs.pop(k)\n prefix = self.prefix.format(**prefix_kwargs)\n # Create the overall suffix\n suffix_kwargs = {\n k: v for k, v in kwargs.items() if k in self.suffix.input_variables\n }\n for k in suffix_kwargs.keys():\n kwargs.pop(k)\n suffix = self.suffix.format(\n **suffix_kwargs,\n )\n pieces = [prefix, *example_strings, suffix]\n template = self.example_separator.join([piece for piece in pieces if piece])\n # Format the template with the input variables.\n return DEFAULT_FORMATTER_MAPPING[self.template_format](template, **kwargs)\n @property\n def _prompt_type(self) -> str:\n \"\"\"Return the prompt type key.\"\"\"\n return \"few_shot_with_templates\"\n[docs] def dict(self, **kwargs: Any) -> Dict:\n \"\"\"Return a dictionary of the prompt.\"\"\"\n if self.example_selector:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/few_shot_with_templates.html"} {"id": "2bce3f039d70-3", "text": "\"\"\"Return a dictionary of the prompt.\"\"\"\n if self.example_selector:\n raise ValueError(\"Saving an example selector is not currently supported\")\n return super().dict(**kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/few_shot_with_templates.html"} {"id": "6112f701c90b-0", "text": "Source code for langchain.prompts.loading\n\"\"\"Load prompts from disk.\"\"\"\nimport importlib\nimport json\nimport logging\nfrom pathlib import Path\nfrom typing import Union\nimport yaml\nfrom langchain.output_parsers.regex import RegexParser\nfrom langchain.prompts.few_shot import FewShotPromptTemplate\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.schema import BaseLLMOutputParser, BasePromptTemplate, NoOpOutputParser\nfrom langchain.utilities.loading import try_load_from_hub\nURL_BASE = \"https://raw.githubusercontent.com/hwchase17/langchain-hub/master/prompts/\"\nlogger = logging.getLogger(__name__)\n[docs]def load_prompt_from_config(config: dict) -> BasePromptTemplate:\n \"\"\"Load prompt from Config Dict.\"\"\"\n if \"_type\" not in config:\n logger.warning(\"No `_type` key found, defaulting to `prompt`.\")\n config_type = config.pop(\"_type\", \"prompt\")\n if config_type not in type_to_loader_dict:\n raise ValueError(f\"Loading {config_type} prompt not supported\")\n prompt_loader = type_to_loader_dict[config_type]\n return prompt_loader(config)\ndef _load_template(var_name: str, config: dict) -> dict:\n \"\"\"Load template from disk if applicable.\"\"\"\n # Check if template_path exists in config.\n if f\"{var_name}_path\" in config:\n # If it does, make sure template variable doesn't also exist.\n if var_name in config:\n raise ValueError(\n f\"Both `{var_name}_path` and `{var_name}` cannot be provided.\"\n )\n # Pop the template path from the config.\n template_path = Path(config.pop(f\"{var_name}_path\"))\n # Load the template.\n if template_path.suffix == \".txt\":", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/loading.html"} {"id": "6112f701c90b-1", "text": "# 
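A minimal construction that satisfies the validators above, with inline examples, a templated suffix, and no prefix (all values illustrative):

.. code-block:: python

    from langchain.prompts.few_shot_with_templates import FewShotPromptWithTemplates
    from langchain.prompts.prompt import PromptTemplate

    few_shot = FewShotPromptWithTemplates(
        examples=[{"q": "2 + 2?", "a": "4"}],
        example_prompt=PromptTemplate.from_template("Q: {q}\nA: {a}"),
        suffix=PromptTemplate.from_template("Q: {question}\nA:"),
        input_variables=["question"],
    )
    print(few_shot.format(question="3 + 3?"))
    # Q: 2 + 2?
    # A: 4
    #
    # Q: 3 + 3?
    # A: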
Load the template.\n if template_path.suffix == \".txt\":\n with open(template_path) as f:\n template = f.read()\n else:\n raise ValueError\n # Set the template variable to the extracted variable.\n config[var_name] = template\n return config\ndef _load_examples(config: dict) -> dict:\n \"\"\"Load examples if necessary.\"\"\"\n if isinstance(config[\"examples\"], list):\n pass\n elif isinstance(config[\"examples\"], str):\n with open(config[\"examples\"]) as f:\n if config[\"examples\"].endswith(\".json\"):\n examples = json.load(f)\n elif config[\"examples\"].endswith((\".yaml\", \".yml\")):\n examples = yaml.safe_load(f)\n else:\n raise ValueError(\n \"Invalid file format. Only json or yaml formats are supported.\"\n )\n config[\"examples\"] = examples\n else:\n raise ValueError(\"Invalid examples format. Only list or string are supported.\")\n return config\ndef _load_output_parser(config: dict) -> dict:\n \"\"\"Load output parser.\"\"\"\n if \"output_parser\" in config and config[\"output_parser\"]:\n _config = config.pop(\"output_parser\")\n output_parser_type = _config.pop(\"_type\")\n if output_parser_type == \"regex_parser\":\n output_parser: BaseLLMOutputParser = RegexParser(**_config)\n elif output_parser_type == \"default\":\n output_parser = NoOpOutputParser(**_config)\n else:\n raise ValueError(f\"Unsupported output parser {output_parser_type}\")\n config[\"output_parser\"] = output_parser\n return config\ndef _load_few_shot_prompt(config: dict) -> FewShotPromptTemplate:\n \"\"\"Load the few shot prompt from the config.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/loading.html"} {"id": "6112f701c90b-2", "text": "\"\"\"Load the few shot prompt from the config.\"\"\"\n # Load the suffix and prefix templates.\n config = _load_template(\"suffix\", config)\n config = _load_template(\"prefix\", config)\n # Load the example prompt.\n if \"example_prompt_path\" in config:\n if \"example_prompt\" in config:\n raise ValueError(\n \"Only one of example_prompt and example_prompt_path should \"\n \"be specified.\"\n )\n config[\"example_prompt\"] = load_prompt(config.pop(\"example_prompt_path\"))\n else:\n config[\"example_prompt\"] = load_prompt_from_config(config[\"example_prompt\"])\n # Load the examples.\n config = _load_examples(config)\n config = _load_output_parser(config)\n return FewShotPromptTemplate(**config)\ndef _load_prompt(config: dict) -> PromptTemplate:\n \"\"\"Load the prompt template from config.\"\"\"\n # Load the template from disk if necessary.\n config = _load_template(\"template\", config)\n config = _load_output_parser(config)\n return PromptTemplate(**config)\n[docs]def load_prompt(path: Union[str, Path]) -> BasePromptTemplate:\n \"\"\"Unified method for loading a prompt from LangChainHub or local fs.\"\"\"\n if hub_result := try_load_from_hub(\n path, _load_prompt_from_file, \"prompts\", {\"py\", \"json\", \"yaml\"}\n ):\n return hub_result\n else:\n return _load_prompt_from_file(path)\ndef _load_prompt_from_file(file: Union[str, Path]) -> BasePromptTemplate:\n \"\"\"Load prompt from file.\"\"\"\n # Convert file to Path object.\n if isinstance(file, str):\n file_path = Path(file)\n else:\n file_path = file", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/loading.html"} {"id": "6112f701c90b-3", "text": "file_path = Path(file)\n else:\n file_path = file\n # Load from either json or yaml.\n if file_path.suffix == \".json\":\n with open(file_path) as f:\n config = json.load(f)\n elif 
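Together, ``_load_template``, ``_load_examples`` and ``_load_output_parser`` let a few-shot prompt be described entirely as data. A sketch of a config dict that ``load_prompt_from_config`` should accept (all values are illustrative):

.. code-block:: python

    from langchain.prompts.loading import load_prompt_from_config

    config = {
        "_type": "few_shot",
        "input_variables": ["adjective"],
        "prefix": "Write antonyms for the following words.",
        "example_prompt": {
            "_type": "prompt",
            "input_variables": ["input", "output"],
            "template": "Input: {input}\nOutput: {output}",
        },
        # Could also be a path to a .json or .yaml file of examples.
        "examples": [
            {"input": "happy", "output": "sad"},
            {"input": "tall", "output": "short"},
        ],
        "suffix": "Input: {adjective}\nOutput:",
    }
    prompt = load_prompt_from_config(config)
    print(prompt.format(adjective="fast"))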
file_path.suffix == \".yaml\":\n with open(file_path, \"r\") as f:\n config = yaml.safe_load(f)\n elif file_path.suffix == \".py\":\n spec = importlib.util.spec_from_loader(\n \"prompt\", loader=None, origin=str(file_path)\n )\n if spec is None:\n raise ValueError(\"could not load spec\")\n helper = importlib.util.module_from_spec(spec)\n with open(file_path, \"rb\") as f:\n exec(f.read(), helper.__dict__)\n if not isinstance(helper.PROMPT, BasePromptTemplate):\n raise ValueError(\"Did not get object of type BasePromptTemplate.\")\n return helper.PROMPT\n else:\n raise ValueError(f\"Got unsupported file type {file_path.suffix}\")\n # Load the prompt from the config now.\n return load_prompt_from_config(config)\ntype_to_loader_dict = {\n \"prompt\": _load_prompt,\n \"few_shot\": _load_few_shot_prompt,\n # \"few_shot_with_templates\": _load_few_shot_with_templates_prompt,\n}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/loading.html"} {"id": "4772af074122-0", "text": "Source code for langchain.prompts.few_shot\n\"\"\"Prompt template that contains few shot examples.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.prompts.base import (\n DEFAULT_FORMATTER_MAPPING,\n StringPromptTemplate,\n check_valid_template,\n)\nfrom langchain.prompts.example_selector.base import BaseExampleSelector\nfrom langchain.prompts.prompt import PromptTemplate\n[docs]class FewShotPromptTemplate(StringPromptTemplate):\n \"\"\"Prompt template that contains few shot examples.\"\"\"\n @property\n def lc_serializable(self) -> bool:\n return False\n examples: Optional[List[dict]] = None\n \"\"\"Examples to format into the prompt.\n Either this or example_selector should be provided.\"\"\"\n example_selector: Optional[BaseExampleSelector] = None\n \"\"\"ExampleSelector to choose the examples to format into the prompt.\n Either this or examples should be provided.\"\"\"\n example_prompt: PromptTemplate\n \"\"\"PromptTemplate used to format an individual example.\"\"\"\n suffix: str\n \"\"\"A prompt template string to put after the examples.\"\"\"\n input_variables: List[str]\n \"\"\"A list of the names of the variables the prompt template expects.\"\"\"\n example_separator: str = \"\\n\\n\"\n \"\"\"String separator used to join the prefix, the examples, and suffix.\"\"\"\n prefix: str = \"\"\n \"\"\"A prompt template string to put before the examples.\"\"\"\n template_format: str = \"f-string\"\n \"\"\"The format of the prompt template. 
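``load_prompt`` is the counterpart of ``BasePromptTemplate.save``. A quick round-trip sketch (assumes write access to the working directory; the file name is illustrative):

.. code-block:: python

    from langchain.prompts import PromptTemplate
    from langchain.prompts.loading import load_prompt

    prompt = PromptTemplate.from_template("Tell me a {adjective} joke about {content}.")
    prompt.save("joke_prompt.yaml")  # serialized together with its `_type` key
    reloaded = load_prompt("joke_prompt.yaml")
    print(reloaded.format(adjective="funny", content="ducks"))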
Options are: 'f-string', 'jinja2'.\"\"\"\n validate_template: bool = True\n \"\"\"Whether or not to try validating the template.\"\"\"\n[docs] @root_validator(pre=True)\n def check_examples_and_selector(cls, values: Dict) -> Dict:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/few_shot.html"} {"id": "4772af074122-1", "text": "def check_examples_and_selector(cls, values: Dict) -> Dict:\n \"\"\"Check that one and only one of examples/example_selector are provided.\"\"\"\n examples = values.get(\"examples\", None)\n example_selector = values.get(\"example_selector\", None)\n if examples and example_selector:\n raise ValueError(\n \"Only one of 'examples' and 'example_selector' should be provided\"\n )\n if examples is None and example_selector is None:\n raise ValueError(\n \"One of 'examples' and 'example_selector' should be provided\"\n )\n return values\n[docs] @root_validator()\n def template_is_valid(cls, values: Dict) -> Dict:\n \"\"\"Check that prefix, suffix and input variables are consistent.\"\"\"\n if values[\"validate_template\"]:\n check_valid_template(\n values[\"prefix\"] + values[\"suffix\"],\n values[\"template_format\"],\n values[\"input_variables\"] + list(values[\"partial_variables\"]),\n )\n return values\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n def _get_examples(self, **kwargs: Any) -> List[dict]:\n if self.examples is not None:\n return self.examples\n elif self.example_selector is not None:\n return self.example_selector.select_examples(kwargs)\n else:\n raise ValueError\n[docs] def format(self, **kwargs: Any) -> str:\n \"\"\"Format the prompt with the inputs.\n Args:\n kwargs: Any arguments to be passed to the prompt template.\n Returns:\n A formatted string.\n Example:\n .. code-block:: python\n prompt.format(variable1=\"foo\")\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/few_shot.html"} {"id": "4772af074122-2", "text": ".. 
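The two validators above make construction fail fast on inconsistent inputs: exactly one of ``examples``/``example_selector`` must be given, and (when ``validate_template`` is true) every placeholder in ``prefix`` + ``suffix`` must be covered by ``input_variables`` or the partials. A small illustration of the first failure mode:

.. code-block:: python

    from langchain.prompts import FewShotPromptTemplate, PromptTemplate

    example_prompt = PromptTemplate.from_template("Input: {input}\nOutput: {output}")
    try:
        FewShotPromptTemplate(
            # Neither `examples` nor `example_selector` is provided.
            example_prompt=example_prompt,
            suffix="Input: {adjective}\nOutput:",
            input_variables=["adjective"],
        )
    except ValueError as err:  # pydantic surfaces the validator's ValueError
        print(err)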
code-block:: python\n prompt.format(variable1=\"foo\")\n \"\"\"\n kwargs = self._merge_partial_and_user_variables(**kwargs)\n # Get the examples to use.\n examples = self._get_examples(**kwargs)\n examples = [\n {k: e[k] for k in self.example_prompt.input_variables} for e in examples\n ]\n # Format the examples.\n example_strings = [\n self.example_prompt.format(**example) for example in examples\n ]\n # Create the overall template.\n pieces = [self.prefix, *example_strings, self.suffix]\n template = self.example_separator.join([piece for piece in pieces if piece])\n # Format the template with the input variables.\n return DEFAULT_FORMATTER_MAPPING[self.template_format](template, **kwargs)\n @property\n def _prompt_type(self) -> str:\n \"\"\"Return the prompt type key.\"\"\"\n return \"few_shot\"\n[docs] def dict(self, **kwargs: Any) -> Dict:\n \"\"\"Return a dictionary of the prompt.\"\"\"\n if self.example_selector:\n raise ValueError(\"Saving an example selector is not currently supported\")\n return super().dict(**kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/few_shot.html"} {"id": "b49693c4aa14-0", "text": "Source code for langchain.prompts.example_selector.base\n\"\"\"Interface for selecting examples to include in prompts.\"\"\"\nfrom abc import ABC, abstractmethod\nfrom typing import Any, Dict, List\n[docs]class BaseExampleSelector(ABC):\n \"\"\"Interface for selecting examples to include in prompts.\"\"\"\n[docs] @abstractmethod\n def add_example(self, example: Dict[str, str]) -> Any:\n \"\"\"Add new example to store for a key.\"\"\"\n[docs] @abstractmethod\n def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:\n \"\"\"Select which examples to use based on the inputs.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/base.html"} {"id": "f5e1aaf8a3be-0", "text": "Source code for langchain.prompts.example_selector.semantic_similarity\n\"\"\"Example selector that selects examples based on SemanticSimilarity.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional, Type\nfrom pydantic import BaseModel, Extra\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.prompts.example_selector.base import BaseExampleSelector\nfrom langchain.vectorstores.base import VectorStore\n[docs]def sorted_values(values: Dict[str, str]) -> List[Any]:\n \"\"\"Return a list of values in dict sorted by key.\"\"\"\n return [values[val] for val in sorted(values)]\n[docs]class SemanticSimilarityExampleSelector(BaseExampleSelector, BaseModel):\n \"\"\"Example selector that selects examples based on SemanticSimilarity.\"\"\"\n vectorstore: VectorStore\n \"\"\"VectorStore that contains information about examples.\"\"\"\n k: int = 4\n \"\"\"Number of examples to select.\"\"\"\n example_keys: Optional[List[str]] = None\n \"\"\"Optional keys to filter examples to.\"\"\"\n input_keys: Optional[List[str]] = None\n \"\"\"Optional keys to filter input to. 
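Any object implementing the two abstract methods of ``BaseExampleSelector`` can be plugged into a few-shot prompt. A toy selector, just to show the interface (the class name is made up for illustration):

.. code-block:: python

    from typing import Any, Dict, List

    from langchain.prompts.example_selector.base import BaseExampleSelector

    class FirstNExampleSelector(BaseExampleSelector):
        """Toy selector: keep insertion order and return the first n examples."""

        def __init__(self, examples: List[dict], n: int = 2) -> None:
            self.examples = examples
            self.n = n

        def add_example(self, example: Dict[str, str]) -> Any:
            self.examples.append(example)

        def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
            # A real selector would inspect `input_variables`; this one ignores them.
            return self.examples[: self.n]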
If provided, the search is based on\n the input variables instead of all variables.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n[docs] def add_example(self, example: Dict[str, str]) -> str:\n \"\"\"Add new example to vectorstore.\"\"\"\n if self.input_keys:\n string_example = \" \".join(\n sorted_values({key: example[key] for key in self.input_keys})\n )\n else:\n string_example = \" \".join(sorted_values(example))\n ids = self.vectorstore.add_texts([string_example], metadatas=[example])\n return ids[0]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/semantic_similarity.html"} {"id": "f5e1aaf8a3be-1", "text": "return ids[0]\n[docs] def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:\n \"\"\"Select which examples to use based on semantic similarity.\"\"\"\n # Get the docs with the highest similarity.\n if self.input_keys:\n input_variables = {key: input_variables[key] for key in self.input_keys}\n query = \" \".join(sorted_values(input_variables))\n example_docs = self.vectorstore.similarity_search(query, k=self.k)\n # Get the examples from the metadata.\n # This assumes that examples are stored in metadata.\n examples = [dict(e.metadata) for e in example_docs]\n # If example keys are provided, filter examples to those keys.\n if self.example_keys:\n examples = [{k: eg[k] for k in self.example_keys} for eg in examples]\n return examples\n[docs] @classmethod\n def from_examples(\n cls,\n examples: List[dict],\n embeddings: Embeddings,\n vectorstore_cls: Type[VectorStore],\n k: int = 4,\n input_keys: Optional[List[str]] = None,\n **vectorstore_cls_kwargs: Any,\n ) -> SemanticSimilarityExampleSelector:\n \"\"\"Create k-shot example selector using example list and embeddings.\n Reshuffles examples dynamically based on query similarity.\n Args:\n examples: List of examples to use in the prompt.\n embeddings: An initialized embedding API interface, e.g. OpenAIEmbeddings().\n vectorstore_cls: A vector store DB interface class, e.g. 
FAISS.\n k: Number of examples to select\n input_keys: If provided, the search is based on the input variables\n instead of all variables.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/semantic_similarity.html"} {"id": "f5e1aaf8a3be-2", "text": "instead of all variables.\n vectorstore_cls_kwargs: optional kwargs containing url for vector store\n Returns:\n The ExampleSelector instantiated, backed by a vector store.\n \"\"\"\n if input_keys:\n string_examples = [\n \" \".join(sorted_values({k: eg[k] for k in input_keys}))\n for eg in examples\n ]\n else:\n string_examples = [\" \".join(sorted_values(eg)) for eg in examples]\n vectorstore = vectorstore_cls.from_texts(\n string_examples, embeddings, metadatas=examples, **vectorstore_cls_kwargs\n )\n return cls(vectorstore=vectorstore, k=k, input_keys=input_keys)\n[docs]class MaxMarginalRelevanceExampleSelector(SemanticSimilarityExampleSelector):\n \"\"\"ExampleSelector that selects examples based on Max Marginal Relevance.\n This was shown to improve performance in this paper:\n https://arxiv.org/pdf/2211.13892.pdf\n \"\"\"\n fetch_k: int = 20\n \"\"\"Number of examples to fetch to rerank.\"\"\"\n[docs] def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:\n \"\"\"Select which examples to use based on max marginal relevance.\"\"\"\n # Get the docs with the highest similarity.\n if self.input_keys:\n input_variables = {key: input_variables[key] for key in self.input_keys}\n query = \" \".join(sorted_values(input_variables))\n example_docs = self.vectorstore.max_marginal_relevance_search(\n query, k=self.k, fetch_k=self.fetch_k\n )\n # Get the examples from the metadata.\n # This assumes that examples are stored in metadata.\n examples = [dict(e.metadata) for e in example_docs]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/semantic_similarity.html"} {"id": "f5e1aaf8a3be-3", "text": "examples = [dict(e.metadata) for e in example_docs]\n # If example keys are provided, filter examples to those keys.\n if self.example_keys:\n examples = [{k: eg[k] for k in self.example_keys} for eg in examples]\n return examples\n[docs] @classmethod\n def from_examples(\n cls,\n examples: List[dict],\n embeddings: Embeddings,\n vectorstore_cls: Type[VectorStore],\n k: int = 4,\n input_keys: Optional[List[str]] = None,\n fetch_k: int = 20,\n **vectorstore_cls_kwargs: Any,\n ) -> MaxMarginalRelevanceExampleSelector:\n \"\"\"Create k-shot example selector using example list and embeddings.\n Reshuffles examples dynamically based on query similarity.\n Args:\n examples: List of examples to use in the prompt.\n embeddings: An initialized embedding API interface, e.g. OpenAIEmbeddings().\n vectorstore_cls: A vector store DB interface class, e.g. 
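In practice ``SemanticSimilarityExampleSelector`` is usually built with ``from_examples``. A sketch using FAISS and OpenAI embeddings (assumes the ``faiss-cpu`` package is installed and ``OPENAI_API_KEY`` is set; the example dicts are illustrative):

.. code-block:: python

    from langchain.embeddings import OpenAIEmbeddings
    from langchain.prompts.example_selector.semantic_similarity import (
        SemanticSimilarityExampleSelector,
    )
    from langchain.vectorstores import FAISS

    examples = [
        {"input": "happy", "output": "sad"},
        {"input": "tall", "output": "short"},
        {"input": "sunny", "output": "gloomy"},
    ]
    selector = SemanticSimilarityExampleSelector.from_examples(
        examples, OpenAIEmbeddings(), FAISS, k=2
    )
    # Returns the two examples whose text embeds closest to the input.
    print(selector.select_examples({"input": "joyful"}))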
FAISS.\n k: Number of examples to select\n input_keys: If provided, the search is based on the input variables\n instead of all variables.\n vectorstore_cls_kwargs: optional kwargs containing url for vector store\n Returns:\n The ExampleSelector instantiated, backed by a vector store.\n \"\"\"\n if input_keys:\n string_examples = [\n \" \".join(sorted_values({k: eg[k] for k in input_keys}))\n for eg in examples\n ]\n else:\n string_examples = [\" \".join(sorted_values(eg)) for eg in examples]\n vectorstore = vectorstore_cls.from_texts(\n string_examples, embeddings, metadatas=examples, **vectorstore_cls_kwargs\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/semantic_similarity.html"} {"id": "f5e1aaf8a3be-4", "text": ")\n return cls(vectorstore=vectorstore, k=k, fetch_k=fetch_k, input_keys=input_keys)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/semantic_similarity.html"} {"id": "76fb8118ffe0-0", "text": "Source code for langchain.prompts.example_selector.ngram_overlap\n\"\"\"Select and order examples based on ngram overlap score (sentence_bleu score).\nhttps://www.nltk.org/_modules/nltk/translate/bleu_score.html\nhttps://aclanthology.org/P02-1040.pdf\n\"\"\"\nfrom typing import Dict, List\nimport numpy as np\nfrom pydantic import BaseModel, root_validator\nfrom langchain.prompts.example_selector.base import BaseExampleSelector\nfrom langchain.prompts.prompt import PromptTemplate\n[docs]def ngram_overlap_score(source: List[str], example: List[str]) -> float:\n \"\"\"Compute ngram overlap score of source and example as sentence_bleu score.\n Use sentence_bleu with method1 smoothing function and auto reweighting.\n Return float value between 0.0 and 1.0 inclusive.\n https://www.nltk.org/_modules/nltk/translate/bleu_score.html\n https://aclanthology.org/P02-1040.pdf\n \"\"\"\n from nltk.translate.bleu_score import (\n SmoothingFunction, # type: ignore\n sentence_bleu,\n )\n hypotheses = source[0].split()\n references = [s.split() for s in example]\n return float(\n sentence_bleu(\n references,\n hypotheses,\n smoothing_function=SmoothingFunction().method1,\n auto_reweigh=True,\n )\n )\n[docs]class NGramOverlapExampleSelector(BaseExampleSelector, BaseModel):\n \"\"\"Select and order examples based on ngram overlap score (sentence_bleu score).\n https://www.nltk.org/_modules/nltk/translate/bleu_score.html\n https://aclanthology.org/P02-1040.pdf\n \"\"\"\n examples: List[dict]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/ngram_overlap.html"} {"id": "76fb8118ffe0-1", "text": "\"\"\"\n examples: List[dict]\n \"\"\"A list of the examples that the prompt template expects.\"\"\"\n example_prompt: PromptTemplate\n \"\"\"Prompt template used to format the examples.\"\"\"\n threshold: float = -1.0\n \"\"\"Threshold at which algorithm stops. 
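``MaxMarginalRelevanceExampleSelector`` is constructed the same way; ``fetch_k`` controls how many candidates are fetched before the MMR re-ranking trades similarity off against diversity. A sketch under the same assumptions as above (``faiss-cpu``, ``OPENAI_API_KEY``):

.. code-block:: python

    from langchain.embeddings import OpenAIEmbeddings
    from langchain.prompts.example_selector.semantic_similarity import (
        MaxMarginalRelevanceExampleSelector,
    )
    from langchain.vectorstores import FAISS

    examples = [
        {"input": "happy", "output": "sad"},
        {"input": "tall", "output": "short"},
        {"input": "cheerful", "output": "gloomy"},
    ]
    selector = MaxMarginalRelevanceExampleSelector.from_examples(
        examples,
        OpenAIEmbeddings(),
        FAISS,
        k=2,         # examples actually returned
        fetch_k=10,  # candidates fetched before MMR re-ranking
    )
    print(selector.select_examples({"input": "joyful"}))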
Set to -1.0 by default.\n For negative threshold:\n select_examples sorts examples by ngram_overlap_score, but excludes none.\n For threshold greater than 1.0:\n select_examples excludes all examples, and returns an empty list.\n For threshold equal to 0.0:\n select_examples sorts examples by ngram_overlap_score,\n and excludes examples with no ngram overlap with input.\n \"\"\"\n[docs] @root_validator(pre=True)\n def check_dependencies(cls, values: Dict) -> Dict:\n \"\"\"Check that valid dependencies exist.\"\"\"\n try:\n from nltk.translate.bleu_score import ( # noqa: disable=F401\n SmoothingFunction,\n sentence_bleu,\n )\n except ImportError as e:\n raise ValueError(\n \"Not all the correct dependencies for this ExampleSelector exist\"\n ) from e\n return values\n[docs] def add_example(self, example: Dict[str, str]) -> None:\n \"\"\"Add new example to list.\"\"\"\n self.examples.append(example)\n[docs] def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:\n \"\"\"Return list of examples sorted by ngram_overlap_score with input.\n Descending order.\n Excludes any examples with ngram_overlap_score less than or equal to threshold.\n \"\"\"\n inputs = list(input_variables.values())\n examples = []\n k = len(self.examples)\n score = [0.0] * k", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/ngram_overlap.html"} {"id": "76fb8118ffe0-2", "text": "k = len(self.examples)\n score = [0.0] * k\n first_prompt_template_key = self.example_prompt.input_variables[0]\n for i in range(k):\n score[i] = ngram_overlap_score(\n inputs, [self.examples[i][first_prompt_template_key]]\n )\n while True:\n arg_max = np.argmax(score)\n if (score[arg_max] < self.threshold) or abs(\n score[arg_max] - self.threshold\n ) < 1e-9:\n break\n examples.append(self.examples[arg_max])\n score[arg_max] = self.threshold - 1.0\n return examples", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/ngram_overlap.html"} {"id": "542ebd9a56c1-0", "text": "Source code for langchain.prompts.example_selector.length_based\n\"\"\"Select examples based on length.\"\"\"\nimport re\nfrom typing import Callable, Dict, List\nfrom pydantic import BaseModel, validator\nfrom langchain.prompts.example_selector.base import BaseExampleSelector\nfrom langchain.prompts.prompt import PromptTemplate\ndef _get_length_based(text: str) -> int:\n return len(re.split(\"\\n| \", text))\n[docs]class LengthBasedExampleSelector(BaseExampleSelector, BaseModel):\n \"\"\"Select examples based on length.\"\"\"\n examples: List[dict]\n \"\"\"A list of the examples that the prompt template expects.\"\"\"\n example_prompt: PromptTemplate\n \"\"\"Prompt template used to format the examples.\"\"\"\n get_text_length: Callable[[str], int] = _get_length_based\n \"\"\"Function to measure prompt length. 
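A sketch of the n-gram selector (requires ``nltk`` and ``numpy``; the sentences are illustrative):

.. code-block:: python

    from langchain.prompts import PromptTemplate
    from langchain.prompts.example_selector.ngram_overlap import (
        NGramOverlapExampleSelector,
    )

    example_prompt = PromptTemplate.from_template("Input: {input}\nOutput: {output}")
    selector = NGramOverlapExampleSelector(
        examples=[
            {"input": "Spot can run.", "output": "Spot kann rennen."},
            {"input": "My dog barks.", "output": "Mein Hund bellt."},
        ],
        example_prompt=example_prompt,
        # Default threshold of -1.0: nothing is excluded; examples come back
        # sorted by n-gram overlap with the input, highest first.
    )
    print(selector.select_examples({"input": "Spot can run fast."}))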
Defaults to word count.\"\"\"\n max_length: int = 2048\n \"\"\"Max length for the prompt, beyond which examples are cut.\"\"\"\n example_text_lengths: List[int] = [] #: :meta private:\n[docs] def add_example(self, example: Dict[str, str]) -> None:\n \"\"\"Add new example to list.\"\"\"\n self.examples.append(example)\n string_example = self.example_prompt.format(**example)\n self.example_text_lengths.append(self.get_text_length(string_example))\n[docs] @validator(\"example_text_lengths\", always=True)\n def calculate_example_text_lengths(cls, v: List[int], values: Dict) -> List[int]:\n \"\"\"Calculate text lengths if they don't exist.\"\"\"\n # Check if text lengths were passed in\n if v:\n return v\n # If they were not, calculate them\n example_prompt = values[\"example_prompt\"]\n get_text_length = values[\"get_text_length\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/length_based.html"} {"id": "542ebd9a56c1-1", "text": "get_text_length = values[\"get_text_length\"]\n string_examples = [example_prompt.format(**eg) for eg in values[\"examples\"]]\n return [get_text_length(eg) for eg in string_examples]\n[docs] def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:\n \"\"\"Select which examples to use based on the input lengths.\"\"\"\n inputs = \" \".join(input_variables.values())\n remaining_length = self.max_length - self.get_text_length(inputs)\n i = 0\n examples = []\n while remaining_length > 0 and i < len(self.examples):\n new_length = remaining_length - self.example_text_lengths[i]\n if new_length < 0:\n break\n else:\n examples.append(self.examples[i])\n remaining_length = new_length\n i += 1\n return examples", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/length_based.html"} {"id": "82963e3459b7-0", "text": "Source code for langchain.utilities.duckduckgo_search\n\"\"\"Util that calls DuckDuckGo Search.\nNo setup required. Free.\nhttps://pypi.org/project/duckduckgo-search/\n\"\"\"\nfrom typing import Dict, List, Optional\nfrom pydantic import BaseModel, Extra\nfrom pydantic.class_validators import root_validator\n[docs]class DuckDuckGoSearchAPIWrapper(BaseModel):\n \"\"\"Wrapper for DuckDuckGo Search API.\n Free and does not require any setup\n \"\"\"\n k: int = 10\n region: Optional[str] = \"wt-wt\"\n safesearch: str = \"moderate\"\n time: Optional[str] = \"y\"\n max_results: int = 5\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that python package exists in environment.\"\"\"\n try:\n from duckduckgo_search import DDGS # noqa: F401\n except ImportError:\n raise ValueError(\n \"Could not import duckduckgo-search python package. 
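A sketch of the length-based selector (illustrative values; note the default length function counts words, not tokens):

.. code-block:: python

    from langchain.prompts import PromptTemplate
    from langchain.prompts.example_selector.length_based import (
        LengthBasedExampleSelector,
    )

    example_prompt = PromptTemplate.from_template("Input: {input}\nOutput: {output}")
    selector = LengthBasedExampleSelector(
        examples=[
            {"input": "happy", "output": "sad"},
            {"input": "tall", "output": "short"},
            {"input": "energetic", "output": "lethargic"},
        ],
        example_prompt=example_prompt,
        max_length=10,  # shared word budget for the input plus the examples
    )
    # A longer input leaves less room, so fewer examples are selected.
    print(len(selector.select_examples({"adjective": "big"})))
    print(len(selector.select_examples({"adjective": "big and quite verbose input"})))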
\"\n \"Please install it with `pip install duckduckgo-search`.\"\n )\n return values\n[docs] def get_snippets(self, query: str) -> List[str]:\n \"\"\"Run query through DuckDuckGo and return concatenated results.\"\"\"\n from duckduckgo_search import DDGS\n with DDGS() as ddgs:\n results = ddgs.text(\n query,\n region=self.region,\n safesearch=self.safesearch,\n timelimit=self.time,\n )\n if results is None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/duckduckgo_search.html"} {"id": "82963e3459b7-1", "text": "timelimit=self.time,\n )\n if results is None:\n return [\"No good DuckDuckGo Search Result was found\"]\n snippets = []\n for i, res in enumerate(results, 1):\n if res is not None:\n snippets.append(res[\"body\"])\n if len(snippets) == self.max_results:\n break\n return snippets\n[docs] def run(self, query: str) -> str:\n snippets = self.get_snippets(query)\n return \" \".join(snippets)\n[docs] def results(self, query: str, num_results: int) -> List[Dict[str, str]]:\n \"\"\"Run query through DuckDuckGo and return metadata.\n Args:\n query: The query to search for.\n num_results: The number of results to return.\n Returns:\n A list of dictionaries with the following keys:\n snippet - The description of the result.\n title - The title of the result.\n link - The link to the result.\n \"\"\"\n from duckduckgo_search import DDGS\n with DDGS() as ddgs:\n results = ddgs.text(\n query,\n region=self.region,\n safesearch=self.safesearch,\n timelimit=self.time,\n )\n if results is None:\n return [{\"Result\": \"No good DuckDuckGo Search Result was found\"}]\n def to_metadata(result: Dict) -> Dict[str, str]:\n return {\n \"snippet\": result[\"body\"],\n \"title\": result[\"title\"],\n \"link\": result[\"href\"],\n }\n formatted_results = []\n for i, res in enumerate(results, 1):\n if res is not None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/duckduckgo_search.html"} {"id": "82963e3459b7-2", "text": "if res is not None:\n formatted_results.append(to_metadata(res))\n if len(formatted_results) == num_results:\n break\n return formatted_results", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/duckduckgo_search.html"} {"id": "272268a690cb-0", "text": "Source code for langchain.utilities.searx_search\n\"\"\"Utility for using SearxNG meta search API.\nSearxNG is a privacy-friendly free metasearch engine that aggregates results from\n`multiple search engines\n`_ and databases and\nsupports the `OpenSearch\n`_\nspecification.\nMore details on the installation instructions `here. <../../integrations/searx.html>`_\nFor the search API refer to https://docs.searxng.org/dev/search_api.html\nQuick Start\n-----------\nIn order to use this utility you need to provide the searx host. This can be done\nby passing the named parameter :attr:`searx_host `\nor exporting the environment variable SEARX_HOST.\nNote: this is the only required parameter.\nThen create a searx search instance like this:\n .. 
code-block:: python\n from langchain.utilities import SearxSearchWrapper\n # when the host starts with `http` SSL is disabled and the connection\n # is assumed to be on a private network\n searx_host='http://self.hosted'\n search = SearxSearchWrapper(searx_host=searx_host)\nYou can now use the ``search`` instance to query the searx API.\nSearching\n---------\nUse the :meth:`run() ` and\n:meth:`results() ` methods to query the searx API.\nOther methods are available for convenience.\n:class:`SearxResults` is a convenience wrapper around the raw json result.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"} {"id": "272268a690cb-1", "text": ":class:`SearxResults` is a convenience wrapper around the raw json result.\nExample usage of the ``run`` method to make a search:\n .. code-block:: python\n s.run(query=\"what is the best search engine?\")\nEngine Parameters\n-----------------\nYou can pass any `accepted searx search API\n`_ parameters to the\n:py:class:`SearxSearchWrapper` instance.\nIn the following example we are using the\n:attr:`engines ` and the ``language`` parameters:\n .. code-block:: python\n # assuming the searx host is set as above or exported as an env variable\n s = SearxSearchWrapper(engines=['google', 'bing'],\n language='es')\nSearch Tips\n-----------\nSearx offers a special\n`search syntax `_\nthat can also be used instead of passing engine parameters.\nFor example the following query:\n .. code-block:: python\n s = SearxSearchWrapper(\"langchain library\", engines=['github'])\n # can also be written as:\n s = SearxSearchWrapper(\"langchain library !github\")\n # or even:\n s = SearxSearchWrapper(\"langchain library !gh\")\nIn some situations you might want to pass an extra string to the search query.\nFor example when the `run()` method is called by an agent. The search suffix can\nalso be used as a way to pass extra parameters to searx or the underlying search\nengines.\n .. code-block:: python\n # select the github engine and pass the search suffix", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"} {"id": "272268a690cb-2", "text": ".. 
code-block:: python\n # select the github engine and pass the search suffix\n s = SearxSearchWrapper(\"langchain library\", query_suffix=\"!gh\")\n s = SearxSearchWrapper(\"langchain library\")\n # or select github results using the conventional Google search syntax\n s.run(\"large language models\", query_suffix=\"site:github.com\")\n*NOTE*: A search suffix can be defined on both the instance and the method level.\nThe resulting query will be the concatenation of the two with the former taking\nprecedence.\nSee the SearxNG documentation on configured engines and search syntax\nfor more details.\nNotes\n-----\nThis wrapper is based on the SearxNG fork https://github.com/searxng/searxng which is\nbetter maintained than the original Searx project and offers more features.\nPublic SearxNG instances often use a rate limiter for API usage, so you might want to\nuse a self-hosted instance and disable the rate limiter.\nIf you are self-hosting an instance you can customize the rate limiter for your\nown network as described in the SearxNG limiter documentation.\nFor a list of public SearxNG instances see https://searx.space/\n\"\"\"\nimport json\nfrom typing import Any, Dict, List, Optional\nimport aiohttp\nimport requests\nfrom pydantic import BaseModel, Extra, Field, PrivateAttr, root_validator, validator\nfrom langchain.utils import get_from_dict_or_env\ndef _get_default_params() -> dict:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"} {"id": "272268a690cb-3", "text": "def _get_default_params() -> dict:\n return {\"language\": \"en\", \"format\": \"json\"}\n[docs]class SearxResults(dict):\n \"\"\"Dict-like wrapper around search API results.\"\"\"\n _data = \"\"\n def __init__(self, data: str):\n \"\"\"Take a raw result from Searx and make it into a dict-like object.\"\"\"\n json_data = json.loads(data)\n super().__init__(json_data)\n self.__dict__ = self\n def __str__(self) -> str:\n \"\"\"Text representation of searx result.\"\"\"\n return self._data\n @property\n def results(self) -> Any:\n \"\"\"Silence mypy for accessing this field.\n :meta private:\n \"\"\"\n return self.get(\"results\")\n @property\n def answers(self) -> Any:\n \"\"\"Helper accessor on the json result.\"\"\"\n return self.get(\"answers\")\n[docs]class SearxSearchWrapper(BaseModel):\n \"\"\"Wrapper for Searx API.\n To use you need to provide the searx host by passing the named parameter\n ``searx_host`` or exporting the environment variable ``SEARX_HOST``.\n In some situations you might want to disable SSL verification, for example\n if you are running searx locally. You can do this by passing the named parameter\n ``unsecure``. You can also pass the host url scheme as ``http`` to disable SSL.\n Example:\n .. code-block:: python\n from langchain.utilities import SearxSearchWrapper\n searx = SearxSearchWrapper(searx_host=\"http://localhost:8888\")\n Example with SSL disabled:\n .. code-block:: python", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"} {"id": "272268a690cb-4", "text": "Example with SSL disabled:\n .. 
code-block:: python\n from langchain.utilities import SearxSearchWrapper\n # note the unsecure parameter is not needed if you pass the url scheme as\n # http\n searx = SearxSearchWrapper(searx_host=\"http://localhost:8888\",\n unsecure=True)\n \"\"\"\n _result: SearxResults = PrivateAttr()\n searx_host: str = \"\"\n unsecure: bool = False\n params: dict = Field(default_factory=_get_default_params)\n headers: Optional[dict] = None\n engines: Optional[List[str]] = []\n categories: Optional[List[str]] = []\n query_suffix: Optional[str] = \"\"\n k: int = 10\n aiosession: Optional[Any] = None\n[docs] @validator(\"unsecure\")\n def disable_ssl_warnings(cls, v: bool) -> bool:\n \"\"\"Disable SSL warnings.\"\"\"\n if v:\n # requests.urllib3.disable_warnings()\n try:\n import urllib3\n urllib3.disable_warnings()\n except ImportError as e:\n print(e)\n return v\n[docs] @root_validator()\n def validate_params(cls, values: Dict) -> Dict:\n \"\"\"Validate that custom searx params are merged with default ones.\"\"\"\n user_params = values[\"params\"]\n default = _get_default_params()\n values[\"params\"] = {**default, **user_params}\n engines = values.get(\"engines\")\n if engines:\n values[\"params\"][\"engines\"] = \",\".join(engines)\n categories = values.get(\"categories\")\n if categories:\n values[\"params\"][\"categories\"] = \",\".join(categories)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"} {"id": "272268a690cb-5", "text": "if categories:\n values[\"params\"][\"categories\"] = \",\".join(categories)\n searx_host = get_from_dict_or_env(values, \"searx_host\", \"SEARX_HOST\")\n if not searx_host.startswith(\"http\"):\n print(\n f\"Warning: missing the url scheme on host \\\n ! assuming secure https://{searx_host} \"\n )\n searx_host = \"https://\" + searx_host\n elif searx_host.startswith(\"http://\"):\n values[\"unsecure\"] = True\n cls.disable_ssl_warnings(True)\n values[\"searx_host\"] = searx_host\n return values\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n def _searx_api_query(self, params: dict) -> SearxResults:\n \"\"\"Actual request to searx API.\"\"\"\n raw_result = requests.get(\n self.searx_host,\n headers=self.headers,\n params=params,\n verify=not self.unsecure,\n )\n # test if http result is ok\n if not raw_result.ok:\n raise ValueError(\"Searx API returned an error: \", raw_result.text)\n res = SearxResults(raw_result.text)\n self._result = res\n return res\n async def _asearx_api_query(self, params: dict) -> SearxResults:\n if not self.aiosession:\n async with aiohttp.ClientSession() as session:\n async with session.get(\n self.searx_host,\n headers=self.headers,\n params=params,\n ssl=(lambda: False if self.unsecure else None)(),\n ) as response:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"} {"id": "272268a690cb-6", "text": ") as response:\n if not response.ok:\n raise ValueError(\"Searx API returned an error: \", response.text)\n result = SearxResults(await response.text())\n self._result = result\n else:\n async with self.aiosession.get(\n self.searx_host,\n headers=self.headers,\n params=params,\n verify=not self.unsecure,\n ) as response:\n if not response.ok:\n raise ValueError(\"Searx API returned an error: \", response.text)\n result = SearxResults(await response.text())\n self._result = result\n return result\n[docs] def run(\n self,\n query: str,\n engines: Optional[List[str]] = None,\n categories: 
Optional[List[str]] = None,\n query_suffix: Optional[str] = \"\",\n **kwargs: Any,\n ) -> str:\n \"\"\"Run query through Searx API and parse results.\n You can pass any other params to the searx query API.\n Args:\n query: The query to search for.\n query_suffix: Extra suffix appended to the query.\n engines: List of engines to use for the query.\n categories: List of categories to use for the query.\n **kwargs: extra parameters to pass to the searx API.\n Returns:\n str: The result of the query.\n Raises:\n ValueError: If an error occurred with the query.\n Example:\n This will make a query to the qwant engine:\n .. code-block:: python\n from langchain.utilities import SearxSearchWrapper\n searx = SearxSearchWrapper(searx_host=\"http://my.searx.host\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"} {"id": "272268a690cb-7", "text": "searx.run(\"what is the weather in France ?\", engine=\"qwant\")\n # the same result can be achieved using the `!` syntax of searx\n # to select the engine using `query_suffix`\n searx.run(\"what is the weather in France ?\", query_suffix=\"!qwant\")\n \"\"\"\n _params = {\n \"q\": query,\n }\n params = {**self.params, **_params, **kwargs}\n if self.query_suffix and len(self.query_suffix) > 0:\n params[\"q\"] += \" \" + self.query_suffix\n if isinstance(query_suffix, str) and len(query_suffix) > 0:\n params[\"q\"] += \" \" + query_suffix\n if isinstance(engines, list) and len(engines) > 0:\n params[\"engines\"] = \",\".join(engines)\n if isinstance(categories, list) and len(categories) > 0:\n params[\"categories\"] = \",\".join(categories)\n res = self._searx_api_query(params)\n if len(res.answers) > 0:\n toret = res.answers[0]\n # only return the content of the results list\n elif len(res.results) > 0:\n toret = \"\\n\\n\".join([r.get(\"content\", \"\") for r in res.results[: self.k]])\n else:\n toret = \"No good search result found\"\n return toret\n[docs] async def arun(\n self,\n query: str,\n engines: Optional[List[str]] = None,\n query_suffix: Optional[str] = \"\",\n **kwargs: Any,\n ) -> str:\n \"\"\"Asynchronous version of `run`.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"} {"id": "272268a690cb-8", "text": ") -> str:\n \"\"\"Asynchronous version of `run`.\"\"\"\n _params = {\n \"q\": query,\n }\n params = {**self.params, **_params, **kwargs}\n if self.query_suffix and len(self.query_suffix) > 0:\n params[\"q\"] += \" \" + self.query_suffix\n if isinstance(query_suffix, str) and len(query_suffix) > 0:\n params[\"q\"] += \" \" + query_suffix\n if isinstance(engines, list) and len(engines) > 0:\n params[\"engines\"] = \",\".join(engines)\n res = await self._asearx_api_query(params)\n if len(res.answers) > 0:\n toret = res.answers[0]\n # only return the content of the results list\n elif len(res.results) > 0:\n toret = \"\\n\\n\".join([r.get(\"content\", \"\") for r in res.results[: self.k]])\n else:\n toret = \"No good search result found\"\n return toret\n[docs] def results(\n self,\n query: str,\n num_results: int,\n engines: Optional[List[str]] = None,\n categories: Optional[List[str]] = None,\n query_suffix: Optional[str] = \"\",\n **kwargs: Any,\n ) -> List[Dict]:\n \"\"\"Run query through Searx API and return the results with metadata.\n Args:\n query: The query to search for.\n query_suffix: Extra suffix appended to the query.\n num_results: Limit the number of results to return.\n engines: List of engines to use for the 
query.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"} {"id": "272268a690cb-9", "text": "engines: List of engines to use for the query.\n categories: List of categories to use for the query.\n **kwargs: extra parameters to pass to the searx API.\n Returns:\n Dict with the following keys:\n {\n snippet: The description of the result.\n title: The title of the result.\n link: The link to the result.\n engines: The engines used for the result.\n category: Searx category of the result.\n }\n \"\"\"\n _params = {\n \"q\": query,\n }\n params = {**self.params, **_params, **kwargs}\n if self.query_suffix and len(self.query_suffix) > 0:\n params[\"q\"] += \" \" + self.query_suffix\n if isinstance(query_suffix, str) and len(query_suffix) > 0:\n params[\"q\"] += \" \" + query_suffix\n if isinstance(engines, list) and len(engines) > 0:\n params[\"engines\"] = \",\".join(engines)\n if isinstance(categories, list) and len(categories) > 0:\n params[\"categories\"] = \",\".join(categories)\n results = self._searx_api_query(params).results[:num_results]\n if len(results) == 0:\n return [{\"Result\": \"No good Search Result was found\"}]\n return [\n {\n \"snippet\": result.get(\"content\", \"\"),\n \"title\": result[\"title\"],\n \"link\": result[\"url\"],\n \"engines\": result[\"engines\"],\n \"category\": result[\"category\"],\n }\n for result in results\n ]\n[docs] async def aresults(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"} {"id": "272268a690cb-10", "text": "]\n[docs] async def aresults(\n self,\n query: str,\n num_results: int,\n engines: Optional[List[str]] = None,\n query_suffix: Optional[str] = \"\",\n **kwargs: Any,\n ) -> List[Dict]:\n \"\"\"Asynchronously query with json results.\n Uses aiohttp. 
See `results` for more info.\n \"\"\"\n _params = {\n \"q\": query,\n }\n params = {**self.params, **_params, **kwargs}\n if self.query_suffix and len(self.query_suffix) > 0:\n params[\"q\"] += \" \" + self.query_suffix\n if isinstance(query_suffix, str) and len(query_suffix) > 0:\n params[\"q\"] += \" \" + query_suffix\n if isinstance(engines, list) and len(engines) > 0:\n params[\"engines\"] = \",\".join(engines)\n results = (await self._asearx_api_query(params)).results[:num_results]\n if len(results) == 0:\n return [{\"Result\": \"No good Search Result was found\"}]\n return [\n {\n \"snippet\": result.get(\"content\", \"\"),\n \"title\": result[\"title\"],\n \"link\": result[\"url\"],\n \"engines\": result[\"engines\"],\n \"category\": result[\"category\"],\n }\n for result in results\n ]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"} {"id": "a7f2d5b7c21c-0", "text": "Source code for langchain.utilities.apify\nfrom typing import Any, Callable, Dict, Optional\nfrom pydantic import BaseModel, root_validator\nfrom langchain.document_loaders import ApifyDatasetLoader\nfrom langchain.document_loaders.base import Document\nfrom langchain.utils import get_from_dict_or_env\n[docs]class ApifyWrapper(BaseModel):\n \"\"\"Wrapper around Apify.\n To use, you should have the ``apify-client`` python package installed,\n and the environment variable ``APIFY_API_TOKEN`` set with your API key, or pass\n `apify_api_token` as a named parameter to the constructor.\n \"\"\"\n apify_client: Any\n apify_client_async: Any\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate environment.\n Validate that an Apify API token is set and the apify-client\n Python package exists in the current environment.\n \"\"\"\n apify_api_token = get_from_dict_or_env(\n values, \"apify_api_token\", \"APIFY_API_TOKEN\"\n )\n try:\n from apify_client import ApifyClient, ApifyClientAsync\n values[\"apify_client\"] = ApifyClient(apify_api_token)\n values[\"apify_client_async\"] = ApifyClientAsync(apify_api_token)\n except ImportError:\n raise ValueError(\n \"Could not import apify-client Python package. 
\"\n \"Please install it with `pip install apify-client`.\"\n )\n return values\n[docs] def call_actor(\n self,\n actor_id: str,\n run_input: Dict,\n dataset_mapping_function: Callable[[Dict], Document],\n *,\n build: Optional[str] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/apify.html"} {"id": "a7f2d5b7c21c-1", "text": "*,\n build: Optional[str] = None,\n memory_mbytes: Optional[int] = None,\n timeout_secs: Optional[int] = None,\n ) -> ApifyDatasetLoader:\n \"\"\"Run an Actor on the Apify platform and wait for results to be ready.\n Args:\n actor_id (str): The ID or name of the Actor on the Apify platform.\n run_input (Dict): The input object of the Actor that you're trying to run.\n dataset_mapping_function (Callable): A function that takes a single\n dictionary (an Apify dataset item) and converts it to an\n instance of the Document class.\n build (str, optional): Optionally specifies the actor build to run.\n It can be either a build tag or build number.\n memory_mbytes (int, optional): Optional memory limit for the run,\n in megabytes.\n timeout_secs (int, optional): Optional timeout for the run, in seconds.\n Returns:\n ApifyDatasetLoader: A loader that will fetch the records from the\n Actor run's default dataset.\n \"\"\"\n actor_call = self.apify_client.actor(actor_id).call(\n run_input=run_input,\n build=build,\n memory_mbytes=memory_mbytes,\n timeout_secs=timeout_secs,\n )\n return ApifyDatasetLoader(\n dataset_id=actor_call[\"defaultDatasetId\"],\n dataset_mapping_function=dataset_mapping_function,\n )\n[docs] async def acall_actor(\n self,\n actor_id: str,\n run_input: Dict,\n dataset_mapping_function: Callable[[Dict], Document],\n *,\n build: Optional[str] = None,\n memory_mbytes: Optional[int] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/apify.html"} {"id": "a7f2d5b7c21c-2", "text": "memory_mbytes: Optional[int] = None,\n timeout_secs: Optional[int] = None,\n ) -> ApifyDatasetLoader:\n \"\"\"Run an Actor on the Apify platform and wait for results to be ready.\n Args:\n actor_id (str): The ID or name of the Actor on the Apify platform.\n run_input (Dict): The input object of the Actor that you're trying to run.\n dataset_mapping_function (Callable): A function that takes a single\n dictionary (an Apify dataset item) and converts it to\n an instance of the Document class.\n build (str, optional): Optionally specifies the actor build to run.\n It can be either a build tag or build number.\n memory_mbytes (int, optional): Optional memory limit for the run,\n in megabytes.\n timeout_secs (int, optional): Optional timeout for the run, in seconds.\n Returns:\n ApifyDatasetLoader: A loader that will fetch the records from the\n Actor run's default dataset.\n \"\"\"\n actor_call = await self.apify_client_async.actor(actor_id).call(\n run_input=run_input,\n build=build,\n memory_mbytes=memory_mbytes,\n timeout_secs=timeout_secs,\n )\n return ApifyDatasetLoader(\n dataset_id=actor_call[\"defaultDatasetId\"],\n dataset_mapping_function=dataset_mapping_function,\n )\n[docs] def call_actor_task(\n self,\n task_id: str,\n task_input: Dict,\n dataset_mapping_function: Callable[[Dict], Document],\n *,\n build: Optional[str] = None,\n memory_mbytes: Optional[int] = None,\n timeout_secs: Optional[int] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/apify.html"} {"id": "a7f2d5b7c21c-3", "text": "timeout_secs: Optional[int] = None,\n ) -> 
ApifyDatasetLoader:\n \"\"\"Run a saved Actor task on Apify and wait for results to be ready.\n Args:\n task_id (str): The ID or name of the task on the Apify platform.\n task_input (Dict): The input object of the task that you're trying to run.\n Overrides the task's saved input.\n dataset_mapping_function (Callable): A function that takes a single\n dictionary (an Apify dataset item) and converts it to an\n instance of the Document class.\n build (str, optional): Optionally specifies the actor build to run.\n It can be either a build tag or build number.\n memory_mbytes (int, optional): Optional memory limit for the run,\n in megabytes.\n timeout_secs (int, optional): Optional timeout for the run, in seconds.\n Returns:\n ApifyDatasetLoader: A loader that will fetch the records from the\n task run's default dataset.\n \"\"\"\n task_call = self.apify_client.task(task_id).call(\n task_input=task_input,\n build=build,\n memory_mbytes=memory_mbytes,\n timeout_secs=timeout_secs,\n )\n return ApifyDatasetLoader(\n dataset_id=task_call[\"defaultDatasetId\"],\n dataset_mapping_function=dataset_mapping_function,\n )\n[docs] async def acall_actor_task(\n self,\n task_id: str,\n task_input: Dict,\n dataset_mapping_function: Callable[[Dict], Document],\n *,\n build: Optional[str] = None,\n memory_mbytes: Optional[int] = None,\n timeout_secs: Optional[int] = None,\n ) -> ApifyDatasetLoader:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/apify.html"} {"id": "a7f2d5b7c21c-4", "text": "timeout_secs: Optional[int] = None,\n ) -> ApifyDatasetLoader:\n \"\"\"Run a saved Actor task on Apify and wait for results to be ready.\n Args:\n task_id (str): The ID or name of the task on the Apify platform.\n task_input (Dict): The input object of the task that you're trying to run.\n Overrides the task's saved input.\n dataset_mapping_function (Callable): A function that takes a single\n dictionary (an Apify dataset item) and converts it to an\n instance of the Document class.\n build (str, optional): Optionally specifies the actor build to run.\n It can be either a build tag or build number.\n memory_mbytes (int, optional): Optional memory limit for the run,\n in megabytes.\n timeout_secs (int, optional): Optional timeout for the run, in seconds.\n Returns:\n ApifyDatasetLoader: A loader that will fetch the records from the\n task run's default dataset.\n \"\"\"\n task_call = await self.apify_client_async.task(task_id).call(\n task_input=task_input,\n build=build,\n memory_mbytes=memory_mbytes,\n timeout_secs=timeout_secs,\n )\n return ApifyDatasetLoader(\n dataset_id=task_call[\"defaultDatasetId\"],\n dataset_mapping_function=dataset_mapping_function,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/apify.html"} {"id": "9ac3cefc4da5-0", "text": "Source code for langchain.utilities.python\nimport sys\nfrom io import StringIO\nfrom typing import Dict, Optional\nfrom pydantic import BaseModel, Field\n[docs]class PythonREPL(BaseModel):\n \"\"\"Simulates a standalone Python REPL.\"\"\"\n globals: Optional[Dict] = Field(default_factory=dict, alias=\"_globals\")\n locals: Optional[Dict] = Field(default_factory=dict, alias=\"_locals\")\n[docs] def run(self, command: str) -> str:\n \"\"\"Run command with own globals/locals and returns anything printed.\"\"\"\n old_stdout = sys.stdout\n sys.stdout = mystdout = StringIO()\n try:\n exec(command, self.globals, self.locals)\n sys.stdout = old_stdout\n output = mystdout.getvalue()\n except Exception as 
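A sketch of ``call_actor`` (assumes ``apify-client`` is installed and ``APIFY_API_TOKEN`` is set; the Actor ID and ``run_input`` shape follow Apify's public website-content-crawler Actor and are assumptions, not part of this module):

.. code-block:: python

    from langchain.document_loaders.base import Document
    from langchain.utilities.apify import ApifyWrapper

    apify = ApifyWrapper()
    loader = apify.call_actor(
        actor_id="apify/website-content-crawler",
        run_input={"startUrls": [{"url": "https://python.langchain.com"}]},
        # Map each dataset item onto a Document; field names depend on the Actor.
        dataset_mapping_function=lambda item: Document(
            page_content=item.get("text", ""),
            metadata={"source": item.get("url")},
        ),
    )
    docs = loader.load()  # one Document per dataset item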
e:\n sys.stdout = old_stdout\n output = repr(e)\n return output", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/python.html"} {"id": "5dc80f41eb88-0", "text": "Source code for langchain.utilities.vertexai\n\"\"\"Utilities to init Vertex AI.\"\"\"\nfrom typing import TYPE_CHECKING, Optional\nif TYPE_CHECKING:\n from google.auth.credentials import Credentials\n[docs]def raise_vertex_import_error() -> None:\n \"\"\"Raise ImportError related to Vertex SDK being not available.\n Raises:\n ImportError: an ImportError that mentions a required version of the SDK.\n \"\"\"\n sdk = \"'google-cloud-aiplatform>=1.26.0'\"\n raise ImportError(\n \"Could not import VertexAI. Please, install it with \" f\"pip install {sdk}\"\n )\n[docs]def init_vertexai(\n project: Optional[str] = None,\n location: Optional[str] = None,\n credentials: Optional[\"Credentials\"] = None,\n) -> None:\n \"\"\"Init vertexai.\n Args:\n project: The default GCP project to use when making Vertex API calls.\n location: The default location to use when making API calls.\n credentials: The default custom\n credentials to use when making API calls. If not provided credentials\n will be ascertained from the environment.\n Raises:\n ImportError: If importing vertexai SDK did not succeed.\n \"\"\"\n try:\n import vertexai\n except ImportError:\n raise_vertex_import_error()\n vertexai.init(\n project=project,\n location=location,\n credentials=credentials,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/vertexai.html"} {"id": "10b83b39e165-0", "text": "Source code for langchain.utilities.google_search\n\"\"\"Util that calls Google Search.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.utils import get_from_dict_or_env\n[docs]class GoogleSearchAPIWrapper(BaseModel):\n \"\"\"Wrapper for Google Search API.\n Adapted from: Instructions adapted from https://stackoverflow.com/questions/\n 37083058/\n programmatically-searching-google-in-python-using-custom-search\n TODO: DOCS for using it\n 1. Install google-api-python-client\n - If you don't already have a Google account, sign up.\n - If you have never created a Google APIs Console project,\n read the Managing Projects page and create a project in the Google API Console.\n - Install the library using pip install google-api-python-client\n The current version of the library is 2.70.0 at this time\n 2. To create an API key:\n - Navigate to the APIs & Services\u2192Credentials panel in Cloud Console.\n - Select Create credentials, then select API key from the drop-down menu.\n - The API key created dialog box displays your newly created key.\n - You now have an API_KEY\n 3. Setup Custom Search Engine so you can search the entire web\n - Create a custom search engine in this link.\n - In Sites to search, add any valid URL (i.e. www.stackoverflow.com).\n - That\u2019s all you have to fill up, the rest doesn\u2019t matter.\n In the left-side menu, click Edit search engine \u2192 {your search engine name}\n \u2192 Setup Set Search the entire web to ON. Remove the URL you added from\n the list of Sites to search.\n - Under Search engine ID you\u2019ll find the search-engine-ID.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/google_search.html"} {"id": "10b83b39e165-1", "text": "- Under Search engine ID you\u2019ll find the search-engine-ID.\n 4. 
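``PythonREPL`` only captures what the command prints; exceptions come back as their ``repr``. A quick sketch:

.. code-block:: python

    from langchain.utilities.python import PythonREPL

    repl = PythonREPL()
    print(repl.run("x = 6 * 7\nprint(x)"))  # "42\n"
    print(repl.run("1 / 0"))                # "ZeroDivisionError('division by zero')"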
Enable the Custom Search API\n - Navigate to the APIs & Services\u2192Dashboard panel in Cloud Console.\n - Click Enable APIs and Services.\n - Search for Custom Search API and click on it.\n - Click Enable.\n URL for it: https://console.cloud.google.com/apis/library/customsearch.googleapis\n .com\n \"\"\"\n search_engine: Any #: :meta private:\n google_api_key: Optional[str] = None\n google_cse_id: Optional[str] = None\n k: int = 10\n siterestrict: bool = False\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n def _google_search_results(self, search_term: str, **kwargs: Any) -> List[dict]:\n cse = self.search_engine.cse()\n if self.siterestrict:\n cse = cse.siterestrict()\n res = cse.list(q=search_term, cx=self.google_cse_id, **kwargs).execute()\n return res.get(\"items\", [])\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n google_api_key = get_from_dict_or_env(\n values, \"google_api_key\", \"GOOGLE_API_KEY\"\n )\n values[\"google_api_key\"] = google_api_key\n google_cse_id = get_from_dict_or_env(values, \"google_cse_id\", \"GOOGLE_CSE_ID\")\n values[\"google_cse_id\"] = google_cse_id\n try:\n from googleapiclient.discovery import build\n except ImportError:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/google_search.html"} {"id": "10b83b39e165-2", "text": "try:\n from googleapiclient.discovery import build\n except ImportError:\n raise ImportError(\n \"google-api-python-client is not installed. \"\n \"Please install it with `pip install google-api-python-client`\"\n )\n service = build(\"customsearch\", \"v1\", developerKey=google_api_key)\n values[\"search_engine\"] = service\n return values\n[docs] def run(self, query: str) -> str:\n \"\"\"Run query through GoogleSearch and parse result.\"\"\"\n snippets = []\n results = self._google_search_results(query, num=self.k)\n if len(results) == 0:\n return \"No good Google Search Result was found\"\n for result in results:\n if \"snippet\" in result:\n snippets.append(result[\"snippet\"])\n return \" \".join(snippets)\n[docs] def results(\n self,\n query: str,\n num_results: int,\n search_params: Optional[Dict[str, str]] = None,\n ) -> List[Dict]:\n \"\"\"Run query through GoogleSearch and return metadata.\n Args:\n query: The query to search for.\n num_results: The number of results to return.\n search_params: Parameters to be passed on search\n Returns:\n A list of dictionaries with the following keys:\n snippet - The description of the result.\n title - The title of the result.\n link - The link to the result.\n \"\"\"\n metadata_results = []\n results = self._google_search_results(\n query, num=num_results, **(search_params or {})\n )\n if len(results) == 0:\n return [{\"Result\": \"No good Google Search Result was found\"}]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/google_search.html"} {"id": "10b83b39e165-3", "text": "return [{\"Result\": \"No good Google Search Result was found\"}]\n for result in results:\n metadata_result = {\n \"title\": result[\"title\"],\n \"link\": result[\"link\"],\n }\n if \"snippet\" in result:\n metadata_result[\"snippet\"] = result[\"snippet\"]\n metadata_results.append(metadata_result)\n return metadata_results", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/google_search.html"} {"id": "6a2f41b085f2-0", "text": "Source code for 
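Once the credentials described above are configured, usage is straightforward. A sketch (assumes ``google-api-python-client`` is installed and ``GOOGLE_API_KEY``/``GOOGLE_CSE_ID`` are set):

.. code-block:: python

    from langchain.utilities.google_search import GoogleSearchAPIWrapper

    search = GoogleSearchAPIWrapper(k=5)
    print(search.run("LangChain agents"))                    # concatenated snippets
    print(search.results("LangChain agents", num_results=3))  # snippet/title/link dicts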
langchain.utilities.twilio\n\"\"\"Util that calls Twilio.\"\"\"\nfrom typing import Any, Dict, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.utils import get_from_dict_or_env\n[docs]class TwilioAPIWrapper(BaseModel):\n \"\"\"Messaging Client using Twilio.\n To use, you should have the ``twilio`` python package installed,\n and the environment variables ``TWILIO_ACCOUNT_SID``, ``TWILIO_AUTH_TOKEN``, and\n ``TWILIO_FROM_NUMBER``, or pass `account_sid`, `auth_token`, and `from_number` as\n named parameters to the constructor.\n Example:\n .. code-block:: python\n from langchain.utilities.twilio import TwilioAPIWrapper\n twilio = TwilioAPIWrapper(\n account_sid=\"ACxxx\",\n auth_token=\"xxx\",\n from_number=\"+10123456789\"\n )\n twilio.run('test', '+12484345508')\n \"\"\"\n client: Any #: :meta private:\n account_sid: Optional[str] = None\n \"\"\"Twilio account string identifier.\"\"\"\n auth_token: Optional[str] = None\n \"\"\"Twilio auth token.\"\"\"\n from_number: Optional[str] = None\n \"\"\"A Twilio phone number in [E.164](https://www.twilio.com/docs/glossary/what-e164) \n format, an \n [alphanumeric sender ID](https://www.twilio.com/docs/sms/send-messages#use-an-alphanumeric-sender-id), \n or a [Channel Endpoint address](https://www.twilio.com/docs/sms/channels#channel-addresses) \n that is enabled for the type of message you want to send. Phone numbers or", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/twilio.html"} {"id": "6a2f41b085f2-1", "text": "that is enabled for the type of message you want to send. Phone numbers or \n [short codes](https://www.twilio.com/docs/sms/api/short-code) purchased from \n Twilio also work here. You cannot, for example, spoof messages from a private \n cell phone number. If you are using `messaging_service_sid`, this parameter \n must be empty.\n \"\"\" # noqa: E501\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = False\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n try:\n from twilio.rest import Client\n except ImportError:\n raise ImportError(\n \"Could not import twilio python package. \"\n \"Please install it with `pip install twilio`.\"\n )\n account_sid = get_from_dict_or_env(values, \"account_sid\", \"TWILIO_ACCOUNT_SID\")\n auth_token = get_from_dict_or_env(values, \"auth_token\", \"TWILIO_AUTH_TOKEN\")\n values[\"from_number\"] = get_from_dict_or_env(\n values, \"from_number\", \"TWILIO_FROM_NUMBER\"\n )\n values[\"client\"] = Client(account_sid, auth_token)\n return values\n[docs] def run(self, body: str, to: str) -> str:\n \"\"\"Run body through Twilio and respond with message sid.\n Args:\n body: The text of the message you want to send. 
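As a complement to the constructor example in the docstring above, a sketch of configuring the same wrapper through the environment variables that ``validate_environment`` falls back to; the SID, token, and phone numbers are placeholders:
.. code-block:: python

    import os

    from langchain.utilities.twilio import TwilioAPIWrapper

    # validate_environment() reads these when the constructor
    # arguments are omitted.
    os.environ["TWILIO_ACCOUNT_SID"] = "ACxxx"         # placeholder
    os.environ["TWILIO_AUTH_TOKEN"] = "xxx"            # placeholder
    os.environ["TWILIO_FROM_NUMBER"] = "+15550100000"  # placeholder E.164 number

    twilio = TwilioAPIWrapper()
    # run() sends the message and returns the Twilio message SID.
    sid = twilio.run("Hello from LangChain", "+15550100001")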
Can be up to 1,600\n characters in length.\n to: The destination phone number in", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/twilio.html"} {"id": "6a2f41b085f2-2", "text": "[E.164](https://www.twilio.com/docs/glossary/what-e164) format for\n SMS/MMS or\n [Channel user address](https://www.twilio.com/docs/sms/channels#channel-addresses)\n for other 3rd-party channels.\n \"\"\" # noqa: E501\n message = self.client.messages.create(to, from_=self.from_number, body=body)\n return message.sid", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/twilio.html"} {"id": "ac49516beebf-0", "text": "Source code for langchain.utilities.google_places_api\n\"\"\"Util that calls Google Places API.\n\"\"\"\nimport logging\nfrom typing import Any, Dict, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.utils import get_from_dict_or_env\n[docs]class GooglePlacesAPIWrapper(BaseModel):\n \"\"\"Wrapper around Google Places API.\n To use, you should have the ``googlemaps`` python package installed,\n **an API key for the Google Maps platform**,\n and the environment variable ``GPLACES_API_KEY``\n set with your API key, or pass ``gplaces_api_key``\n as a named parameter to the constructor.\n By default, this will return all the results of the input query.\n You can use the ``top_k_results`` argument to limit the number of results.\n Example:\n .. code-block:: python\n from langchain import GooglePlacesAPIWrapper\n gplaceapi = GooglePlacesAPIWrapper()\n \"\"\"\n gplaces_api_key: Optional[str] = None\n google_map_client: Any #: :meta private:\n top_k_results: Optional[int] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the api key is in your environment variable.\"\"\"\n gplaces_api_key = get_from_dict_or_env(\n values, \"gplaces_api_key\", \"GPLACES_API_KEY\"\n )\n values[\"gplaces_api_key\"] = gplaces_api_key\n try:\n import googlemaps\n values[\"google_map_client\"] = googlemaps.Client(gplaces_api_key)\n except ImportError:\n raise ImportError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/google_places_api.html"} {"id": "ac49516beebf-1", "text": "\"Could not import googlemaps python package. \"\n \"Please install it with `pip install googlemaps`.\"\n )\n return values\n[docs] def run(self, query: str) -> str:\n \"\"\"Run Places search and return up to ``top_k_results`` matching places.\"\"\"\n search_results = self.google_map_client.places(query)[\"results\"]\n num_to_return = len(search_results)\n places = []\n if num_to_return == 0:\n return \"Google Places did not find any places that match the description\"\n num_to_return = (\n num_to_return\n if self.top_k_results is None\n else min(num_to_return, self.top_k_results)\n )\n for i in range(num_to_return):\n result = search_results[i]\n details = self.fetch_place_details(result[\"place_id\"])\n if details is not None:\n places.append(details)\n return \"\\n\".join([f\"{i+1}. 
{item}\" for i, item in enumerate(places)])\n[docs] def fetch_place_details(self, place_id: str) -> Optional[str]:\n try:\n place_details = self.google_map_client.place(place_id)\n formatted_details = self.format_place_details(place_details)\n return formatted_details\n except Exception as e:\n logging.error(f\"An Error occurred while fetching place details: {e}\")\n return None\n[docs] def format_place_details(self, place_details: Dict[str, Any]) -> Optional[str]:\n try:\n name = place_details.get(\"result\", {}).get(\"name\", \"Unkown\")\n address = place_details.get(\"result\", {}).get(\n \"formatted_address\", \"Unknown\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/google_places_api.html"} {"id": "ac49516beebf-2", "text": "\"formatted_address\", \"Unknown\"\n )\n phone_number = place_details.get(\"result\", {}).get(\n \"formatted_phone_number\", \"Unknown\"\n )\n website = place_details.get(\"result\", {}).get(\"website\", \"Unknown\")\n formatted_details = (\n f\"{name}\\nAddress: {address}\\n\"\n f\"Phone: {phone_number}\\nWebsite: {website}\\n\\n\"\n )\n return formatted_details\n except Exception as e:\n logging.error(f\"An error occurred while formatting place details: {e}\")\n return None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/google_places_api.html"} {"id": "db5157066cf8-0", "text": "Source code for langchain.utilities.metaphor_search\n\"\"\"Util that calls Metaphor Search API.\nIn order to set this up, follow instructions at:\n\"\"\"\nimport json\nfrom typing import Dict, List, Optional\nimport aiohttp\nimport requests\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.utils import get_from_dict_or_env\nMETAPHOR_API_URL = \"https://api.metaphor.systems\"\n[docs]class MetaphorSearchAPIWrapper(BaseModel):\n \"\"\"Wrapper for Metaphor Search API.\"\"\"\n metaphor_api_key: str\n k: int = 10\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n def _metaphor_search_results(\n self,\n query: str,\n num_results: int,\n include_domains: Optional[List[str]] = None,\n exclude_domains: Optional[List[str]] = None,\n start_crawl_date: Optional[str] = None,\n end_crawl_date: Optional[str] = None,\n start_published_date: Optional[str] = None,\n end_published_date: Optional[str] = None,\n ) -> List[dict]:\n headers = {\"X-Api-Key\": self.metaphor_api_key}\n params = {\n \"numResults\": num_results,\n \"query\": query,\n \"includeDomains\": include_domains,\n \"excludeDomains\": exclude_domains,\n \"startCrawlDate\": start_crawl_date,\n \"endCrawlDate\": end_crawl_date,\n \"startPublishedDate\": start_published_date,\n \"endPublishedDate\": end_published_date,\n }\n response = requests.post(\n # type: ignore", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/metaphor_search.html"} {"id": "db5157066cf8-1", "text": "}\n response = requests.post(\n # type: ignore\n f\"{METAPHOR_API_URL}/search\",\n headers=headers,\n json=params,\n )\n response.raise_for_status()\n search_results = response.json()\n print(search_results)\n return search_results[\"results\"]\n[docs] @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and endpoint exists in environment.\"\"\"\n metaphor_api_key = get_from_dict_or_env(\n values, \"metaphor_api_key\", \"METAPHOR_API_KEY\"\n )\n values[\"metaphor_api_key\"] = metaphor_api_key\n return values\n[docs] def results(\n self,\n 
query: str,\n num_results: int,\n include_domains: Optional[List[str]] = None,\n exclude_domains: Optional[List[str]] = None,\n start_crawl_date: Optional[str] = None,\n end_crawl_date: Optional[str] = None,\n start_published_date: Optional[str] = None,\n end_published_date: Optional[str] = None,\n ) -> List[Dict]:\n \"\"\"Run query through Metaphor Search and return metadata.\n Args:\n query: The query to search for.\n num_results: The number of results to return.\n Returns:\n A list of dictionaries with the following keys:\n title - The title of the\n url - The url\n author - Author of the content, if applicable. Otherwise, None.\n published_date - Estimated date published\n in YYYY-MM-DD format. Otherwise, None.\n \"\"\"\n raw_search_results = self._metaphor_search_results(\n query,\n num_results=num_results,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/metaphor_search.html"} {"id": "db5157066cf8-2", "text": "query,\n num_results=num_results,\n include_domains=include_domains,\n exclude_domains=exclude_domains,\n start_crawl_date=start_crawl_date,\n end_crawl_date=end_crawl_date,\n start_published_date=start_published_date,\n end_published_date=end_published_date,\n )\n return self._clean_results(raw_search_results)\n[docs] async def results_async(\n self,\n query: str,\n num_results: int,\n include_domains: Optional[List[str]] = None,\n exclude_domains: Optional[List[str]] = None,\n start_crawl_date: Optional[str] = None,\n end_crawl_date: Optional[str] = None,\n start_published_date: Optional[str] = None,\n end_published_date: Optional[str] = None,\n ) -> List[Dict]:\n \"\"\"Get results from the Metaphor Search API asynchronously.\"\"\"\n # Function to perform the API call\n async def fetch() -> str:\n headers = {\"X-Api-Key\": self.metaphor_api_key}\n params = {\n \"numResults\": num_results,\n \"query\": query,\n \"includeDomains\": include_domains,\n \"excludeDomains\": exclude_domains,\n \"startCrawlDate\": start_crawl_date,\n \"endCrawlDate\": end_crawl_date,\n \"startPublishedDate\": start_published_date,\n \"endPublishedDate\": end_published_date,\n }\n async with aiohttp.ClientSession() as session:\n async with session.post(\n f\"{METAPHOR_API_URL}/search\", json=params, headers=headers\n ) as res:\n if res.status == 200:\n data = await res.text()\n return data\n else:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/metaphor_search.html"} {"id": "db5157066cf8-3", "text": "data = await res.text()\n return data\n else:\n raise Exception(f\"Error {res.status}: {res.reason}\")\n results_json_str = await fetch()\n results_json = json.loads(results_json_str)\n return self._clean_results(results_json[\"results\"])\n def _clean_results(self, raw_search_results: List[Dict]) -> List[Dict]:\n cleaned_results = []\n for result in raw_search_results:\n cleaned_results.append(\n {\n \"title\": result[\"title\"],\n \"url\": result[\"url\"],\n \"author\": result[\"author\"],\n \"published_date\": result[\"publishedDate\"],\n }\n )\n return cleaned_results", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/metaphor_search.html"} {"id": "ac98ca32834c-0", "text": "Source code for langchain.utilities.dataforseo_api_search\nimport base64\nfrom typing import Dict, Optional\nfrom urllib.parse import quote\nimport aiohttp\nimport requests\nfrom pydantic import BaseModel, Extra, Field, root_validator\nfrom langchain.utils import get_from_dict_or_env\n[docs]class 
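A sketch of calling ``results`` on the Metaphor wrapper above; the API key, query, and domain filter are placeholders:
.. code-block:: python

    from langchain.utilities.metaphor_search import MetaphorSearchAPIWrapper

    metaphor = MetaphorSearchAPIWrapper(metaphor_api_key="your-api-key")  # placeholder
    hits = metaphor.results(
        "retrieval augmented generation",
        num_results=5,
        include_domains=["arxiv.org"],  # optional filter, passed straight through
    )
    for hit in hits:  # each dict has title, url, author, published_date
        print(hit["title"], hit["url"])

``results_async`` takes the same arguments and is the awaitable drop-in variant.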
DataForSeoAPIWrapper(BaseModel):\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n default_params: dict = Field(\n default={\n \"location_name\": \"United States\",\n \"language_code\": \"en\",\n \"depth\": 10,\n \"se_name\": \"google\",\n \"se_type\": \"organic\",\n }\n )\n params: dict = Field(default={})\n api_login: Optional[str] = None\n api_password: Optional[str] = None\n json_result_types: Optional[list] = None\n json_result_fields: Optional[list] = None\n top_count: Optional[int] = None\n aiosession: Optional[aiohttp.ClientSession] = None\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that login and password exists in environment.\"\"\"\n login = get_from_dict_or_env(values, \"api_login\", \"DATAFORSEO_LOGIN\")\n password = get_from_dict_or_env(values, \"api_password\", \"DATAFORSEO_PASSWORD\")\n values[\"api_login\"] = login\n values[\"api_password\"] = password\n return values\n[docs] async def arun(self, url: str) -> str:\n \"\"\"Run request to DataForSEO SERP API and parse result async.\"\"\"\n return self._process_response(await self._aresponse_json(url))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/dataforseo_api_search.html"} {"id": "ac98ca32834c-1", "text": "return self._process_response(await self._aresponse_json(url))\n[docs] def run(self, url: str) -> str:\n \"\"\"Run request to DataForSEO SERP API and parse result async.\"\"\"\n return self._process_response(self._response_json(url))\n[docs] def results(self, url: str) -> list:\n res = self._response_json(url)\n return self._filter_results(res)\n[docs] async def aresults(self, url: str) -> list:\n res = await self._aresponse_json(url)\n return self._filter_results(res)\n def _prepare_request(self, keyword: str) -> dict:\n \"\"\"Prepare the request details for the DataForSEO SERP API.\"\"\"\n if self.api_login is None or self.api_password is None:\n raise ValueError(\"api_login or api_password is not provided\")\n cred = base64.b64encode(\n f\"{self.api_login}:{self.api_password}\".encode(\"utf-8\")\n ).decode(\"utf-8\")\n headers = {\"Authorization\": f\"Basic {cred}\", \"Content-Type\": \"application/json\"}\n obj = {\"keyword\": quote(keyword)}\n obj = {**obj, **self.default_params, **self.params}\n data = [obj]\n _url = (\n f\"https://api.dataforseo.com/v3/serp/{obj['se_name']}\"\n f\"/{obj['se_type']}/live/advanced\"\n )\n return {\n \"url\": _url,\n \"headers\": headers,\n \"data\": data,\n }\n def _check_response(self, response: dict) -> dict:\n \"\"\"Check the response from the DataForSEO SERP API for errors.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/dataforseo_api_search.html"} {"id": "ac98ca32834c-2", "text": "\"\"\"Check the response from the DataForSEO SERP API for errors.\"\"\"\n if response.get(\"status_code\") != 20000:\n raise ValueError(\n f\"Got error from DataForSEO SERP API: {response.get('status_message')}\"\n )\n return response\n def _response_json(self, url: str) -> dict:\n \"\"\"Use requests to run request to DataForSEO SERP API and return results.\"\"\"\n request_details = self._prepare_request(url)\n response = requests.post(\n request_details[\"url\"],\n headers=request_details[\"headers\"],\n json=request_details[\"data\"],\n )\n response.raise_for_status()\n return self._check_response(response.json())\n async def _aresponse_json(self, url: str) -> dict:\n \"\"\"Use 
aiohttp to request DataForSEO SERP API and return results async.\"\"\"\n request_details = self._prepare_request(url)\n if not self.aiosession:\n async with aiohttp.ClientSession() as session:\n async with session.post(\n request_details[\"url\"],\n headers=request_details[\"headers\"],\n json=request_details[\"data\"],\n ) as response:\n res = await response.json()\n else:\n async with self.aiosession.post(\n request_details[\"url\"],\n headers=request_details[\"headers\"],\n json=request_details[\"data\"],\n ) as response:\n res = await response.json()\n return self._check_response(res)\n def _filter_results(self, res: dict) -> list:\n output = []\n types = self.json_result_types if self.json_result_types is not None else []\n for task in res.get(\"tasks\", []):\n for result in task.get(\"result\", []):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/dataforseo_api_search.html"} {"id": "ac98ca32834c-3", "text": "for result in task.get(\"result\", []):\n for item in result.get(\"items\", []):\n if len(types) == 0 or item.get(\"type\", \"\") in types:\n self._cleanup_unnecessary_items(item)\n if len(item) != 0:\n output.append(item)\n if self.top_count is not None and len(output) >= self.top_count:\n break\n return output\n def _cleanup_unnecessary_items(self, d: dict) -> dict:\n fields = self.json_result_fields if self.json_result_fields is not None else []\n if len(fields) > 0:\n for k, v in list(d.items()):\n if isinstance(v, dict):\n self._cleanup_unnecessary_items(v)\n if len(v) == 0:\n del d[k]\n elif k not in fields:\n del d[k]\n if \"xpath\" in d:\n del d[\"xpath\"]\n if \"position\" in d:\n del d[\"position\"]\n if \"rectangle\" in d:\n del d[\"rectangle\"]\n for k, v in list(d.items()):\n if isinstance(v, dict):\n self._cleanup_unnecessary_items(v)\n return d\n def _process_response(self, res: dict) -> str:\n \"\"\"Process response from DataForSEO SERP API.\"\"\"\n toret = \"No good search result found\"\n for task in res.get(\"tasks\", []):\n for result in task.get(\"result\", []):\n item_types = result.get(\"item_types\")\n items = result.get(\"items\", [])\n if \"answer_box\" in item_types:\n toret = next(\n item for item in items if item.get(\"type\") == \"answer_box\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/dataforseo_api_search.html"} {"id": "ac98ca32834c-4", "text": "item for item in items if item.get(\"type\") == \"answer_box\"\n ).get(\"text\")\n elif \"knowledge_graph\" in item_types:\n toret = next(\n item for item in items if item.get(\"type\") == \"knowledge_graph\"\n ).get(\"description\")\n elif \"featured_snippet\" in item_types:\n toret = next(\n item for item in items if item.get(\"type\") == \"featured_snippet\"\n ).get(\"description\")\n elif \"shopping\" in item_types:\n toret = next(\n item for item in items if item.get(\"type\") == \"shopping\"\n ).get(\"price\")\n elif \"organic\" in item_types:\n toret = next(\n item for item in items if item.get(\"type\") == \"organic\"\n ).get(\"description\")\n if toret:\n break\n return toret", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/dataforseo_api_search.html"} {"id": "5eb52fd3d4a4-0", "text": "Source code for langchain.utilities.jira\n\"\"\"Util that calls Jira.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.tools.jira.prompt import (\n JIRA_CATCH_ALL_PROMPT,\n JIRA_CONFLUENCE_PAGE_CREATE_PROMPT,\n 
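A usage sketch for ``DataForSeoAPIWrapper``; the login and password are placeholders (they may also come from ``DATAFORSEO_LOGIN`` and ``DATAFORSEO_PASSWORD``), and the filters simply exercise the ``json_result_*`` options defined above:
.. code-block:: python

    from langchain.utilities.dataforseo_api_search import DataForSeoAPIWrapper

    serp = DataForSeoAPIWrapper(
        api_login="login@example.com",  # placeholder
        api_password="password",        # placeholder
        top_count=5,                    # cap how many items results() returns
        json_result_types=["organic"],  # keep only organic items
        json_result_fields=["title", "url", "description"],
    )
    print(serp.run("langchain"))      # single best answer as a string
    print(serp.results("langchain"))  # filtered list of SERP items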
JIRA_GET_ALL_PROJECTS_PROMPT,\n JIRA_ISSUE_CREATE_PROMPT,\n JIRA_JQL_PROMPT,\n)\nfrom langchain.utils import get_from_dict_or_env\n# TODO: think about error handling, more specific api specs, and jql/project limits\n[docs]class JiraAPIWrapper(BaseModel):\n \"\"\"Wrapper for Jira API.\"\"\"\n jira: Any #: :meta private:\n confluence: Any\n jira_username: Optional[str] = None\n jira_api_token: Optional[str] = None\n jira_instance_url: Optional[str] = None\n operations: List[Dict] = [\n {\n \"mode\": \"jql\",\n \"name\": \"JQL Query\",\n \"description\": JIRA_JQL_PROMPT,\n },\n {\n \"mode\": \"get_projects\",\n \"name\": \"Get Projects\",\n \"description\": JIRA_GET_ALL_PROJECTS_PROMPT,\n },\n {\n \"mode\": \"create_issue\",\n \"name\": \"Create Issue\",\n \"description\": JIRA_ISSUE_CREATE_PROMPT,\n },\n {\n \"mode\": \"other\",\n \"name\": \"Catch all Jira API call\",\n \"description\": JIRA_CATCH_ALL_PROMPT,\n },\n {\n \"mode\": \"create_page\",\n \"name\": \"Create confluence page\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/jira.html"} {"id": "5eb52fd3d4a4-1", "text": "\"mode\": \"create_page\",\n \"name\": \"Create confluence page\",\n \"description\": JIRA_CONFLUENCE_PAGE_CREATE_PROMPT,\n },\n ]\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] def list(self) -> List[Dict]:\n return self.operations\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n jira_username = get_from_dict_or_env(values, \"jira_username\", \"JIRA_USERNAME\")\n values[\"jira_username\"] = jira_username\n jira_api_token = get_from_dict_or_env(\n values, \"jira_api_token\", \"JIRA_API_TOKEN\"\n )\n values[\"jira_api_token\"] = jira_api_token\n jira_instance_url = get_from_dict_or_env(\n values, \"jira_instance_url\", \"JIRA_INSTANCE_URL\"\n )\n values[\"jira_instance_url\"] = jira_instance_url\n try:\n from atlassian import Confluence, Jira\n except ImportError:\n raise ImportError(\n \"atlassian-python-api is not installed. 
\"\n \"Please install it with `pip install atlassian-python-api`\"\n )\n jira = Jira(\n url=jira_instance_url,\n username=jira_username,\n password=jira_api_token,\n cloud=True,\n )\n confluence = Confluence(\n url=jira_instance_url,\n username=jira_username,\n password=jira_api_token,\n cloud=True,\n )\n values[\"jira\"] = jira", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/jira.html"} {"id": "5eb52fd3d4a4-2", "text": "cloud=True,\n )\n values[\"jira\"] = jira\n values[\"confluence\"] = confluence\n return values\n[docs] def parse_issues(self, issues: Dict) -> List[dict]:\n parsed = []\n for issue in issues[\"issues\"]:\n key = issue[\"key\"]\n summary = issue[\"fields\"][\"summary\"]\n created = issue[\"fields\"][\"created\"][0:10]\n priority = issue[\"fields\"][\"priority\"][\"name\"]\n status = issue[\"fields\"][\"status\"][\"name\"]\n try:\n assignee = issue[\"fields\"][\"assignee\"][\"displayName\"]\n except Exception:\n assignee = \"None\"\n rel_issues = {}\n for related_issue in issue[\"fields\"][\"issuelinks\"]:\n if \"inwardIssue\" in related_issue.keys():\n rel_type = related_issue[\"type\"][\"inward\"]\n rel_key = related_issue[\"inwardIssue\"][\"key\"]\n rel_summary = related_issue[\"inwardIssue\"][\"fields\"][\"summary\"]\n if \"outwardIssue\" in related_issue.keys():\n rel_type = related_issue[\"type\"][\"outward\"]\n rel_key = related_issue[\"outwardIssue\"][\"key\"]\n rel_summary = related_issue[\"outwardIssue\"][\"fields\"][\"summary\"]\n rel_issues = {\"type\": rel_type, \"key\": rel_key, \"summary\": rel_summary}\n parsed.append(\n {\n \"key\": key,\n \"summary\": summary,\n \"created\": created,\n \"assignee\": assignee,\n \"priority\": priority,\n \"status\": status,\n \"related_issues\": rel_issues,\n }\n )\n return parsed\n[docs] def parse_projects(self, projects: List[dict]) -> List[dict]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/jira.html"} {"id": "5eb52fd3d4a4-3", "text": "parsed = []\n for project in projects:\n id = project[\"id\"]\n key = project[\"key\"]\n name = project[\"name\"]\n type = project[\"projectTypeKey\"]\n style = project[\"style\"]\n parsed.append(\n {\"id\": id, \"key\": key, \"name\": name, \"type\": type, \"style\": style}\n )\n return parsed\n[docs] def search(self, query: str) -> str:\n issues = self.jira.jql(query)\n parsed_issues = self.parse_issues(issues)\n parsed_issues_str = (\n \"Found \" + str(len(parsed_issues)) + \" issues:\\n\" + str(parsed_issues)\n )\n return parsed_issues_str\n[docs] def project(self) -> str:\n projects = self.jira.projects()\n parsed_projects = self.parse_projects(projects)\n parsed_projects_str = (\n \"Found \" + str(len(parsed_projects)) + \" projects:\\n\" + str(parsed_projects)\n )\n return parsed_projects_str\n[docs] def issue_create(self, query: str) -> str:\n try:\n import json\n except ImportError:\n raise ImportError(\n \"json is not installed. Please install it with `pip install json`\"\n )\n params = json.loads(query)\n return self.jira.issue_create(fields=dict(params))\n[docs] def page_create(self, query: str) -> str:\n try:\n import json\n except ImportError:\n raise ImportError(\n \"json is not installed. 
Please install it with `pip install json`\"\n )\n params = json.loads(query)\n return self.confluence.create_page(**dict(params))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/jira.html"} {"id": "5eb52fd3d4a4-4", "text": "params = json.loads(query)\n return self.confluence.create_page(**dict(params))\n[docs] def other(self, query: str) -> str:\n try:\n import json\n except ImportError:\n raise ImportError(\n \"json is not installed. Please install it with `pip install json`\"\n )\n params = json.loads(query)\n jira_function = getattr(self.jira, params[\"function\"])\n return jira_function(*params.get(\"args\", []), **params.get(\"kwargs\", {}))\n[docs] def run(self, mode: str, query: str) -> str:\n if mode == \"jql\":\n return self.search(query)\n elif mode == \"get_projects\":\n return self.project()\n elif mode == \"create_issue\":\n return self.issue_create(query)\n elif mode == \"other\":\n return self.other(query)\n elif mode == \"create_page\":\n return self.page_create(query)\n else:\n raise ValueError(f\"Got unexpected mode {mode}\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/jira.html"} {"id": "a3e1a2f80a75-0", "text": "Source code for langchain.utilities.openweathermap\n\"\"\"Util that calls OpenWeatherMap using PyOWM.\"\"\"\nfrom typing import Any, Dict, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.tools.base import BaseModel\nfrom langchain.utils import get_from_dict_or_env\n[docs]class OpenWeatherMapAPIWrapper(BaseModel):\n \"\"\"Wrapper for OpenWeatherMap API using PyOWM.\n Docs for using:\n 1. Go to OpenWeatherMap and sign up for an API key\n 2. Save your API KEY into OPENWEATHERMAP_API_KEY env variable\n 3. pip install pyowm\n \"\"\"\n owm: Any\n openweathermap_api_key: Optional[str] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key exists in environment.\"\"\"\n openweathermap_api_key = get_from_dict_or_env(\n values, \"openweathermap_api_key\", \"OPENWEATHERMAP_API_KEY\"\n )\n try:\n import pyowm\n except ImportError:\n raise ImportError(\n \"pyowm is not installed. 
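A sketch of driving ``JiraAPIWrapper.run`` in its different modes; the credentials, instance URL, JQL string, and issue fields are all placeholders:
.. code-block:: python

    import json

    from langchain.utilities.jira import JiraAPIWrapper

    jira = JiraAPIWrapper(
        jira_username="me@example.com",  # placeholder
        jira_api_token="api-token",      # placeholder
        jira_instance_url="https://example.atlassian.net",
    )
    print(jira.run("jql", "project = TEST AND status = 'To Do'"))
    print(jira.run("get_projects", ""))
    # create_issue expects a JSON string of Jira issue fields.
    fields = {"summary": "Demo issue", "project": {"key": "TEST"},
              "issuetype": {"name": "Task"}}
    print(jira.run("create_issue", json.dumps(fields)))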
Please install it with `pip install pyowm`\"\n )\n owm = pyowm.OWM(openweathermap_api_key)\n values[\"owm\"] = owm\n return values\n def _format_weather_info(self, location: str, w: Any) -> str:\n detailed_status = w.detailed_status\n wind = w.wind()\n humidity = w.humidity\n temperature = w.temperature(\"celsius\")\n rain = w.rain\n heat_index = w.heat_index", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/openweathermap.html"} {"id": "a3e1a2f80a75-1", "text": "rain = w.rain\n heat_index = w.heat_index\n clouds = w.clouds\n return (\n f\"In {location}, the current weather is as follows:\\n\"\n f\"Detailed status: {detailed_status}\\n\"\n f\"Wind speed: {wind['speed']} m/s, direction: {wind['deg']}\u00b0\\n\"\n f\"Humidity: {humidity}%\\n\"\n f\"Temperature: \\n\"\n f\" - Current: {temperature['temp']}\u00b0C\\n\"\n f\" - High: {temperature['temp_max']}\u00b0C\\n\"\n f\" - Low: {temperature['temp_min']}\u00b0C\\n\"\n f\" - Feels like: {temperature['feels_like']}\u00b0C\\n\"\n f\"Rain: {rain}\\n\"\n f\"Heat index: {heat_index}\\n\"\n f\"Cloud cover: {clouds}%\"\n )\n[docs] def run(self, location: str) -> str:\n \"\"\"Get the current weather information for a specified location.\"\"\"\n mgr = self.owm.weather_manager()\n observation = mgr.weather_at_place(location)\n w = observation.weather\n return self._format_weather_info(location, w)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/openweathermap.html"} {"id": "9c281b984f06-0", "text": "Source code for langchain.utilities.bing_search\n\"\"\"Util that calls Bing Search.\nIn order to set this up, follow instructions at:\nhttps://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e\n\"\"\"\nfrom typing import Dict, List\nimport requests\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.utils import get_from_dict_or_env\n[docs]class BingSearchAPIWrapper(BaseModel):\n \"\"\"Wrapper for Bing Search API.\n In order to set this up, follow instructions at:\n https://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e\n \"\"\"\n bing_subscription_key: str\n bing_search_url: str\n k: int = 10\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n def _bing_search_results(self, search_term: str, count: int) -> List[dict]:\n headers = {\"Ocp-Apim-Subscription-Key\": self.bing_subscription_key}\n params = {\n \"q\": search_term,\n \"count\": count,\n \"textDecorations\": True,\n \"textFormat\": \"HTML\",\n }\n response = requests.get(\n self.bing_search_url, headers=headers, params=params # type: ignore\n )\n response.raise_for_status()\n search_results = response.json()\n return search_results[\"webPages\"][\"value\"]\n[docs] @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and endpoint exists in environment.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/bing_search.html"} {"id": "9c281b984f06-1", "text": "\"\"\"Validate that api key and endpoint exists in environment.\"\"\"\n bing_subscription_key = get_from_dict_or_env(\n values, \"bing_subscription_key\", \"BING_SUBSCRIPTION_KEY\"\n )\n values[\"bing_subscription_key\"] = bing_subscription_key\n bing_search_url = get_from_dict_or_env(\n values,\n \"bing_search_url\",\n \"BING_SEARCH_URL\",\n # default=\"https://api.bing.microsoft.com/v7.0/search\",\n )\n 
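Following the three setup steps in the docstring above, usage reduces to a single call; the location string is illustrative:
.. code-block:: python

    from langchain.utilities.openweathermap import OpenWeatherMapAPIWrapper

    # Assumes OPENWEATHERMAP_API_KEY is set and pyowm is installed.
    weather = OpenWeatherMapAPIWrapper()
    print(weather.run("London,GB"))  # status, wind, humidity, temperatures, ...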
values[\"bing_search_url\"] = bing_search_url\n return values\n[docs] def run(self, query: str) -> str:\n \"\"\"Run query through BingSearch and parse result.\"\"\"\n snippets = []\n results = self._bing_search_results(query, count=self.k)\n if len(results) == 0:\n return \"No good Bing Search Result was found\"\n for result in results:\n snippets.append(result[\"snippet\"])\n return \" \".join(snippets)\n[docs] def results(self, query: str, num_results: int) -> List[Dict]:\n \"\"\"Run query through BingSearch and return metadata.\n Args:\n query: The query to search for.\n num_results: The number of results to return.\n Returns:\n A list of dictionaries with the following keys:\n snippet - The description of the result.\n title - The title of the result.\n link - The link to the result.\n \"\"\"\n metadata_results = []\n results = self._bing_search_results(query, count=num_results)\n if len(results) == 0:\n return [{\"Result\": \"No good Bing Search Result was found\"}]\n for result in results:\n metadata_result = {\n \"snippet\": result[\"snippet\"],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/bing_search.html"} {"id": "9c281b984f06-2", "text": "metadata_result = {\n \"snippet\": result[\"snippet\"],\n \"title\": result[\"name\"],\n \"link\": result[\"url\"],\n }\n metadata_results.append(metadata_result)\n return metadata_results", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/bing_search.html"} {"id": "2cdeb4496db6-0", "text": "Source code for langchain.utilities.bibtex\n\"\"\"Util that calls bibtexparser.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping\nfrom pydantic import BaseModel, Extra, root_validator\nlogger = logging.getLogger(__name__)\nOPTIONAL_FIELDS = [\n \"annotate\",\n \"booktitle\",\n \"editor\",\n \"howpublished\",\n \"journal\",\n \"keywords\",\n \"note\",\n \"organization\",\n \"publisher\",\n \"school\",\n \"series\",\n \"type\",\n \"doi\",\n \"issn\",\n \"isbn\",\n]\n[docs]class BibtexparserWrapper(BaseModel):\n \"\"\"Wrapper around bibtexparser.\n To use, you should have the ``bibtexparser`` python package installed.\n https://bibtexparser.readthedocs.io/en/master/\n This wrapper will use bibtexparser to load a collection of references from\n a bibtex file and fetch document summaries.\n \"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the python package exists in environment.\"\"\"\n try:\n import bibtexparser # noqa\n except ImportError:\n raise ImportError(\n \"Could not import bibtexparser python package. 
\"\n \"Please install it with `pip install bibtexparser`.\"\n )\n return values\n[docs] def load_bibtex_entries(self, path: str) -> List[Dict[str, Any]]:\n \"\"\"Load bibtex entries from the bibtex file at the given path.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/bibtex.html"} {"id": "2cdeb4496db6-1", "text": "\"\"\"Load bibtex entries from the bibtex file at the given path.\"\"\"\n import bibtexparser\n with open(path) as file:\n entries = bibtexparser.load(file).entries\n return entries\n[docs] def get_metadata(\n self, entry: Mapping[str, Any], load_extra: bool = False\n ) -> Dict[str, Any]:\n \"\"\"Get metadata for the given entry.\"\"\"\n publication = entry.get(\"journal\") or entry.get(\"booktitle\")\n if \"url\" in entry:\n url = entry[\"url\"]\n elif \"doi\" in entry:\n url = f'https://doi.org/{entry[\"doi\"]}'\n else:\n url = None\n meta = {\n \"id\": entry.get(\"ID\"),\n \"published_year\": entry.get(\"year\"),\n \"title\": entry.get(\"title\"),\n \"publication\": publication,\n \"authors\": entry.get(\"author\"),\n \"abstract\": entry.get(\"abstract\"),\n \"url\": url,\n }\n if load_extra:\n for field in OPTIONAL_FIELDS:\n meta[field] = entry.get(field)\n return {k: v for k, v in meta.items() if v is not None}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/bibtex.html"} {"id": "efc3669230a1-0", "text": "Source code for langchain.utilities.scenexplain\n\"\"\"Util that calls SceneXplain.\nIn order to set this up, you need API key for the SceneXplain API.\nYou can obtain a key by following the steps below.\n- Sign up for a free account at https://scenex.jina.ai/.\n- Navigate to the API Access page (https://scenex.jina.ai/api) and create a new API key.\n\"\"\"\nfrom typing import Dict\nimport requests\nfrom pydantic import BaseModel, BaseSettings, Field, root_validator\nfrom langchain.utils import get_from_dict_or_env\n[docs]class SceneXplainAPIWrapper(BaseSettings, BaseModel):\n \"\"\"Wrapper for SceneXplain API.\n In order to set this up, you need API key for the SceneXplain API.\n You can obtain a key by following the steps below.\n - Sign up for a free account at https://scenex.jina.ai/.\n - Navigate to the API Access page (https://scenex.jina.ai/api)\n and create a new API key.\n \"\"\"\n scenex_api_key: str = Field(..., env=\"SCENEX_API_KEY\")\n scenex_api_url: str = \"https://api.scenex.jina.ai/v1/describe\"\n def _describe_image(self, image: str) -> str:\n headers = {\n \"x-api-key\": f\"token {self.scenex_api_key}\",\n \"content-type\": \"application/json\",\n }\n payload = {\n \"data\": [\n {\n \"image\": image,\n \"algorithm\": \"Ember\",\n \"languages\": [\"en\"],\n }\n ]\n }", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/scenexplain.html"} {"id": "efc3669230a1-1", "text": "\"languages\": [\"en\"],\n }\n ]\n }\n response = requests.post(self.scenex_api_url, headers=headers, json=payload)\n response.raise_for_status()\n result = response.json().get(\"result\", [])\n img = result[0] if result else {}\n return img.get(\"text\", \"\")\n[docs] @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key exists in environment.\"\"\"\n scenex_api_key = get_from_dict_or_env(\n values, \"scenex_api_key\", \"SCENEX_API_KEY\"\n )\n values[\"scenex_api_key\"] = scenex_api_key\n return values\n[docs] def run(self, image: str) -> str:\n \"\"\"Run SceneXplain image explainer.\"\"\"\n description = 
self._describe_image(image)\n if not description:\n return \"No description found.\"\n return description", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/scenexplain.html"} {"id": "ccde36fc1971-0", "text": "Source code for langchain.utilities.google_serper\n\"\"\"Util that calls Google Search using the Serper.dev API.\"\"\"\nfrom typing import Any, Dict, List, Optional\nimport aiohttp\nimport requests\nfrom pydantic.class_validators import root_validator\nfrom pydantic.main import BaseModel\nfrom typing_extensions import Literal\nfrom langchain.utils import get_from_dict_or_env\n[docs]class GoogleSerperAPIWrapper(BaseModel):\n \"\"\"Wrapper around the Serper.dev Google Search API.\n You can create a free API key at https://serper.dev.\n To use, you should have the environment variable ``SERPER_API_KEY``\n set with your API key, or pass `serper_api_key` as a named parameter\n to the constructor.\n Example:\n .. code-block:: python\n from langchain import GoogleSerperAPIWrapper\n google_serper = GoogleSerperAPIWrapper()\n \"\"\"\n k: int = 10\n gl: str = \"us\"\n hl: str = \"en\"\n # \"places\" and \"images\" is available from Serper but not implemented in the\n # parser of run(). They can be used in results()\n type: Literal[\"news\", \"search\", \"places\", \"images\"] = \"search\"\n result_key_for_type = {\n \"news\": \"news\",\n \"places\": \"places\",\n \"images\": \"images\",\n \"search\": \"organic\",\n }\n tbs: Optional[str] = None\n serper_api_key: Optional[str] = None\n aiosession: Optional[aiohttp.ClientSession] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] @root_validator()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/google_serper.html"} {"id": "ccde36fc1971-1", "text": "arbitrary_types_allowed = True\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key exists in environment.\"\"\"\n serper_api_key = get_from_dict_or_env(\n values, \"serper_api_key\", \"SERPER_API_KEY\"\n )\n values[\"serper_api_key\"] = serper_api_key\n return values\n[docs] def results(self, query: str, **kwargs: Any) -> Dict:\n \"\"\"Run query through GoogleSearch.\"\"\"\n return self._google_serper_api_results(\n query,\n gl=self.gl,\n hl=self.hl,\n num=self.k,\n tbs=self.tbs,\n search_type=self.type,\n **kwargs,\n )\n[docs] def run(self, query: str, **kwargs: Any) -> str:\n \"\"\"Run query through GoogleSearch and parse result.\"\"\"\n results = self._google_serper_api_results(\n query,\n gl=self.gl,\n hl=self.hl,\n num=self.k,\n tbs=self.tbs,\n search_type=self.type,\n **kwargs,\n )\n return self._parse_results(results)\n[docs] async def aresults(self, query: str, **kwargs: Any) -> Dict:\n \"\"\"Run query through GoogleSearch.\"\"\"\n results = await self._async_google_serper_search_results(\n query,\n gl=self.gl,\n hl=self.hl,\n num=self.k,\n search_type=self.type,\n tbs=self.tbs,\n **kwargs,\n )\n return results\n[docs] async def arun(self, query: str, **kwargs: Any) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/google_serper.html"} {"id": "ccde36fc1971-2", "text": "\"\"\"Run query through GoogleSearch and parse result async.\"\"\"\n results = await self._async_google_serper_search_results(\n query,\n gl=self.gl,\n hl=self.hl,\n num=self.k,\n search_type=self.type,\n tbs=self.tbs,\n **kwargs,\n )\n return self._parse_results(results)\n def 
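A usage sketch for ``SceneXplainAPIWrapper``; the key is a placeholder (it can also come from ``SCENEX_API_KEY``, since this is a ``BaseSettings`` field) and the image URL is illustrative:
.. code-block:: python

    from langchain.utilities.scenexplain import SceneXplainAPIWrapper

    scenex = SceneXplainAPIWrapper(scenex_api_key="your-api-key")  # placeholder
    # run() posts the image URL and returns the generated description.
    print(scenex.run("https://example.com/photo.jpg"))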
_parse_snippets(self, results: dict) -> List[str]:\n snippets = []\n if results.get(\"answerBox\"):\n answer_box = results.get(\"answerBox\", {})\n if answer_box.get(\"answer\"):\n return [answer_box.get(\"answer\")]\n elif answer_box.get(\"snippet\"):\n return [answer_box.get(\"snippet\").replace(\"\\n\", \" \")]\n elif answer_box.get(\"snippetHighlighted\"):\n return answer_box.get(\"snippetHighlighted\")\n if results.get(\"knowledgeGraph\"):\n kg = results.get(\"knowledgeGraph\", {})\n title = kg.get(\"title\")\n entity_type = kg.get(\"type\")\n if entity_type:\n snippets.append(f\"{title}: {entity_type}.\")\n description = kg.get(\"description\")\n if description:\n snippets.append(description)\n for attribute, value in kg.get(\"attributes\", {}).items():\n snippets.append(f\"{title} {attribute}: {value}.\")\n for result in results[self.result_key_for_type[self.type]][: self.k]:\n if \"snippet\" in result:\n snippets.append(result[\"snippet\"])\n for attribute, value in result.get(\"attributes\", {}).items():\n snippets.append(f\"{attribute}: {value}.\")\n if len(snippets) == 0:\n return [\"No good Google Search Result was found\"]\n return snippets", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/google_serper.html"} {"id": "ccde36fc1971-3", "text": "return [\"No good Google Search Result was found\"]\n return snippets\n def _parse_results(self, results: dict) -> str:\n return \" \".join(self._parse_snippets(results))\n def _google_serper_api_results(\n self, search_term: str, search_type: str = \"search\", **kwargs: Any\n ) -> dict:\n headers = {\n \"X-API-KEY\": self.serper_api_key or \"\",\n \"Content-Type\": \"application/json\",\n }\n params = {\n \"q\": search_term,\n **{key: value for key, value in kwargs.items() if value is not None},\n }\n response = requests.post(\n f\"https://google.serper.dev/{search_type}\", headers=headers, params=params\n )\n response.raise_for_status()\n search_results = response.json()\n return search_results\n async def _async_google_serper_search_results(\n self, search_term: str, search_type: str = \"search\", **kwargs: Any\n ) -> dict:\n headers = {\n \"X-API-KEY\": self.serper_api_key or \"\",\n \"Content-Type\": \"application/json\",\n }\n url = f\"https://google.serper.dev/{search_type}\"\n params = {\n \"q\": search_term,\n **{key: value for key, value in kwargs.items() if value is not None},\n }\n if not self.aiosession:\n async with aiohttp.ClientSession() as session:\n async with session.post(\n url, params=params, headers=headers, raise_for_status=False\n ) as response:\n search_results = await response.json()\n else:\n async with self.aiosession.post(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/google_serper.html"} {"id": "ccde36fc1971-4", "text": "else:\n async with self.aiosession.post(\n url, params=params, headers=headers, raise_for_status=True\n ) as response:\n search_results = await response.json()\n return search_results", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/google_serper.html"} {"id": "29585180565a-0", "text": "Source code for langchain.utilities.openapi\n\"\"\"Utility functions for parsing an OpenAPI spec.\"\"\"\nimport copy\nimport json\nimport logging\nimport re\nfrom enum import Enum\nfrom pathlib import Path\nfrom typing import Dict, List, Optional, Union\nimport requests\nimport yaml\nfrom openapi_schema_pydantic import (\n Components,\n OpenAPI,\n Operation,\n Parameter,\n PathItem,\n Paths,\n 
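A sketch for ``GoogleSerperAPIWrapper``, assuming ``SERPER_API_KEY`` is set; the query is illustrative, and ``type="news"`` exercises one of the four supported search types:
.. code-block:: python

    from langchain.utilities.google_serper import GoogleSerperAPIWrapper

    serper = GoogleSerperAPIWrapper(k=5, gl="us", hl="en", type="news")
    print(serper.run("large language models"))     # parsed snippets as one string
    raw = serper.results("large language models")  # unparsed JSON payload
    print(raw.get("news", [])[:2])

``arun`` and ``aresults`` mirror these calls over ``aiohttp``.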
Reference,\n RequestBody,\n Schema,\n)\nfrom pydantic import ValidationError\nlogger = logging.getLogger(__name__)\n[docs]class HTTPVerb(str, Enum):\n \"\"\"Enumerator of the HTTP verbs.\"\"\"\n GET = \"get\"\n PUT = \"put\"\n POST = \"post\"\n DELETE = \"delete\"\n OPTIONS = \"options\"\n HEAD = \"head\"\n PATCH = \"patch\"\n TRACE = \"trace\"\n[docs] @classmethod\n def from_str(cls, verb: str) -> \"HTTPVerb\":\n \"\"\"Parse an HTTP verb.\"\"\"\n try:\n return cls(verb)\n except ValueError:\n raise ValueError(f\"Invalid HTTP verb. Valid values are {cls.__members__}\")\n[docs]class OpenAPISpec(OpenAPI):\n \"\"\"OpenAPI Model that removes misformatted parts of the spec.\"\"\"\n @property\n def _paths_strict(self) -> Paths:\n if not self.paths:\n raise ValueError(\"No paths found in spec\")\n return self.paths\n def _get_path_strict(self, path: str) -> PathItem:\n path_item = self._paths_strict.get(path)\n if not path_item:\n raise ValueError(f\"No path found for {path}\")\n return path_item\n @property", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/openapi.html"} {"id": "29585180565a-1", "text": "return path_item\n @property\n def _components_strict(self) -> Components:\n \"\"\"Get components or err.\"\"\"\n if self.components is None:\n raise ValueError(\"No components found in spec. \")\n return self.components\n @property\n def _parameters_strict(self) -> Dict[str, Union[Parameter, Reference]]:\n \"\"\"Get parameters or err.\"\"\"\n parameters = self._components_strict.parameters\n if parameters is None:\n raise ValueError(\"No parameters found in spec. \")\n return parameters\n @property\n def _schemas_strict(self) -> Dict[str, Schema]:\n \"\"\"Get the dictionary of schemas or err.\"\"\"\n schemas = self._components_strict.schemas\n if schemas is None:\n raise ValueError(\"No schemas found in spec. \")\n return schemas\n @property\n def _request_bodies_strict(self) -> Dict[str, Union[RequestBody, Reference]]:\n \"\"\"Get the request body or err.\"\"\"\n request_bodies = self._components_strict.requestBodies\n if request_bodies is None:\n raise ValueError(\"No request body found in spec. 
\")\n return request_bodies\n def _get_referenced_parameter(self, ref: Reference) -> Union[Parameter, Reference]:\n \"\"\"Get a parameter (or nested reference) or err.\"\"\"\n ref_name = ref.ref.split(\"/\")[-1]\n parameters = self._parameters_strict\n if ref_name not in parameters:\n raise ValueError(f\"No parameter found for {ref_name}\")\n return parameters[ref_name]\n def _get_root_referenced_parameter(self, ref: Reference) -> Parameter:\n \"\"\"Get the root reference or err.\"\"\"\n parameter = self._get_referenced_parameter(ref)\n while isinstance(parameter, Reference):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/openapi.html"} {"id": "29585180565a-2", "text": "parameter = self._get_referenced_parameter(ref)\n while isinstance(parameter, Reference):\n parameter = self._get_referenced_parameter(parameter)\n return parameter\n[docs] def get_referenced_schema(self, ref: Reference) -> Schema:\n \"\"\"Get a schema (or nested reference) or err.\"\"\"\n ref_name = ref.ref.split(\"/\")[-1]\n schemas = self._schemas_strict\n if ref_name not in schemas:\n raise ValueError(f\"No schema found for {ref_name}\")\n return schemas[ref_name]\n[docs] def get_schema(self, schema: Union[Reference, Schema]) -> Schema:\n if isinstance(schema, Reference):\n return self.get_referenced_schema(schema)\n return schema\n def _get_root_referenced_schema(self, ref: Reference) -> Schema:\n \"\"\"Get the root reference or err.\"\"\"\n schema = self.get_referenced_schema(ref)\n while isinstance(schema, Reference):\n schema = self.get_referenced_schema(schema)\n return schema\n def _get_referenced_request_body(\n self, ref: Reference\n ) -> Optional[Union[Reference, RequestBody]]:\n \"\"\"Get a request body (or nested reference) or err.\"\"\"\n ref_name = ref.ref.split(\"/\")[-1]\n request_bodies = self._request_bodies_strict\n if ref_name not in request_bodies:\n raise ValueError(f\"No request body found for {ref_name}\")\n return request_bodies[ref_name]\n def _get_root_referenced_request_body(\n self, ref: Reference\n ) -> Optional[RequestBody]:\n \"\"\"Get the root request Body or err.\"\"\"\n request_body = self._get_referenced_request_body(ref)\n while isinstance(request_body, Reference):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/openapi.html"} {"id": "29585180565a-3", "text": "while isinstance(request_body, Reference):\n request_body = self._get_referenced_request_body(request_body)\n return request_body\n @staticmethod\n def _alert_unsupported_spec(obj: dict) -> None:\n \"\"\"Alert if the spec is not supported.\"\"\"\n warning_message = (\n \" This may result in degraded performance.\"\n + \" Convert your OpenAPI spec to 3.1.* spec\"\n + \" for better support.\"\n )\n swagger_version = obj.get(\"swagger\")\n openapi_version = obj.get(\"openapi\")\n if isinstance(openapi_version, str):\n if openapi_version != \"3.1.0\":\n logger.warning(\n f\"Attempting to load an OpenAPI {openapi_version}\"\n f\" spec. {warning_message}\"\n )\n else:\n pass\n elif isinstance(swagger_version, str):\n logger.warning(\n f\"Attempting to load a Swagger {swagger_version}\"\n f\" spec. 
{warning_message}\"\n )\n else:\n raise ValueError(\n \"Attempting to load an unsupported spec:\"\n f\"\\n\\n{obj}\\n{warning_message}\"\n )\n[docs] @classmethod\n def parse_obj(cls, obj: dict) -> \"OpenAPISpec\":\n try:\n cls._alert_unsupported_spec(obj)\n return super().parse_obj(obj)\n except ValidationError as e:\n # We are handling possibly misconfigured specs and want to do a best-effort\n # job to get a reasonable interface out of it.\n new_obj = copy.deepcopy(obj)\n for error in e.errors():\n keys = error[\"loc\"]\n item = new_obj\n for key in keys[:-1]:\n item = item[key]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/openapi.html"} {"id": "29585180565a-4", "text": "for key in keys[:-1]:\n item = item[key]\n item.pop(keys[-1], None)\n return cls.parse_obj(new_obj)\n[docs] @classmethod\n def from_spec_dict(cls, spec_dict: dict) -> \"OpenAPISpec\":\n \"\"\"Get an OpenAPI spec from a dict.\"\"\"\n return cls.parse_obj(spec_dict)\n[docs] @classmethod\n def from_text(cls, text: str) -> \"OpenAPISpec\":\n \"\"\"Get an OpenAPI spec from a text.\"\"\"\n try:\n spec_dict = json.loads(text)\n except json.JSONDecodeError:\n spec_dict = yaml.safe_load(text)\n return cls.from_spec_dict(spec_dict)\n[docs] @classmethod\n def from_file(cls, path: Union[str, Path]) -> \"OpenAPISpec\":\n \"\"\"Get an OpenAPI spec from a file path.\"\"\"\n path_ = path if isinstance(path, Path) else Path(path)\n if not path_.exists():\n raise FileNotFoundError(f\"{path} does not exist\")\n with path_.open(\"r\") as f:\n return cls.from_text(f.read())\n[docs] @classmethod\n def from_url(cls, url: str) -> \"OpenAPISpec\":\n \"\"\"Get an OpenAPI spec from a URL.\"\"\"\n response = requests.get(url)\n return cls.from_text(response.text)\n @property\n def base_url(self) -> str:\n \"\"\"Get the base url.\"\"\"\n return self.servers[0].url\n[docs] def get_methods_for_path(self, path: str) -> List[str]:\n \"\"\"Return a list of valid methods for the specified path.\"\"\"\n path_item = self._get_path_strict(path)\n results = []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/openapi.html"} {"id": "29585180565a-5", "text": "path_item = self._get_path_strict(path)\n results = []\n for method in HTTPVerb:\n operation = getattr(path_item, method.value, None)\n if isinstance(operation, Operation):\n results.append(method.value)\n return results\n[docs] def get_parameters_for_path(self, path: str) -> List[Parameter]:\n path_item = self._get_path_strict(path)\n parameters = []\n if not path_item.parameters:\n return []\n for parameter in path_item.parameters:\n if isinstance(parameter, Reference):\n parameter = self._get_root_referenced_parameter(parameter)\n parameters.append(parameter)\n return parameters\n[docs] def get_operation(self, path: str, method: str) -> Operation:\n \"\"\"Get the operation object for a given path and HTTP method.\"\"\"\n path_item = self._get_path_strict(path)\n operation_obj = getattr(path_item, method, None)\n if not isinstance(operation_obj, Operation):\n raise ValueError(f\"No {method} method found for {path}\")\n return operation_obj\n[docs] def get_parameters_for_operation(self, operation: Operation) -> List[Parameter]:\n \"\"\"Get the components for a given operation.\"\"\"\n parameters = []\n if operation.parameters:\n for parameter in operation.parameters:\n if isinstance(parameter, Reference):\n parameter = self._get_root_referenced_parameter(parameter)\n parameters.append(parameter)\n return parameters\n[docs] def 
get_request_body_for_operation(\n self, operation: Operation\n ) -> Optional[RequestBody]:\n \"\"\"Get the request body for a given operation.\"\"\"\n request_body = operation.requestBody\n if isinstance(request_body, Reference):\n request_body = self._get_root_referenced_request_body(request_body)\n return request_body", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/openapi.html"} {"id": "29585180565a-6", "text": "return request_body\n[docs] @staticmethod\n def get_cleaned_operation_id(operation: Operation, path: str, method: str) -> str:\n \"\"\"Get a cleaned operation id from an operation id.\"\"\"\n operation_id = operation.operationId\n if operation_id is None:\n # Replace all punctuation of any kind with underscore\n path = re.sub(r\"[^a-zA-Z0-9]\", \"_\", path.lstrip(\"/\"))\n operation_id = f\"{path}_{method}\"\n return operation_id.replace(\"-\", \"_\").replace(\".\", \"_\").replace(\"/\", \"_\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/openapi.html"} {"id": "a3b2580b2a9f-0", "text": "Source code for langchain.utilities.pupmed\nimport json\nimport logging\nimport time\nimport urllib.error\nimport urllib.request\nfrom typing import List\nfrom pydantic import BaseModel\nfrom langchain.schema import Document\nlogger = logging.getLogger(__name__)\n[docs]class PubMedAPIWrapper(BaseModel):\n \"\"\"\n Wrapper around PubMed API.\n This wrapper will use the PubMed API to conduct searches and fetch\n document summaries. By default, it will return the document summaries\n of the top-k results of an input search.\n Parameters:\n top_k_results: number of the top-scored document used for the PubMed tool\n load_max_docs: a limit to the number of loaded documents\n load_all_available_meta:\n if True: the `metadata` of the loaded Documents gets all available meta info\n (see https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch)\n if False: the `metadata` gets only the most informative fields.\n \"\"\"\n base_url_esearch = \"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?\"\n base_url_efetch = \"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?\"\n max_retry = 5\n sleep_time = 0.2\n # Default values for the parameters\n top_k_results: int = 3\n load_max_docs: int = 25\n ARXIV_MAX_QUERY_LENGTH = 300\n doc_content_chars_max: int = 2000\n load_all_available_meta: bool = False\n email: str = \"your_email@example.com\"\n[docs] def run(self, query: str) -> str:\n \"\"\"\n Run PubMed search and get the article meta information.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/pupmed.html"} {"id": "a3b2580b2a9f-1", "text": "\"\"\"\n Run PubMed search and get the article meta information.\n See https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch\n It uses only the most informative fields of article meta information.\n \"\"\"\n try:\n # Retrieve the top-k results for the query\n docs = [\n f\"Published: {result['pub_date']}\\nTitle: {result['title']}\\n\"\n f\"Summary: {result['summary']}\"\n for result in self.load(query[: self.ARXIV_MAX_QUERY_LENGTH])\n ]\n # Join the results and limit the character count\n return (\n \"\\n\\n\".join(docs)[: self.doc_content_chars_max]\n if docs\n else \"No good PubMed Result was found\"\n )\n except Exception as ex:\n return f\"PubMed exception: {ex}\"\n[docs] def load(self, query: str) -> List[dict]:\n \"\"\"\n Search PubMed for documents matching the query.\n Return a list of dictionaries containing the document 
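A sketch of loading and inspecting a spec with the classmethods above; the petstore URL is just a well-known public example, not something this module depends on:
.. code-block:: python

    from langchain.utilities.openapi import OpenAPISpec

    spec = OpenAPISpec.from_url("https://petstore3.swagger.io/api/v3/openapi.json")
    print(spec.base_url)  # first server URL
    for path in spec.paths or {}:
        # e.g. "/pet" -> ["put", "post"]
        print(path, spec.get_methods_for_path(path))
    op = spec.get_operation("/pet", "post")
    print(OpenAPISpec.get_cleaned_operation_id(op, "/pet", "post"))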
metadata.\n \"\"\"\n url = (\n self.base_url_esearch\n + \"db=pubmed&term=\"\n + str({urllib.parse.quote(query)})\n + f\"&retmode=json&retmax={self.top_k_results}&usehistory=y\"\n )\n result = urllib.request.urlopen(url)\n text = result.read().decode(\"utf-8\")\n json_text = json.loads(text)\n articles = []\n webenv = json_text[\"esearchresult\"][\"webenv\"]\n for uid in json_text[\"esearchresult\"][\"idlist\"]:\n article = self.retrieve_article(uid, webenv)\n articles.append(article)\n # Convert the list of articles to a JSON string\n return articles", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/pupmed.html"} {"id": "a3b2580b2a9f-2", "text": "# Convert the list of articles to a JSON string\n return articles\n def _transform_doc(self, doc: dict) -> Document:\n summary = doc.pop(\"summary\")\n return Document(page_content=summary, metadata=doc)\n[docs] def load_docs(self, query: str) -> List[Document]:\n document_dicts = self.load(query=query)\n return [self._transform_doc(d) for d in document_dicts]\n[docs] def retrieve_article(self, uid: str, webenv: str) -> dict:\n url = (\n self.base_url_efetch\n + \"db=pubmed&retmode=xml&id=\"\n + uid\n + \"&webenv=\"\n + webenv\n )\n retry = 0\n while True:\n try:\n result = urllib.request.urlopen(url)\n break\n except urllib.error.HTTPError as e:\n if e.code == 429 and retry < self.max_retry:\n # Too Many Requests error\n # wait for an exponentially increasing amount of time\n print(\n f\"Too Many Requests, \"\n f\"waiting for {self.sleep_time:.2f} seconds...\"\n )\n time.sleep(self.sleep_time)\n self.sleep_time *= 2\n retry += 1\n else:\n raise e\n xml_text = result.read().decode(\"utf-8\")\n # Get title\n title = \"\"\n if \"\" in xml_text and \"\" in xml_text:\n start_tag = \"\"\n end_tag = \"\"\n title = xml_text[\n xml_text.index(start_tag) + len(start_tag) : xml_text.index(end_tag)\n ]\n # Get abstract\n abstract = \"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/pupmed.html"} {"id": "a3b2580b2a9f-3", "text": "]\n # Get abstract\n abstract = \"\"\n if \"\" in xml_text and \"\" in xml_text:\n start_tag = \"\"\n end_tag = \"\"\n abstract = xml_text[\n xml_text.index(start_tag) + len(start_tag) : xml_text.index(end_tag)\n ]\n # Get publication date\n pub_date = \"\"\n if \"\" in xml_text and \"\" in xml_text:\n start_tag = \"\"\n end_tag = \"\"\n pub_date = xml_text[\n xml_text.index(start_tag) + len(start_tag) : xml_text.index(end_tag)\n ]\n # Return article as dictionary\n article = {\n \"uid\": uid,\n \"title\": title,\n \"summary\": abstract,\n \"pub_date\": pub_date,\n }\n return article", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/pupmed.html"} {"id": "850a30cbe031-0", "text": "Source code for langchain.utilities.loading\n\"\"\"Utilities for loading configurations from langchain-hub.\"\"\"\nimport os\nimport re\nimport tempfile\nfrom pathlib import Path, PurePosixPath\nfrom typing import Any, Callable, Optional, Set, TypeVar, Union\nfrom urllib.parse import urljoin\nimport requests\nDEFAULT_REF = os.environ.get(\"LANGCHAIN_HUB_DEFAULT_REF\", \"master\")\nURL_BASE = os.environ.get(\n \"LANGCHAIN_HUB_URL_BASE\",\n \"https://raw.githubusercontent.com/hwchase17/langchain-hub/{ref}/\",\n)\nHUB_PATH_RE = re.compile(r\"lc(?P@[^:]+)?://(?P.*)\")\nT = TypeVar(\"T\")\n[docs]def try_load_from_hub(\n path: Union[str, Path],\n loader: Callable[[str], T],\n valid_prefix: str,\n valid_suffixes: Set[str],\n **kwargs: Any,\n) -> 
Source code for langchain.utilities.loading

"""Utilities for loading configurations from langchain-hub."""

import os
import re
import tempfile
from pathlib import Path, PurePosixPath
from typing import Any, Callable, Optional, Set, TypeVar, Union
from urllib.parse import urljoin

import requests

DEFAULT_REF = os.environ.get("LANGCHAIN_HUB_DEFAULT_REF", "master")
URL_BASE = os.environ.get(
    "LANGCHAIN_HUB_URL_BASE",
    "https://raw.githubusercontent.com/hwchase17/langchain-hub/{ref}/",
)
# The named groups below were stripped by the HTML extraction; they are
# restored here ("ref" and "path", matching the unpacking in match.groups()).
HUB_PATH_RE = re.compile(r"lc(?P<ref>@[^:]+)?://(?P<path>.*)")
T = TypeVar("T")


def try_load_from_hub(
    path: Union[str, Path],
    loader: Callable[[str], T],
    valid_prefix: str,
    valid_suffixes: Set[str],
    **kwargs: Any,
) -> Optional[T]:
    """Load configuration from hub. Returns None if path is not a hub path."""
    if not isinstance(path, str) or not (match := HUB_PATH_RE.match(path)):
        return None
    ref, remote_path_str = match.groups()
    ref = ref[1:] if ref else DEFAULT_REF
    remote_path = Path(remote_path_str)
    if remote_path.parts[0] != valid_prefix:
        return None
    if remote_path.suffix[1:] not in valid_suffixes:
        raise ValueError("Unsupported file type.")
    # Using Path with URLs is not recommended, because on Windows
    # the backslash is used as the path separator, which can cause issues
    # when working with URLs that use forward slashes as the path separator.
    # Instead, use PurePosixPath to ensure that forward slashes are used as the
    # path separator, regardless of the operating system.
    full_url = urljoin(URL_BASE.format(ref=ref), PurePosixPath(remote_path).__str__())
    r = requests.get(full_url, timeout=5)
    if r.status_code != 200:
        raise ValueError(f"Could not find file at {full_url}")
    with tempfile.TemporaryDirectory() as tmpdirname:
        file = Path(tmpdirname) / remote_path.name
        with open(file, "wb") as f:
            f.write(r.content)
        return loader(str(file), **kwargs)
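A sketch of how a loader plugs in (editor's illustration; the hub path and the `load_prompt` pairing are assumptions based on how langchain-hub prompt configs are laid out):

    from langchain.prompts import load_prompt
    from langchain.utilities.loading import try_load_from_hub

    # Downloads the file to a temp dir, then hands the local path to the loader.
    prompt = try_load_from_hub(
        "lc://prompts/hello-world/prompt.yaml",
        load_prompt,
        "prompts",
        {"py", "json", "yaml"},
    )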
\"\n \"Please install it with `pip install wolframalpha`\"\n )\n client = wolframalpha.Client(wolfram_alpha_appid)\n values[\"wolfram_client\"] = client\n return values\n[docs] def run(self, query: str) -> str:\n \"\"\"Run query through WolframAlpha and parse result.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/wolfram_alpha.html"} {"id": "b83cc8fb0ee0-1", "text": "\"\"\"Run query through WolframAlpha and parse result.\"\"\"\n res = self.wolfram_client.query(query)\n try:\n assumption = next(res.pods).text\n answer = next(res.results).text\n except StopIteration:\n return \"Wolfram Alpha wasn't able to answer it\"\n if answer is None or answer == \"\":\n # We don't want to return the assumption alone if answer is empty\n return \"No good Wolfram Alpha Result was found\"\n else:\n return f\"Assumption: {assumption} \\nAnswer: {answer}\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/wolfram_alpha.html"} {"id": "b3be72950fc3-0", "text": "Source code for langchain.utilities.powerbi\n\"\"\"Wrapper around a Power BI endpoint.\"\"\"\nfrom __future__ import annotations\nimport asyncio\nimport logging\nimport os\nfrom typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Union\nimport aiohttp\nimport requests\nfrom aiohttp import ServerTimeoutError\nfrom pydantic import BaseModel, Field, root_validator, validator\nfrom requests.exceptions import Timeout\n_LOGGER = logging.getLogger(__name__)\nBASE_URL = os.getenv(\"POWERBI_BASE_URL\", \"https://api.powerbi.com/v1.0/myorg\")\nif TYPE_CHECKING:\n from azure.core.credentials import TokenCredential\n[docs]class PowerBIDataset(BaseModel):\n \"\"\"Create PowerBI engine from dataset ID and credential or token.\n Use either the credential or a supplied token to authenticate.\n If both are supplied the credential is used to generate a token.\n The impersonated_user_name is the UPN of a user to be impersonated.\n If the model is not RLS enabled, this will be ignored.\n \"\"\"\n dataset_id: str\n table_names: List[str]\n group_id: Optional[str] = None\n credential: Optional[TokenCredential] = None\n token: Optional[str] = None\n impersonated_user_name: Optional[str] = None\n sample_rows_in_table_info: int = Field(default=1, gt=0, le=10)\n schemas: Dict[str, str] = Field(default_factory=dict)\n aiosession: Optional[aiohttp.ClientSession] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] @validator(\"table_names\", allow_reuse=True)\n def fix_table_names(cls, table_names: List[str]) -> List[str]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html"} {"id": "b3be72950fc3-1", "text": "def fix_table_names(cls, table_names: List[str]) -> List[str]:\n \"\"\"Fix the table names.\"\"\"\n return [fix_table_name(table) for table in table_names]\n[docs] @root_validator(pre=True, allow_reuse=True)\n def token_or_credential_present(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Validate that at least one of token and credentials is present.\"\"\"\n if \"token\" in values or \"credential\" in values:\n return values\n raise ValueError(\"Please provide either a credential or a token.\")\n @property\n def request_url(self) -> str:\n \"\"\"Get the request url.\"\"\"\n if self.group_id:\n return f\"{BASE_URL}/groups/{self.group_id}/datasets/{self.dataset_id}/executeQueries\" # noqa: E501 # pylint: disable=C0301\n return 
f\"{BASE_URL}/datasets/{self.dataset_id}/executeQueries\" # noqa: E501 # pylint: disable=C0301\n @property\n def headers(self) -> Dict[str, str]:\n \"\"\"Get the token.\"\"\"\n if self.token:\n return {\n \"Content-Type\": \"application/json\",\n \"Authorization\": \"Bearer \" + self.token,\n }\n from azure.core.exceptions import (\n ClientAuthenticationError, # pylint: disable=import-outside-toplevel\n )\n if self.credential:\n try:\n token = self.credential.get_token(\n \"https://analysis.windows.net/powerbi/api/.default\"\n ).token\n return {\n \"Content-Type\": \"application/json\",\n \"Authorization\": \"Bearer \" + token,\n }\n except Exception as exc: # pylint: disable=broad-exception-caught", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html"} {"id": "b3be72950fc3-2", "text": "except Exception as exc: # pylint: disable=broad-exception-caught\n raise ClientAuthenticationError(\n \"Could not get a token from the supplied credentials.\"\n ) from exc\n raise ClientAuthenticationError(\"No credential or token supplied.\")\n[docs] def get_table_names(self) -> Iterable[str]:\n \"\"\"Get names of tables available.\"\"\"\n return self.table_names\n[docs] def get_schemas(self) -> str:\n \"\"\"Get the available schema's.\"\"\"\n if self.schemas:\n return \", \".join([f\"{key}: {value}\" for key, value in self.schemas.items()])\n return \"No known schema's yet. Use the schema_powerbi tool first.\"\n @property\n def table_info(self) -> str:\n \"\"\"Information about all tables in the database.\"\"\"\n return self.get_table_info()\n def _get_tables_to_query(\n self, table_names: Optional[Union[List[str], str]] = None\n ) -> Optional[List[str]]:\n \"\"\"Get the tables names that need to be queried, after checking they exist.\"\"\"\n if table_names is not None:\n if (\n isinstance(table_names, list)\n and len(table_names) > 0\n and table_names[0] != \"\"\n ):\n fixed_tables = [fix_table_name(table) for table in table_names]\n non_existing_tables = [\n table for table in fixed_tables if table not in self.table_names\n ]\n if non_existing_tables:\n _LOGGER.warning(\n \"Table(s) %s not found in dataset.\",\n \", \".join(non_existing_tables),\n )\n tables = [\n table for table in fixed_tables if table not in non_existing_tables\n ]\n return tables if tables else None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html"} {"id": "b3be72950fc3-3", "text": "]\n return tables if tables else None\n if isinstance(table_names, str) and table_names != \"\":\n if table_names not in self.table_names:\n _LOGGER.warning(\"Table %s not found in dataset.\", table_names)\n return None\n return [fix_table_name(table_names)]\n return self.table_names\n def _get_tables_todo(self, tables_todo: List[str]) -> List[str]:\n \"\"\"Get the tables that still need to be queried.\"\"\"\n return [table for table in tables_todo if table not in self.schemas]\n def _get_schema_for_tables(self, table_names: List[str]) -> str:\n \"\"\"Create a string of the table schemas for the supplied tables.\"\"\"\n schemas = [\n schema for table, schema in self.schemas.items() if table in table_names\n ]\n return \", \".join(schemas)\n[docs] def get_table_info(\n self, table_names: Optional[Union[List[str], str]] = None\n ) -> str:\n \"\"\"Get information about specified tables.\"\"\"\n tables_requested = self._get_tables_to_query(table_names)\n if tables_requested is None:\n return \"No (valid) tables requested.\"\n tables_todo = 
    def get_table_info(
        self, table_names: Optional[Union[List[str], str]] = None
    ) -> str:
        """Get information about specified tables."""
        tables_requested = self._get_tables_to_query(table_names)
        if tables_requested is None:
            return "No (valid) tables requested."
        tables_todo = self._get_tables_todo(tables_requested)
        for table in tables_todo:
            self._get_schema(table)
        return self._get_schema_for_tables(tables_requested)

    async def aget_table_info(
        self, table_names: Optional[Union[List[str], str]] = None
    ) -> str:
        """Get information about specified tables."""
        tables_requested = self._get_tables_to_query(table_names)
        if tables_requested is None:
            return "No (valid) tables requested."
        tables_todo = self._get_tables_todo(tables_requested)
        await asyncio.gather(*[self._aget_schema(table) for table in tables_todo])
        return self._get_schema_for_tables(tables_requested)

    def _get_schema(self, table: str) -> None:
        """Get the schema for a table."""
        try:
            result = self.run(
                f"EVALUATE TOPN({self.sample_rows_in_table_info}, {table})"
            )
            self.schemas[table] = json_to_md(result["results"][0]["tables"][0]["rows"])
        except Timeout:
            _LOGGER.warning("Timeout while getting table info for %s", table)
            self.schemas[table] = "unknown"
        except Exception as exc:  # pylint: disable=broad-exception-caught
            _LOGGER.warning("Error while getting table info for %s: %s", table, exc)
            self.schemas[table] = "unknown"

    async def _aget_schema(self, table: str) -> None:
        """Get the schema for a table."""
        try:
            result = await self.arun(
                f"EVALUATE TOPN({self.sample_rows_in_table_info}, {table})"
            )
            self.schemas[table] = json_to_md(result["results"][0]["tables"][0]["rows"])
        except ServerTimeoutError:
            _LOGGER.warning("Timeout while getting table info for %s", table)
            self.schemas[table] = "unknown"
        except Exception as exc:  # pylint: disable=broad-exception-caught
            _LOGGER.warning("Error while getting table info for %s: %s", table, exc)
            self.schemas[table] = "unknown"

    def _create_json_content(self, command: str) -> dict[str, Any]:
        """Create the json content for the request."""
        return {
            "queries": [{"query": rf"{command}"}],
            "impersonatedUserName": self.impersonated_user_name,
            "serializerSettings": {"includeNulls": True},
        }

    def run(self, command: str) -> Any:
        """Execute a DAX command and return a json representing the results."""
        _LOGGER.debug("Running command: %s", command)
        response = requests.post(
            self.request_url,
            json=self._create_json_content(command),
            headers=self.headers,
            timeout=10,
        )
        if response.status_code == 403:
            return (
                "TokenError: Could not login to PowerBI, please check your credentials."
            )
        return response.json()

    async def arun(self, command: str) -> Any:
        """Execute a DAX command and return the result asynchronously."""
        _LOGGER.debug("Running command: %s", command)
        if self.aiosession:
            async with self.aiosession.post(
                self.request_url,
                headers=self.headers,
                json=self._create_json_content(command),
                timeout=10,
            ) as response:
                if response.status == 403:
                    return "TokenError: Could not login to PowerBI, please check your credentials."  # noqa: E501
                response_json = await response.json(content_type=response.content_type)
                return response_json
        async with aiohttp.ClientSession() as session:
            async with session.post(
                self.request_url,
                headers=self.headers,
                json=self._create_json_content(command),
                timeout=10,
            ) as response:
                if response.status == 403:
                    return "TokenError: Could not login to PowerBI, please check your credentials."  # noqa: E501
                response_json = await response.json(content_type=response.content_type)
                return response_json

def json_to_md(
    json_contents: List[Dict[str, Union[str, int, float]]],
    table_name: Optional[str] = None,
) -> str:
    """Converts a JSON object to a markdown table."""
    if len(json_contents) == 0:
        return ""
    output_md = ""
    headers = json_contents[0].keys()
    for header in headers:
        # str.replace returns a new string; the original code discarded the
        # result, so the bracket stripping never took effect.
        header = header.replace("[", ".").replace("]", "")
        if table_name:
            header = header.replace(f"{table_name}.", "")
        output_md += f"| {header} "
    output_md += "|\n"
    for row in json_contents:
        for value in row.values():
            output_md += f"| {value} "
        output_md += "|\n"
    return output_md


def fix_table_name(table: str) -> str:
    """Add single quotes around table names that contain spaces."""
    if " " in table and not table.startswith("'") and not table.endswith("'"):
        return f"'{table}'"
    return table
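A tiny check of the helper above (editor's illustration):

    rows = [{"Table[Name]": "Alice", "Table[Age]": 30}]
    print(json_to_md(rows, table_name="Table"))
    # | Name | Age |
    # | Alice | 30 |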
{}).get(\"results\", [])", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/brave_search.html"} {"id": "97ac1e148602-0", "text": "Source code for langchain.utilities.wikipedia\n\"\"\"Util that calls Wikipedia.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, root_validator\nfrom langchain.schema import Document\nlogger = logging.getLogger(__name__)\nWIKIPEDIA_MAX_QUERY_LENGTH = 300\n[docs]class WikipediaAPIWrapper(BaseModel):\n \"\"\"Wrapper around WikipediaAPI.\n To use, you should have the ``wikipedia`` python package installed.\n This wrapper will use the Wikipedia API to conduct searches and\n fetch page summaries. By default, it will return the page summaries\n of the top-k results.\n It limits the Document content by doc_content_chars_max.\n \"\"\"\n wiki_client: Any #: :meta private:\n top_k_results: int = 3\n lang: str = \"en\"\n load_all_available_meta: bool = False\n doc_content_chars_max: int = 4000\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the python package exists in environment.\"\"\"\n try:\n import wikipedia\n wikipedia.set_lang(values[\"lang\"])\n values[\"wiki_client\"] = wikipedia\n except ImportError:\n raise ImportError(\n \"Could not import wikipedia python package. \"\n \"Please install it with `pip install wikipedia`.\"\n )\n return values\n[docs] def run(self, query: str) -> str:\n \"\"\"Run Wikipedia search and get page summaries.\"\"\"\n page_titles = self.wiki_client.search(query[:WIKIPEDIA_MAX_QUERY_LENGTH])\n summaries = []\n for page_title in page_titles[: self.top_k_results]:\n if wiki_page := self._fetch_page(page_title):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/wikipedia.html"} {"id": "97ac1e148602-1", "text": "if wiki_page := self._fetch_page(page_title):\n if summary := self._formatted_page_summary(page_title, wiki_page):\n summaries.append(summary)\n if not summaries:\n return \"No good Wikipedia Search Result was found\"\n return \"\\n\\n\".join(summaries)[: self.doc_content_chars_max]\n @staticmethod\n def _formatted_page_summary(page_title: str, wiki_page: Any) -> Optional[str]:\n return f\"Page: {page_title}\\nSummary: {wiki_page.summary}\"\n def _page_to_document(self, page_title: str, wiki_page: Any) -> Document:\n main_meta = {\n \"title\": page_title,\n \"summary\": wiki_page.summary,\n \"source\": wiki_page.url,\n }\n add_meta = (\n {\n \"categories\": wiki_page.categories,\n \"page_url\": wiki_page.url,\n \"image_urls\": wiki_page.images,\n \"related_titles\": wiki_page.links,\n \"parent_id\": wiki_page.parent_id,\n \"references\": wiki_page.references,\n \"revision_id\": wiki_page.revision_id,\n \"sections\": wiki_page.sections,\n }\n if self.load_all_available_meta\n else {}\n )\n doc = Document(\n page_content=wiki_page.content[: self.doc_content_chars_max],\n metadata={\n **main_meta,\n **add_meta,\n },\n )\n return doc\n def _fetch_page(self, page: str) -> Optional[str]:\n try:\n return self.wiki_client.page(title=page, auto_suggest=False)\n except (\n self.wiki_client.exceptions.PageError,\n self.wiki_client.exceptions.DisambiguationError,\n ):\n return None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/wikipedia.html"} {"id": "97ac1e148602-2", "text": "self.wiki_client.exceptions.DisambiguationError,\n ):\n return None\n[docs] def load(self, query: str) -> List[Document]:\n \"\"\"\n Run Wikipedia search and get 
Source code for langchain.utilities.wikipedia

"""Util that calls Wikipedia."""

import logging
from typing import Any, Dict, List, Optional

from pydantic import BaseModel, root_validator

from langchain.schema import Document

logger = logging.getLogger(__name__)

WIKIPEDIA_MAX_QUERY_LENGTH = 300


class WikipediaAPIWrapper(BaseModel):
    """Wrapper around WikipediaAPI.
    To use, you should have the ``wikipedia`` python package installed.
    This wrapper will use the Wikipedia API to conduct searches and
    fetch page summaries. By default, it will return the page summaries
    of the top-k results.
    It limits the Document content by doc_content_chars_max.
    """

    wiki_client: Any  #: :meta private:
    top_k_results: int = 3
    lang: str = "en"
    load_all_available_meta: bool = False
    doc_content_chars_max: int = 4000

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that the python package exists in environment."""
        try:
            import wikipedia

            wikipedia.set_lang(values["lang"])
            values["wiki_client"] = wikipedia
        except ImportError:
            raise ImportError(
                "Could not import wikipedia python package. "
                "Please install it with `pip install wikipedia`."
            )
        return values

    def run(self, query: str) -> str:
        """Run Wikipedia search and get page summaries."""
        page_titles = self.wiki_client.search(query[:WIKIPEDIA_MAX_QUERY_LENGTH])
        summaries = []
        for page_title in page_titles[: self.top_k_results]:
            if wiki_page := self._fetch_page(page_title):
                if summary := self._formatted_page_summary(page_title, wiki_page):
                    summaries.append(summary)
        if not summaries:
            return "No good Wikipedia Search Result was found"
        return "\n\n".join(summaries)[: self.doc_content_chars_max]

    @staticmethod
    def _formatted_page_summary(page_title: str, wiki_page: Any) -> Optional[str]:
        return f"Page: {page_title}\nSummary: {wiki_page.summary}"

    def _page_to_document(self, page_title: str, wiki_page: Any) -> Document:
        main_meta = {
            "title": page_title,
            "summary": wiki_page.summary,
            "source": wiki_page.url,
        }
        add_meta = (
            {
                "categories": wiki_page.categories,
                "page_url": wiki_page.url,
                "image_urls": wiki_page.images,
                "related_titles": wiki_page.links,
                "parent_id": wiki_page.parent_id,
                "references": wiki_page.references,
                "revision_id": wiki_page.revision_id,
                "sections": wiki_page.sections,
            }
            if self.load_all_available_meta
            else {}
        )
        doc = Document(
            page_content=wiki_page.content[: self.doc_content_chars_max],
            metadata={
                **main_meta,
                **add_meta,
            },
        )
        return doc

    def _fetch_page(self, page: str) -> Optional[Any]:
        # Returns a WikipediaPage object (or None), so the original
        # Optional[str] annotation was inaccurate.
        try:
            return self.wiki_client.page(title=page, auto_suggest=False)
        except (
            self.wiki_client.exceptions.PageError,
            self.wiki_client.exceptions.DisambiguationError,
        ):
            return None

    def load(self, query: str) -> List[Document]:
        """
        Run Wikipedia search and get the article text plus the meta information.
        Returns: a list of documents.
        """
        page_titles = self.wiki_client.search(query[:WIKIPEDIA_MAX_QUERY_LENGTH])
        docs = []
        for page_title in page_titles[: self.top_k_results]:
            if wiki_page := self._fetch_page(page_title):
                if doc := self._page_to_document(page_title, wiki_page):
                    docs.append(doc)
        return docs
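Illustrative usage (editor's sketch; requires `pip install wikipedia`):

    from langchain.utilities.wikipedia import WikipediaAPIWrapper

    wiki = WikipediaAPIWrapper(top_k_results=1)
    print(wiki.run("Alan Turing"))    # "Page: Alan Turing\nSummary: ..."
    docs = wiki.load("Alan Turing")   # full article text as Documents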
Source code for langchain.utilities.zapier

"""Util that can interact with Zapier NLA.
Full docs here: https://nla.zapier.com/start/
Note: this wrapper currently only implements the `api_key` auth method for testing
and server-side production use cases (using the developer's connected accounts on
Zapier.com)
For use-cases where LangChain + Zapier NLA is powering a user-facing application, and
LangChain needs access to the end-user's connected accounts on Zapier.com, you'll need
to use oauth. Review the full docs above and reach out to nla@zapier.com for
developer support.
"""

import json
from typing import Any, Dict, List, Optional

import aiohttp
import requests
from pydantic import BaseModel, Extra, root_validator
from requests import Request, Session

from langchain.utils import get_from_dict_or_env


class ZapierNLAWrapper(BaseModel):
    """Wrapper for Zapier NLA.
    Full docs here: https://nla.zapier.com/start/
    This wrapper supports both API Key and OAuth Credential auth methods. API Key
    is the fastest way to get started using this wrapper.
    Call this wrapper with either `zapier_nla_api_key` or
    `zapier_nla_oauth_access_token` arguments, or set the `ZAPIER_NLA_API_KEY`
    environment variable. If both arguments are set, the Access Token will take
    precedence.
    For use-cases where LangChain + Zapier NLA is powering a user-facing application,
    and LangChain needs access to the end-user's connected accounts on Zapier.com,
    you'll need to use OAuth. Review the full docs above to learn how to create
    your own provider and generate credentials.
    """

    zapier_nla_api_key: str
    zapier_nla_oauth_access_token: str
    zapier_nla_api_base: str = "https://nla.zapier.com/api/v1/"

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid

    def _format_headers(self) -> Dict[str, str]:
        """Format headers for requests."""
        headers = {
            "Accept": "application/json",
            "Content-Type": "application/json",
        }
        if self.zapier_nla_oauth_access_token:
            headers.update(
                {"Authorization": f"Bearer {self.zapier_nla_oauth_access_token}"}
            )
        else:
            headers.update({"X-API-Key": self.zapier_nla_api_key})
        return headers

    def _get_session(self) -> Session:
        session = requests.Session()
        session.headers.update(self._format_headers())
        return session

    async def _arequest(self, method: str, url: str, **kwargs: Any) -> Dict[str, Any]:
        """Make an async request."""
        async with aiohttp.ClientSession(headers=self._format_headers()) as session:
            async with session.request(method, url, **kwargs) as response:
                response.raise_for_status()
                return await response.json()

    def _create_action_payload(  # type: ignore[no-untyped-def]
        self, instructions: str, params: Optional[Dict] = None, preview_only=False
    ) -> Dict:
        """Create a payload for an action."""
        data = params if params else {}
        data.update(
            {
                "instructions": instructions,
            }
        )
        if preview_only:
            data.update({"preview_only": True})
        return data

    def _create_action_url(self, action_id: str) -> str:
        """Create a url for an action."""
        return self.zapier_nla_api_base + f"exposed/{action_id}/execute/"

    def _create_action_request(  # type: ignore[no-untyped-def]
        self,
        action_id: str,
        instructions: str,
        params: Optional[Dict] = None,
        preview_only=False,
    ) -> Request:
        data = self._create_action_payload(instructions, params, preview_only)
        return Request(
            "POST",
            self._create_action_url(action_id),
            json=data,
        )

    @root_validator(pre=True)
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that an api key exists in the environment."""
        zapier_nla_api_key_default = None
        # If there is an oauth_access_token passed in the values
        # we don't need an nla_api_key; it can be blank
        if "zapier_nla_oauth_access_token" in values:
            zapier_nla_api_key_default = ""
        else:
            values["zapier_nla_oauth_access_token"] = ""
        # we require at least one API Key
        zapier_nla_api_key = get_from_dict_or_env(
            values,
            "zapier_nla_api_key",
            "ZAPIER_NLA_API_KEY",
            zapier_nla_api_key_default,
        )
        values["zapier_nla_api_key"] = zapier_nla_api_key
        return values
    async def alist(self) -> List[Dict]:
        """Returns a list of all exposed (enabled) actions associated with the
        current user (associated with the set api_key). Change your exposed
        actions here: https://nla.zapier.com/demo/start/
        The returned list can be empty if no actions are exposed. Otherwise it
        will contain a list of action objects:
        [{
            "id": str,
            "description": str,
            "params": Dict[str, str]
        }]
        `params` will always contain an `instructions` key, the only required
        param. All others are optional and, if provided, will override any AI guesses
        (see "understanding the AI guessing flow" here:
        https://nla.zapier.com/api/v1/docs)
        """
        response = await self._arequest("GET", self.zapier_nla_api_base + "exposed/")
        return response["results"]

    def list(self) -> List[Dict]:
        """Returns a list of all exposed (enabled) actions associated with the
        current user (associated with the set api_key). Change your exposed
        actions here: https://nla.zapier.com/demo/start/
        The returned list can be empty if no actions are exposed. Otherwise it
        will contain a list of action objects:
        [{
            "id": str,
            "description": str,
            "params": Dict[str, str]
        }]
        `params` will always contain an `instructions` key, the only required
        param. All others are optional and, if provided, will override any AI guesses
        (see "understanding the AI guessing flow" here:
        https://nla.zapier.com/docs/using-the-api#ai-guessing)
        """
        session = self._get_session()
        try:
            response = session.get(self.zapier_nla_api_base + "exposed/")
            response.raise_for_status()
        except requests.HTTPError as http_err:
            if response.status_code == 401:
                if self.zapier_nla_oauth_access_token:
                    raise requests.HTTPError(
                        f"An unauthorized response occurred. Check that your "
                        f"access token is correct and doesn't need to be "
                        f"refreshed. Err: {http_err}"
                    )
                raise requests.HTTPError(
                    f"An unauthorized response occurred. Check that your api "
                    f"key is correct. Err: {http_err}"
                )
            raise http_err
        return response.json()["results"]
    def run(
        self, action_id: str, instructions: str, params: Optional[Dict] = None
    ) -> Dict:
        """Executes an action that is identified by action_id, must be exposed
        (enabled) by the current user (associated with the set api_key). Change
        your exposed actions here: https://nla.zapier.com/demo/start/
        The return JSON is guaranteed to be less than ~500 words (350
        tokens) making it safe to inject into the prompt of another LLM
        call.
        """
        session = self._get_session()
        request = self._create_action_request(action_id, instructions, params)
        response = session.send(session.prepare_request(request))
        response.raise_for_status()
        return response.json()["result"]

    async def arun(
        self, action_id: str, instructions: str, params: Optional[Dict] = None
    ) -> Dict:
        """Executes an action that is identified by action_id, must be exposed
        (enabled) by the current user (associated with the set api_key). Change
        your exposed actions here: https://nla.zapier.com/demo/start/
        The return JSON is guaranteed to be less than ~500 words (350
        tokens) making it safe to inject into the prompt of another LLM
        call.
        """
        response = await self._arequest(
            "POST",
            self._create_action_url(action_id),
            json=self._create_action_payload(instructions, params),
        )
        return response["result"]

    def preview(
        self, action_id: str, instructions: str, params: Optional[Dict] = None
    ) -> Dict:
        """Same as run, but instead of actually executing the action, will
        instead return a preview of params that have been guessed by the AI in
        case you need to explicitly review before executing."""
        session = self._get_session()
        params = params if params else {}
        params.update({"preview_only": True})
        request = self._create_action_request(action_id, instructions, params, True)
        response = session.send(session.prepare_request(request))
        response.raise_for_status()
        return response.json()["input_params"]

    async def apreview(
        self, action_id: str, instructions: str, params: Optional[Dict] = None
    ) -> Dict:
        """Same as run, but instead of actually executing the action, will
        instead return a preview of params that have been guessed by the AI in
        case you need to explicitly review before executing."""
        response = await self._arequest(
            "POST",
            self._create_action_url(action_id),
            json=self._create_action_payload(instructions, params, preview_only=True),
        )
        return response["result"]

    def run_as_str(self, *args, **kwargs) -> str:  # type: ignore[no-untyped-def]
        """Same as run, but returns a stringified version of the JSON for
        inserting back into an LLM."""
        data = self.run(*args, **kwargs)
        return json.dumps(data)

    async def arun_as_str(self, *args, **kwargs) -> str:  # type: ignore[no-untyped-def]
        """Same as run, but returns a stringified version of the JSON for
        inserting back into an LLM."""
        data = await self.arun(*args, **kwargs)
        return json.dumps(data)

    def preview_as_str(self, *args, **kwargs) -> str:  # type: ignore[no-untyped-def]
        """Same as preview, but returns a stringified version of the JSON for
        inserting back into an LLM."""
        data = self.preview(*args, **kwargs)
        return json.dumps(data)

    async def apreview_as_str(  # type: ignore[no-untyped-def]
        self, *args, **kwargs
    ) -> str:
        """Same as preview, but returns a stringified version of the JSON for
        inserting back into an LLM."""
        data = await self.apreview(*args, **kwargs)
        return json.dumps(data)

    def list_as_str(self) -> str:  # type: ignore[no-untyped-def]
        """Same as list, but returns a stringified version of the JSON for
        inserting back into an LLM."""
        actions = self.list()
        return json.dumps(actions)

    async def alist_as_str(self) -> str:  # type: ignore[no-untyped-def]
        """Same as list, but returns a stringified version of the JSON for
        inserting back into an LLM."""
        actions = await self.alist()
        return json.dumps(actions)
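A minimal API-key flow (editor's sketch; the instruction text is made up):

    import os
    from langchain.utilities.zapier import ZapierNLAWrapper

    os.environ["ZAPIER_NLA_API_KEY"] = "<your-nla-api-key>"
    zapier = ZapierNLAWrapper()
    actions = zapier.list()  # [{"id": ..., "description": ..., "params": ...}]
    if actions:
        # Review the AI-guessed params before actually executing.
        preview = zapier.preview(
            actions[0]["id"], instructions="Send a hello email to me@example.com"
        )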
Source code for langchain.utilities.serpapi

"""Chain that calls SerpAPI.
Heavily borrowed from https://github.com/ofirpress/self-ask
"""

import os
import sys
from typing import Any, Dict, Optional, Tuple

import aiohttp
from pydantic import BaseModel, Extra, Field, root_validator

from langchain.utils import get_from_dict_or_env


class HiddenPrints:
    """Context manager to hide prints."""

    def __enter__(self) -> None:
        """Open file to pipe stdout to."""
        self._original_stdout = sys.stdout
        sys.stdout = open(os.devnull, "w")

    def __exit__(self, *_: Any) -> None:
        """Close file that stdout was piped to."""
        sys.stdout.close()
        sys.stdout = self._original_stdout


class SerpAPIWrapper(BaseModel):
    """Wrapper around SerpAPI.
    To use, you should have the ``google-search-results`` python package installed,
    and the environment variable ``SERPAPI_API_KEY`` set with your API key, or pass
    `serpapi_api_key` as a named parameter to the constructor.
    Example:
        .. code-block:: python

            from langchain.utilities import SerpAPIWrapper
            serpapi = SerpAPIWrapper()
    """

    search_engine: Any  #: :meta private:
    params: dict = Field(
        default={
            "engine": "google",
            "google_domain": "google.com",
            "gl": "us",
            "hl": "en",
        }
    )
    serpapi_api_key: Optional[str] = None
    aiosession: Optional[aiohttp.ClientSession] = None

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid
        arbitrary_types_allowed = True

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that the api key and python package exist in the environment."""
        serpapi_api_key = get_from_dict_or_env(
            values, "serpapi_api_key", "SERPAPI_API_KEY"
        )
        values["serpapi_api_key"] = serpapi_api_key
        try:
            from serpapi import GoogleSearch

            values["search_engine"] = GoogleSearch
        except ImportError:
            # An ImportError is more accurate here than the ValueError
            # the original code raised.
            raise ImportError(
                "Could not import serpapi python package. "
                "Please install it with `pip install google-search-results`."
            )
        return values
\"\n \"Please install it with `pip install google-search-results`.\"\n )\n return values\n[docs] async def arun(self, query: str, **kwargs: Any) -> str:\n \"\"\"Run query through SerpAPI and parse result async.\"\"\"\n return self._process_response(await self.aresults(query))\n[docs] def run(self, query: str, **kwargs: Any) -> str:\n \"\"\"Run query through SerpAPI and parse result.\"\"\"\n return self._process_response(self.results(query))\n[docs] def results(self, query: str) -> dict:\n \"\"\"Run query through SerpAPI and return the raw result.\"\"\"\n params = self.get_params(query)\n with HiddenPrints():\n search = self.search_engine(params)\n res = search.get_dict()\n return res\n[docs] async def aresults(self, query: str) -> dict:\n \"\"\"Use aiohttp to run query through SerpAPI and return the results async.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/serpapi.html"} {"id": "2031225a221f-2", "text": "\"\"\"Use aiohttp to run query through SerpAPI and return the results async.\"\"\"\n def construct_url_and_params() -> Tuple[str, Dict[str, str]]:\n params = self.get_params(query)\n params[\"source\"] = \"python\"\n if self.serpapi_api_key:\n params[\"serp_api_key\"] = self.serpapi_api_key\n params[\"output\"] = \"json\"\n url = \"https://serpapi.com/search\"\n return url, params\n url, params = construct_url_and_params()\n if not self.aiosession:\n async with aiohttp.ClientSession() as session:\n async with session.get(url, params=params) as response:\n res = await response.json()\n else:\n async with self.aiosession.get(url, params=params) as response:\n res = await response.json()\n return res\n[docs] def get_params(self, query: str) -> Dict[str, str]:\n \"\"\"Get parameters for SerpAPI.\"\"\"\n _params = {\n \"api_key\": self.serpapi_api_key,\n \"q\": query,\n }\n params = {**self.params, **_params}\n return params\n @staticmethod\n def _process_response(res: dict) -> str:\n \"\"\"Process response from SerpAPI.\"\"\"\n if \"error\" in res.keys():\n raise ValueError(f\"Got error from SerpAPI: {res['error']}\")\n if \"answer_box\" in res.keys() and type(res[\"answer_box\"]) == list:\n res[\"answer_box\"] = res[\"answer_box\"][0]\n if \"answer_box\" in res.keys() and \"answer\" in res[\"answer_box\"].keys():", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/serpapi.html"} {"id": "2031225a221f-3", "text": "toret = res[\"answer_box\"][\"answer\"]\n elif \"answer_box\" in res.keys() and \"snippet\" in res[\"answer_box\"].keys():\n toret = res[\"answer_box\"][\"snippet\"]\n elif (\n \"answer_box\" in res.keys()\n and \"snippet_highlighted_words\" in res[\"answer_box\"].keys()\n ):\n toret = res[\"answer_box\"][\"snippet_highlighted_words\"][0]\n elif (\n \"sports_results\" in res.keys()\n and \"game_spotlight\" in res[\"sports_results\"].keys()\n ):\n toret = res[\"sports_results\"][\"game_spotlight\"]\n elif (\n \"shopping_results\" in res.keys()\n and \"title\" in res[\"shopping_results\"][0].keys()\n ):\n toret = res[\"shopping_results\"][:3]\n elif (\n \"knowledge_graph\" in res.keys()\n and \"description\" in res[\"knowledge_graph\"].keys()\n ):\n toret = res[\"knowledge_graph\"][\"description\"]\n elif \"snippet\" in res[\"organic_results\"][0].keys():\n toret = res[\"organic_results\"][0][\"snippet\"]\n elif \"link\" in res[\"organic_results\"][0].keys():\n toret = res[\"organic_results\"][0][\"link\"]\n else:\n toret = \"No good search result found\"\n return toret", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/utilities/serpapi.html"} {"id": "f0c25eef16de-0", "text": "Source code for langchain.utilities.graphql\nimport json\nfrom typing import Any, Callable, Dict, Optional\nfrom pydantic import BaseModel, Extra, root_validator\n[docs]class GraphQLAPIWrapper(BaseModel):\n \"\"\"Wrapper around GraphQL API.\n To use, you should have the ``gql`` python package installed.\n This wrapper will use the GraphQL API to conduct queries.\n \"\"\"\n custom_headers: Optional[Dict[str, str]] = None\n graphql_endpoint: str\n gql_client: Any #: :meta private:\n gql_function: Callable[[str], Any] #: :meta private:\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the python package exists in the environment.\"\"\"\n try:\n from gql import Client, gql\n from gql.transport.requests import RequestsHTTPTransport\n except ImportError as e:\n raise ImportError(\n \"Could not import gql python package. \"\n f\"Try installing it with `pip install gql`. Received error: {e}\"\n )\n headers = values.get(\"custom_headers\")\n transport = RequestsHTTPTransport(\n url=values[\"graphql_endpoint\"],\n headers=headers,\n )\n client = Client(transport=transport, fetch_schema_from_transport=True)\n values[\"gql_client\"] = client\n values[\"gql_function\"] = gql\n return values\n[docs] def run(self, query: str) -> str:\n \"\"\"Run a GraphQL query and get the results.\"\"\"\n result = self._execute_query(query)\n return json.dumps(result, indent=2)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/graphql.html"} {"id": "f0c25eef16de-1", "text": "return json.dumps(result, indent=2)\n def _execute_query(self, query: str) -> Dict[str, Any]:\n \"\"\"Execute a GraphQL query and return the results.\"\"\"\n document_node = self.gql_function(query)\n result = self.gql_client.execute(document_node)\n return result", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/graphql.html"} {"id": "ba02822ff0a2-0", "text": "Source code for langchain.utilities.arxiv\n\"\"\"Util that calls Arxiv.\"\"\"\nimport logging\nimport os\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, root_validator\nfrom langchain.schema import Document\nlogger = logging.getLogger(__name__)\n[docs]class ArxivAPIWrapper(BaseModel):\n \"\"\"Wrapper around ArxivAPI.\n To use, you should have the ``arxiv`` python package installed.\n https://lukasschwab.me/arxiv.py/index.html\n This wrapper will use the Arxiv API to conduct searches and\n fetch document summaries. 
Source code for langchain.utilities.arxiv

"""Util that calls Arxiv."""

import logging
import os
from typing import Any, Dict, List, Optional

from pydantic import BaseModel, root_validator

from langchain.schema import Document

logger = logging.getLogger(__name__)


class ArxivAPIWrapper(BaseModel):
    """Wrapper around ArxivAPI.
    To use, you should have the ``arxiv`` python package installed.
    https://lukasschwab.me/arxiv.py/index.html
    This wrapper will use the Arxiv API to conduct searches and
    fetch document summaries. By default, it will return the document summaries
    of the top-k results.
    It limits the Document content by doc_content_chars_max.
    Set doc_content_chars_max=None if you don't want to limit the content size.
    Parameters:
        top_k_results: number of top-scored documents used for the arxiv tool
        ARXIV_MAX_QUERY_LENGTH: the cut limit on the query used for the arxiv tool.
        load_max_docs: a limit to the number of loaded documents
        load_all_available_meta:
            if True: the `metadata` of the loaded Documents gets all available meta info
                (see https://lukasschwab.me/arxiv.py/index.html#Result),
            if False: the `metadata` gets only the most informative fields.
    """

    arxiv_search: Any  #: :meta private:
    arxiv_exceptions: Any  #: :meta private:
    top_k_results: int = 3
    ARXIV_MAX_QUERY_LENGTH = 300
    load_max_docs: int = 100
    load_all_available_meta: bool = False
    doc_content_chars_max: Optional[int] = 4000

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that the python package exists in environment."""
        try:
            import arxiv

            values["arxiv_search"] = arxiv.Search
            values["arxiv_exceptions"] = (
                arxiv.ArxivError,
                arxiv.UnexpectedEmptyPageError,
                arxiv.HTTPError,
            )
            values["arxiv_result"] = arxiv.Result
        except ImportError:
            raise ImportError(
                "Could not import arxiv python package. "
                "Please install it with `pip install arxiv`."
            )
        return values

    def run(self, query: str) -> str:
        """
        Run Arxiv search and get the article meta information.
        See https://lukasschwab.me/arxiv.py/index.html#Search
        See https://lukasschwab.me/arxiv.py/index.html#Result
        It uses only the most informative fields of article meta information.
        """
        try:
            results = self.arxiv_search(  # type: ignore
                query[: self.ARXIV_MAX_QUERY_LENGTH], max_results=self.top_k_results
            ).results()
        except self.arxiv_exceptions as ex:
            return f"Arxiv exception: {ex}"
        docs = [
            f"Published: {result.updated.date()}\nTitle: {result.title}\n"
            f"Authors: {', '.join(a.name for a in result.authors)}\n"
            f"Summary: {result.summary}"
            for result in results
        ]
        if docs:
            return "\n\n".join(docs)[: self.doc_content_chars_max]
        else:
            return "No good Arxiv Result was found"
    def load(self, query: str) -> List[Document]:
        """
        Run Arxiv search and get the article texts plus the article meta information.
        See https://lukasschwab.me/arxiv.py/index.html#Search
        Returns: a list of documents with the document.page_content in text format
        """
        try:
            import fitz
        except ImportError:
            raise ImportError(
                "PyMuPDF package not found, please install it with "
                "`pip install pymupdf`"
            )
        try:
            results = self.arxiv_search(  # type: ignore
                query[: self.ARXIV_MAX_QUERY_LENGTH], max_results=self.load_max_docs
            ).results()
        except self.arxiv_exceptions as ex:
            logger.debug("Error on arxiv: %s", ex)
            return []
        docs: List[Document] = []
        for result in results:
            try:
                doc_file_name: str = result.download_pdf()
                with fitz.open(doc_file_name) as doc_file:
                    text: str = "".join(page.get_text() for page in doc_file)
            except FileNotFoundError as f_ex:
                logger.debug(f_ex)
                continue
            if self.load_all_available_meta:
                extra_metadata = {
                    "entry_id": result.entry_id,
                    "published_first_time": str(result.published.date()),
                    "comment": result.comment,
                    "journal_ref": result.journal_ref,
                    "doi": result.doi,
                    "primary_category": result.primary_category,
                    "categories": result.categories,
                    "links": [link.href for link in result.links],
                }
            else:
                extra_metadata = {}
            metadata = {
                "Published": str(result.updated.date()),
                "Title": result.title,
                "Authors": ", ".join(a.name for a in result.authors),
                "Summary": result.summary,
                **extra_metadata,
            }
            doc = Document(
                page_content=text[: self.doc_content_chars_max], metadata=metadata
            )
            docs.append(doc)
            os.remove(doc_file_name)
        return docs
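Quick sketch (editor's illustration; requires `pip install arxiv` and hits the live arXiv API):

    from langchain.utilities.arxiv import ArxivAPIWrapper

    arxiv_wrapper = ArxivAPIWrapper(top_k_results=1)
    # Searching by identifier returns metadata for "Attention Is All You Need".
    print(arxiv_wrapper.run("1706.03762"))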
Source code for langchain.utilities.awslambda

"""Util that calls Lambda."""

import json
from typing import Any, Dict, Optional

from pydantic import BaseModel, Extra, root_validator


class LambdaWrapper(BaseModel):
    """Wrapper for AWS Lambda SDK.
    Docs for using:
    1. pip install boto3
    2. Create a lambda function using the AWS Console or CLI
    3. Run `aws configure` and enter your AWS credentials
    """

    lambda_client: Any  #: :meta private:
    function_name: Optional[str] = None
    awslambda_tool_name: Optional[str] = None
    awslambda_tool_description: Optional[str] = None

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that the python package exists in the environment."""
        try:
            import boto3
        except ImportError:
            raise ImportError(
                "boto3 is not installed. Please install it with `pip install boto3`"
            )
        values["lambda_client"] = boto3.client("lambda")
        return values

    def run(self, query: str) -> str:
        """Invoke Lambda function and parse result."""
        res = self.lambda_client.invoke(
            FunctionName=self.function_name,
            InvocationType="RequestResponse",
            Payload=json.dumps({"body": query}),
        )
        try:
            payload_stream = res["Payload"]
            payload_string = payload_stream.read().decode("utf-8")
            answer = json.loads(payload_string)["body"]
        # A malformed response fails with KeyError or JSONDecodeError;
        # StopIteration (as originally caught) can never be raised here.
        except (KeyError, json.JSONDecodeError):
            return "Failed to parse response from Lambda"
        if answer is None or answer == "":
            # We don't want to return an empty answer
            return "Request failed."
        else:
            return f"Result: {answer}"
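Sketch (editor's illustration; assumes configured AWS credentials and an existing function, here hypothetically named "echo", whose response JSON has a "body" key):

    from langchain.utilities.awslambda import LambdaWrapper

    lam = LambdaWrapper(function_name="echo")
    print(lam.run("ping"))  # "Result: ..." from the function's JSON body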
Got: {e}\"\n raise OutputParserException(msg, llm_output=text)\n[docs] def get_format_instructions(self) -> str:\n schema = self.pydantic_object.schema()\n # Remove extraneous fields.\n reduced_schema = schema\n if \"title\" in reduced_schema:\n del reduced_schema[\"title\"]\n if \"type\" in reduced_schema:\n del reduced_schema[\"type\"]\n # Ensure json in context is well-formed with double quotes.\n schema_str = json.dumps(reduced_schema)\n return PYDANTIC_FORMAT_INSTRUCTIONS.format(schema=schema_str)\n @property", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/pydantic.html"} {"id": "1b88cb8d02a2-1", "text": "return PYDANTIC_FORMAT_INSTRUCTIONS.format(schema=schema_str)\n @property\n def _type(self) -> str:\n return \"pydantic\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/pydantic.html"} {"id": "9457f022034f-0", "text": "Source code for langchain.output_parsers.enum\nfrom enum import Enum\nfrom typing import Any, Dict, List, Type\nfrom pydantic import root_validator\nfrom langchain.schema import BaseOutputParser, OutputParserException\n[docs]class EnumOutputParser(BaseOutputParser):\n enum: Type[Enum]\n[docs] @root_validator()\n def raise_deprecation(cls, values: Dict) -> Dict:\n enum = values[\"enum\"]\n if not all(isinstance(e.value, str) for e in enum):\n raise ValueError(\"Enum values must be strings\")\n return values\n @property\n def _valid_values(self) -> List[str]:\n return [e.value for e in self.enum]\n[docs] def parse(self, response: str) -> Any:\n try:\n return self.enum(response.strip())\n except ValueError:\n raise OutputParserException(\n f\"Response '{response}' is not one of the \"\n f\"expected values: {self._valid_values}\"\n )\n[docs] def get_format_instructions(self) -> str:\n return f\"Select one of the following options: {', '.join(self._valid_values)}\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/enum.html"} {"id": "b82975048e9a-0", "text": "Source code for langchain.output_parsers.list\nfrom __future__ import annotations\nfrom abc import abstractmethod\nfrom typing import List\nfrom langchain.schema import BaseOutputParser\n[docs]class ListOutputParser(BaseOutputParser):\n \"\"\"Class to parse the output of an LLM call to a list.\"\"\"\n @property\n def _type(self) -> str:\n return \"list\"\n[docs] @abstractmethod\n def parse(self, text: str) -> List[str]:\n \"\"\"Parse the output of an LLM call.\"\"\"\n[docs]class CommaSeparatedListOutputParser(ListOutputParser):\n \"\"\"Parse out comma separated lists.\"\"\"\n @property\n def lc_serializable(self) -> bool:\n return True\n[docs] def get_format_instructions(self) -> str:\n return (\n \"Your response should be a list of comma separated values, \"\n \"eg: `foo, bar, baz`\"\n )\n[docs] def parse(self, text: str) -> List[str]:\n \"\"\"Parse the output of an LLM call.\"\"\"\n return text.strip().split(\", \")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/list.html"} {"id": "e5e58bbbf45e-0", "text": "Source code for langchain.output_parsers.boolean\nfrom langchain.schema import BaseOutputParser\n[docs]class BooleanOutputParser(BaseOutputParser[bool]):\n true_val: str = \"YES\"\n false_val: str = \"NO\"\n[docs] def parse(self, text: str) -> bool:\n \"\"\"Parse the output of an LLM call to a boolean.\n Args:\n text: output of language model\n Returns:\n boolean\n \"\"\"\n cleaned_text = text.strip()\n if cleaned_text.upper() not in 
Source code for langchain.output_parsers.enum

from enum import Enum
from typing import Any, Dict, List, Type

from pydantic import root_validator

from langchain.schema import BaseOutputParser, OutputParserException


class EnumOutputParser(BaseOutputParser):
    enum: Type[Enum]

    @root_validator()
    def raise_deprecation(cls, values: Dict) -> Dict:
        # (Historical name; this validator actually checks that all enum
        # values are strings.)
        enum = values["enum"]
        if not all(isinstance(e.value, str) for e in enum):
            raise ValueError("Enum values must be strings")
        return values

    @property
    def _valid_values(self) -> List[str]:
        return [e.value for e in self.enum]

    def parse(self, response: str) -> Any:
        try:
            return self.enum(response.strip())
        except ValueError:
            raise OutputParserException(
                f"Response '{response}' is not one of the "
                f"expected values: {self._valid_values}"
            )

    def get_format_instructions(self) -> str:
        return f"Select one of the following options: {', '.join(self._valid_values)}"


Source code for langchain.output_parsers.list

from __future__ import annotations

from abc import abstractmethod
from typing import List

from langchain.schema import BaseOutputParser


class ListOutputParser(BaseOutputParser):
    """Class to parse the output of an LLM call to a list."""

    @property
    def _type(self) -> str:
        return "list"

    @abstractmethod
    def parse(self, text: str) -> List[str]:
        """Parse the output of an LLM call."""


class CommaSeparatedListOutputParser(ListOutputParser):
    """Parse out comma separated lists."""

    @property
    def lc_serializable(self) -> bool:
        return True

    def get_format_instructions(self) -> str:
        return (
            "Your response should be a list of comma separated values, "
            "eg: `foo, bar, baz`"
        )

    def parse(self, text: str) -> List[str]:
        """Parse the output of an LLM call."""
        return text.strip().split(", ")


Source code for langchain.output_parsers.boolean

from langchain.schema import BaseOutputParser


class BooleanOutputParser(BaseOutputParser[bool]):
    true_val: str = "YES"
    false_val: str = "NO"

    def parse(self, text: str) -> bool:
        """Parse the output of an LLM call to a boolean.
        Args:
            text: output of language model
        Returns:
            boolean
        """
        cleaned_text = text.strip()
        if cleaned_text.upper() not in (self.true_val.upper(), self.false_val.upper()):
            raise ValueError(
                f"BooleanOutputParser expected output value to either be "
                f"{self.true_val} or {self.false_val}. Received {cleaned_text}."
            )
        return cleaned_text.upper() == self.true_val.upper()

    @property
    def _type(self) -> str:
        """Snake-case string identifier for output parser type."""
        return "boolean_output_parser"


Source code for langchain.output_parsers.regex

from __future__ import annotations

import re
from typing import Dict, List, Optional

from langchain.schema import BaseOutputParser


class RegexParser(BaseOutputParser):
    """Class to parse the output into a dictionary."""

    @property
    def lc_serializable(self) -> bool:
        return True

    regex: str
    output_keys: List[str]
    default_output_key: Optional[str] = None

    @property
    def _type(self) -> str:
        """Return the type key."""
        return "regex_parser"

    def parse(self, text: str) -> Dict[str, str]:
        """Parse the output of an LLM call."""
        match = re.search(self.regex, text)
        if match:
            return {key: match.group(i + 1) for i, key in enumerate(self.output_keys)}
        else:
            if self.default_output_key is None:
                raise ValueError(f"Could not parse output: {text}")
            else:
                return {
                    key: text if key == self.default_output_key else ""
                    for key in self.output_keys
                }
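A quick illustration of the regex parser above (editor's sketch):

    parser = RegexParser(
        regex=r"Score: (\d+)\nReason: (.*)",
        output_keys=["score", "reason"],
    )
    print(parser.parse("Score: 8\nReason: well argued"))
    # {'score': '8', 'reason': 'well argued'}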
comma_list(_generate_random_datetime_strings(self.format))\n return f\"\"\"Write a datetime string that matches the \n following pattern: \"{self.format}\". Examples: {examples}\"\"\"\n[docs] def parse(self, response: str) -> datetime:\n try:\n return datetime.strptime(response.strip(), self.format)\n except ValueError as e:\n raise OutputParserException(\n f\"Could not parse datetime string: {response}\"\n ) from e\n @property", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/datetime.html"} {"id": "bdc9e1d644c6-1", "text": ") from e\n @property\n def _type(self) -> str:\n return \"datetime\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/datetime.html"} {"id": "686d8375ce11-0", "text": "Source code for langchain.output_parsers.fix\nfrom __future__ import annotations\nfrom typing import TypeVar\nfrom langchain.chains.llm import LLMChain\nfrom langchain.output_parsers.prompts import NAIVE_FIX_PROMPT\nfrom langchain.schema import BaseOutputParser, BasePromptTemplate, OutputParserException\nfrom langchain.schema.language_model import BaseLanguageModel\nT = TypeVar(\"T\")\n[docs]class OutputFixingParser(BaseOutputParser[T]):\n \"\"\"Wraps a parser and tries to fix parsing errors.\"\"\"\n @property\n def lc_serializable(self) -> bool:\n return True\n parser: BaseOutputParser[T]\n retry_chain: LLMChain\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n parser: BaseOutputParser[T],\n prompt: BasePromptTemplate = NAIVE_FIX_PROMPT,\n ) -> OutputFixingParser[T]:\n chain = LLMChain(llm=llm, prompt=prompt)\n return cls(parser=parser, retry_chain=chain)\n[docs] def parse(self, completion: str) -> T:\n try:\n parsed_completion = self.parser.parse(completion)\n except OutputParserException as e:\n new_completion = self.retry_chain.run(\n instructions=self.parser.get_format_instructions(),\n completion=completion,\n error=repr(e),\n )\n parsed_completion = self.parser.parse(new_completion)\n return parsed_completion\n[docs] def get_format_instructions(self) -> str:\n return self.parser.get_format_instructions()\n @property\n def _type(self) -> str:\n return \"output_fixing\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/fix.html"} {"id": "cca9ea553cf8-0", "text": "Source code for langchain.output_parsers.openai_functions\nimport json\nfrom typing import Any, Dict, List, Type, Union\nfrom pydantic import BaseModel, root_validator\nfrom langchain.schema import (\n BaseLLMOutputParser,\n ChatGeneration,\n Generation,\n OutputParserException,\n)\n[docs]class OutputFunctionsParser(BaseLLMOutputParser[Any]):\n args_only: bool = True\n[docs] def parse_result(self, result: List[Generation]) -> Any:\n generation = result[0]\n if not isinstance(generation, ChatGeneration):\n raise OutputParserException(\n \"This output parser can only be used with a chat generation.\"\n )\n message = generation.message\n try:\n func_call = message.additional_kwargs[\"function_call\"]\n except KeyError as exc:\n raise OutputParserException(f\"Could not parse function call: {exc}\")\n if self.args_only:\n return func_call[\"arguments\"]\n return func_call\n[docs]class JsonOutputFunctionsParser(OutputFunctionsParser):\n[docs] def parse_result(self, result: List[Generation]) -> Any:\n func = super().parse_result(result)\n if self.args_only:\n return json.loads(func)\n func[\"arguments\"] = json.loads(func[\"arguments\"])\n return func\n[docs]class 
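A round-trip through `DatetimeOutputParser` with its default format string, as a sanity check of the behaviour above:

.. code-block:: python

    from datetime import datetime

    from langchain.output_parsers.datetime import DatetimeOutputParser

    parser = DatetimeOutputParser()  # default format: "%Y-%m-%dT%H:%M:%S.%fZ"
    assert parser.parse("2023-06-01T12:30:00.000000Z") == datetime(2023, 6, 1, 12, 30)

    # The instructions embed three randomly generated example strings.
    print(parser.get_format_instructions())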
JsonKeyOutputFunctionsParser(JsonOutputFunctionsParser):\n key_name: str\n[docs] def parse_result(self, result: List[Generation]) -> Any:\n res = super().parse_result(result)\n return res[self.key_name]\n[docs]class PydanticOutputFunctionsParser(OutputFunctionsParser):\n pydantic_schema: Union[Type[BaseModel], Dict[str, Type[BaseModel]]]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/openai_functions.html"} {"id": "cca9ea553cf8-1", "text": "[docs] @root_validator(pre=True)\n def validate_schema(cls, values: Dict) -> Dict:\n schema = values[\"pydantic_schema\"]\n if \"args_only\" not in values:\n values[\"args_only\"] = isinstance(schema, type) and issubclass(\n schema, BaseModel\n )\n elif values[\"args_only\"] and isinstance(schema, Dict):\n raise ValueError(\n \"If multiple pydantic schemas are provided then args_only should be\"\n \" False.\"\n )\n return values\n[docs] def parse_result(self, result: List[Generation]) -> Any:\n _result = super().parse_result(result)\n if self.args_only:\n pydantic_args = self.pydantic_schema.parse_raw(_result) # type: ignore\n else:\n fn_name = _result[\"name\"]\n _args = _result[\"arguments\"]\n pydantic_args = self.pydantic_schema[fn_name].parse_raw(_args) # type: ignore # noqa: E501\n return pydantic_args\n[docs]class PydanticAttrOutputFunctionsParser(PydanticOutputFunctionsParser):\n attr_name: str\n[docs] def parse_result(self, result: List[Generation]) -> Any:\n result = super().parse_result(result)\n return getattr(result, self.attr_name)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/openai_functions.html"} {"id": "193de8832f1d-0", "text": "Source code for langchain.output_parsers.combining\nfrom __future__ import annotations\nfrom typing import Any, Dict, List\nfrom pydantic import root_validator\nfrom langchain.schema import BaseOutputParser\n[docs]class CombiningOutputParser(BaseOutputParser):\n \"\"\"Class to combine multiple output parsers into one.\"\"\"\n @property\n def lc_serializable(self) -> bool:\n return True\n parsers: List[BaseOutputParser]\n[docs] @root_validator()\n def validate_parsers(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Validate the parsers.\"\"\"\n parsers = values[\"parsers\"]\n if len(parsers) < 2:\n raise ValueError(\"Must have at least two parsers\")\n for parser in parsers:\n if parser._type == \"combining\":\n raise ValueError(\"Cannot nest combining parsers\")\n if parser._type == \"list\":\n raise ValueError(\"Cannot combine list parsers\")\n return values\n @property\n def _type(self) -> str:\n \"\"\"Return the type key.\"\"\"\n return \"combining\"\n[docs] def get_format_instructions(self) -> str:\n \"\"\"Instructions on how the LLM output should be formatted.\"\"\"\n initial = f\"For your first output: {self.parsers[0].get_format_instructions()}\"\n subsequent = \"\\n\".join(\n f\"Complete that output fully. 
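A sketch of how the function-calling parsers consume a generation; the `ChatGeneration` is hand-built here to stand in for a real OpenAI function-calling response, and `get_weather` is a hypothetical function name:

.. code-block:: python

    import json

    from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser
    from langchain.schema import AIMessage, ChatGeneration

    message = AIMessage(
        content="",
        additional_kwargs={
            "function_call": {
                "name": "get_weather",
                "arguments": json.dumps({"city": "Paris"}),
            }
        },
    )
    parser = JsonOutputFunctionsParser()  # args_only=True by default
    assert parser.parse_result([ChatGeneration(message=message)]) == {"city": "Paris"}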
Then produce another output, separated by two newline characters: {p.get_format_instructions()}\" # noqa: E501\n for p in self.parsers[1:]\n )\n return f\"{initial}\\n{subsequent}\"\n[docs] def parse(self, text: str) -> Dict[str, Any]:\n \"\"\"Parse the output of an LLM call.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/combining.html"} {"id": "193de8832f1d-1", "text": "\"\"\"Parse the output of an LLM call.\"\"\"\n texts = text.split(\"\\n\\n\")\n output = dict()\n for txt, parser in zip(texts, self.parsers):\n output.update(parser.parse(txt.strip()))\n return output", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/combining.html"} {"id": "4e4fea397aac-0", "text": "Source code for langchain.output_parsers.rail_parser\nfrom __future__ import annotations\nfrom typing import Any, Callable, Dict, Optional\nfrom langchain.schema import BaseOutputParser\n[docs]class GuardrailsOutputParser(BaseOutputParser):\n guard: Any\n api: Optional[Callable]\n args: Any\n kwargs: Any\n @property\n def _type(self) -> str:\n return \"guardrails\"\n[docs] @classmethod\n def from_rail(\n cls,\n rail_file: str,\n num_reasks: int = 1,\n api: Optional[Callable] = None,\n *args: Any,\n **kwargs: Any,\n ) -> GuardrailsOutputParser:\n try:\n from guardrails import Guard\n except ImportError:\n raise ValueError(\n \"guardrails-ai package not installed. \"\n \"Install it by running `pip install guardrails-ai`.\"\n )\n return cls(\n guard=Guard.from_rail(rail_file, num_reasks=num_reasks),\n api=api,\n args=args,\n kwargs=kwargs,\n )\n[docs] @classmethod\n def from_rail_string(\n cls,\n rail_str: str,\n num_reasks: int = 1,\n api: Optional[Callable] = None,\n *args: Any,\n **kwargs: Any,\n ) -> GuardrailsOutputParser:\n try:\n from guardrails import Guard\n except ImportError:\n raise ValueError(\n \"guardrails-ai package not installed. \"\n \"Install it by running `pip install guardrails-ai`.\"\n )\n return cls(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/rail_parser.html"} {"id": "4e4fea397aac-1", "text": ")\n return cls(\n guard=Guard.from_rail_string(rail_str, num_reasks=num_reasks),\n api=api,\n args=args,\n kwargs=kwargs,\n )\n[docs] @classmethod\n def from_pydantic(\n cls,\n output_class: Any,\n num_reasks: int = 1,\n api: Optional[Callable] = None,\n *args: Any,\n **kwargs: Any,\n ) -> GuardrailsOutputParser:\n try:\n from guardrails import Guard\n except ImportError:\n raise ValueError(\n \"guardrails-ai package not installed. 
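For instance, two `RegexParser` instances can be combined; each parser consumes one double-newline-separated block of the completion (the regexes and sample text below are illustrative):

.. code-block:: python

    from langchain.output_parsers.combining import CombiningOutputParser
    from langchain.output_parsers.regex import RegexParser

    combined = CombiningOutputParser(
        parsers=[
            RegexParser(regex=r"Answer: (.*)", output_keys=["answer"]),
            RegexParser(regex=r"Confidence: (\w+)", output_keys=["confidence"]),
        ]
    )
    assert combined.parse("Answer: 42\n\nConfidence: high") == {
        "answer": "42",
        "confidence": "high",
    }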
\"\n \"Install it by running `pip install guardrails-ai`.\"\n )\n return cls(\n guard=Guard.from_pydantic(output_class, \"\", num_reasks=num_reasks),\n api=api,\n args=args,\n kwargs=kwargs,\n )\n[docs] def get_format_instructions(self) -> str:\n return self.guard.raw_prompt.format_instructions\n[docs] def parse(self, text: str) -> Dict:\n return self.guard.parse(text, llm_api=self.api, *self.args, **self.kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/rail_parser.html"} {"id": "93e87a78a128-0", "text": "Source code for langchain.output_parsers.structured\nfrom __future__ import annotations\nfrom typing import Any, List\nfrom pydantic import BaseModel\nfrom langchain.output_parsers.format_instructions import (\n STRUCTURED_FORMAT_INSTRUCTIONS,\n STRUCTURED_FORMAT_SIMPLE_INSTRUCTIONS,\n)\nfrom langchain.output_parsers.json import parse_and_check_json_markdown\nfrom langchain.schema import BaseOutputParser\nline_template = '\\t\"{name}\": {type} // {description}'\n[docs]class ResponseSchema(BaseModel):\n name: str\n description: str\n type: str = \"string\"\ndef _get_sub_string(schema: ResponseSchema) -> str:\n return line_template.format(\n name=schema.name, description=schema.description, type=schema.type\n )\n[docs]class StructuredOutputParser(BaseOutputParser):\n response_schemas: List[ResponseSchema]\n[docs] @classmethod\n def from_response_schemas(\n cls, response_schemas: List[ResponseSchema]\n ) -> StructuredOutputParser:\n return cls(response_schemas=response_schemas)\n[docs] def get_format_instructions(self, only_json: bool = False) -> str:\n \"\"\"\n Method to get the format instructions for the output parser.\n example:\n ```python\n from langchain.output_parsers.structured import (\n StructuredOutputParser, ResponseSchema\n )\n response_schemas = [\n ResponseSchema(\n name=\"foo\",\n description=\"a list of strings\",\n type=\"List[string]\"\n ),\n ResponseSchema(\n name=\"bar\",\n description=\"a string\",\n type=\"string\"\n ),\n ]\n parser = StructuredOutputParser.from_response_schemas(response_schemas)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/structured.html"} {"id": "93e87a78a128-1", "text": "]\n parser = StructuredOutputParser.from_response_schemas(response_schemas)\n print(parser.get_format_instructions())\n output:\n # The output should be a markdown code snippet formatted in the following\n # schema, including the leading and trailing \"```json\" and \"```\":\n #\n # ```json\n # {\n # \"foo\": List[string] // a list of strings\n # \"bar\": string // a string\n # }\n Args:\n only_json (bool): If True, only the json in the markdown code snippet\n will be returned, without the introducing text. 
Defaults to False.\n \"\"\"\n schema_str = \"\\n\".join(\n [_get_sub_string(schema) for schema in self.response_schemas]\n )\n if only_json:\n return STRUCTURED_FORMAT_SIMPLE_INSTRUCTIONS.format(format=schema_str)\n else:\n return STRUCTURED_FORMAT_INSTRUCTIONS.format(format=schema_str)\n[docs] def parse(self, text: str) -> Any:\n expected_keys = [rs.name for rs in self.response_schemas]\n return parse_and_check_json_markdown(text, expected_keys)\n @property\n def _type(self) -> str:\n return \"structured\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/structured.html"} {"id": "4a1bdf04060d-0", "text": "Source code for langchain.output_parsers.json\nfrom __future__ import annotations\nimport json\nimport re\nfrom typing import List\nfrom langchain.schema import OutputParserException\n[docs]def parse_json_markdown(json_string: str) -> dict:\n \"\"\"\n Parse a JSON string from a Markdown string.\n Args:\n json_string: The Markdown string.\n Returns:\n The parsed JSON object as a Python dictionary.\n \"\"\"\n # Try to find JSON string within triple backticks\n match = re.search(r\"```(json)?(.*?)```\", json_string, re.DOTALL)\n # If no match found, assume the entire string is a JSON string\n if match is None:\n json_str = json_string\n else:\n # If match found, use the content within the backticks\n json_str = match.group(2)\n # Strip whitespace and newlines from the start and end\n json_str = json_str.strip()\n # Parse the JSON string into a Python dictionary\n parsed = json.loads(json_str)\n return parsed\n[docs]def parse_and_check_json_markdown(text: str, expected_keys: List[str]) -> dict:\n \"\"\"\n Parse a JSON string from a Markdown string and check that it\n contains the expected keys.\n Args:\n text: The Markdown string.\n expected_keys: The expected keys in the JSON string.\n Returns:\n The parsed JSON object as a Python dictionary.\n \"\"\"\n try:\n json_obj = parse_json_markdown(text)\n except json.JSONDecodeError as e:\n raise OutputParserException(f\"Got invalid JSON object. Error: {e}\")\n for key in expected_keys:\n if key not in json_obj:\n raise OutputParserException(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/json.html"} {"id": "4a1bdf04060d-1", "text": "if key not in json_obj:\n raise OutputParserException(\n f\"Got invalid return object. 
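The docstring above already shows how the format instructions render; parsing the model's reply closes the loop. A small sketch with an invented reply:

.. code-block:: python

    from langchain.output_parsers.structured import ResponseSchema, StructuredOutputParser

    parser = StructuredOutputParser.from_response_schemas(
        [
            ResponseSchema(name="answer", description="the answer to the question"),
            ResponseSchema(name="source", description="a supporting URL"),
        ]
    )
    reply = '```json\n{"answer": "Paris", "source": "https://example.com"}\n```'
    assert parser.parse(reply) == {"answer": "Paris", "source": "https://example.com"}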
Expected key `{key}` \"\n f\"to be present, but got {json_obj}\"\n )\n return json_obj", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/json.html"} {"id": "70e19412bea5-0", "text": "Source code for langchain.output_parsers.regex_dict\nfrom __future__ import annotations\nimport re\nfrom typing import Dict, Optional\nfrom langchain.schema import BaseOutputParser\n[docs]class RegexDictParser(BaseOutputParser):\n \"\"\"Class to parse the output into a dictionary.\"\"\"\n regex_pattern: str = r\"{}:\\s?([^.'\\n']*)\\.?\" # : :meta private:\n output_key_to_format: Dict[str, str]\n no_update_value: Optional[str] = None\n @property\n def _type(self) -> str:\n \"\"\"Return the type key.\"\"\"\n return \"regex_dict_parser\"\n[docs] def parse(self, text: str) -> Dict[str, str]:\n \"\"\"Parse the output of an LLM call.\"\"\"\n result = {}\n for output_key, expected_format in self.output_key_to_format.items():\n specific_regex = self.regex_pattern.format(re.escape(expected_format))\n matches = re.findall(specific_regex, text)\n if not matches:\n raise ValueError(\n f\"No match found for output key: {output_key} with expected format \\\n {expected_format} on text {text}\"\n )\n elif len(matches) > 1:\n raise ValueError(\n f\"Multiple matches found for output key: {output_key} with \\\n expected format {expected_format} on text {text}\"\n )\n elif (\n self.no_update_value is not None and matches[0] == self.no_update_value\n ):\n continue\n else:\n result[output_key] = matches[0]\n return result", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/regex_dict.html"} {"id": "cf8b4f2c25c3-0", "text": "Source code for langchain.output_parsers.retry\nfrom __future__ import annotations\nfrom typing import TypeVar\nfrom langchain.chains.llm import LLMChain\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.schema import (\n BaseOutputParser,\n BasePromptTemplate,\n OutputParserException,\n PromptValue,\n)\nfrom langchain.schema.language_model import BaseLanguageModel\nNAIVE_COMPLETION_RETRY = \"\"\"Prompt:\n{prompt}\nCompletion:\n{completion}\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nPlease try again:\"\"\"\nNAIVE_COMPLETION_RETRY_WITH_ERROR = \"\"\"Prompt:\n{prompt}\nCompletion:\n{completion}\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nDetails: {error}\nPlease try again:\"\"\"\nNAIVE_RETRY_PROMPT = PromptTemplate.from_template(NAIVE_COMPLETION_RETRY)\nNAIVE_RETRY_WITH_ERROR_PROMPT = PromptTemplate.from_template(\n NAIVE_COMPLETION_RETRY_WITH_ERROR\n)\nT = TypeVar(\"T\")\n[docs]class RetryOutputParser(BaseOutputParser[T]):\n \"\"\"Wraps a parser and tries to fix parsing errors.\n Does this by passing the original prompt and the completion to another\n LLM, and telling it the completion did not satisfy criteria in the prompt.\n \"\"\"\n parser: BaseOutputParser[T]\n retry_chain: LLMChain\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n parser: BaseOutputParser[T],\n prompt: BasePromptTemplate = NAIVE_RETRY_PROMPT,\n ) -> RetryOutputParser[T]:\n chain = LLMChain(llm=llm, prompt=prompt)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/retry.html"} {"id": "cf8b4f2c25c3-1", "text": "chain = LLMChain(llm=llm, prompt=prompt)\n return cls(parser=parser, retry_chain=chain)\n[docs] def parse_with_prompt(self, completion: str, prompt_value: PromptValue) -> T:\n try:\n parsed_completion = 
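`RegexDictParser` differs from `RegexParser` in that each output key gets its own labelled pattern built from the default `regex_pattern`; a sketch with invented labels:

.. code-block:: python

    from langchain.output_parsers.regex_dict import RegexDictParser

    parser = RegexDictParser(
        output_key_to_format={"action": "Action", "action_input": "Action Input"},
        no_update_value="N/A",
    )
    text = "Action: search\nAction Input: weather in SF"
    assert parser.parse(text) == {"action": "search", "action_input": "weather in SF"}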
self.parser.parse(completion)\n except OutputParserException:\n new_completion = self.retry_chain.run(\n prompt=prompt_value.to_string(), completion=completion\n )\n parsed_completion = self.parser.parse(new_completion)\n return parsed_completion\n[docs] def parse(self, completion: str) -> T:\n raise NotImplementedError(\n \"This OutputParser can only be called by the `parse_with_prompt` method.\"\n )\n[docs] def get_format_instructions(self) -> str:\n return self.parser.get_format_instructions()\n @property\n def _type(self) -> str:\n return \"retry\"\n[docs]class RetryWithErrorOutputParser(BaseOutputParser[T]):\n \"\"\"Wraps a parser and tries to fix parsing errors.\n Does this by passing the original prompt, the completion, AND the error\n that was raised to another language model and telling it that the completion\n did not work, and raised the given error. Differs from RetryOutputParser\n in that this implementation provides the error that was raised back to the\n LLM, which in theory should give it more information on how to fix it.\n \"\"\"\n parser: BaseOutputParser[T]\n retry_chain: LLMChain\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n parser: BaseOutputParser[T],\n prompt: BasePromptTemplate = NAIVE_RETRY_WITH_ERROR_PROMPT,\n ) -> RetryWithErrorOutputParser[T]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/retry.html"} {"id": "cf8b4f2c25c3-2", "text": ") -> RetryWithErrorOutputParser[T]:\n chain = LLMChain(llm=llm, prompt=prompt)\n return cls(parser=parser, retry_chain=chain)\n[docs] def parse_with_prompt(self, completion: str, prompt_value: PromptValue) -> T:\n try:\n parsed_completion = self.parser.parse(completion)\n except OutputParserException as e:\n new_completion = self.retry_chain.run(\n prompt=prompt_value.to_string(), completion=completion, error=repr(e)\n )\n parsed_completion = self.parser.parse(new_completion)\n return parsed_completion\n[docs] def parse(self, completion: str) -> T:\n raise NotImplementedError(\n \"This OutputParser can only be called by the `parse_with_prompt` method.\"\n )\n[docs] def get_format_instructions(self) -> str:\n return self.parser.get_format_instructions()\n @property\n def _type(self) -> str:\n return \"retry_with_error\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/retry.html"} {"id": "f3e3903e77c8-0", "text": "Source code for langchain.indexes.graph\n\"\"\"Graph Index Creator.\"\"\"\nfrom typing import Optional, Type\nfrom pydantic import BaseModel\nfrom langchain import BasePromptTemplate\nfrom langchain.chains.llm import LLMChain\nfrom langchain.graphs.networkx_graph import NetworkxEntityGraph, parse_triples\nfrom langchain.indexes.prompts.knowledge_triplet_extraction import (\n KNOWLEDGE_TRIPLE_EXTRACTION_PROMPT,\n)\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]class GraphIndexCreator(BaseModel):\n \"\"\"Functionality to create graph index.\"\"\"\n llm: Optional[BaseLanguageModel] = None\n graph_type: Type[NetworkxEntityGraph] = NetworkxEntityGraph\n[docs] def from_text(\n self, text: str, prompt: BasePromptTemplate = KNOWLEDGE_TRIPLE_EXTRACTION_PROMPT\n ) -> NetworkxEntityGraph:\n \"\"\"Create graph index from text.\"\"\"\n if self.llm is None:\n raise ValueError(\"llm should not be None\")\n graph = self.graph_type()\n chain = LLMChain(llm=self.llm, prompt=prompt)\n output = chain.predict(text=text)\n knowledge = parse_triples(output)\n for triple in knowledge:\n graph.add_triple(triple)\n 
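A runnable sketch of the retry flow; `FakeListLLM` stands in for a real model so the "retry" deterministically returns a parseable string (the question and canned answer are ours):

.. code-block:: python

    from langchain.llms.fake import FakeListLLM
    from langchain.output_parsers.datetime import DatetimeOutputParser
    from langchain.output_parsers.retry import RetryWithErrorOutputParser
    from langchain.prompts.prompt import PromptTemplate

    parser = DatetimeOutputParser()
    prompt_value = PromptTemplate.from_template(
        "{question}\n{format_instructions}"
    ).format_prompt(
        question="When did the first moon landing occur?",
        format_instructions=parser.get_format_instructions(),
    )
    retry_parser = RetryWithErrorOutputParser.from_llm(
        llm=FakeListLLM(responses=["1969-07-20T20:17:00.000000Z"]),
        parser=parser,
    )
    # "around 1969" fails to parse, so the retry chain is consulted.
    print(retry_parser.parse_with_prompt("around 1969", prompt_value))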
return graph\n[docs] async def afrom_text(\n self, text: str, prompt: BasePromptTemplate = KNOWLEDGE_TRIPLE_EXTRACTION_PROMPT\n ) -> NetworkxEntityGraph:\n \"\"\"Create graph index from text asynchronously.\"\"\"\n if self.llm is None:\n raise ValueError(\"llm should not be None\")\n graph = self.graph_type()\n chain = LLMChain(llm=self.llm, prompt=prompt)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/indexes/graph.html"} {"id": "f3e3903e77c8-1", "text": "chain = LLMChain(llm=self.llm, prompt=prompt)\n output = await chain.apredict(text=text)\n knowledge = parse_triples(output)\n for triple in knowledge:\n graph.add_triple(triple)\n return graph", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/indexes/graph.html"} {"id": "2a4d828f8c39-0", "text": "Source code for langchain.indexes.vectorstore\nfrom typing import Any, List, Optional, Type\nfrom pydantic import BaseModel, Extra, Field\nfrom langchain.chains.qa_with_sources.retrieval import RetrievalQAWithSourcesChain\nfrom langchain.chains.retrieval_qa.base import RetrievalQA\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.llms.openai import OpenAI\nfrom langchain.schema import Document\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter, TextSplitter\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.chroma import Chroma\ndef _get_default_text_splitter() -> TextSplitter:\n return RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n[docs]class VectorStoreIndexWrapper(BaseModel):\n \"\"\"Wrapper around a vectorstore for easy access.\"\"\"\n vectorstore: VectorStore\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n[docs] def query(\n self, question: str, llm: Optional[BaseLanguageModel] = None, **kwargs: Any\n ) -> str:\n \"\"\"Query the vectorstore.\"\"\"\n llm = llm or OpenAI(temperature=0)\n chain = RetrievalQA.from_chain_type(\n llm, retriever=self.vectorstore.as_retriever(), **kwargs\n )\n return chain.run(question)\n[docs] def query_with_sources(\n self, question: str, llm: Optional[BaseLanguageModel] = None, **kwargs: Any", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/indexes/vectorstore.html"} {"id": "2a4d828f8c39-1", "text": ") -> dict:\n \"\"\"Query the vectorstore and get back sources.\"\"\"\n llm = llm or OpenAI(temperature=0)\n chain = RetrievalQAWithSourcesChain.from_chain_type(\n llm, retriever=self.vectorstore.as_retriever(), **kwargs\n )\n return chain({chain.question_key: question})\n[docs]class VectorstoreIndexCreator(BaseModel):\n \"\"\"Logic for creating indexes.\"\"\"\n vectorstore_cls: Type[VectorStore] = Chroma\n embedding: Embeddings = Field(default_factory=OpenAIEmbeddings)\n text_splitter: TextSplitter = Field(default_factory=_get_default_text_splitter)\n vectorstore_kwargs: dict = Field(default_factory=dict)\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n[docs] def from_loaders(self, loaders: List[BaseLoader]) -> VectorStoreIndexWrapper:\n \"\"\"Create a vectorstore index from loaders.\"\"\"\n docs = []\n for loader in loaders:\n docs.extend(loader.load())\n return self.from_documents(docs)\n[docs] def 
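A usage sketch for `GraphIndexCreator`; it assumes `OPENAI_API_KEY` is set, and the extracted triples naturally depend on the model:

.. code-block:: python

    from langchain.indexes import GraphIndexCreator
    from langchain.llms import OpenAI

    index_creator = GraphIndexCreator(llm=OpenAI(temperature=0))
    graph = index_creator.from_text(
        "Apple is based in Cupertino. Tim Cook leads Apple."
    )
    print(graph.get_triples())  # e.g. [("Apple", "Cupertino", "is based in"), ...]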
from_documents(self, documents: List[Document]) -> VectorStoreIndexWrapper:\n \"\"\"Create a vectorstore index from documents.\"\"\"\n sub_docs = self.text_splitter.split_documents(documents)\n vectorstore = self.vectorstore_cls.from_documents(\n sub_docs, self.embedding, **self.vectorstore_kwargs\n )\n return VectorStoreIndexWrapper(vectorstore=vectorstore)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/indexes/vectorstore.html"} {"id": "be0950adf8b4-0", "text": "Source code for langchain.llms.huggingface_pipeline\n\"\"\"Wrapper around HuggingFace Pipeline APIs.\"\"\"\nimport importlib.util\nimport logging\nfrom typing import Any, List, Mapping, Optional\nfrom pydantic import Extra\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nDEFAULT_MODEL_ID = \"gpt2\"\nDEFAULT_TASK = \"text-generation\"\nVALID_TASKS = (\"text2text-generation\", \"text-generation\", \"summarization\")\nlogger = logging.getLogger(__name__)\n[docs]class HuggingFacePipeline(LLM):\n \"\"\"Wrapper around HuggingFace Pipeline API.\n To use, you should have the ``transformers`` python package installed.\n Only supports `text-generation`, `text2text-generation` and `summarization` for now.\n Example using from_model_id:\n .. code-block:: python\n from langchain.llms import HuggingFacePipeline\n hf = HuggingFacePipeline.from_model_id(\n model_id=\"gpt2\",\n task=\"text-generation\",\n pipeline_kwargs={\"max_new_tokens\": 10},\n )\n Example passing pipeline in directly:\n .. code-block:: python\n from langchain.llms import HuggingFacePipeline\n from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\n model_id = \"gpt2\"\n tokenizer = AutoTokenizer.from_pretrained(model_id)\n model = AutoModelForCausalLM.from_pretrained(model_id)\n pipe = pipeline(\n \"text-generation\", model=model, tokenizer=tokenizer, max_new_tokens=10\n )\n hf = HuggingFacePipeline(pipeline=pipe)\n \"\"\"\n pipeline: Any #: :meta private:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_pipeline.html"} {"id": "be0950adf8b4-1", "text": "\"\"\"\n pipeline: Any #: :meta private:\n model_id: str = DEFAULT_MODEL_ID\n \"\"\"Model name to use.\"\"\"\n model_kwargs: Optional[dict] = None\n \"\"\"Key word arguments passed to the model.\"\"\"\n pipeline_kwargs: Optional[dict] = None\n \"\"\"Key word arguments passed to the pipeline.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @classmethod\n def from_model_id(\n cls,\n model_id: str,\n task: str,\n device: int = -1,\n model_kwargs: Optional[dict] = None,\n pipeline_kwargs: Optional[dict] = None,\n **kwargs: Any,\n ) -> LLM:\n \"\"\"Construct the pipeline object from model_id and task.\"\"\"\n try:\n from transformers import (\n AutoModelForCausalLM,\n AutoModelForSeq2SeqLM,\n AutoTokenizer,\n )\n from transformers import pipeline as hf_pipeline\n except ImportError:\n raise ValueError(\n \"Could not import transformers python package. 
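And a companion sketch for `VectorstoreIndexCreator`; it assumes `OPENAI_API_KEY` is set, the `chromadb` package is installed, and `state_of_the_union.txt` is a stand-in for any local text file:

.. code-block:: python

    from langchain.document_loaders import TextLoader
    from langchain.indexes import VectorstoreIndexCreator

    index = VectorstoreIndexCreator().from_loaders(
        [TextLoader("state_of_the_union.txt")]
    )
    print(index.query("What was said about the economy?"))
    print(index.query_with_sources("What was said about the economy?"))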
\"\n \"Please install it with `pip install transformers`.\"\n )\n _model_kwargs = model_kwargs or {}\n tokenizer = AutoTokenizer.from_pretrained(model_id, **_model_kwargs)\n try:\n if task == \"text-generation\":\n model = AutoModelForCausalLM.from_pretrained(model_id, **_model_kwargs)\n elif task in (\"text2text-generation\", \"summarization\"):\n model = AutoModelForSeq2SeqLM.from_pretrained(model_id, **_model_kwargs)\n else:\n raise ValueError(\n f\"Got invalid task {task}, \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_pipeline.html"} {"id": "be0950adf8b4-2", "text": "else:\n raise ValueError(\n f\"Got invalid task {task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n except ImportError as e:\n raise ValueError(\n f\"Could not load the {task} model due to missing dependencies.\"\n ) from e\n if importlib.util.find_spec(\"torch\") is not None:\n import torch\n cuda_device_count = torch.cuda.device_count()\n if device < -1 or (device >= cuda_device_count):\n raise ValueError(\n f\"Got device=={device}, \"\n f\"device is required to be within [-1, {cuda_device_count})\"\n )\n if device < 0 and cuda_device_count > 0:\n logger.warning(\n \"Device has %d GPUs available. \"\n \"Provide device={deviceId} to `from_model_id` to use available\"\n \"GPUs for execution. deviceId is -1 (default) for CPU and \"\n \"can be a positive integer associated with CUDA device id.\",\n cuda_device_count,\n )\n if \"trust_remote_code\" in _model_kwargs:\n _model_kwargs = {\n k: v for k, v in _model_kwargs.items() if k != \"trust_remote_code\"\n }\n _pipeline_kwargs = pipeline_kwargs or {}\n pipeline = hf_pipeline(\n task=task,\n model=model,\n tokenizer=tokenizer,\n device=device,\n model_kwargs=_model_kwargs,\n **_pipeline_kwargs,\n )\n if pipeline.task not in VALID_TASKS:\n raise ValueError(\n f\"Got invalid task {pipeline.task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n return cls(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_pipeline.html"} {"id": "be0950adf8b4-3", "text": ")\n return cls(\n pipeline=pipeline,\n model_id=model_id,\n model_kwargs=_model_kwargs,\n pipeline_kwargs=_pipeline_kwargs,\n **kwargs,\n )\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"model_id\": self.model_id,\n \"model_kwargs\": self.model_kwargs,\n \"pipeline_kwargs\": self.pipeline_kwargs,\n }\n @property\n def _llm_type(self) -> str:\n return \"huggingface_pipeline\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n response = self.pipeline(prompt)\n if self.pipeline.task == \"text-generation\":\n # Text generation return includes the starter text.\n text = response[0][\"generated_text\"][len(prompt) :]\n elif self.pipeline.task == \"text2text-generation\":\n text = response[0][\"generated_text\"]\n elif self.pipeline.task == \"summarization\":\n text = response[0][\"summary_text\"]\n else:\n raise ValueError(\n f\"Got invalid task {self.pipeline.task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n if stop:\n # This is a bit hacky, but I can't figure out a better way to enforce\n # stop tokens when making calls to huggingface_hub.\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_pipeline.html"} {"id": 
"63bc5a250b53-0", "text": "Source code for langchain.llms.anyscale\n\"\"\"Wrapper around Anyscale\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\n[docs]class Anyscale(LLM):\n \"\"\"Wrapper around Anyscale Services.\n To use, you should have the environment variable ``ANYSCALE_SERVICE_URL``,\n ``ANYSCALE_SERVICE_ROUTE`` and ``ANYSCALE_SERVICE_TOKEN`` set with your Anyscale\n Service, or pass it as a named parameter to the constructor.\n Example:\n .. code-block:: python\n from langchain.llms import Anyscale\n anyscale = Anyscale(anyscale_service_url=\"SERVICE_URL\",\n anyscale_service_route=\"SERVICE_ROUTE\",\n anyscale_service_token=\"SERVICE_TOKEN\")\n # Use Ray for distributed processing\n import ray\n prompt_list=[]\n @ray.remote\n def send_query(llm, prompt):\n resp = llm(prompt)\n return resp\n futures = [send_query.remote(anyscale, prompt) for prompt in prompt_list]\n results = ray.get(futures)\n \"\"\"\n model_kwargs: Optional[dict] = None\n \"\"\"Key word arguments to pass to the model. Reserved for future use\"\"\"\n anyscale_service_url: Optional[str] = None\n anyscale_service_route: Optional[str] = None\n anyscale_service_token: Optional[str] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/anyscale.html"} {"id": "63bc5a250b53-1", "text": "extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n anyscale_service_url = get_from_dict_or_env(\n values, \"anyscale_service_url\", \"ANYSCALE_SERVICE_URL\"\n )\n anyscale_service_route = get_from_dict_or_env(\n values, \"anyscale_service_route\", \"ANYSCALE_SERVICE_ROUTE\"\n )\n anyscale_service_token = get_from_dict_or_env(\n values, \"anyscale_service_token\", \"ANYSCALE_SERVICE_TOKEN\"\n )\n if anyscale_service_url.endswith(\"/\"):\n anyscale_service_url = anyscale_service_url[:-1]\n if not anyscale_service_route.startswith(\"/\"):\n anyscale_service_route = \"/\" + anyscale_service_route\n try:\n anyscale_service_endpoint = f\"{anyscale_service_url}/-/routes\"\n headers = {\"Authorization\": f\"Bearer {anyscale_service_token}\"}\n requests.get(anyscale_service_endpoint, headers=headers)\n except requests.exceptions.RequestException as e:\n raise ValueError(e)\n values[\"anyscale_service_url\"] = anyscale_service_url\n values[\"anyscale_service_route\"] = anyscale_service_route\n values[\"anyscale_service_token\"] = anyscale_service_token\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"anyscale_service_url\": self.anyscale_service_url,\n \"anyscale_service_route\": self.anyscale_service_route,\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"anyscale\"\n def _call(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/anyscale.html"} {"id": "63bc5a250b53-2", "text": "return \"anyscale\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = 
None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to Anyscale Service endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = anyscale(\"Tell me a joke.\")\n \"\"\"\n anyscale_service_endpoint = (\n f\"{self.anyscale_service_url}{self.anyscale_service_route}\"\n )\n headers = {\"Authorization\": f\"Bearer {self.anyscale_service_token}\"}\n body = {\"prompt\": prompt}\n resp = requests.post(anyscale_service_endpoint, headers=headers, json=body)\n if resp.status_code != 200:\n raise ValueError(\n f\"Error returned by service, status code {resp.status_code}\"\n )\n text = resp.text\n if stop is not None:\n # This is a bit hacky, but I can't figure out a better way to enforce\n # stop tokens when making calls to huggingface_hub.\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/anyscale.html"} {"id": "a1c0b6e7f311-0", "text": "Source code for langchain.llms.manifest\n\"\"\"Wrapper around HazyResearch's Manifest library.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\n[docs]class ManifestWrapper(LLM):\n \"\"\"Wrapper around HazyResearch's Manifest library.\"\"\"\n client: Any #: :meta private:\n llm_kwargs: Optional[Dict] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that python package exists in environment.\"\"\"\n try:\n from manifest import Manifest\n if not isinstance(values[\"client\"], Manifest):\n raise ValueError\n except ImportError:\n raise ValueError(\n \"Could not import manifest python package. 
\"\n \"Please install it with `pip install manifest-ml`.\"\n )\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n kwargs = self.llm_kwargs or {}\n return {**self.client.client.get_model_params(), **kwargs}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"manifest\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to LLM through Manifest.\"\"\"\n if stop is not None and len(stop) != 1:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/manifest.html"} {"id": "a1c0b6e7f311-1", "text": "if stop is not None and len(stop) != 1:\n raise NotImplementedError(\n f\"Manifest currently only supports a single stop token, got {stop}\"\n )\n params = self.llm_kwargs or {}\n params = {**params, **kwargs}\n if stop is not None:\n params[\"stop_token\"] = stop\n return self.client.run(prompt, **params)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/manifest.html"} {"id": "ff7d9864388e-0", "text": "Source code for langchain.llms.deepinfra\n\"\"\"Wrapper around DeepInfra APIs.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nDEFAULT_MODEL_ID = \"google/flan-t5-xl\"\n[docs]class DeepInfra(LLM):\n \"\"\"Wrapper around DeepInfra deployed models.\n To use, you should have the ``requests`` python package installed, and the\n environment variable ``DEEPINFRA_API_TOKEN`` set with your API token, or pass\n it as a named parameter to the constructor.\n Only supports `text-generation` and `text2text-generation` for now.\n Example:\n .. code-block:: python\n from langchain.llms import DeepInfra\n di = DeepInfra(model_id=\"google/flan-t5-xl\",\n deepinfra_api_token=\"my-api-key\")\n \"\"\"\n model_id: str = DEFAULT_MODEL_ID\n model_kwargs: Optional[dict] = None\n deepinfra_api_token: Optional[str] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n deepinfra_api_token = get_from_dict_or_env(\n values, \"deepinfra_api_token\", \"DEEPINFRA_API_TOKEN\"\n )\n values[\"deepinfra_api_token\"] = deepinfra_api_token\n return values\n @property", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/deepinfra.html"} {"id": "ff7d9864388e-1", "text": "return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"model_id\": self.model_id},\n **{\"model_kwargs\": self.model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"deepinfra\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to DeepInfra's inference API endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. 
code-block:: python\n response = di(\"Tell me a joke.\")\n \"\"\"\n _model_kwargs = self.model_kwargs or {}\n _model_kwargs = {**_model_kwargs, **kwargs}\n # HTTP headers for authorization\n headers = {\n \"Authorization\": f\"bearer {self.deepinfra_api_token}\",\n \"Content-Type\": \"application/json\",\n }\n try:\n res = requests.post(\n f\"https://api.deepinfra.com/v1/inference/{self.model_id}\",\n headers=headers,\n json={\"input\": prompt, **_model_kwargs},\n )\n except requests.exceptions.RequestException as e:\n raise ValueError(f\"Error raised by inference endpoint: {e}\")\n if res.status_code != 200:\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/deepinfra.html"} {"id": "ff7d9864388e-2", "text": "if res.status_code != 200:\n raise ValueError(\n \"Error raised by inference API HTTP code: %s, %s\"\n % (res.status_code, res.text)\n )\n try:\n t = res.json()\n text = t[\"results\"][0][\"generated_text\"]\n except requests.exceptions.JSONDecodeError as e:\n raise ValueError(\n f\"Error raised by inference API: {e}.\\nResponse: {res.text}\"\n )\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the model parameters\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/deepinfra.html"} {"id": "32ec97716e69-0", "text": "Source code for langchain.llms.forefrontai\n\"\"\"Wrapper around ForefrontAI APIs.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\n[docs]class ForefrontAI(LLM):\n \"\"\"Wrapper around ForefrontAI large language models.\n To use, you should have the environment variable ``FOREFRONTAI_API_KEY``\n set with your API key.\n Example:\n .. 
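Model parameters go through `model_kwargs` and are merged into the request body as shown in `_call` above; a sketch assuming `DEEPINFRA_API_TOKEN` is set and the deployed model accepts these keys:

.. code-block:: python

    from langchain.llms import DeepInfra

    di = DeepInfra(model_id="google/flan-t5-xl")
    di.model_kwargs = {"temperature": 0.7, "max_new_tokens": 100}
    print(di("What is the capital of France?"))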
code-block:: python\n from langchain.llms import ForefrontAI\n forefrontai = ForefrontAI(endpoint_url=\"\")\n \"\"\"\n endpoint_url: str = \"\"\n \"\"\"Model name to use.\"\"\"\n temperature: float = 0.7\n \"\"\"What sampling temperature to use.\"\"\"\n length: int = 256\n \"\"\"The maximum number of tokens to generate in the completion.\"\"\"\n top_p: float = 1.0\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n top_k: int = 40\n \"\"\"The number of highest probability vocabulary tokens to\n keep for top-k-filtering.\"\"\"\n repetition_penalty: int = 1\n \"\"\"Penalizes repeated tokens according to frequency.\"\"\"\n forefrontai_api_key: Optional[str] = None\n base_url: Optional[str] = None\n \"\"\"Base url to use, if None decides based on model name.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/forefrontai.html"} {"id": "32ec97716e69-1", "text": "def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key exists in environment.\"\"\"\n forefrontai_api_key = get_from_dict_or_env(\n values, \"forefrontai_api_key\", \"FOREFRONTAI_API_KEY\"\n )\n values[\"forefrontai_api_key\"] = forefrontai_api_key\n return values\n @property\n def _default_params(self) -> Mapping[str, Any]:\n \"\"\"Get the default parameters for calling ForefrontAI API.\"\"\"\n return {\n \"temperature\": self.temperature,\n \"length\": self.length,\n \"top_p\": self.top_p,\n \"top_k\": self.top_k,\n \"repetition_penalty\": self.repetition_penalty,\n }\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"endpoint_url\": self.endpoint_url}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"forefrontai\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to ForefrontAI's complete endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. 
code-block:: python\n response = forefrontai(\"Tell me a joke.\")\n \"\"\"\n response = requests.post(\n url=self.endpoint_url,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/forefrontai.html"} {"id": "32ec97716e69-2", "text": "\"\"\"\n response = requests.post(\n url=self.endpoint_url,\n headers={\n \"Authorization\": f\"Bearer {self.forefrontai_api_key}\",\n \"Content-Type\": \"application/json\",\n },\n json={\"text\": prompt, **self._default_params, **kwargs},\n )\n response_json = response.json()\n text = response_json[\"result\"][0][\"completion\"]\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the model parameters\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/forefrontai.html"} {"id": "ce6c38484124-0", "text": "Source code for langchain.llms.sagemaker_endpoint\n\"\"\"Wrapper around Sagemaker InvokeEndpoint API.\"\"\"\nfrom abc import abstractmethod\nfrom typing import Any, Dict, Generic, List, Mapping, Optional, TypeVar, Union\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nINPUT_TYPE = TypeVar(\"INPUT_TYPE\", bound=Union[str, List[str]])\nOUTPUT_TYPE = TypeVar(\"OUTPUT_TYPE\", bound=Union[str, List[List[float]]])\n[docs]class ContentHandlerBase(Generic[INPUT_TYPE, OUTPUT_TYPE]):\n \"\"\"A handler class to transform input from LLM to a\n format that SageMaker endpoint expects. Similarly,\n the class also handles transforming output from the\n SageMaker endpoint to a format that LLM class expects.\n \"\"\"\n \"\"\"\n Example:\n .. code-block:: python\n class ContentHandler(ContentHandlerBase):\n content_type = \"application/json\"\n accepts = \"application/json\"\n def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:\n input_str = json.dumps({\"prompt\": prompt, **model_kwargs})\n return input_str.encode('utf-8')\n \n def transform_output(self, output: bytes) -> str:\n response_json = json.loads(output.read().decode(\"utf-8\"))\n return response_json[0][\"generated_text\"]\n \"\"\"\n content_type: Optional[str] = \"text/plain\"\n \"\"\"The MIME type of the input data passed to endpoint\"\"\"\n accepts: Optional[str] = \"text/plain\"\n \"\"\"The MIME type of the response data returned from endpoint\"\"\"\n[docs] @abstractmethod", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/sagemaker_endpoint.html"} {"id": "ce6c38484124-1", "text": "[docs] @abstractmethod\n def transform_input(self, prompt: INPUT_TYPE, model_kwargs: Dict) -> bytes:\n \"\"\"Transforms the input to a format that model can accept\n as the request Body. 
Should return bytes or seekable file\n like object in the format specified in the content_type\n request header.\n \"\"\"\n[docs] @abstractmethod\n def transform_output(self, output: bytes) -> OUTPUT_TYPE:\n \"\"\"Transforms the output from the model to string that\n the LLM class expects.\n \"\"\"\n[docs]class LLMContentHandler(ContentHandlerBase[str, str]):\n \"\"\"Content handler for LLM class.\"\"\"\n[docs]class SagemakerEndpoint(LLM):\n \"\"\"Wrapper around custom Sagemaker Inference Endpoints.\n To use, you must supply the endpoint name from your deployed\n Sagemaker model & the region where it is deployed.\n To authenticate, the AWS client uses the following methods to\n automatically load credentials:\n https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\n If a specific credential profile should be used, you must pass\n the name of the profile from the ~/.aws/credentials file that is to be used.\n Make sure the credentials / roles used have the required policies to\n access the Sagemaker endpoint.\n See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html\n \"\"\"\n \"\"\"\n Example:\n .. code-block:: python\n from langchain import SagemakerEndpoint\n endpoint_name = (\n \"my-endpoint-name\"\n )\n region_name = (\n \"us-west-2\"\n )\n credentials_profile_name = (\n \"default\"\n )\n se = SagemakerEndpoint(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/sagemaker_endpoint.html"} {"id": "ce6c38484124-2", "text": "\"default\"\n )\n se = SagemakerEndpoint(\n endpoint_name=endpoint_name,\n region_name=region_name,\n credentials_profile_name=credentials_profile_name\n )\n \"\"\"\n client: Any #: :meta private:\n endpoint_name: str = \"\"\n \"\"\"The name of the endpoint from the deployed Sagemaker model.\n Must be unique within an AWS Region.\"\"\"\n region_name: str = \"\"\n \"\"\"The aws region where the Sagemaker model is deployed, eg. `us-west-2`.\"\"\"\n credentials_profile_name: Optional[str] = None\n \"\"\"The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which\n has either access keys or role information specified.\n If not specified, the default credential profile or, if on an EC2 instance,\n credentials from IMDS will be used.\n See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\n \"\"\"\n content_handler: LLMContentHandler\n \"\"\"The content handler class that provides the input and\n output transform functions to handle formats between LLM\n and the endpoint.\n \"\"\"\n \"\"\"\n Example:\n .. code-block:: python\n from langchain.llms.sagemaker_endpoint import LLMContentHandler\n class ContentHandler(LLMContentHandler):\n content_type = \"application/json\"\n accepts = \"application/json\"\n def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:\n input_str = json.dumps({\"prompt\": prompt, **model_kwargs})\n return input_str.encode('utf-8')\n \n def transform_output(self, output: bytes) -> str:\n response_json = json.loads(output.read().decode(\"utf-8\"))
docs for more info.\n .. _boto3: \n \"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that AWS credentials to and python package exists in environment.\"\"\"\n try:\n import boto3\n try:\n if values[\"credentials_profile_name\"] is not None:\n session = boto3.Session(\n profile_name=values[\"credentials_profile_name\"]\n )\n else:\n # use default credentials\n session = boto3.Session()\n values[\"client\"] = session.client(\n \"sagemaker-runtime\", region_name=values[\"region_name\"]\n )\n except Exception as e:\n raise ValueError(\n \"Could not load credentials to authenticate with AWS client. \"\n \"Please check that credentials in the specified \"\n \"profile name are valid.\"\n ) from e\n except ImportError:\n raise ImportError(\n \"Could not import boto3 python package. \"\n \"Please install it with `pip install boto3`.\"\n )\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/sagemaker_endpoint.html"} {"id": "ce6c38484124-4", "text": "\"\"\"Get the identifying parameters.\"\"\"\n _model_kwargs = self.model_kwargs or {}\n return {\n **{\"endpoint_name\": self.endpoint_name},\n **{\"model_kwargs\": _model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"sagemaker_endpoint\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to Sagemaker inference endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. 
code-block:: python\n response = se(\"Tell me a joke.\")\n \"\"\"\n _model_kwargs = self.model_kwargs or {}\n _model_kwargs = {**_model_kwargs, **kwargs}\n _endpoint_kwargs = self.endpoint_kwargs or {}\n body = self.content_handler.transform_input(prompt, _model_kwargs)\n content_type = self.content_handler.content_type\n accepts = self.content_handler.accepts\n # send request\n try:\n response = self.client.invoke_endpoint(\n EndpointName=self.endpoint_name,\n Body=body,\n ContentType=content_type,\n Accept=accepts,\n **_endpoint_kwargs,\n )\n except Exception as e:\n raise ValueError(f\"Error raised by inference endpoint: {e}\")\n text = self.content_handler.transform_output(response[\"Body\"])\n if stop is not None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/sagemaker_endpoint.html"} {"id": "ce6c38484124-5", "text": "if stop is not None:\n # This is a bit hacky, but I can't figure out a better way to enforce\n # stop tokens when making calls to the sagemaker endpoint.\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/sagemaker_endpoint.html"} {"id": "2ae17481e535-0", "text": "Source code for langchain.llms.llamacpp\n\"\"\"Wrapper around llama.cpp.\"\"\"\nimport logging\nfrom typing import Any, Dict, Generator, List, Optional\nfrom pydantic import Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nlogger = logging.getLogger(__name__)\n[docs]class LlamaCpp(LLM):\n \"\"\"Wrapper around the llama.cpp model.\n To use, you should have the llama-cpp-python library installed, and provide the\n path to the Llama model as a named parameter to the constructor.\n Check out: https://github.com/abetlen/llama-cpp-python\n Example:\n .. code-block:: python\n from langchain.llms import LlamaCpp\n llm = LlamaCpp(model_path=\"/path/to/llama/model\")\n \"\"\"\n client: Any #: :meta private:\n model_path: str\n \"\"\"The path to the Llama model file.\"\"\"\n lora_base: Optional[str] = None\n \"\"\"The path to the Llama LoRA base model.\"\"\"\n lora_path: Optional[str] = None\n \"\"\"The path to the Llama LoRA. If None, no LoRa is loaded.\"\"\"\n n_ctx: int = Field(512, alias=\"n_ctx\")\n \"\"\"Token context window.\"\"\"\n n_parts: int = Field(-1, alias=\"n_parts\")\n \"\"\"Number of parts to split the model into.\n If -1, the number of parts is automatically determined.\"\"\"\n seed: int = Field(-1, alias=\"seed\")\n \"\"\"Seed. 
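Putting the pieces together: a content handler plus endpoint configuration. The endpoint name is a placeholder, AWS credentials are assumed to be configured, and the JSON shapes in `transform_input`/`transform_output` are assumptions that must match whatever container is actually deployed:

.. code-block:: python

    import json
    from typing import Dict

    from langchain.llms import SagemakerEndpoint
    from langchain.llms.sagemaker_endpoint import LLMContentHandler

    class ContentHandler(LLMContentHandler):
        content_type = "application/json"
        accepts = "application/json"

        def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
            body = {"inputs": prompt, "parameters": model_kwargs}
            return json.dumps(body).encode("utf-8")

        def transform_output(self, output: bytes) -> str:
            response_json = json.loads(output.read().decode("utf-8"))
            return response_json[0]["generated_text"]

    llm = SagemakerEndpoint(
        endpoint_name="my-endpoint-name",  # placeholder
        region_name="us-west-2",
        content_handler=ContentHandler(),
    )
    print(llm("Tell me a joke."))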
If -1, a random seed is used.\"\"\"\n f16_kv: bool = Field(True, alias=\"f16_kv\")\n \"\"\"Use half-precision for key/value cache.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html"} {"id": "2ae17481e535-1", "text": "\"\"\"Use half-precision for key/value cache.\"\"\"\n logits_all: bool = Field(False, alias=\"logits_all\")\n \"\"\"Return logits for all tokens, not just the last token.\"\"\"\n vocab_only: bool = Field(False, alias=\"vocab_only\")\n \"\"\"Only load the vocabulary, no weights.\"\"\"\n use_mlock: bool = Field(False, alias=\"use_mlock\")\n \"\"\"Force system to keep model in RAM.\"\"\"\n n_threads: Optional[int] = Field(None, alias=\"n_threads\")\n \"\"\"Number of threads to use.\n If None, the number of threads is automatically determined.\"\"\"\n n_batch: Optional[int] = Field(8, alias=\"n_batch\")\n \"\"\"Number of tokens to process in parallel.\n Should be a number between 1 and n_ctx.\"\"\"\n n_gpu_layers: Optional[int] = Field(None, alias=\"n_gpu_layers\")\n \"\"\"Number of layers to be loaded into gpu memory. Default None.\"\"\"\n suffix: Optional[str] = Field(None)\n \"\"\"A suffix to append to the generated text. If None, no suffix is appended.\"\"\"\n max_tokens: Optional[int] = 256\n \"\"\"The maximum number of tokens to generate.\"\"\"\n temperature: Optional[float] = 0.8\n \"\"\"The temperature to use for sampling.\"\"\"\n top_p: Optional[float] = 0.95\n \"\"\"The top-p value to use for sampling.\"\"\"\n logprobs: Optional[int] = Field(None)\n \"\"\"The number of logprobs to return. If None, no logprobs are returned.\"\"\"\n echo: Optional[bool] = False\n \"\"\"Whether to echo the prompt.\"\"\"\n stop: Optional[List[str]] = []\n \"\"\"A list of strings to stop generation when encountered.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html"} {"id": "2ae17481e535-2", "text": "\"\"\"A list of strings to stop generation when encountered.\"\"\"\n repeat_penalty: Optional[float] = 1.1\n \"\"\"The penalty to apply to repeated tokens.\"\"\"\n top_k: Optional[int] = 40\n \"\"\"The top-k value to use for sampling.\"\"\"\n last_n_tokens_size: Optional[int] = 64\n \"\"\"The number of tokens to look back when applying the repeat_penalty.\"\"\"\n use_mmap: Optional[bool] = True\n \"\"\"Whether to keep the model loaded in RAM\"\"\"\n streaming: bool = True\n \"\"\"Whether to stream the results, token by token.\"\"\"\n verbose: bool = True\n \"\"\"Print verbose output to stderr.\"\"\"\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that llama-cpp-python library is installed.\"\"\"\n model_path = values[\"model_path\"]\n model_param_names = [\n \"lora_path\",\n \"lora_base\",\n \"n_ctx\",\n \"n_parts\",\n \"seed\",\n \"f16_kv\",\n \"logits_all\",\n \"vocab_only\",\n \"use_mlock\",\n \"n_threads\",\n \"n_batch\",\n \"use_mmap\",\n \"last_n_tokens_size\",\n \"verbose\",\n ]\n model_params = {k: values[k] for k in model_param_names}\n # For backwards compatibility, only include if non-null.\n if values[\"n_gpu_layers\"] is not None:\n model_params[\"n_gpu_layers\"] = values[\"n_gpu_layers\"]\n try:\n from llama_cpp import Llama\n values[\"client\"] = Llama(model_path, **model_params)\n except ImportError:\n raise ModuleNotFoundError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html"} {"id": "2ae17481e535-3", "text": "except ImportError:\n raise ModuleNotFoundError(\n \"Could not import 
llama-cpp-python library. \"\n \"Please install the llama-cpp-python library to \"\n \"use this model: pip install llama-cpp-python\"\n )\n except Exception as e:\n raise ValueError(\n f\"Could not load Llama model from path: {model_path}. \"\n f\"Received error {e}\"\n )\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling llama_cpp.\"\"\"\n return {\n \"suffix\": self.suffix,\n \"max_tokens\": self.max_tokens,\n \"temperature\": self.temperature,\n \"top_p\": self.top_p,\n \"logprobs\": self.logprobs,\n \"echo\": self.echo,\n \"stop_sequences\": self.stop, # key here is convention among LLM classes\n \"repeat_penalty\": self.repeat_penalty,\n \"top_k\": self.top_k,\n }\n @property\n def _identifying_params(self) -> Dict[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model_path\": self.model_path}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"llamacpp\"\n def _get_parameters(self, stop: Optional[List[str]] = None) -> Dict[str, Any]:\n \"\"\"\n Performs a sanity check and prepares parameters in the format needed by llama_cpp.\n Args:\n stop (Optional[List[str]]): List of stop sequences for llama_cpp.\n Returns:\n Dictionary containing the combined parameters.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html"} {"id": "2ae17481e535-4", "text": "Returns:\n Dictionary containing the combined parameters.\n \"\"\"\n # Raise error if stop sequences are in both input and default params\n if self.stop and stop is not None:\n raise ValueError(\"`stop` found in both the input and default params.\")\n params = self._default_params\n # llama_cpp expects the \"stop\" key, not \"stop_sequences\", so we remove it:\n params.pop(\"stop_sequences\")\n # then set it to the configured value, or default to an empty list:\n params[\"stop\"] = self.stop or stop or []\n return params\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call the Llama model and return the output.\n Args:\n prompt: The prompt to use for generation.\n stop: A list of strings to stop generation when encountered.\n Returns:\n The generated text.\n Example:\n .. 
code-block:: python\n from langchain.llms import LlamaCpp\n llm = LlamaCpp(model_path=\"/path/to/local/llama/model.bin\")\n llm(\"This is a prompt.\")\n \"\"\"\n if self.streaming:\n # If streaming is enabled, we use the stream\n # method that yields as they are generated\n # and return the combined strings from the first choices's text:\n combined_text_output = \"\"\n for token in self.stream(prompt=prompt, stop=stop, run_manager=run_manager):\n combined_text_output += token[\"choices\"][0][\"text\"]\n return combined_text_output\n else:\n params = self._get_parameters(stop)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html"} {"id": "2ae17481e535-5", "text": "return combined_text_output\n else:\n params = self._get_parameters(stop)\n params = {**params, **kwargs}\n result = self.client(prompt=prompt, **params)\n return result[\"choices\"][0][\"text\"]\n[docs] def stream(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> Generator[Dict, None, None]:\n \"\"\"Yields results objects as they are generated in real time.\n BETA: this is a beta feature while we figure out the right abstraction.\n Once that happens, this interface could change.\n It also calls the callback manager's on_llm_new_token event with\n similar parameters to the OpenAI LLM class method of the same name.\n Args:\n prompt: The prompts to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n A generator representing the stream of tokens being generated.\n Yields:\n A dictionary like objects containing a string token and metadata.\n See llama-cpp-python docs and below for more.\n Example:\n .. code-block:: python\n from langchain.llms import LlamaCpp\n llm = LlamaCpp(\n model_path=\"/path/to/local/model.bin\",\n temperature = 0.5\n )\n for chunk in llm.stream(\"Ask 'Hi, how are you?' 
like a pirate:'\",\n stop=[\"'\",\"\\n\"]):\n result = chunk[\"choices\"][0]\n print(result[\"text\"], end='', flush=True)\n \"\"\"\n params = self._get_parameters(stop)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html"} {"id": "2ae17481e535-6", "text": "\"\"\"\n params = self._get_parameters(stop)\n result = self.client(prompt=prompt, stream=True, **params)\n for chunk in result:\n token = chunk[\"choices\"][0][\"text\"]\n log_probs = chunk[\"choices\"][0].get(\"logprobs\", None)\n if run_manager:\n run_manager.on_llm_new_token(\n token=token, verbose=self.verbose, log_probs=log_probs\n )\n yield chunk\n[docs] def get_num_tokens(self, text: str) -> int:\n tokenized_text = self.client.tokenize(text.encode(\"utf-8\"))\n return len(tokenized_text)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html"} {"id": "67bd1c7b053b-0", "text": "Source code for langchain.llms.openai\n\"\"\"Wrapper around OpenAI APIs.\"\"\"\nfrom __future__ import annotations\nimport logging\nimport sys\nimport warnings\nfrom typing import (\n AbstractSet,\n Any,\n Callable,\n Collection,\n Dict,\n Generator,\n List,\n Literal,\n Mapping,\n Optional,\n Set,\n Tuple,\n Union,\n)\nfrom pydantic import Field, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.llms.base import BaseLLM, create_base_retry_decorator\nfrom langchain.schema import Generation, LLMResult\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]def update_token_usage(\n keys: Set[str], response: Dict[str, Any], token_usage: Dict[str, Any]\n) -> None:\n \"\"\"Update token usage.\"\"\"\n _keys_to_use = keys.intersection(response[\"usage\"])\n for _key in _keys_to_use:\n if _key not in token_usage:\n token_usage[_key] = response[\"usage\"][_key]\n else:\n token_usage[_key] += response[\"usage\"][_key]\ndef _update_response(response: Dict[str, Any], stream_response: Dict[str, Any]) -> None:\n \"\"\"Update response from the stream response.\"\"\"\n response[\"choices\"][0][\"text\"] += stream_response[\"choices\"][0][\"text\"]\n response[\"choices\"][0][\"finish_reason\"] = stream_response[\"choices\"][0].get(\n \"finish_reason\", None\n )\n response[\"choices\"][0][\"logprobs\"] = stream_response[\"choices\"][0][\"logprobs\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} {"id": "67bd1c7b053b-1", "text": "def _streaming_response_template() -> Dict[str, Any]:\n return {\n \"choices\": [\n {\n \"text\": \"\",\n \"finish_reason\": None,\n \"logprobs\": None,\n }\n ]\n }\ndef _create_retry_decorator(llm: Union[BaseOpenAI, OpenAIChat]) -> Callable[[Any], Any]:\n import openai\n errors = [\n openai.error.Timeout,\n openai.error.APIError,\n openai.error.APIConnectionError,\n openai.error.RateLimitError,\n openai.error.ServiceUnavailableError,\n ]\n return create_base_retry_decorator(error_types=errors, max_retries=llm.max_retries)\n[docs]def completion_with_retry(llm: Union[BaseOpenAI, OpenAIChat], **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the completion call.\"\"\"\n retry_decorator = _create_retry_decorator(llm)\n @retry_decorator\n def _completion_with_retry(**kwargs: Any) -> Any:\n return llm.client.create(**kwargs)\n return _completion_with_retry(**kwargs)\nasync def acompletion_with_retry(\n llm: Union[BaseOpenAI, OpenAIChat], **kwargs: Any\n) -> Any:\n \"\"\"Use tenacity to retry the 
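A short usage sketch for the `LlamaCpp.stream` path above; the model path is hypothetical, and each yielded chunk has the llama-cpp-python completion-chunk shape that `_call` itself consumes:

.. code-block:: python

    from langchain.llms import LlamaCpp

    llm = LlamaCpp(
        model_path="/path/to/model.bin",  # hypothetical path
        temperature=0.5,
    )
    # stream() yields dicts; _call concatenates chunk["choices"][0]["text"]
    # when streaming is enabled.
    text = ""
    for chunk in llm.stream("Q: Name the planets. A:", stop=["\n\n"]):
        text += chunk["choices"][0]["text"]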
async completion call.\"\"\"\n retry_decorator = _create_retry_decorator(llm)\n @retry_decorator\n async def _completion_with_retry(**kwargs: Any) -> Any:\n # Use OpenAI's async api https://github.com/openai/openai-python#async-api\n return await llm.client.acreate(**kwargs)\n return await _completion_with_retry(**kwargs)\n[docs]class BaseOpenAI(BaseLLM):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} {"id": "67bd1c7b053b-2", "text": "[docs]class BaseOpenAI(BaseLLM):\n \"\"\"Wrapper around OpenAI large language models.\"\"\"\n @property\n def lc_secrets(self) -> Dict[str, str]:\n return {\"openai_api_key\": \"OPENAI_API_KEY\"}\n @property\n def lc_serializable(self) -> bool:\n return True\n client: Any #: :meta private:\n model_name: str = Field(\"text-davinci-003\", alias=\"model\")\n \"\"\"Model name to use.\"\"\"\n temperature: float = 0.7\n \"\"\"What sampling temperature to use.\"\"\"\n max_tokens: int = 256\n \"\"\"The maximum number of tokens to generate in the completion.\n -1 returns as many tokens as possible given the prompt and\n the model's maximal context size.\"\"\"\n top_p: float = 1\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n frequency_penalty: float = 0\n \"\"\"Penalizes repeated tokens according to frequency.\"\"\"\n presence_penalty: float = 0\n \"\"\"Penalizes repeated tokens.\"\"\"\n n: int = 1\n \"\"\"How many completions to generate for each prompt.\"\"\"\n best_of: int = 1\n \"\"\"Generates best_of completions server-side and returns the \"best\".\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not explicitly specified.\"\"\"\n openai_api_key: Optional[str] = None\n openai_api_base: Optional[str] = None\n openai_organization: Optional[str] = None\n # to support explicit proxy for OpenAI\n openai_proxy: Optional[str] = None\n batch_size: int = 20", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} {"id": "67bd1c7b053b-3", "text": "openai_proxy: Optional[str] = None\n batch_size: int = 20\n \"\"\"Batch size to use when passing multiple documents to generate.\"\"\"\n request_timeout: Optional[Union[float, Tuple[float, float]]] = None\n \"\"\"Timeout for requests to OpenAI completion API. Default is 600 seconds.\"\"\"\n logit_bias: Optional[Dict[str, float]] = Field(default_factory=dict)\n \"\"\"Adjust the probability of specific tokens being generated.\"\"\"\n max_retries: int = 6\n \"\"\"Maximum number of retries to make when generating.\"\"\"\n streaming: bool = False\n \"\"\"Whether to stream the results or not.\"\"\"\n allowed_special: Union[Literal[\"all\"], AbstractSet[str]] = set()\n \"\"\"Set of special tokens that are allowed.\"\"\"\n disallowed_special: Union[Literal[\"all\"], Collection[str]] = \"all\"\n \"\"\"Set of special tokens that are not allowed.\"\"\"\n tiktoken_model_name: Optional[str] = None\n \"\"\"The model name to pass to tiktoken when using this class. \n Tiktoken is used to count the number of tokens in documents to constrain \n them to be under a certain limit. By default, when set to None, this will \n be the same as the model name. However, there are some cases \n where you may want to use this LLM class with a model name not \n supported by tiktoken. This can include when using Azure OpenAI or \n when using one of the many model providers that expose an OpenAI-like \n API but with different models. 
In those cases, in order to avoid erroring \n when tiktoken is called, you can specify a model name to use here.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} {"id": "67bd1c7b053b-4", "text": "when tiktoken is called, you can specify a model name to use here.\"\"\"\n def __new__(cls, **data: Any) -> Union[OpenAIChat, BaseOpenAI]: # type: ignore\n \"\"\"Initialize the OpenAI object.\"\"\"\n model_name = data.get(\"model_name\", \"\")\n if model_name.startswith(\"gpt-3.5-turbo\") or model_name.startswith(\"gpt-4\"):\n warnings.warn(\n \"You are trying to use a chat model. This way of initializing it is \"\n \"no longer supported. Instead, please use: \"\n \"`from langchain.chat_models import ChatOpenAI`\"\n )\n return OpenAIChat(**data)\n return super().__new__(cls)\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n allow_population_by_field_name = True\n[docs] @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = cls._all_required_field_names()\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n if field_name not in all_required_field_names:\n logger.warning(\n f\"\"\"WARNING! {field_name} is not default parameter.\n {field_name} was transferred to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n invalid_model_kwargs = all_required_field_names.intersection(extra.keys())\n if invalid_model_kwargs:\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} {"id": "67bd1c7b053b-5", "text": "if invalid_model_kwargs:\n raise ValueError(\n f\"Parameters {invalid_model_kwargs} should be specified explicitly. \"\n f\"Instead they were passed in as part of `model_kwargs` parameter.\"\n )\n values[\"model_kwargs\"] = extra\n return values\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n values[\"openai_api_key\"] = get_from_dict_or_env(\n values, \"openai_api_key\", \"OPENAI_API_KEY\"\n )\n values[\"openai_api_base\"] = get_from_dict_or_env(\n values,\n \"openai_api_base\",\n \"OPENAI_API_BASE\",\n default=\"\",\n )\n values[\"openai_proxy\"] = get_from_dict_or_env(\n values,\n \"openai_proxy\",\n \"OPENAI_PROXY\",\n default=\"\",\n )\n values[\"openai_organization\"] = get_from_dict_or_env(\n values,\n \"openai_organization\",\n \"OPENAI_ORGANIZATION\",\n default=\"\",\n )\n try:\n import openai\n values[\"client\"] = openai.Completion\n except ImportError:\n raise ImportError(\n \"Could not import openai python package. 
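`build_extra` above folds any unrecognized constructor argument into `model_kwargs` (with a warning), and rejects declared fields that are smuggled in through `model_kwargs`. A sketch of the observable behavior, assuming `OPENAI_API_KEY` is set; `user` is just an arbitrary pass-through parameter:

.. code-block:: python

    from langchain.llms import OpenAI

    llm = OpenAI(user="session-123")  # not a declared field -> moved to model_kwargs
    assert llm.model_kwargs == {"user": "session-123"}

    # A declared field hidden inside model_kwargs raises instead:
    # OpenAI(model_kwargs={"temperature": 0})  # ValueError: specify explicitly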
\"\n \"Please install it with `pip install openai`.\"\n )\n if values[\"streaming\"] and values[\"n\"] > 1:\n raise ValueError(\"Cannot stream results when n > 1.\")\n if values[\"streaming\"] and values[\"best_of\"] > 1:\n raise ValueError(\"Cannot stream results when best_of > 1.\")\n return values\n @property", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} {"id": "67bd1c7b053b-6", "text": "return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling OpenAI API.\"\"\"\n normal_params = {\n \"temperature\": self.temperature,\n \"max_tokens\": self.max_tokens,\n \"top_p\": self.top_p,\n \"frequency_penalty\": self.frequency_penalty,\n \"presence_penalty\": self.presence_penalty,\n \"n\": self.n,\n \"request_timeout\": self.request_timeout,\n \"logit_bias\": self.logit_bias,\n }\n # Azure gpt-35-turbo doesn't support best_of\n # don't specify best_of if it is 1\n if self.best_of > 1:\n normal_params[\"best_of\"] = self.best_of\n return {**normal_params, **self.model_kwargs}\n def _generate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> LLMResult:\n \"\"\"Call out to OpenAI's endpoint with k unique prompts.\n Args:\n prompts: The prompts to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The full LLM output.\n Example:\n .. code-block:: python\n response = openai.generate([\"Tell me a joke.\"])\n \"\"\"\n # TODO: write a unit test for this\n params = self._invocation_params\n params = {**params, **kwargs}\n sub_prompts = self.get_sub_prompts(params, prompts, stop)\n choices = []\n token_usage: Dict[str, int] = {}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} {"id": "67bd1c7b053b-7", "text": "choices = []\n token_usage: Dict[str, int] = {}\n # Get the token usage from the response.\n # Includes prompt, completion, and total tokens used.\n _keys = {\"completion_tokens\", \"prompt_tokens\", \"total_tokens\"}\n for _prompts in sub_prompts:\n if self.streaming:\n if len(_prompts) > 1:\n raise ValueError(\"Cannot stream results with multiple prompts.\")\n params[\"stream\"] = True\n response = _streaming_response_template()\n for stream_resp in completion_with_retry(\n self, prompt=_prompts, **params\n ):\n if run_manager:\n run_manager.on_llm_new_token(\n stream_resp[\"choices\"][0][\"text\"],\n verbose=self.verbose,\n logprobs=stream_resp[\"choices\"][0][\"logprobs\"],\n )\n _update_response(response, stream_resp)\n choices.extend(response[\"choices\"])\n else:\n response = completion_with_retry(self, prompt=_prompts, **params)\n choices.extend(response[\"choices\"])\n if not self.streaming:\n # Can't update token usage if streaming\n update_token_usage(_keys, response, token_usage)\n return self.create_llm_result(choices, prompts, token_usage)\n async def _agenerate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> LLMResult:\n \"\"\"Call out to OpenAI's endpoint async with k unique prompts.\"\"\"\n params = self._invocation_params\n params = {**params, **kwargs}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} {"id": "67bd1c7b053b-8", "text": "params = self._invocation_params\n params = {**params, **kwargs}\n sub_prompts = self.get_sub_prompts(params, 
prompts, stop)\n choices = []\n token_usage: Dict[str, int] = {}\n # Get the token usage from the response.\n # Includes prompt, completion, and total tokens used.\n _keys = {\"completion_tokens\", \"prompt_tokens\", \"total_tokens\"}\n for _prompts in sub_prompts:\n if self.streaming:\n if len(_prompts) > 1:\n raise ValueError(\"Cannot stream results with multiple prompts.\")\n params[\"stream\"] = True\n response = _streaming_response_template()\n async for stream_resp in await acompletion_with_retry(\n self, prompt=_prompts, **params\n ):\n if run_manager:\n await run_manager.on_llm_new_token(\n stream_resp[\"choices\"][0][\"text\"],\n verbose=self.verbose,\n logprobs=stream_resp[\"choices\"][0][\"logprobs\"],\n )\n _update_response(response, stream_resp)\n choices.extend(response[\"choices\"])\n else:\n response = await acompletion_with_retry(self, prompt=_prompts, **params)\n choices.extend(response[\"choices\"])\n if not self.streaming:\n # Can't update token usage if streaming\n update_token_usage(_keys, response, token_usage)\n return self.create_llm_result(choices, prompts, token_usage)\n[docs] def get_sub_prompts(\n self,\n params: Dict[str, Any],\n prompts: List[str],\n stop: Optional[List[str]] = None,\n ) -> List[List[str]]:\n \"\"\"Get the sub prompts for llm call.\"\"\"\n if stop is not None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} {"id": "67bd1c7b053b-9", "text": "\"\"\"Get the sub prompts for llm call.\"\"\"\n if stop is not None:\n if \"stop\" in params:\n raise ValueError(\"`stop` found in both the input and default params.\")\n params[\"stop\"] = stop\n if params[\"max_tokens\"] == -1:\n if len(prompts) != 1:\n raise ValueError(\n \"max_tokens set to -1 not supported for multiple inputs.\"\n )\n params[\"max_tokens\"] = self.max_tokens_for_prompt(prompts[0])\n sub_prompts = [\n prompts[i : i + self.batch_size]\n for i in range(0, len(prompts), self.batch_size)\n ]\n return sub_prompts\n[docs] def create_llm_result(\n self, choices: Any, prompts: List[str], token_usage: Dict[str, int]\n ) -> LLMResult:\n \"\"\"Create the LLMResult from the choices and prompts.\"\"\"\n generations = []\n for i, _ in enumerate(prompts):\n sub_choices = choices[i * self.n : (i + 1) * self.n]\n generations.append(\n [\n Generation(\n text=choice[\"text\"],\n generation_info=dict(\n finish_reason=choice.get(\"finish_reason\"),\n logprobs=choice.get(\"logprobs\"),\n ),\n )\n for choice in sub_choices\n ]\n )\n llm_output = {\"token_usage\": token_usage, \"model_name\": self.model_name}\n return LLMResult(generations=generations, llm_output=llm_output)\n[docs] def stream(self, prompt: str, stop: Optional[List[str]] = None) -> Generator:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} {"id": "67bd1c7b053b-10", "text": "\"\"\"Call OpenAI with streaming flag and return the resulting generator.\n BETA: this is a beta feature while we figure out the right abstraction.\n Once that happens, this interface could change.\n Args:\n prompt: The prompts to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n A generator representing the stream of tokens from OpenAI.\n Example:\n .. 
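Both `_generate` and `_agenerate` above fan a prompt list out into `get_sub_prompts` batches of `batch_size`, so one call becomes `ceil(len(prompts) / batch_size)` API requests. The chunking itself is plain list slicing:

.. code-block:: python

    prompts = [f"Q{i}" for i in range(45)]
    batch_size = 20  # the default shown above
    sub_prompts = [
        prompts[i : i + batch_size] for i in range(0, len(prompts), batch_size)
    ]
    assert [len(p) for p in sub_prompts] == [20, 20, 5]  # three requests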
code-block:: python\n generator = openai.stream(\"Tell me a joke.\")\n for token in generator:\n yield token\n \"\"\"\n params = self.prep_streaming_params(stop)\n generator = self.client.create(prompt=prompt, **params)\n return generator\n[docs] def prep_streaming_params(self, stop: Optional[List[str]] = None) -> Dict[str, Any]:\n \"\"\"Prepare the params for streaming.\"\"\"\n params = self._invocation_params\n if \"best_of\" in params and params[\"best_of\"] != 1:\n raise ValueError(\"OpenAI only supports best_of == 1 for streaming\")\n if stop is not None:\n if \"stop\" in params:\n raise ValueError(\"`stop` found in both the input and default params.\")\n params[\"stop\"] = stop\n params[\"stream\"] = True\n return params\n @property\n def _invocation_params(self) -> Dict[str, Any]:\n \"\"\"Get the parameters used to invoke the model.\"\"\"\n openai_creds: Dict[str, Any] = {\n \"api_key\": self.openai_api_key,\n \"api_base\": self.openai_api_base,\n \"organization\": self.openai_organization,\n }\n if self.openai_proxy:\n import openai", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} {"id": "67bd1c7b053b-11", "text": "}\n if self.openai_proxy:\n import openai\n openai.proxy = {\"http\": self.openai_proxy, \"https\": self.openai_proxy} # type: ignore[assignment] # noqa: E501\n return {**openai_creds, **self._default_params}\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model_name\": self.model_name}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"openai\"\n[docs] def get_token_ids(self, text: str) -> List[int]:\n \"\"\"Get the token IDs using the tiktoken package.\"\"\"\n # tiktoken NOT supported for Python < 3.8\n if sys.version_info[1] < 8:\n return super().get_num_tokens(text)\n try:\n import tiktoken\n except ImportError:\n raise ImportError(\n \"Could not import tiktoken python package. \"\n \"This is needed in order to calculate get_num_tokens. \"\n \"Please install it with `pip install tiktoken`.\"\n )\n model_name = self.tiktoken_model_name or self.model_name\n try:\n enc = tiktoken.encoding_for_model(model_name)\n except KeyError:\n logger.warning(\"Warning: model not found. Using cl100k_base encoding.\")\n model = \"cl100k_base\"\n enc = tiktoken.get_encoding(model)\n return enc.encode(\n text,\n allowed_special=self.allowed_special,\n disallowed_special=self.disallowed_special,\n )\n[docs] @staticmethod", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} {"id": "67bd1c7b053b-12", "text": "disallowed_special=self.disallowed_special,\n )\n[docs] @staticmethod\n def modelname_to_contextsize(modelname: str) -> int:\n \"\"\"Calculate the maximum number of tokens possible to generate for a model.\n Args:\n modelname: The modelname we want to know the context size for.\n Returns:\n The maximum context size\n Example:\n .. 
code-block:: python\n max_tokens = openai.modelname_to_contextsize(\"text-davinci-003\")\n \"\"\"\n model_token_mapping = {\n \"gpt-4\": 8192,\n \"gpt-4-0314\": 8192,\n \"gpt-4-0613\": 8192,\n \"gpt-4-32k\": 32768,\n \"gpt-4-32k-0314\": 32768,\n \"gpt-4-32k-0613\": 32768,\n \"gpt-3.5-turbo\": 4096,\n \"gpt-3.5-turbo-0301\": 4096,\n \"gpt-3.5-turbo-0613\": 4096,\n \"gpt-3.5-turbo-16k\": 16385,\n \"gpt-3.5-turbo-16k-0613\": 16385,\n \"text-ada-001\": 2049,\n \"ada\": 2049,\n \"text-babbage-001\": 2049,\n \"babbage\": 2049,\n \"text-curie-001\": 2049,\n \"curie\": 2049,\n \"davinci\": 2049,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} {"id": "67bd1c7b053b-13", "text": "\"davinci\": 2049,\n \"text-davinci-003\": 4097,\n \"text-davinci-002\": 4097,\n \"code-davinci-002\": 8001,\n \"code-davinci-001\": 8001,\n \"code-cushman-002\": 2048,\n \"code-cushman-001\": 2048,\n }\n # handling finetuned models\n if \"ft-\" in modelname:\n modelname = modelname.split(\":\")[0]\n context_size = model_token_mapping.get(modelname, None)\n if context_size is None:\n raise ValueError(\n f\"Unknown model: {modelname}. Please provide a valid OpenAI model name. \"\n \"Known models are: \" + \", \".join(model_token_mapping.keys())\n )\n return context_size\n @property\n def max_context_size(self) -> int:\n \"\"\"Get max context size for this model.\"\"\"\n return self.modelname_to_contextsize(self.model_name)\n[docs] def max_tokens_for_prompt(self, prompt: str) -> int:\n \"\"\"Calculate the maximum number of tokens possible to generate for a prompt.\n Args:\n prompt: The prompt to pass into the model.\n Returns:\n The maximum number of tokens to generate for a prompt.\n Example:\n .. code-block:: python\n max_tokens = openai.max_tokens_for_prompt(\"Tell me a joke.\")\n \"\"\"\n num_tokens = self.get_num_tokens(prompt)\n return self.max_context_size - num_tokens\n[docs]class OpenAI(BaseOpenAI):\n \"\"\"Wrapper around OpenAI large language models.\n To use, you should have the ``openai`` python package installed, and the", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} {"id": "67bd1c7b053b-14", "text": "To use, you should have the ``openai`` python package installed, and the\n environment variable ``OPENAI_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the openai.create call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. code-block:: python\n from langchain.llms import OpenAI\n openai = OpenAI(model_name=\"text-davinci-003\")\n \"\"\"\n @property\n def _invocation_params(self) -> Dict[str, Any]:\n return {**{\"model\": self.model_name}, **super()._invocation_params}\n[docs]class AzureOpenAI(BaseOpenAI):\n \"\"\"Wrapper around Azure-specific OpenAI large language models.\n To use, you should have the ``openai`` python package installed, and the\n environment variable ``OPENAI_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the openai.create call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. 
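`max_tokens_for_prompt` is simple arithmetic on top of `modelname_to_contextsize`: the mapped context size minus the prompt's token count. Illustratively (the 97-token figure is made up):

.. code-block:: python

    # text-davinci-003 maps to 4097 tokens, so a 97-token prompt leaves
    # 4097 - 97 = 4000 tokens of room for the completion.
    from langchain.llms import OpenAI

    llm = OpenAI(model_name="text-davinci-003")  # assumes OPENAI_API_KEY is set
    room = llm.max_tokens_for_prompt("Tell me a joke.")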
code-block:: python\n from langchain.llms import AzureOpenAI\n openai = AzureOpenAI(model_name=\"text-davinci-003\")\n \"\"\"\n deployment_name: str = \"\"\n \"\"\"Deployment name to use.\"\"\"\n openai_api_type: str = \"azure\"\n openai_api_version: str = \"\"\n[docs] @root_validator()\n def validate_azure_settings(cls, values: Dict) -> Dict:\n values[\"openai_api_version\"] = get_from_dict_or_env(\n values,\n \"openai_api_version\",\n \"OPENAI_API_VERSION\",\n )\n values[\"openai_api_type\"] = get_from_dict_or_env(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} {"id": "67bd1c7b053b-15", "text": ")\n values[\"openai_api_type\"] = get_from_dict_or_env(\n values,\n \"openai_api_type\",\n \"OPENAI_API_TYPE\",\n )\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n return {\n **{\"deployment_name\": self.deployment_name},\n **super()._identifying_params,\n }\n @property\n def _invocation_params(self) -> Dict[str, Any]:\n openai_params = {\n \"engine\": self.deployment_name,\n \"api_type\": self.openai_api_type,\n \"api_version\": self.openai_api_version,\n }\n return {**openai_params, **super()._invocation_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"azure\"\n[docs]class OpenAIChat(BaseLLM):\n \"\"\"Wrapper around OpenAI Chat large language models.\n To use, you should have the ``openai`` python package installed, and the\n environment variable ``OPENAI_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the openai.create call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. code-block:: python\n from langchain.llms import OpenAIChat\n openaichat = OpenAIChat(model_name=\"gpt-3.5-turbo\")\n \"\"\"\n client: Any #: :meta private:\n model_name: str = \"gpt-3.5-turbo\"\n \"\"\"Model name to use.\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} {"id": "67bd1c7b053b-16", "text": "model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not explicitly specified.\"\"\"\n openai_api_key: Optional[str] = None\n openai_api_base: Optional[str] = None\n # to support explicit proxy for OpenAI\n openai_proxy: Optional[str] = None\n max_retries: int = 6\n \"\"\"Maximum number of retries to make when generating.\"\"\"\n prefix_messages: List = Field(default_factory=list)\n \"\"\"Series of messages for Chat input.\"\"\"\n streaming: bool = False\n \"\"\"Whether to stream the results or not.\"\"\"\n allowed_special: Union[Literal[\"all\"], AbstractSet[str]] = set()\n \"\"\"Set of special tokens that are allowed.\"\"\"\n disallowed_special: Union[Literal[\"all\"], Collection[str]] = \"all\"\n \"\"\"Set of special tokens that are not allowed.\"\"\"\n[docs] @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n[docs] @root_validator()\n def 
validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n openai_api_key = get_from_dict_or_env(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} {"id": "67bd1c7b053b-17", "text": "openai_api_key = get_from_dict_or_env(\n values, \"openai_api_key\", \"OPENAI_API_KEY\"\n )\n openai_api_base = get_from_dict_or_env(\n values,\n \"openai_api_base\",\n \"OPENAI_API_BASE\",\n default=\"\",\n )\n openai_proxy = get_from_dict_or_env(\n values,\n \"openai_proxy\",\n \"OPENAI_PROXY\",\n default=\"\",\n )\n openai_organization = get_from_dict_or_env(\n values, \"openai_organization\", \"OPENAI_ORGANIZATION\", default=\"\"\n )\n try:\n import openai\n openai.api_key = openai_api_key\n if openai_api_base:\n openai.api_base = openai_api_base\n if openai_organization:\n openai.organization = openai_organization\n if openai_proxy:\n openai.proxy = {\"http\": openai_proxy, \"https\": openai_proxy} # type: ignore[assignment] # noqa: E501\n except ImportError:\n raise ImportError(\n \"Could not import openai python package. \"\n \"Please install it with `pip install openai`.\"\n )\n try:\n values[\"client\"] = openai.ChatCompletion\n except AttributeError:\n raise ValueError(\n \"`openai` has no `ChatCompletion` attribute, this is likely \"\n \"due to an old version of the openai package. Try upgrading it \"\n \"with `pip install --upgrade openai`.\"\n )\n warnings.warn(\n \"You are trying to use a chat model. This way of initializing it is \"\n \"no longer supported. Instead, please use: \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} {"id": "67bd1c7b053b-18", "text": "\"no longer supported. Instead, please use: \"\n \"`from langchain.chat_models import ChatOpenAI`\"\n )\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling OpenAI API.\"\"\"\n return self.model_kwargs\n def _get_chat_params(\n self, prompts: List[str], stop: Optional[List[str]] = None\n ) -> Tuple:\n if len(prompts) > 1:\n raise ValueError(\n f\"OpenAIChat currently only supports single prompt, got {prompts}\"\n )\n messages = self.prefix_messages + [{\"role\": \"user\", \"content\": prompts[0]}]\n params: Dict[str, Any] = {**{\"model\": self.model_name}, **self._default_params}\n if stop is not None:\n if \"stop\" in params:\n raise ValueError(\"`stop` found in both the input and default params.\")\n params[\"stop\"] = stop\n if params.get(\"max_tokens\") == -1:\n # for ChatGPT api, omitting max_tokens is equivalent to having no limit\n del params[\"max_tokens\"]\n return messages, params\n def _generate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> LLMResult:\n messages, params = self._get_chat_params(prompts, stop)\n params = {**params, **kwargs}\n if self.streaming:\n response = \"\"\n params[\"stream\"] = True\n for stream_resp in completion_with_retry(self, messages=messages, **params):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} {"id": "67bd1c7b053b-19", "text": "for stream_resp in completion_with_retry(self, messages=messages, **params):\n token = stream_resp[\"choices\"][0][\"delta\"].get(\"content\", \"\")\n response += token\n if run_manager:\n run_manager.on_llm_new_token(\n token,\n )\n return LLMResult(\n 
generations=[[Generation(text=response)]],\n )\n else:\n full_response = completion_with_retry(self, messages=messages, **params)\n llm_output = {\n \"token_usage\": full_response[\"usage\"],\n \"model_name\": self.model_name,\n }\n return LLMResult(\n generations=[\n [Generation(text=full_response[\"choices\"][0][\"message\"][\"content\"])]\n ],\n llm_output=llm_output,\n )\n async def _agenerate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> LLMResult:\n messages, params = self._get_chat_params(prompts, stop)\n params = {**params, **kwargs}\n if self.streaming:\n response = \"\"\n params[\"stream\"] = True\n async for stream_resp in await acompletion_with_retry(\n self, messages=messages, **params\n ):\n token = stream_resp[\"choices\"][0][\"delta\"].get(\"content\", \"\")\n response += token\n if run_manager:\n await run_manager.on_llm_new_token(\n token,\n )\n return LLMResult(\n generations=[[Generation(text=response)]],\n )\n else:\n full_response = await acompletion_with_retry(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} {"id": "67bd1c7b053b-20", "text": ")\n else:\n full_response = await acompletion_with_retry(\n self, messages=messages, **params\n )\n llm_output = {\n \"token_usage\": full_response[\"usage\"],\n \"model_name\": self.model_name,\n }\n return LLMResult(\n generations=[\n [Generation(text=full_response[\"choices\"][0][\"message\"][\"content\"])]\n ],\n llm_output=llm_output,\n )\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model_name\": self.model_name}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"openai-chat\"\n[docs] def get_token_ids(self, text: str) -> List[int]:\n \"\"\"Get the token IDs using the tiktoken package.\"\"\"\n # tiktoken NOT supported for Python < 3.8\n if sys.version_info[1] < 8:\n return super().get_token_ids(text)\n try:\n import tiktoken\n except ImportError:\n raise ImportError(\n \"Could not import tiktoken python package. \"\n \"This is needed in order to calculate get_num_tokens. \"\n \"Please install it with `pip install tiktoken`.\"\n )\n enc = tiktoken.encoding_for_model(self.model_name)\n return enc.encode(\n text,\n allowed_special=self.allowed_special,\n disallowed_special=self.disallowed_special,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} {"id": "9cc54fdb222b-0", "text": "Source code for langchain.llms.modal\n\"\"\"Wrapper around Modal API.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nlogger = logging.getLogger(__name__)\n[docs]class Modal(LLM):\n \"\"\"Wrapper around Modal large language models.\n To use, you should have the ``modal-client`` python package installed.\n Any parameters that are valid to be passed to the call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. 
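`_get_chat_params` above rewrites the single prompt into a chat `messages` list, prepending any configured `prefix_messages`. A usage sketch (assumes `OPENAI_API_KEY` is set):

.. code-block:: python

    from langchain.llms import OpenAIChat

    chat = OpenAIChat(
        model_name="gpt-3.5-turbo",
        prefix_messages=[{"role": "system", "content": "Answer tersely."}],
    )
    # Internally: messages = prefix_messages + [{"role": "user", "content": prompt}]
    answer = chat("Summarize what an LLM wrapper does.")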
code-block:: python\n from langchain.llms import Modal\n modal = Modal(endpoint_url=\"\")\n \"\"\"\n endpoint_url: str = \"\"\n \"\"\"Model endpoint to use.\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not\n explicitly specified.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/modal.html"} {"id": "9cc54fdb222b-1", "text": "raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"{field_name} was transferred to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"endpoint_url\": self.endpoint_url},\n **{\"model_kwargs\": self.model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"modal\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call to Modal endpoint.\"\"\"\n params = self.model_kwargs or {}\n params = {**params, **kwargs}\n response = requests.post(\n url=self.endpoint_url,\n headers={\n \"Content-Type\": \"application/json\",\n },\n json={\"prompt\": prompt, **params},\n )\n try:\n response_json = response.json()\n text = response_json[\"prompt\"]\n except KeyError:\n raise ValueError(\"LangChain requires 'prompt' key in response.\")\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the model parameters\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/modal.html"} {"id": "bb341f0d9964-0", "text": "Source code for langchain.llms.vertexai\n\"\"\"Wrapper around Google VertexAI models.\"\"\"\nfrom __future__ import annotations\nimport asyncio\nfrom concurrent.futures import Executor, ThreadPoolExecutor\nfrom typing import TYPE_CHECKING, Any, Callable, ClassVar, Dict, List, Optional\nfrom pydantic import BaseModel, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.llms.base import LLM, create_base_retry_decorator\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utilities.vertexai import (\n init_vertexai,\n raise_vertex_import_error,\n)\nif TYPE_CHECKING:\n from vertexai.language_models._language_models import _LanguageModel\n[docs]def is_codey_model(model_name: str) -> bool:\n \"\"\"Returns True if the model name is a Codey model.\n Args:\n model_name: The model name to check.\n Returns: True if the model name is a Codey model.\n \"\"\"\n return \"code\" in 
model_name\ndef _create_retry_decorator(llm: VertexAI) -> Callable[[Any], Any]:\n import google.api_core\n errors = [\n google.api_core.exceptions.ResourceExhausted,\n google.api_core.exceptions.ServiceUnavailable,\n google.api_core.exceptions.Aborted,\n google.api_core.exceptions.DeadlineExceeded,\n ]\n decorator = create_base_retry_decorator(\n error_types=errors, max_retries=llm.max_retries # type: ignore\n )\n return decorator\n[docs]def completion_with_retry(llm: VertexAI, *args: Any, **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the completion call.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/vertexai.html"} {"id": "bb341f0d9964-1", "text": "\"\"\"Use tenacity to retry the completion call.\"\"\"\n retry_decorator = _create_retry_decorator(llm)\n @retry_decorator\n def _completion_with_retry(*args: Any, **kwargs: Any) -> Any:\n return llm.client.predict(*args, **kwargs)\n return _completion_with_retry(*args, **kwargs)\nclass _VertexAICommon(BaseModel):\n client: \"_LanguageModel\" = None #: :meta private:\n model_name: str\n \"Model name to use.\"\n temperature: float = 0.0\n \"Sampling temperature, it controls the degree of randomness in token selection.\"\n max_output_tokens: int = 128\n \"Token limit determines the maximum amount of text output from one prompt.\"\n top_p: float = 0.95\n \"Tokens are selected from most probable to least until the sum of their \"\n \"probabilities equals the top-p value. Top-p is ignored for Codey models.\"\n top_k: int = 40\n \"How the model selects tokens for output, the next token is selected from \"\n \"among the top-k most probable tokens. Top-k is ignored for Codey models.\"\n stop: Optional[List[str]] = None\n \"Optional list of stop words to use when generating.\"\n project: Optional[str] = None\n \"The default GCP project to use when making Vertex API calls.\"\n location: str = \"us-central1\"\n \"The default location to use when making API calls.\"\n credentials: Any = None\n \"The default custom credentials (google.auth.credentials.Credentials) to use \"\n \"when making API calls. If not provided, credentials will be ascertained from \"\n \"the environment.\"\n request_parallelism: int = 5", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/vertexai.html"} {"id": "bb341f0d9964-2", "text": "\"the environment.\"\n request_parallelism: int = 5\n \"The amount of parallelism allowed for requests issued to VertexAI models. 
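`is_codey_model` is a plain substring check, and it is what routes a model to the reduced Codey parameter set in `_default_params` below:

.. code-block:: python

    from langchain.llms.vertexai import is_codey_model

    assert is_codey_model("code-bison")       # Codey: no top_k / top_p sent
    assert is_codey_model("codechat-bison")
    assert not is_codey_model("text-bison")   # full parameter set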
\"\n \"Default is 5.\"\n max_retries: int = 6\n \"\"\"The maximum number of retries to make when generating.\"\"\"\n task_executor: ClassVar[Optional[Executor]] = None\n @property\n def is_codey_model(self) -> bool:\n return is_codey_model(self.model_name)\n @property\n def _default_params(self) -> Dict[str, Any]:\n if self.is_codey_model:\n return {\n \"temperature\": self.temperature,\n \"max_output_tokens\": self.max_output_tokens,\n }\n else:\n return {\n \"temperature\": self.temperature,\n \"max_output_tokens\": self.max_output_tokens,\n \"top_k\": self.top_k,\n \"top_p\": self.top_p,\n }\n def _predict(\n self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any\n ) -> str:\n params = {**self._default_params, **kwargs}\n res = completion_with_retry(self, prompt, **params) # type: ignore\n return self._enforce_stop_words(res.text, stop)\n def _enforce_stop_words(self, text: str, stop: Optional[List[str]] = None) -> str:\n if stop is None and self.stop is not None:\n stop = self.stop\n if stop:\n return enforce_stop_tokens(text, stop)\n return text\n @property\n def _llm_type(self) -> str:\n return \"vertexai\"\n @classmethod", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/vertexai.html"} {"id": "bb341f0d9964-3", "text": "return \"vertexai\"\n @classmethod\n def _get_task_executor(cls, request_parallelism: int = 5) -> Executor:\n if cls.task_executor is None:\n cls.task_executor = ThreadPoolExecutor(max_workers=request_parallelism)\n return cls.task_executor\n @classmethod\n def _try_init_vertexai(cls, values: Dict) -> None:\n allowed_params = [\"project\", \"location\", \"credentials\"]\n params = {k: v for k, v in values.items() if k in allowed_params}\n init_vertexai(**params)\n return None\n[docs]class VertexAI(_VertexAICommon, LLM):\n \"\"\"Wrapper around Google Vertex AI large language models.\"\"\"\n model_name: str = \"text-bison\"\n \"The name of the Vertex AI large language model.\"\n tuned_model_name: Optional[str] = None\n \"The name of a tuned model. 
If provided, model_name is ignored.\"\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the python package exists in environment.\"\"\"\n cls._try_init_vertexai(values)\n tuned_model_name = values.get(\"tuned_model_name\")\n model_name = values[\"model_name\"]\n try:\n if tuned_model_name or not is_codey_model(model_name):\n from vertexai.preview.language_models import TextGenerationModel\n if tuned_model_name:\n values[\"client\"] = TextGenerationModel.get_tuned_model(\n tuned_model_name\n )\n else:\n values[\"client\"] = TextGenerationModel.from_pretrained(model_name)\n else:\n from vertexai.preview.language_models import CodeGenerationModel\n values[\"client\"] = CodeGenerationModel.from_pretrained(model_name)\n except ImportError:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/vertexai.html"} {"id": "bb341f0d9964-4", "text": "except ImportError:\n raise_vertex_import_error()\n return values\n async def _acall(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call Vertex model to get predictions based on the prompt.\n Args:\n prompt: The prompt to pass into the model.\n stop: A list of stop words (optional).\n run_manager: A callback manager for async interaction with LLMs.\n Returns:\n The string generated by the model.\n \"\"\"\n return await asyncio.wrap_future(\n self._get_task_executor().submit(self._predict, prompt, stop)\n )\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call Vertex model to get predictions based on the prompt.\n Args:\n prompt: The prompt to pass into the model.\n stop: A list of stop words (optional).\n run_manager: A Callbackmanager for LLM run, optional.\n Returns:\n The string generated by the model.\n \"\"\"\n return self._predict(prompt, stop, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/vertexai.html"} {"id": "59804138f74a-0", "text": "Source code for langchain.llms.aviary\n\"\"\"Wrapper around Aviary\"\"\"\nimport dataclasses\nimport os\nfrom typing import Any, Dict, List, Mapping, Optional, Union, cast\nimport requests\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nTIMEOUT = 60\n@dataclasses.dataclass\nclass AviaryBackend:\n backend_url: str\n bearer: str\n def __post_init__(self) -> None:\n self.header = {\"Authorization\": self.bearer}\n @classmethod\n def from_env(cls) -> \"AviaryBackend\":\n aviary_url = os.getenv(\"AVIARY_URL\")\n assert aviary_url, \"AVIARY_URL must be set\"\n aviary_token = os.getenv(\"AVIARY_TOKEN\", \"\")\n bearer = f\"Bearer {aviary_token}\" if aviary_token else \"\"\n aviary_url += \"/\" if not aviary_url.endswith(\"/\") else \"\"\n return cls(aviary_url, bearer)\n[docs]def get_models() -> List[str]:\n \"\"\"List available models\"\"\"\n backend = AviaryBackend.from_env()\n request_url = backend.backend_url + \"-/routes\"\n response = requests.get(request_url, headers=backend.header, timeout=TIMEOUT)\n try:\n result = response.json()\n except requests.JSONDecodeError as e:\n raise RuntimeError(\n f\"Error decoding JSON from {request_url}. 
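Because the Vertex SDK's `predict` is synchronous, `_acall` above runs `_predict` on the shared `ThreadPoolExecutor` and awaits it with `asyncio.wrap_future`. The same pattern in isolation, with a stand-in for `client.predict`:

.. code-block:: python

    import asyncio
    from concurrent.futures import ThreadPoolExecutor

    executor = ThreadPoolExecutor(max_workers=5)  # mirrors request_parallelism

    def blocking_predict(prompt: str) -> str:
        return f"echo: {prompt}"  # stand-in for the blocking SDK call

    async def apredict(prompt: str) -> str:
        # Wrap the concurrent.futures.Future so it can be awaited.
        return await asyncio.wrap_future(executor.submit(blocking_predict, prompt))

    print(asyncio.run(apredict("hello")))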
Text response: {response.text}\"\n ) from e\n result = sorted(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/aviary.html"} {"id": "59804138f74a-1", "text": ") from e\n result = sorted(\n [k.lstrip(\"/\").replace(\"--\", \"/\") for k in result.keys() if \"--\" in k]\n )\n return result\n[docs]def get_completions(\n model: str,\n prompt: str,\n use_prompt_format: bool = True,\n version: str = \"\",\n) -> Dict[str, Union[str, float, int]]:\n \"\"\"Get completions from Aviary models.\"\"\"\n backend = AviaryBackend.from_env()\n url = backend.backend_url + model.replace(\"/\", \"--\") + \"/\" + version + \"query\"\n response = requests.post(\n url,\n headers=backend.header,\n json={\"prompt\": prompt, \"use_prompt_format\": use_prompt_format},\n timeout=TIMEOUT,\n )\n try:\n return response.json()\n except requests.JSONDecodeError as e:\n raise RuntimeError(\n f\"Error decoding JSON from {url}. Text response: {response.text}\"\n ) from e\n[docs]class Aviary(LLM):\n \"\"\"Allows you to use an Aviary.\n Aviary is a backend for hosted models. You can\n find out more about aviary at\n http://github.com/ray-project/aviary\n To get a list of the models supported on an\n aviary, follow the instructions on the web site to\n install the aviary CLI and then use:\n `aviary models`\n AVIARY_URL and AVIARY_TOKEN environment variables must be set.\n Example:\n .. code-block:: python\n from langchain.llms import Aviary\n os.environ[\"AVIARY_URL\"] = \"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/aviary.html"} {"id": "59804138f74a-2", "text": "os.environ[\"AVIARY_URL\"] = \"\"\n os.environ[\"AVIARY_TOKEN\"] = \"\"\n light = Aviary(model='amazon/LightGPT')\n output = light('How do you make fried rice?')\n \"\"\"\n model: str = \"amazon/LightGPT\"\n aviary_url: Optional[str] = None\n aviary_token: Optional[str] = None\n # If True, the prompt template for the model will be ignored.\n use_prompt_format: bool = True\n # API version to use for Aviary\n version: Optional[str] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n aviary_url = get_from_dict_or_env(values, \"aviary_url\", \"AVIARY_URL\")\n aviary_token = get_from_dict_or_env(values, \"aviary_token\", \"AVIARY_TOKEN\")\n # Set env variables for the aviary sdk\n os.environ[\"AVIARY_URL\"] = aviary_url\n os.environ[\"AVIARY_TOKEN\"] = aviary_token\n try:\n aviary_models = get_models()\n except requests.exceptions.RequestException as e:\n raise ValueError(e)\n model = values.get(\"model\")\n if model and model not in aviary_models:\n raise ValueError(f\"{aviary_url} does not support model {values['model']}.\")\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/aviary.html"} {"id": "59804138f74a-3", "text": "\"\"\"Get the identifying parameters.\"\"\"\n return {\n \"model_name\": self.model,\n \"aviary_url\": self.aviary_url,\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return f\"aviary-{self.model.replace('/', '-')}\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> 
str:\n \"\"\"Call out to Aviary\n Args:\n prompt: The prompt to pass into the model.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = aviary(\"Tell me a joke.\")\n \"\"\"\n kwargs = {\"use_prompt_format\": self.use_prompt_format}\n if self.version:\n kwargs[\"version\"] = self.version\n output = get_completions(\n model=self.model,\n prompt=prompt,\n **kwargs,\n )\n text = cast(str, output[\"generated_text\"])\n if stop:\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/aviary.html"} {"id": "c9f72d9cd73f-0", "text": "Source code for langchain.llms.octoai_endpoint\n\"\"\"Wrapper around OctoAI APIs.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\n[docs]class OctoAIEndpoint(LLM):\n \"\"\"Wrapper around OctoAI Inference Endpoints.\n OctoAIEndpoint is a class to interact with OctoAI\n Compute Service large language model endpoints.\n To use, you should have the ``octoai`` python package installed, and the\n environment variable ``OCTOAI_API_TOKEN`` set with your API token, or pass\n it as a named parameter to the constructor.\n Example:\n .. code-block:: python\n from langchain.llms.octoai_endpoint import OctoAIEndpoint\n OctoAIEndpoint(\n octoai_api_token=\"octoai-api-key\",\n endpoint_url=\"https://mpt-7b-demo-kk0powt97tmb.octoai.cloud/generate\",\n model_kwargs={\n \"max_new_tokens\": 200,\n \"temperature\": 0.75,\n \"top_p\": 0.95,\n \"repetition_penalty\": 1,\n \"seed\": None,\n \"stop\": [],\n },\n )\n \"\"\"\n endpoint_url: Optional[str] = None\n \"\"\"Endpoint URL to use.\"\"\"\n model_kwargs: Optional[dict] = None\n \"\"\"Key word arguments to pass to the model.\"\"\"\n octoai_api_token: Optional[str] = None\n \"\"\"OCTOAI API Token\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/octoai_endpoint.html"} {"id": "c9f72d9cd73f-1", "text": "\"\"\"OCTOAI API Token\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator(allow_reuse=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n octoai_api_token = get_from_dict_or_env(\n values, \"octoai_api_token\", \"OCTOAI_API_TOKEN\"\n )\n values[\"endpoint_url\"] = get_from_dict_or_env(\n values, \"endpoint_url\", \"ENDPOINT_URL\"\n )\n values[\"octoai_api_token\"] = octoai_api_token\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n _model_kwargs = self.model_kwargs or {}\n return {\n **{\"endpoint_url\": self.endpoint_url},\n **{\"model_kwargs\": _model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"octoai_endpoint\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to OctoAI's inference endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n \"\"\"\n _model_kwargs = self.model_kwargs or {}\n # 
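Note how Aviary mangles model ids into routes: `/` in the model id becomes `--` in the URL, and `get_models` reverses it. With the default empty `version`, a hypothetical backend URL resolves like this:

.. code-block:: python

    model = "amazon/LightGPT"
    backend_url = "http://aviary.example.com/"  # hypothetical AVIARY_URL value
    version = ""
    url = backend_url + model.replace("/", "--") + "/" + version + "query"
    assert url == "http://aviary.example.com/amazon--LightGPT/query"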
Prepare the payload JSON", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/octoai_endpoint.html"} {"id": "c9f72d9cd73f-2", "text": "_model_kwargs = self.model_kwargs or {}\n # Prepare the payload JSON\n parameter_payload = {\"inputs\": prompt, \"parameters\": _model_kwargs}\n try:\n # Initialize the OctoAI client\n from octoai import client\n octoai_client = client.Client(token=self.octoai_api_token)\n # Send the request using the OctoAI client\n resp_json = octoai_client.infer(self.endpoint_url, parameter_payload)\n text = resp_json[\"generated_text\"]\n except Exception as e:\n # Handle any errors raised by the inference endpoint\n raise ValueError(f\"Error raised by the inference endpoint: {e}\") from e\n if stop is not None:\n # Apply stop tokens when making calls to OctoAI\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/octoai_endpoint.html"} {"id": "af43cd0c4c96-0", "text": "Source code for langchain.llms.petals\n\"\"\"Wrapper around Petals API.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class Petals(LLM):\n \"\"\"Wrapper around Petals Bloom models.\n To use, you should have the ``petals`` python package installed, and the\n environment variable ``HUGGINGFACE_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. 
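Several wrappers in this module (SageMaker, Modal, OctoAI, Aviary, Petals) post-process with `enforce_stop_tokens`, which simply truncates the text at the first occurrence of any stop sequence. Behaviorally:

.. code-block:: python

    from langchain.llms.utils import enforce_stop_tokens

    text = "Paris is the capital.\nObservation: done"
    print(enforce_stop_tokens(text, ["\nObservation:"]))
    # -> "Paris is the capital."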
code-block:: python\n from langchain.llms import Petals\n petals = Petals()\n \"\"\"\n client: Any\n \"\"\"The client to use for the API calls.\"\"\"\n tokenizer: Any\n \"\"\"The tokenizer to use for the API calls.\"\"\"\n model_name: str = \"bigscience/bloom-petals\"\n \"\"\"The model to use.\"\"\"\n temperature: float = 0.7\n \"\"\"What sampling temperature to use.\"\"\"\n max_new_tokens: int = 256\n \"\"\"The maximum number of new tokens to generate in the completion.\"\"\"\n top_p: float = 0.9\n \"\"\"The cumulative probability for top-p sampling.\"\"\"\n top_k: Optional[int] = None\n \"\"\"The number of highest probability vocabulary tokens\n to keep for top-k-filtering.\"\"\"\n do_sample: bool = True\n \"\"\"Whether or not to use sampling; use greedy decoding otherwise.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/petals.html"} {"id": "af43cd0c4c96-1", "text": "\"\"\"Whether or not to use sampling; use greedy decoding otherwise.\"\"\"\n max_length: Optional[int] = None\n \"\"\"The maximum length of the sequence to be generated.\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call\n not explicitly specified.\"\"\"\n huggingface_api_key: Optional[str] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"WARNING!
{field_name} is not a default parameter.\n {field_name} was transferred to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n huggingface_api_key = get_from_dict_or_env(\n values, \"huggingface_api_key\", \"HUGGINGFACE_API_KEY\"\n )\n try:\n from petals import DistributedBloomForCausalLM", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/petals.html"} {"id": "af43cd0c4c96-2", "text": ")\n try:\n from petals import DistributedBloomForCausalLM\n from transformers import BloomTokenizerFast\n model_name = values[\"model_name\"]\n values[\"tokenizer\"] = BloomTokenizerFast.from_pretrained(model_name)\n values[\"client\"] = DistributedBloomForCausalLM.from_pretrained(model_name)\n values[\"huggingface_api_key\"] = huggingface_api_key\n except ImportError:\n raise ValueError(\n \"Could not import transformers or petals python package. \"\n \"Please install with `pip install -U transformers petals`.\"\n )\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling Petals API.\"\"\"\n normal_params = {\n \"temperature\": self.temperature,\n \"max_new_tokens\": self.max_new_tokens,\n \"top_p\": self.top_p,\n \"top_k\": self.top_k,\n \"do_sample\": self.do_sample,\n \"max_length\": self.max_length,\n }\n return {**normal_params, **self.model_kwargs}\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model_name\": self.model_name}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"petals\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call the Petals API.\"\"\"\n params = self._default_params", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/petals.html"} {"id": "af43cd0c4c96-3", "text": "\"\"\"Call the Petals API.\"\"\"\n params = self._default_params\n params = {**params, **kwargs}\n inputs = self.tokenizer(prompt, return_tensors=\"pt\")[\"input_ids\"]\n outputs = self.client.generate(inputs, **params)\n text = self.tokenizer.decode(outputs[0])\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the model parameters\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/petals.html"} {"id": "7a5eaa17cfe9-0", "text": "Source code for langchain.llms.huggingface_hub\n\"\"\"Wrapper around HuggingFace APIs.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nDEFAULT_REPO_ID = \"gpt2\"\nVALID_TASKS = (\"text2text-generation\", \"text-generation\", \"summarization\")\n[docs]class HuggingFaceHub(LLM):\n \"\"\"Wrapper around HuggingFaceHub models.\n To use, you should have the ``huggingface_hub`` python package installed, and the\n environment variable
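As the ``build_extra`` validator above shows, ``Petals`` does not reject unknown constructor keywords: anything that is not a declared field is warned about and transferred into ``model_kwargs``. A hedged sketch of both spellings (note that constructing the class eagerly downloads the tokenizer and model inside ``validate_environment``):

.. code-block:: python

    from langchain.llms import Petals

    # Declared fields are set directly on the model.
    llm = Petals(model_name="bigscience/bloom-petals", max_new_tokens=128)

    # An undeclared kwarg such as `num_beams` is moved into model_kwargs
    # by build_extra (with a warning) rather than raising an error.
    llm2 = Petals(model_name="bigscience/bloom-petals", num_beams=2)
    assert llm2.model_kwargs == {"num_beams": 2}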
``HUGGINGFACEHUB_API_TOKEN`` set with your API token, or pass\n it as a named parameter to the constructor.\n Only supports `text-generation`, `text2text-generation` and `summarization` for now.\n Example:\n .. code-block:: python\n from langchain.llms import HuggingFaceHub\n hf = HuggingFaceHub(repo_id=\"gpt2\", huggingfacehub_api_token=\"my-api-key\")\n \"\"\"\n client: Any #: :meta private:\n repo_id: str = DEFAULT_REPO_ID\n \"\"\"Model name to use.\"\"\"\n task: Optional[str] = None\n \"\"\"Task to call the model with.\n Should be a task that returns `generated_text` or `summary_text`.\"\"\"\n model_kwargs: Optional[dict] = None\n \"\"\"Key word arguments to pass to the model.\"\"\"\n huggingfacehub_api_token: Optional[str] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_hub.html"} {"id": "7a5eaa17cfe9-1", "text": "\"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n huggingfacehub_api_token = get_from_dict_or_env(\n values, \"huggingfacehub_api_token\", \"HUGGINGFACEHUB_API_TOKEN\"\n )\n try:\n from huggingface_hub.inference_api import InferenceApi\n repo_id = values[\"repo_id\"]\n client = InferenceApi(\n repo_id=repo_id,\n token=huggingfacehub_api_token,\n task=values.get(\"task\"),\n )\n if client.task not in VALID_TASKS:\n raise ValueError(\n f\"Got invalid task {client.task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n values[\"client\"] = client\n except ImportError:\n raise ValueError(\n \"Could not import huggingface_hub python package. \"\n \"Please install it with `pip install huggingface_hub`.\"\n )\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n _model_kwargs = self.model_kwargs or {}\n return {\n **{\"repo_id\": self.repo_id, \"task\": self.task},\n **{\"model_kwargs\": _model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"huggingface_hub\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_hub.html"} {"id": "7a5eaa17cfe9-2", "text": "prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to HuggingFace Hub's inference endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. 
code-block:: python\n response = hf(\"Tell me a joke.\")\n \"\"\"\n _model_kwargs = self.model_kwargs or {}\n params = {**_model_kwargs, **kwargs}\n response = self.client(inputs=prompt, params=params)\n if \"error\" in response:\n raise ValueError(f\"Error raised by inference API: {response['error']}\")\n if self.client.task == \"text-generation\":\n # Text generation return includes the starter text.\n text = response[0][\"generated_text\"][len(prompt) :]\n elif self.client.task == \"text2text-generation\":\n text = response[0][\"generated_text\"]\n elif self.client.task == \"summarization\":\n text = response[0][\"summary_text\"]\n else:\n raise ValueError(\n f\"Got invalid task {self.client.task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n if stop is not None:\n # This is a bit hacky, but I can't figure out a better way to enforce\n # stop tokens when making calls to huggingface_hub.\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_hub.html"} {"id": "50d37aef1319-0", "text": "Source code for langchain.llms.nlpcloud\n\"\"\"Wrapper around NLPCloud APIs.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.utils import get_from_dict_or_env\n[docs]class NLPCloud(LLM):\n \"\"\"Wrapper around NLPCloud large language models.\n To use, you should have the ``nlpcloud`` python package installed, and the\n environment variable ``NLPCLOUD_API_KEY`` set with your API key.\n Example:\n .. code-block:: python\n from langchain.llms import NLPCloud\n nlpcloud = NLPCloud(model=\"gpt-neox-20b\")\n \"\"\"\n client: Any #: :meta private:\n model_name: str = \"finetuned-gpt-neox-20b\"\n \"\"\"Model name to use.\"\"\"\n temperature: float = 0.7\n \"\"\"What sampling temperature to use.\"\"\"\n min_length: int = 1\n \"\"\"The minimum number of tokens to generate in the completion.\"\"\"\n max_length: int = 256\n \"\"\"The maximum number of tokens to generate in the completion.\"\"\"\n length_no_input: bool = True\n \"\"\"Whether min_length and max_length should include the length of the input.\"\"\"\n remove_input: bool = True\n \"\"\"Remove input text from API response\"\"\"\n remove_end_sequence: bool = True\n \"\"\"Whether or not to remove the end sequence token.\"\"\"\n bad_words: List[str] = []\n \"\"\"List of tokens not allowed to be generated.\"\"\"\n top_p: int = 1\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/nlpcloud.html"} {"id": "50d37aef1319-1", "text": "\"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n top_k: int = 50\n \"\"\"The number of highest probability tokens to keep for top-k filtering.\"\"\"\n repetition_penalty: float = 1.0\n \"\"\"Penalizes repeated tokens. 
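The branching at the end of ``HuggingFaceHub._call`` is the part worth internalizing: ``text-generation`` responses echo the prompt, so the wrapper slices it off, while ``text2text-generation`` and ``summarization`` return only new text. A usage sketch, assuming a valid Hub token (placeholder below) and that ``max_new_tokens`` is accepted by the target model:

.. code-block:: python

    from langchain.llms import HuggingFaceHub

    # gpt2 is a text-generation repo, so _call strips the echoed prompt.
    hf = HuggingFaceHub(
        repo_id="gpt2",
        huggingfacehub_api_token="<hf-token>",  # or set HUGGINGFACEHUB_API_TOKEN
        model_kwargs={"max_new_tokens": 64},
    )
    print(hf("Tell me a joke.", stop=["\n"]))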
1.0 means no penalty.\"\"\"\n length_penalty: float = 1.0\n \"\"\"Exponential penalty to the length.\"\"\"\n do_sample: bool = True\n \"\"\"Whether to use sampling (True) or greedy decoding.\"\"\"\n num_beams: int = 1\n \"\"\"Number of beams for beam search.\"\"\"\n early_stopping: bool = False\n \"\"\"Whether to stop beam search at num_beams sentences.\"\"\"\n num_return_sequences: int = 1\n \"\"\"How many completions to generate for each prompt.\"\"\"\n nlpcloud_api_key: Optional[str] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n nlpcloud_api_key = get_from_dict_or_env(\n values, \"nlpcloud_api_key\", \"NLPCLOUD_API_KEY\"\n )\n try:\n import nlpcloud\n values[\"client\"] = nlpcloud.Client(\n values[\"model_name\"], nlpcloud_api_key, gpu=True, lang=\"en\"\n )\n except ImportError:\n raise ImportError(\n \"Could not import nlpcloud python package. \"\n \"Please install it with `pip install nlpcloud`.\"\n )\n return values\n @property\n def _default_params(self) -> Mapping[str, Any]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/nlpcloud.html"} {"id": "50d37aef1319-2", "text": "@property\n def _default_params(self) -> Mapping[str, Any]:\n \"\"\"Get the default parameters for calling NLPCloud API.\"\"\"\n return {\n \"temperature\": self.temperature,\n \"min_length\": self.min_length,\n \"max_length\": self.max_length,\n \"length_no_input\": self.length_no_input,\n \"remove_input\": self.remove_input,\n \"remove_end_sequence\": self.remove_end_sequence,\n \"bad_words\": self.bad_words,\n \"top_p\": self.top_p,\n \"top_k\": self.top_k,\n \"repetition_penalty\": self.repetition_penalty,\n \"length_penalty\": self.length_penalty,\n \"do_sample\": self.do_sample,\n \"num_beams\": self.num_beams,\n \"early_stopping\": self.early_stopping,\n \"num_return_sequences\": self.num_return_sequences,\n }\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model_name\": self.model_name}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"nlpcloud\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to NLPCloud's create endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Not supported by this interface (pass in init method)\n Returns:\n The string generated by the model.\n Example:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/nlpcloud.html"} {"id": "50d37aef1319-3", "text": "Returns:\n The string generated by the model.\n Example:\n .. 
code-block:: python\n response = nlpcloud(\"Tell me a joke.\")\n \"\"\"\n if stop and len(stop) > 1:\n raise ValueError(\n \"NLPCloud only supports a single stop sequence per generation. \"\n \"Pass in a list of length 1.\"\n )\n elif stop and len(stop) == 1:\n end_sequence = stop[0]\n else:\n end_sequence = None\n params = {**self._default_params, **kwargs}\n response = self.client.generation(prompt, end_sequence=end_sequence, **params)\n return response[\"generated_text\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/nlpcloud.html"} {"id": "d86c177b8c01-0", "text": "Source code for langchain.llms.google_palm\n\"\"\"Wrapper around Google's PaLM Text APIs.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, Callable, Dict, List, Optional\nfrom pydantic import BaseModel, root_validator\nfrom tenacity import (\n before_sleep_log,\n retry,\n retry_if_exception_type,\n stop_after_attempt,\n wait_exponential,\n)\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.llms import BaseLLM\nfrom langchain.schema import Generation, LLMResult\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\ndef _create_retry_decorator() -> Callable[[Any], Any]:\n \"\"\"Returns a tenacity retry decorator, preconfigured to handle PaLM exceptions.\"\"\"\n try:\n import google.api_core.exceptions\n except ImportError:\n raise ImportError(\n \"Could not import google-api-core python package. \"\n \"Please install it with `pip install google-api-core`.\"\n )\n multiplier = 2\n min_seconds = 1\n max_seconds = 60\n max_retries = 10\n return retry(\n reraise=True,\n stop=stop_after_attempt(max_retries),\n wait=wait_exponential(multiplier=multiplier, min=min_seconds, max=max_seconds),\n retry=(\n retry_if_exception_type(google.api_core.exceptions.ResourceExhausted)\n | retry_if_exception_type(google.api_core.exceptions.ServiceUnavailable)\n | retry_if_exception_type(google.api_core.exceptions.GoogleAPIError)\n ),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/google_palm.html"} {"id": "d86c177b8c01-1", "text": "),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )\n[docs]def generate_with_retry(llm: GooglePalm, **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the completion call.\"\"\"\n retry_decorator = _create_retry_decorator()\n @retry_decorator\n def _generate_with_retry(**kwargs: Any) -> Any:\n return llm.client.generate_text(**kwargs)\n return _generate_with_retry(**kwargs)\ndef _strip_erroneous_leading_spaces(text: str) -> str:\n \"\"\"Strip erroneous leading spaces from text.\n The PaLM API will sometimes erroneously return a single leading space in all\n lines > 1. This function strips that space.\n \"\"\"\n has_leading_space = all(not line or line[0] == \" \" for line in text.split(\"\\n\")[1:])\n if has_leading_space:\n return text.replace(\"\\n \", \"\\n\")\n else:\n return text\n[docs]class GooglePalm(BaseLLM, BaseModel):\n client: Any #: :meta private:\n google_api_key: Optional[str]\n model_name: str = \"models/text-bison-001\"\n \"\"\"Model name to use.\"\"\"\n temperature: float = 0.7\n \"\"\"Run inference with this temperature. Must be in the closed interval\n [0.0, 1.0].\"\"\"\n top_p: Optional[float] = None\n \"\"\"Decode using nucleus sampling: consider the smallest set of tokens whose\n probability sum is at least top_p.
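Unlike most wrappers in this module, ``NLPCloud`` maps stop words onto the API's single ``end_sequence`` parameter, so at most one stop string is accepted per call. A short sketch, assuming ``NLPCLOUD_API_KEY`` is set or the key is passed as shown (placeholder value):

.. code-block:: python

    from langchain.llms import NLPCloud

    llm = NLPCloud(nlpcloud_api_key="<nlpcloud-api-key>")

    llm("Tell me a joke.", stop=["\n"])   # ok: becomes end_sequence="\n"
    # llm("Tell me a joke.", stop=["\n", "."])  # would raise ValueError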
Must be in the closed interval [0.0, 1.0].\"\"\"\n top_k: Optional[int] = None\n \"\"\"Decode using top-k sampling: consider the set of top_k most probable tokens.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/google_palm.html"} {"id": "d86c177b8c01-2", "text": "\"\"\"Decode using top-k sampling: consider the set of top_k most probable tokens.\n Must be positive.\"\"\"\n max_output_tokens: Optional[int] = None\n \"\"\"Maximum number of tokens to include in a candidate. Must be greater than zero.\n If unset, will default to 64.\"\"\"\n n: int = 1\n \"\"\"Number of chat completions to generate for each prompt. Note that the API may\n not return the full n completions if duplicates are generated.\"\"\"\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate api key, python package exists.\"\"\"\n google_api_key = get_from_dict_or_env(\n values, \"google_api_key\", \"GOOGLE_API_KEY\"\n )\n try:\n import google.generativeai as genai\n genai.configure(api_key=google_api_key)\n except ImportError:\n raise ImportError(\n \"Could not import google-generativeai python package. \"\n \"Please install it with `pip install google-generativeai`.\"\n )\n values[\"client\"] = genai\n if values[\"temperature\"] is not None and not 0 <= values[\"temperature\"] <= 1:\n raise ValueError(\"temperature must be in the range [0.0, 1.0]\")\n if values[\"top_p\"] is not None and not 0 <= values[\"top_p\"] <= 1:\n raise ValueError(\"top_p must be in the range [0.0, 1.0]\")\n if values[\"top_k\"] is not None and values[\"top_k\"] <= 0:\n raise ValueError(\"top_k must be positive\")\n if values[\"max_output_tokens\"] is not None and values[\"max_output_tokens\"] <= 0:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/google_palm.html"} {"id": "d86c177b8c01-3", "text": "raise ValueError(\"max_output_tokens must be greater than zero\")\n return values\n def _generate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> LLMResult:\n generations = []\n for prompt in prompts:\n completion = generate_with_retry(\n self,\n model=self.model_name,\n prompt=prompt,\n stop_sequences=stop,\n temperature=self.temperature,\n top_p=self.top_p,\n top_k=self.top_k,\n max_output_tokens=self.max_output_tokens,\n candidate_count=self.n,\n **kwargs,\n )\n prompt_generations = []\n for candidate in completion.candidates:\n raw_text = candidate[\"output\"]\n stripped_text = _strip_erroneous_leading_spaces(raw_text)\n prompt_generations.append(Generation(text=stripped_text))\n generations.append(prompt_generations)\n return LLMResult(generations=generations)\n async def _agenerate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> LLMResult:\n raise NotImplementedError()\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"google_palm\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/google_palm.html"} {"id": "aa9f4dcba560-0", "text": "Source code for langchain.llms.openlm\nfrom typing import Any, Dict\nfrom pydantic import root_validator\nfrom langchain.llms.openai import BaseOpenAI\n[docs]class OpenLM(BaseOpenAI):\n @property\n def _invocation_params(self) -> Dict[str, Any]:\n return {**{\"model\": self.model_name}, **super()._invocation_params}\n[docs] @root_validator()\n def 
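``GooglePalm`` above is a ``BaseLLM``, so it generates per prompt with ``candidate_count=n`` and post-processes each candidate through ``_strip_erroneous_leading_spaces``; its validator enforces the documented parameter ranges up front. A sketch, assuming a valid API key (placeholder):

.. code-block:: python

    from langchain.llms import GooglePalm

    # temperature/top_p must lie in [0.0, 1.0] and top_k must be positive,
    # or validate_environment raises a ValueError before any call is made.
    llm = GooglePalm(google_api_key="<google-api-key>", temperature=0.2, n=2)

    result = llm.generate(["Explain nucleus sampling in one sentence."])
    for generation in result.generations[0]:  # up to n=2 candidates per prompt
        print(generation.text)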
validate_environment(cls, values: Dict) -> Dict:\n try:\n import openlm\n values[\"client\"] = openlm.Completion\n except ImportError:\n raise ValueError(\n \"Could not import openlm python package. \"\n \"Please install it with `pip install openlm`.\"\n )\n if values[\"streaming\"]:\n raise ValueError(\"Streaming not supported with openlm\")\n return values", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openlm.html"} {"id": "ac9b99b44c4f-0", "text": "Source code for langchain.llms.promptlayer_openai\n\"\"\"PromptLayer wrapper.\"\"\"\nimport datetime\nfrom typing import Any, List, Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.llms import OpenAI, OpenAIChat\nfrom langchain.schema import LLMResult\n[docs]class PromptLayerOpenAI(OpenAI):\n \"\"\"Wrapper around OpenAI large language models.\n To use, you should have the ``openai`` and ``promptlayer`` python\n package installed, and the environment variable ``OPENAI_API_KEY``\n and ``PROMPTLAYER_API_KEY`` set with your openAI API key and\n promptlayer key respectively.\n All parameters that can be passed to the OpenAI LLM can also\n be passed here. The PromptLayerOpenAI LLM adds two optional\n parameters:\n ``pl_tags``: List of strings to tag the request with.\n ``return_pl_id``: If True, the PromptLayer request ID will be\n returned in the ``generation_info`` field of the\n ``Generation`` object.\n Example:\n .. code-block:: python\n from langchain.llms import PromptLayerOpenAI\n openai = PromptLayerOpenAI(model_name=\"text-davinci-003\")\n \"\"\"\n pl_tags: Optional[List[str]]\n return_pl_id: Optional[bool] = False\n def _generate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> LLMResult:\n \"\"\"Call OpenAI generate and then call PromptLayer API to log the request.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/promptlayer_openai.html"} {"id": "ac9b99b44c4f-1", "text": "\"\"\"Call OpenAI generate and then call PromptLayer API to log the request.\"\"\"\n from promptlayer.utils import get_api_key, promptlayer_api_request\n request_start_time = datetime.datetime.now().timestamp()\n generated_responses = super()._generate(prompts, stop, run_manager)\n request_end_time = datetime.datetime.now().timestamp()\n for i in range(len(prompts)):\n prompt = prompts[i]\n generation = generated_responses.generations[i][0]\n resp = {\n \"text\": generation.text,\n \"llm_output\": generated_responses.llm_output,\n }\n params = {**self._identifying_params, **kwargs}\n pl_request_id = promptlayer_api_request(\n \"langchain.PromptLayerOpenAI\",\n \"langchain\",\n [prompt],\n params,\n self.pl_tags,\n resp,\n request_start_time,\n request_end_time,\n get_api_key(),\n return_pl_id=self.return_pl_id,\n )\n if self.return_pl_id:\n if generation.generation_info is None or not isinstance(\n generation.generation_info, dict\n ):\n generation.generation_info = {}\n generation.generation_info[\"pl_request_id\"] = pl_request_id\n return generated_responses\n async def _agenerate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> LLMResult:\n from promptlayer.utils import get_api_key, promptlayer_api_request_async\n request_start_time = datetime.datetime.now().timestamp()\n generated_responses = await 
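``OpenLM`` is the thinnest wrapper in this section: it inherits everything from ``BaseOpenAI``, swaps in ``openlm.Completion`` as the client, forwards the model name through ``_invocation_params``, and rejects streaming. A hedged sketch, assuming the ``openlm`` package and whatever provider credentials it routes to are configured:

.. code-block:: python

    from langchain.llms import OpenLM

    # streaming=True would be rejected by validate_environment.
    llm = OpenLM(model_name="text-davinci-003")
    print(llm("Say hello in French."))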
super()._agenerate(prompts, stop, run_manager)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/promptlayer_openai.html"} {"id": "ac9b99b44c4f-2", "text": "generated_responses = await super()._agenerate(prompts, stop, run_manager)\n request_end_time = datetime.datetime.now().timestamp()\n for i in range(len(prompts)):\n prompt = prompts[i]\n generation = generated_responses.generations[i][0]\n resp = {\n \"text\": generation.text,\n \"llm_output\": generated_responses.llm_output,\n }\n params = {**self._identifying_params, **kwargs}\n pl_request_id = await promptlayer_api_request_async(\n \"langchain.PromptLayerOpenAI.async\",\n \"langchain\",\n [prompt],\n params,\n self.pl_tags,\n resp,\n request_start_time,\n request_end_time,\n get_api_key(),\n return_pl_id=self.return_pl_id,\n )\n if self.return_pl_id:\n if generation.generation_info is None or not isinstance(\n generation.generation_info, dict\n ):\n generation.generation_info = {}\n generation.generation_info[\"pl_request_id\"] = pl_request_id\n return generated_responses\n[docs]class PromptLayerOpenAIChat(OpenAIChat):\n \"\"\"Wrapper around OpenAI large language models.\n To use, you should have the ``openai`` and ``promptlayer`` python\n package installed, and the environment variable ``OPENAI_API_KEY``\n and ``PROMPTLAYER_API_KEY`` set with your openAI API key and\n promptlayer key respectively.\n All parameters that can be passed to the OpenAIChat LLM can also\n be passed here. The PromptLayerOpenAIChat adds two optional\n parameters:\n ``pl_tags``: List of strings to tag the request with.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/promptlayer_openai.html"} {"id": "ac9b99b44c4f-3", "text": "parameters:\n ``pl_tags``: List of strings to tag the request with.\n ``return_pl_id``: If True, the PromptLayer request ID will be\n returned in the ``generation_info`` field of the\n ``Generation`` object.\n Example:\n .. 
code-block:: python\n from langchain.llms import PromptLayerOpenAIChat\n openaichat = PromptLayerOpenAIChat(model_name=\"gpt-3.5-turbo\")\n \"\"\"\n pl_tags: Optional[List[str]]\n return_pl_id: Optional[bool] = False\n def _generate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> LLMResult:\n \"\"\"Call OpenAI generate and then call PromptLayer API to log the request.\"\"\"\n from promptlayer.utils import get_api_key, promptlayer_api_request\n request_start_time = datetime.datetime.now().timestamp()\n generated_responses = super()._generate(prompts, stop, run_manager)\n request_end_time = datetime.datetime.now().timestamp()\n for i in range(len(prompts)):\n prompt = prompts[i]\n generation = generated_responses.generations[i][0]\n resp = {\n \"text\": generation.text,\n \"llm_output\": generated_responses.llm_output,\n }\n params = {**self._identifying_params, **kwargs}\n pl_request_id = promptlayer_api_request(\n \"langchain.PromptLayerOpenAIChat\",\n \"langchain\",\n [prompt],\n params,\n self.pl_tags,\n resp,\n request_start_time,\n request_end_time,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/promptlayer_openai.html"} {"id": "ac9b99b44c4f-4", "text": "resp,\n request_start_time,\n request_end_time,\n get_api_key(),\n return_pl_id=self.return_pl_id,\n )\n if self.return_pl_id:\n if generation.generation_info is None or not isinstance(\n generation.generation_info, dict\n ):\n generation.generation_info = {}\n generation.generation_info[\"pl_request_id\"] = pl_request_id\n return generated_responses\n async def _agenerate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> LLMResult:\n from promptlayer.utils import get_api_key, promptlayer_api_request_async\n request_start_time = datetime.datetime.now().timestamp()\n generated_responses = await super()._agenerate(prompts, stop, run_manager)\n request_end_time = datetime.datetime.now().timestamp()\n for i in range(len(prompts)):\n prompt = prompts[i]\n generation = generated_responses.generations[i][0]\n resp = {\n \"text\": generation.text,\n \"llm_output\": generated_responses.llm_output,\n }\n params = {**self._identifying_params, **kwargs}\n pl_request_id = await promptlayer_api_request_async(\n \"langchain.PromptLayerOpenAIChat.async\",\n \"langchain\",\n [prompt],\n params,\n self.pl_tags,\n resp,\n request_start_time,\n request_end_time,\n get_api_key(),\n return_pl_id=self.return_pl_id,\n )\n if self.return_pl_id:\n if generation.generation_info is None or not isinstance(\n generation.generation_info, dict", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/promptlayer_openai.html"} {"id": "ac9b99b44c4f-5", "text": "generation.generation_info, dict\n ):\n generation.generation_info = {}\n generation.generation_info[\"pl_request_id\"] = pl_request_id\n return generated_responses", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/promptlayer_openai.html"} {"id": "6355c8f6973b-0", "text": "Source code for langchain.llms.clarifai\n\"\"\"Wrapper around Clarifai's APIs.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom 
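Both PromptLayer wrappers above follow the same template: time the call, delegate to the parent ``_generate``/``_agenerate``, then log one PromptLayer request per prompt; with ``return_pl_id=True`` the request id is threaded into ``generation_info``. A sketch, assuming ``OPENAI_API_KEY`` and ``PROMPTLAYER_API_KEY`` are set:

.. code-block:: python

    from langchain.llms import PromptLayerOpenAI

    llm = PromptLayerOpenAI(
        model_name="text-davinci-003",
        pl_tags=["docs-example"],  # tags recorded with the logged request
        return_pl_id=True,         # surface the PromptLayer request id
    )
    result = llm.generate(["Tell me a joke."])
    generation = result.generations[0][0]
    print(generation.text, generation.generation_info["pl_request_id"])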
langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class Clarifai(LLM):\n \"\"\"Wrapper around Clarifai's large language models.\n To use, you should have an account on the Clarifai platform,\n the ``clarifai`` python package installed, and the\n environment variable ``CLARIFAI_PAT`` set with your PAT key,\n or pass it as a named parameter to the constructor.\n Example:\n .. code-block:: python\n from langchain.llms import Clarifai\n clarifai_llm = Clarifai(pat=CLARIFAI_PAT, \\\n user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)\n \"\"\"\n stub: Any #: :meta private:\n userDataObject: Any\n model_id: Optional[str] = None\n \"\"\"Model id to use.\"\"\"\n model_version_id: Optional[str] = None\n \"\"\"Model version id to use.\"\"\"\n app_id: Optional[str] = None\n \"\"\"Clarifai application id to use.\"\"\"\n user_id: Optional[str] = None\n \"\"\"Clarifai user id to use.\"\"\"\n pat: Optional[str] = None\n api_base: str = \"https://api.clarifai.com\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/clarifai.html"} {"id": "6355c8f6973b-1", "text": "\"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that we have all required info to access the Clarifai\n platform and that the python package exists in the environment.\"\"\"\n values[\"pat\"] = get_from_dict_or_env(values, \"pat\", \"CLARIFAI_PAT\")\n user_id = values.get(\"user_id\")\n app_id = values.get(\"app_id\")\n model_id = values.get(\"model_id\")\n if values[\"pat\"] is None:\n raise ValueError(\"Please provide a pat.\")\n if user_id is None:\n raise ValueError(\"Please provide a user_id.\")\n if app_id is None:\n raise ValueError(\"Please provide an app_id.\")\n if model_id is None:\n raise ValueError(\"Please provide a model_id.\")\n try:\n from clarifai.auth.helper import ClarifaiAuthHelper\n from clarifai.client import create_stub\n except ImportError:\n raise ImportError(\n \"Could not import clarifai python package. \"\n \"Please install it with `pip install clarifai`.\"\n )\n auth = ClarifaiAuthHelper(\n user_id=user_id,\n app_id=app_id,\n pat=values[\"pat\"],\n base=values[\"api_base\"],\n )\n values[\"userDataObject\"] = auth.get_user_app_id_proto()\n values[\"stub\"] = create_stub(auth)\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling Clarifai API.\"\"\"\n return {}\n @property\n def _identifying_params(self) -> Dict[str, Any]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/clarifai.html"} {"id": "6355c8f6973b-2", "text": "@property\n def _identifying_params(self) -> Dict[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\n \"user_id\": self.user_id,\n \"app_id\": self.app_id,\n \"model_id\": self.model_id,\n }\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"clarifai\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to Clarifai's PostModelOutputs endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n ..
code-block:: python\n response = clarifai_llm(\"Tell me a joke.\")\n \"\"\"\n try:\n from clarifai_grpc.grpc.api import (\n resources_pb2,\n service_pb2,\n )\n from clarifai_grpc.grpc.api.status import status_code_pb2\n except ImportError:\n raise ImportError(\n \"Could not import clarifai python package. \"\n \"Please install it with `pip install clarifai`.\"\n )\n # The userDataObject is created in the overview and\n # is required when using a PAT\n # If version_id is None, defaults to the latest model version\n post_model_outputs_request = service_pb2.PostModelOutputsRequest(\n user_app_id=self.userDataObject,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/clarifai.html"} {"id": "6355c8f6973b-3", "text": "user_app_id=self.userDataObject,\n model_id=self.model_id,\n version_id=self.model_version_id,\n inputs=[\n resources_pb2.Input(\n data=resources_pb2.Data(text=resources_pb2.Text(raw=prompt))\n )\n ],\n )\n post_model_outputs_response = self.stub.PostModelOutputs(\n post_model_outputs_request\n )\n if post_model_outputs_response.status.code != status_code_pb2.SUCCESS:\n logger.error(post_model_outputs_response.status)\n first_model_failure = (\n post_model_outputs_response.outputs[0].status\n if len(post_model_outputs_response.outputs)\n else None\n )\n raise Exception(\n f\"Post model outputs failed, status: \"\n f\"{post_model_outputs_response.status}, first output failure: \"\n f\"{first_model_failure}\"\n )\n text = post_model_outputs_response.outputs[0].data.text.raw\n # In order to make this consistent with other endpoints, we strip them.\n if stop is not None:\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/clarifai.html"} {"id": "2969fccf2da6-0", "text": "Source code for langchain.llms.cohere\n\"\"\"Wrapper around Cohere APIs.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, Callable, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom tenacity import (\n before_sleep_log,\n retry,\n retry_if_exception_type,\n stop_after_attempt,\n wait_exponential,\n)\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\ndef _create_retry_decorator(llm: Cohere) -> Callable[[Any], Any]:\n import cohere\n min_seconds = 4\n max_seconds = 10\n # Wait 2^x * 1 second between each retry starting with\n # 4 seconds, then up to 10 seconds, then 10 seconds afterwards\n return retry(\n reraise=True,\n stop=stop_after_attempt(llm.max_retries),\n wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),\n retry=(retry_if_exception_type(cohere.error.CohereError)),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )\n[docs]def completion_with_retry(llm: Cohere, **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the completion call.\"\"\"\n retry_decorator = _create_retry_decorator(llm)\n @retry_decorator\n def _completion_with_retry(**kwargs: Any) -> Any:\n return llm.client.generate(**kwargs)\n return _completion_with_retry(**kwargs)\n[docs]class Cohere(LLM):\n \"\"\"Wrapper around Cohere large language models.
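``Clarifai._call`` wraps the prompt in a protobuf ``Text`` input, posts it via the gRPC stub created at validation time, and reads ``outputs[0].data.text.raw`` back out. All four identifiers are required by the validator; every value below is a placeholder:

.. code-block:: python

    from langchain.llms import Clarifai

    llm = Clarifai(
        pat="<clarifai-pat>",   # or set the CLARIFAI_PAT env var
        user_id="<user-id>",
        app_id="<app-id>",
        model_id="<model-id>",
    )
    print(llm("Tell me a joke.", stop=["\n"]))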
``cohere`` python package installed, and the\n environment variable ``COHERE_API_KEY`` set with your API key, or pass\n it as a named parameter to the constructor.\n Example:\n .. code-block:: python\n from langchain.llms import Cohere\n cohere = Cohere(model=\"gptd-instruct-tft\", cohere_api_key=\"my-api-key\")\n \"\"\"\n client: Any #: :meta private:\n model: Optional[str] = None\n \"\"\"Model name to use.\"\"\"\n max_tokens: int = 256\n \"\"\"Denotes the number of tokens to predict per generation.\"\"\"\n temperature: float = 0.75\n \"\"\"A non-negative float that tunes the degree of randomness in generation.\"\"\"\n k: int = 0\n \"\"\"Number of most likely tokens to consider at each step.\"\"\"\n p: int = 1\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n frequency_penalty: float = 0.0\n \"\"\"Penalizes repeated tokens according to frequency. Between 0 and 1.\"\"\"\n presence_penalty: float = 0.0\n \"\"\"Penalizes repeated tokens. Between 0 and 1.\"\"\"\n truncate: Optional[str] = None\n \"\"\"Specify how the client handles inputs longer than the maximum token\n length: Truncate from START, END or NONE\"\"\"\n max_retries: int = 10\n \"\"\"Maximum number of retries to make when generating.\"\"\"\n cohere_api_key: Optional[str] = None\n stop: Optional[List[str]] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/cohere.html"} {"id": "2969fccf2da6-2", "text": "\"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n cohere_api_key = get_from_dict_or_env(\n values, \"cohere_api_key\", \"COHERE_API_KEY\"\n )\n try:\n import cohere\n values[\"client\"] = cohere.Client(cohere_api_key)\n except ImportError:\n raise ImportError(\n \"Could not import cohere python package. \"\n \"Please install it with `pip install cohere`.\"\n )\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling Cohere API.\"\"\"\n return {\n \"max_tokens\": self.max_tokens,\n \"temperature\": self.temperature,\n \"k\": self.k,\n \"p\": self.p,\n \"frequency_penalty\": self.frequency_penalty,\n \"presence_penalty\": self.presence_penalty,\n \"truncate\": self.truncate,\n }\n @property\n def _identifying_params(self) -> Dict[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model\": self.model}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"cohere\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/cohere.html"} {"id": "2969fccf2da6-3", "text": "**kwargs: Any,\n ) -> str:\n \"\"\"Call out to Cohere's generate endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. 
code-block:: python\n response = cohere(\"Tell me a joke.\")\n \"\"\"\n params = self._default_params\n if self.stop is not None and stop is not None:\n raise ValueError(\"`stop` found in both the input and default params.\")\n elif self.stop is not None:\n params[\"stop_sequences\"] = self.stop\n else:\n params[\"stop_sequences\"] = stop\n params = {**params, **kwargs}\n response = completion_with_retry(\n self, model=self.model, prompt=prompt, **params\n )\n text = response.generations[0].text\n # If stop tokens are provided, Cohere's endpoint returns them.\n # In order to make this consistent with other endpoints, we strip them.\n if stop is not None or self.stop is not None:\n text = enforce_stop_tokens(text, params[\"stop_sequences\"])\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/cohere.html"} {"id": "e5a11daa2c0d-0", "text": "Source code for langchain.llms.azureml_endpoint\n\"\"\"Wrapper around AzureML Managed Online Endpoint API.\"\"\"\nimport json\nimport urllib.request\nfrom abc import abstractmethod\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import BaseModel, validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.utils import get_from_dict_or_env\n[docs]class AzureMLEndpointClient(object):\n \"\"\"Wrapper around AzureML Managed Online Endpoint Client.\"\"\"\n def __init__(\n self, endpoint_url: str, endpoint_api_key: str, deployment_name: str\n ) -> None:\n \"\"\"Initialize the class.\"\"\"\n if not endpoint_api_key:\n raise ValueError(\"A key should be provided to invoke the endpoint\")\n self.endpoint_url = endpoint_url\n self.endpoint_api_key = endpoint_api_key\n self.deployment_name = deployment_name\n[docs] def call(self, body: bytes) -> bytes:\n \"\"\"call.\"\"\"\n # The azureml-model-deployment header will force the request to go to a\n # specific deployment. Remove this header to have the request observe the\n # endpoint traffic rules.\n headers = {\n \"Content-Type\": \"application/json\",\n \"Authorization\": (\"Bearer \" + self.endpoint_api_key),\n \"azureml-model-deployment\": self.deployment_name,\n }\n req = urllib.request.Request(self.endpoint_url, body, headers)\n response = urllib.request.urlopen(req, timeout=50)\n result = response.read()\n return result\nclass ContentFormatterBase:\n \"\"\"A handler class to transform request and response of\n AzureML endpoint to match with required schema.\n \"\"\"\n \"\"\"\n Example:\n .. code-block:: python", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/azureml_endpoint.html"} {"id": "e5a11daa2c0d-1", "text": "\"\"\"\n \"\"\"\n Example:\n .. 
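Note how ``Cohere._call`` treats stop sequences: a class-level ``stop`` and a call-time ``stop`` are mutually exclusive, and whichever one is set becomes ``stop_sequences`` and is additionally stripped client-side with ``enforce_stop_tokens``. A sketch with a placeholder key:

.. code-block:: python

    from langchain.llms import Cohere

    llm = Cohere(cohere_api_key="<cohere-api-key>", stop=["\n"])

    llm("Tell me a joke.")                # uses the default stop sequence
    # llm("Tell me a joke.", stop=["."])  # ValueError: stop set in both places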
code-block:: python\n \n class ContentFormatter(ContentFormatterBase):\n content_type = \"application/json\"\n accepts = \"application/json\"\n \n def format_request_payload(\n self, \n prompt: str, \n model_kwargs: Dict\n ) -> bytes:\n input_str = json.dumps(\n {\n \"inputs\": {\"input_string\": [prompt]}, \n \"parameters\": model_kwargs,\n }\n )\n return str.encode(input_str)\n \n def format_response_payload(self, output: str) -> str:\n response_json = json.loads(output)\n return response_json[0][\"0\"]\n \"\"\"\n content_type: Optional[str] = \"application/json\"\n \"\"\"The MIME type of the input data passed to the endpoint.\"\"\"\n accepts: Optional[str] = \"application/json\"\n \"\"\"The MIME type of the response data returned from the endpoint.\"\"\"\n @abstractmethod\n def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes:\n \"\"\"Formats the request body according to the input schema of\n the model. Returns bytes or a seekable file-like object in the\n format specified in the content_type request header.\n \"\"\"\n @abstractmethod\n def format_response_payload(self, output: bytes) -> str:\n \"\"\"Formats the response body according to the output\n schema of the model. Returns the data type that is\n received from the response.\n \"\"\"\n[docs]class OSSContentFormatter(ContentFormatterBase):\n \"\"\"Content handler for LLMs from the OSS catalog.\"\"\"\n[docs] def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes:\n input_str = json.dumps(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/azureml_endpoint.html"} {"id": "e5a11daa2c0d-2", "text": "input_str = json.dumps(\n {\"inputs\": {\"input_string\": [prompt]}, \"parameters\": model_kwargs}\n )\n return str.encode(input_str)\n[docs] def format_response_payload(self, output: bytes) -> str:\n response_json = json.loads(output)\n return response_json[0][\"0\"]\n[docs]class HFContentFormatter(ContentFormatterBase):\n \"\"\"Content handler for LLMs from the HuggingFace catalog.\"\"\"\n[docs] def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes:\n input_str = json.dumps({\"inputs\": [prompt], \"parameters\": model_kwargs})\n return str.encode(input_str)\n[docs] def format_response_payload(self, output: bytes) -> str:\n response_json = json.loads(output)\n return response_json[0][0][\"generated_text\"]\n[docs]class DollyContentFormatter(ContentFormatterBase):\n \"\"\"Content handler for the Dolly-v2-12b model.\"\"\"\n[docs] def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes:\n input_str = json.dumps(\n {\"input_data\": {\"input_string\": [prompt]}, \"parameters\": model_kwargs}\n )\n return str.encode(input_str)\n[docs] def format_response_payload(self, output: bytes) -> str:\n response_json = json.loads(output)\n return response_json[0]\n[docs]class AzureMLOnlineEndpoint(LLM, BaseModel):\n \"\"\"Wrapper around Azure ML Hosted models using Managed Online Endpoints.\n Example:\n .. code-block:: python\n azure_llm = AzureMLOnlineEndpoint(\n endpoint_url=\"https://..inference.ml.azure.com/score\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/azureml_endpoint.html"} {"id": "e5a11daa2c0d-3", "text": "endpoint_api_key=\"my-api-key\",\n deployment_name=\"my-deployment-name\",\n content_formatter=content_formatter,\n )\n \"\"\" # noqa: E501\n endpoint_url: str = \"\"\n \"\"\"URL of pre-existing Endpoint.
Should be passed to constructor or specified as \n env var `AZUREML_ENDPOINT_URL`.\"\"\"\n endpoint_api_key: str = \"\"\n \"\"\"Authentication Key for Endpoint. Should be passed to constructor or specified as\n env var `AZUREML_ENDPOINT_API_KEY`.\"\"\"\n deployment_name: str = \"\"\n \"\"\"Deployment Name for Endpoint. Should be passed to constructor or specified as\n env var `AZUREML_DEPLOYMENT_NAME`.\"\"\"\n http_client: Any = None #: :meta private:\n content_formatter: Any = None\n \"\"\"The content formatter that provides an input and output\n transform function to handle formats between the LLM and\n the endpoint\"\"\"\n model_kwargs: Optional[dict] = None\n \"\"\"Key word arguments to pass to the model.\"\"\"\n[docs] @validator(\"http_client\", always=True, allow_reuse=True)\n @classmethod\n def validate_client(cls, field_value: Any, values: Dict) -> AzureMLEndpointClient:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n endpoint_key = get_from_dict_or_env(\n values, \"endpoint_api_key\", \"AZUREML_ENDPOINT_API_KEY\"\n )\n endpoint_url = get_from_dict_or_env(\n values, \"endpoint_url\", \"AZUREML_ENDPOINT_URL\"\n )\n deployment_name = get_from_dict_or_env(\n values, \"deployment_name\", \"AZUREML_DEPLOYMENT_NAME\"\n )\n http_client = AzureMLEndpointClient(endpoint_url, endpoint_key, deployment_name)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/azureml_endpoint.html"} {"id": "e5a11daa2c0d-4", "text": "http_client = AzureMLEndpointClient(endpoint_url, endpoint_key, deployment_name)\n return http_client\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n _model_kwargs = self.model_kwargs or {}\n return {\n **{\"deployment_name\": self.deployment_name},\n **{\"model_kwargs\": _model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"azureml_endpoint\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any\n ) -> str:\n \"\"\"Call out to an AzureML Managed Online endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = azureml_model(\"Tell me a joke.\")\n \"\"\"\n _model_kwargs = self.model_kwargs or {}\n body = self.content_formatter.format_request_payload(prompt, _model_kwargs)\n endpoint_response = self.http_client.call(body)\n response = self.content_formatter.format_response_payload(endpoint_response)\n return response", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/azureml_endpoint.html"} {"id": "f57bd734cc78-0", "text": "Source code for langchain.llms.bedrock\nimport json\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nclass LLMInputOutputAdapter:\n \"\"\"Adapter class to prepare the inputs from Langchain to a format\n that LLM model expects. 
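The content formatter pair is the extension point of ``AzureMLOnlineEndpoint``: subclass ``ContentFormatterBase`` to match whatever request and response schema your deployment speaks. A hedged sketch for a hypothetical endpoint that accepts ``{"prompt": ...}`` and answers ``{"output": ...}`` (URL, key, and deployment name are placeholders):

.. code-block:: python

    import json
    from typing import Dict

    from langchain.llms.azureml_endpoint import (
        AzureMLOnlineEndpoint,
        ContentFormatterBase,
    )

    class EchoFormatter(ContentFormatterBase):
        """Formatter for a hypothetical {"prompt": ...} / {"output": ...} schema."""

        def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes:
            return json.dumps({"prompt": prompt, **model_kwargs}).encode()

        def format_response_payload(self, output: bytes) -> str:
            return json.loads(output)["output"]

    llm = AzureMLOnlineEndpoint(
        endpoint_url="https://<endpoint>.<region>.inference.ml.azure.com/score",
        endpoint_api_key="<endpoint-key>",
        deployment_name="<deployment-name>",
        content_formatter=EchoFormatter(),
    )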
Also, provides helper function to extract\n the generated text from the model response.\"\"\"\n @classmethod\n def prepare_input(\n cls, provider: str, prompt: str, model_kwargs: Dict[str, Any]\n ) -> Dict[str, Any]:\n input_body = {**model_kwargs}\n if provider == \"anthropic\" or provider == \"ai21\":\n input_body[\"prompt\"] = prompt\n elif provider == \"amazon\":\n input_body = dict()\n input_body[\"inputText\"] = prompt\n input_body[\"textGenerationConfig\"] = {**model_kwargs}\n else:\n input_body[\"inputText\"] = prompt\n if provider == \"anthropic\" and \"max_tokens_to_sample\" not in input_body:\n input_body[\"max_tokens_to_sample\"] = 50\n return input_body\n @classmethod\n def prepare_output(cls, provider: str, response: Any) -> str:\n if provider == \"anthropic\":\n response_body = json.loads(response.get(\"body\").read().decode())\n return response_body.get(\"completion\")\n else:\n response_body = json.loads(response.get(\"body\").read())\n if provider == \"ai21\":\n return response_body.get(\"completions\")[0].get(\"data\").get(\"text\")\n else:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/bedrock.html"} {"id": "f57bd734cc78-1", "text": "else:\n return response_body.get(\"results\")[0].get(\"outputText\")\n[docs]class Bedrock(LLM):\n \"\"\"LLM provider to invoke Bedrock models.\n To authenticate, the AWS client uses the following methods to\n automatically load credentials:\n https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\n If a specific credential profile should be used, you must pass\n the name of the profile from the ~/.aws/credentials file that is to be used.\n Make sure the credentials / roles used have the required policies to\n access the Bedrock service.\n \"\"\"\n \"\"\"\n Example:\n .. code-block:: python\n from bedrock_langchain.bedrock_llm import BedrockLLM\n llm = BedrockLLM(\n credentials_profile_name=\"default\", \n model_id=\"amazon.titan-tg1-large\"\n )\n \"\"\"\n client: Any #: :meta private:\n region_name: Optional[str] = None\n \"\"\"The aws region e.g., `us-west-2`. 
Falls back to the AWS_DEFAULT_REGION env variable\n or region specified in ~/.aws/config in case it is not provided here.\n \"\"\"\n credentials_profile_name: Optional[str] = None\n \"\"\"The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which\n has either access keys or role information specified.\n If not specified, the default credential profile or, if on an EC2 instance,\n credentials from IMDS will be used.\n See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\n \"\"\"\n model_id: str\n \"\"\"Id of the model to call, e.g., amazon.titan-tg1-large; this is", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/bedrock.html"} {"id": "f57bd734cc78-2", "text": "equivalent to the modelId property in the list-foundation-models api\"\"\"\n model_kwargs: Optional[Dict] = None\n \"\"\"Key word arguments to pass to the model.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that AWS credentials and the python package exist in the environment.\"\"\"\n # Skip creating new client if passed in constructor\n if values[\"client\"] is not None:\n return values\n try:\n import boto3\n if values[\"credentials_profile_name\"] is not None:\n session = boto3.Session(profile_name=values[\"credentials_profile_name\"])\n else:\n # use default credentials\n session = boto3.Session()\n client_params = {}\n if values[\"region_name\"]:\n client_params[\"region_name\"] = values[\"region_name\"]\n values[\"client\"] = session.client(\"bedrock\", **client_params)\n except ImportError:\n raise ModuleNotFoundError(\n \"Could not import boto3 python package. \"\n \"Please install it with `pip install boto3`.\"\n )\n except Exception as e:\n raise ValueError(\n \"Could not load credentials to authenticate with AWS client. \"\n \"Please check that credentials in the specified \"\n \"profile name are valid.\"\n ) from e\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n _model_kwargs = self.model_kwargs or {}\n return {\n **{\"model_kwargs\": _model_kwargs},\n }\n @property\n def _llm_type(self) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/bedrock.html"} {"id": "f57bd734cc78-3", "text": "}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"amazon_bedrock\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to Bedrock service model.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n ..
code-block:: python\n response = llm(\"Tell me a joke.\")\n \"\"\"\n _model_kwargs = self.model_kwargs or {}\n provider = self.model_id.split(\".\")[0]\n params = {**_model_kwargs, **kwargs}\n input_body = LLMInputOutputAdapter.prepare_input(provider, prompt, params)\n body = json.dumps(input_body)\n accept = \"application/json\"\n contentType = \"application/json\"\n try:\n response = self.client.invoke_model(\n body=body, modelId=self.model_id, accept=accept, contentType=contentType\n )\n text = LLMInputOutputAdapter.prepare_output(provider, response)\n except Exception as e:\n raise ValueError(f\"Error raised by bedrock service: {e}\")\n if stop is not None:\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/bedrock.html"} {"id": "360ff2d5251b-0", "text": "Source code for langchain.llms.gpt4all\n\"\"\"Wrapper for the GPT4All model.\"\"\"\nfrom functools import partial\nfrom typing import Any, Dict, List, Mapping, Optional, Set\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\n[docs]class GPT4All(LLM):\n r\"\"\"Wrapper around GPT4All language models.\n To use, you should have the ``gpt4all`` python package installed, the\n pre-trained model file, and the model's config information.\n Example:\n .. code-block:: python\n from langchain.llms import GPT4All\n model = GPT4All(model=\"./models/gpt4all-model.bin\", n_threads=8)\n # Simplest invocation\n response = model(\"Once upon a time, \")\n \"\"\"\n model: str\n \"\"\"Path to the pre-trained GPT4All model file.\"\"\"\n backend: Optional[str] = Field(None, alias=\"backend\")\n max_tokens: int = Field(200, alias=\"max_tokens\")\n \"\"\"Token context window.\"\"\"\n n_parts: int = Field(-1, alias=\"n_parts\")\n \"\"\"Number of parts to split the model into. \n If -1, the number of parts is automatically determined.\"\"\"\n seed: int = Field(0, alias=\"seed\")\n \"\"\"Seed.
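Because ``provider`` is just the prefix of ``model_id``, the payload shaping in ``LLMInputOutputAdapter`` can be exercised without any AWS credentials. A small, directly verifiable sketch of the two request shapes shown above:

.. code-block:: python

    from langchain.llms.bedrock import LLMInputOutputAdapter

    # Anthropic models get a "prompt" key plus a default max_tokens_to_sample.
    body = LLMInputOutputAdapter.prepare_input(
        provider="anthropic", prompt="Tell me a joke.", model_kwargs={}
    )
    assert body == {"prompt": "Tell me a joke.", "max_tokens_to_sample": 50}

    # Amazon models wrap the prompt and kwargs differently.
    body = LLMInputOutputAdapter.prepare_input(
        provider="amazon", prompt="Tell me a joke.", model_kwargs={"temperature": 0.0}
    )
    assert body["inputText"] == "Tell me a joke."
    assert body["textGenerationConfig"] == {"temperature": 0.0}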
If -1, a random seed is used.\"\"\"\n f16_kv: bool = Field(False, alias=\"f16_kv\")\n \"\"\"Use half-precision for key/value cache.\"\"\"\n logits_all: bool = Field(False, alias=\"logits_all\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/gpt4all.html"} {"id": "360ff2d5251b-1", "text": "logits_all: bool = Field(False, alias=\"logits_all\")\n \"\"\"Return logits for all tokens, not just the last token.\"\"\"\n vocab_only: bool = Field(False, alias=\"vocab_only\")\n \"\"\"Only load the vocabulary, no weights.\"\"\"\n use_mlock: bool = Field(False, alias=\"use_mlock\")\n \"\"\"Force system to keep model in RAM.\"\"\"\n embedding: bool = Field(False, alias=\"embedding\")\n \"\"\"Use embedding mode only.\"\"\"\n n_threads: Optional[int] = Field(4, alias=\"n_threads\")\n \"\"\"Number of threads to use.\"\"\"\n n_predict: Optional[int] = 256\n \"\"\"The maximum number of tokens to generate.\"\"\"\n temp: Optional[float] = 0.7\n \"\"\"The temperature to use for sampling.\"\"\"\n top_p: Optional[float] = 0.1\n \"\"\"The top-p value to use for sampling.\"\"\"\n top_k: Optional[int] = 40\n \"\"\"The top-k value to use for sampling.\"\"\"\n echo: Optional[bool] = False\n \"\"\"Whether to echo the prompt.\"\"\"\n stop: Optional[List[str]] = []\n \"\"\"A list of strings to stop generation when encountered.\"\"\"\n repeat_last_n: Optional[int] = 64\n \"Last n tokens to penalize\"\n repeat_penalty: Optional[float] = 1.18\n \"\"\"The penalty to apply to repeated tokens.\"\"\"\n n_batch: int = Field(8, alias=\"n_batch\")\n \"\"\"Batch size for prompt processing.\"\"\"\n streaming: bool = False\n \"\"\"Whether to stream the results or not.\"\"\"\n allow_download: bool = False\n \"\"\"If model does not exist in ~/.cache/gpt4all/, download it.\"\"\"\n client: Any = None #: :meta private:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/gpt4all.html"} {"id": "360ff2d5251b-2", "text": "client: Any = None #: :meta private:\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @staticmethod\n def _model_param_names() -> Set[str]:\n return {\n \"max_tokens\",\n \"n_predict\",\n \"top_k\",\n \"top_p\",\n \"temp\",\n \"n_batch\",\n \"repeat_penalty\",\n \"repeat_last_n\",\n }\n def _default_params(self) -> Dict[str, Any]:\n return {\n \"max_tokens\": self.max_tokens,\n \"n_predict\": self.n_predict,\n \"top_k\": self.top_k,\n \"top_p\": self.top_p,\n \"temp\": self.temp,\n \"n_batch\": self.n_batch,\n \"repeat_penalty\": self.repeat_penalty,\n \"repeat_last_n\": self.repeat_last_n,\n }\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the python package exists in the environment.\"\"\"\n try:\n from gpt4all import GPT4All as GPT4AllModel\n except ImportError:\n raise ImportError(\n \"Could not import gpt4all python package. 
\"\n \"Please install it with `pip install gpt4all`.\"\n )\n full_path = values[\"model\"]\n model_path, delimiter, model_name = full_path.rpartition(\"/\")\n model_path += delimiter\n values[\"client\"] = GPT4AllModel(\n model_name,\n model_path=model_path or None,\n model_type=values[\"backend\"],\n allow_download=values[\"allow_download\"],\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/gpt4all.html"} {"id": "360ff2d5251b-3", "text": "allow_download=values[\"allow_download\"],\n )\n if values[\"n_threads\"] is not None:\n # set n_threads\n values[\"client\"].model.set_thread_count(values[\"n_threads\"])\n try:\n values[\"backend\"] = values[\"client\"].model_type\n except AttributeError:\n # The below is for compatibility with GPT4All Python bindings <= 0.2.3.\n values[\"backend\"] = values[\"client\"].model.model_type\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"model\": self.model,\n **self._default_params(),\n **{\n k: v for k, v in self.__dict__.items() if k in self._model_param_names()\n },\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return the type of llm.\"\"\"\n return \"gpt4all\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n r\"\"\"Call out to GPT4All's generate method.\n Args:\n prompt: The prompt to pass into the model.\n stop: A list of strings to stop generation when encountered.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n prompt = \"Once upon a time, \"\n response = model(prompt, n_predict=55)\n \"\"\"\n text_callback = None\n if run_manager:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/gpt4all.html"} {"id": "360ff2d5251b-4", "text": "\"\"\"\n text_callback = None\n if run_manager:\n text_callback = partial(run_manager.on_llm_new_token, verbose=self.verbose)\n text = \"\"\n params = {**self._default_params(), **kwargs}\n for token in self.client.generate(prompt, **params):\n if text_callback:\n text_callback(token)\n text += token\n if stop is not None:\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/gpt4all.html"} {"id": "423bc918a2f0-0", "text": "Source code for langchain.llms.self_hosted_hugging_face\n\"\"\"Wrapper around HuggingFace Pipeline API to run on self-hosted remote hardware.\"\"\"\nimport importlib.util\nimport logging\nfrom typing import Any, Callable, List, Mapping, Optional\nfrom pydantic import Extra\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.self_hosted import SelfHostedPipeline\nfrom langchain.llms.utils import enforce_stop_tokens\nDEFAULT_MODEL_ID = \"gpt2\"\nDEFAULT_TASK = \"text-generation\"\nVALID_TASKS = (\"text2text-generation\", \"text-generation\", \"summarization\")\nlogger = logging.getLogger(__name__)\ndef _generate_text(\n pipeline: Any,\n prompt: str,\n *args: Any,\n stop: Optional[List[str]] = None,\n **kwargs: Any,\n) -> str:\n \"\"\"Inference function to send to the remote hardware.\n Accepts a Hugging Face pipeline (or more likely,\n a key pointing to such a pipeline on the cluster's object store)\n and returns generated text.\n \"\"\"\n response = pipeline(prompt, *args, **kwargs)\n if pipeline.task == \"text-generation\":\n # Text generation return includes the starter text.\n 
text = response[0][\"generated_text\"][len(prompt) :]\n elif pipeline.task == \"text2text-generation\":\n text = response[0][\"generated_text\"]\n elif pipeline.task == \"summarization\":\n text = response[0][\"summary_text\"]\n else:\n raise ValueError(\n f\"Got invalid task {pipeline.task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n if stop is not None:\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/self_hosted_hugging_face.html"} {"id": "423bc918a2f0-1", "text": "text = enforce_stop_tokens(text, stop)\n return text\ndef _load_transformer(\n model_id: str = DEFAULT_MODEL_ID,\n task: str = DEFAULT_TASK,\n device: int = 0,\n model_kwargs: Optional[dict] = None,\n) -> Any:\n \"\"\"Inference function to send to the remote hardware.\n Accepts a huggingface model_id and returns a pipeline for the task.\n \"\"\"\n from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM, AutoTokenizer\n from transformers import pipeline as hf_pipeline\n _model_kwargs = model_kwargs or {}\n tokenizer = AutoTokenizer.from_pretrained(model_id, **_model_kwargs)\n try:\n if task == \"text-generation\":\n model = AutoModelForCausalLM.from_pretrained(model_id, **_model_kwargs)\n elif task in (\"text2text-generation\", \"summarization\"):\n model = AutoModelForSeq2SeqLM.from_pretrained(model_id, **_model_kwargs)\n else:\n raise ValueError(\n f\"Got invalid task {task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n except ImportError as e:\n raise ValueError(\n f\"Could not load the {task} model due to missing dependencies.\"\n ) from e\n if importlib.util.find_spec(\"torch\") is not None:\n import torch\n cuda_device_count = torch.cuda.device_count()\n if device < -1 or (device >= cuda_device_count):\n raise ValueError(\n f\"Got device=={device}, \"\n f\"device is required to be within [-1, {cuda_device_count})\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/self_hosted_hugging_face.html"} {"id": "423bc918a2f0-2", "text": ")\n if device < 0 and cuda_device_count > 0:\n logger.warning(\n \"Device has %d GPUs available. \"\n \"Provide device={deviceId} to `from_model_id` to use available\"\n \"GPUs for execution. deviceId is -1 for CPU and \"\n \"can be a positive integer associated with CUDA device id.\",\n cuda_device_count,\n )\n pipeline = hf_pipeline(\n task=task,\n model=model,\n tokenizer=tokenizer,\n device=device,\n model_kwargs=_model_kwargs,\n )\n if pipeline.task not in VALID_TASKS:\n raise ValueError(\n f\"Got invalid task {pipeline.task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n return pipeline\n[docs]class SelfHostedHuggingFaceLLM(SelfHostedPipeline):\n \"\"\"Wrapper around HuggingFace Pipeline API to run on self-hosted remote hardware.\n Supported hardware includes auto-launched instances on AWS, GCP, Azure,\n and Lambda, as well as servers specified\n by IP address and SSH credentials (such as on-prem, or another cloud\n like Paperspace, Coreweave, etc.).\n To use, you should have the ``runhouse`` python package installed.\n Only supports `text-generation`, `text2text-generation` and `summarization` for now.\n Example using from_model_id:\n .. 
code-block:: python\n from langchain.llms import SelfHostedHuggingFaceLLM\n import runhouse as rh\n gpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\n hf = SelfHostedHuggingFaceLLM(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/self_hosted_hugging_face.html"} {"id": "423bc918a2f0-3", "text": "hf = SelfHostedHuggingFaceLLM(\n model_id=\"google/flan-t5-large\", task=\"text2text-generation\",\n hardware=gpu\n )\n Example passing a function that generates a pipeline (because the pipeline is not serializable):\n .. code-block:: python\n from langchain.llms import SelfHostedHuggingFaceLLM\n from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\n import runhouse as rh\n def get_pipeline():\n model_id = \"gpt2\"\n tokenizer = AutoTokenizer.from_pretrained(model_id)\n model = AutoModelForCausalLM.from_pretrained(model_id)\n pipe = pipeline(\n \"text-generation\", model=model, tokenizer=tokenizer\n )\n return pipe\n hf = SelfHostedHuggingFaceLLM(\n model_load_fn=get_pipeline, model_id=\"gpt2\", hardware=gpu)\n \"\"\"\n model_id: str = DEFAULT_MODEL_ID\n \"\"\"Hugging Face model_id to load the model.\"\"\"\n task: str = DEFAULT_TASK\n \"\"\"Hugging Face task (\"text-generation\", \"text2text-generation\" or\n \"summarization\").\"\"\"\n device: int = 0\n \"\"\"Device to use for inference. -1 for CPU, 0 for GPU, 1 for second GPU, etc.\"\"\"\n model_kwargs: Optional[dict] = None\n \"\"\"Keyword arguments to pass to the model.\"\"\"\n hardware: Any\n \"\"\"Remote hardware to send the inference function to.\"\"\"\n model_reqs: List[str] = [\"./\", \"transformers\", \"torch\"]\n \"\"\"Requirements to install on the hardware to run inference with the model.\"\"\"\n model_load_fn: Callable = _load_transformer\n \"\"\"Function to load the model remotely on the server.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/self_hosted_hugging_face.html"} {"id": "423bc918a2f0-4", "text": "\"\"\"Function to load the model remotely on the server.\"\"\"\n inference_fn: Callable = _generate_text #: :meta private:\n \"\"\"Inference function to send to the remote hardware.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n def __init__(self, **kwargs: Any):\n \"\"\"Construct the pipeline remotely using an auxiliary function.\n The load function needs to be importable so that it can be imported\n and run on the server, i.e. 
in a module and not a REPL or closure.\n Then, initialize the remote inference function.\n \"\"\"\n load_fn_kwargs = {\n \"model_id\": kwargs.get(\"model_id\", DEFAULT_MODEL_ID),\n \"task\": kwargs.get(\"task\", DEFAULT_TASK),\n \"device\": kwargs.get(\"device\", 0),\n \"model_kwargs\": kwargs.get(\"model_kwargs\", None),\n }\n super().__init__(load_fn_kwargs=load_fn_kwargs, **kwargs)\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"model_id\": self.model_id},\n **{\"model_kwargs\": self.model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n return \"selfhosted_huggingface_pipeline\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n return self.client(\n pipeline=self.pipeline_ref, prompt=prompt, stop=stop, **kwargs\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/self_hosted_hugging_face.html"} {"id": "c6e457f5957e-0", "text": "Source code for langchain.llms.anthropic\n\"\"\"Wrapper around Anthropic APIs.\"\"\"\nimport re\nimport warnings\nfrom typing import Any, Callable, Dict, Generator, List, Mapping, Optional\nfrom pydantic import BaseModel, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.llms.base import LLM\nfrom langchain.utils import check_package_version, get_from_dict_or_env\nclass _AnthropicCommon(BaseModel):\n client: Any = None #: :meta private:\n async_client: Any = None #: :meta private:\n model: str = \"claude-v1\"\n \"\"\"Model name to use.\"\"\"\n max_tokens_to_sample: int = 256\n \"\"\"Denotes the number of tokens to predict per generation.\"\"\"\n temperature: Optional[float] = None\n \"\"\"A non-negative float that tunes the degree of randomness in generation.\"\"\"\n top_k: Optional[int] = None\n \"\"\"Number of most likely tokens to consider at each step.\"\"\"\n top_p: Optional[float] = None\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n streaming: bool = False\n \"\"\"Whether to stream the results.\"\"\"\n default_request_timeout: Optional[float] = None\n \"\"\"Timeout for requests to Anthropic Completion API. 
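Default is 600 seconds.\"\"\"
    # (Hedged editorial aside) A sketch of the task-specific unpacking in
    # langchain.llms.self_hosted_hugging_face._generate_text above; the mock
    # response is illustrative, not taken from the source:
    #     response = [{"generated_text": "Hello world, nice to meet you"}]
    #     prompt = "Hello world,"
    #     text = response[0]["generated_text"][len(prompt):]  # drop echoed prompt
    #     assert text == " nice to meet you"  # text2text/summarization skip the slice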
\n anthropic_api_url: Optional[str] = None\n anthropic_api_key: Optional[str] = None\n HUMAN_PROMPT: Optional[str] = None\n AI_PROMPT: Optional[str] = None\n count_tokens: Optional[Callable[[str], int]] = None\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the API key and python package exist in the environment.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/anthropic.html"} {"id": "c6e457f5957e-1", "text": "\"\"\"Validate that the API key and python package exist in the environment.\"\"\"\n values[\"anthropic_api_key\"] = get_from_dict_or_env(\n values, \"anthropic_api_key\", \"ANTHROPIC_API_KEY\"\n )\n # Get custom api url from environment.\n values[\"anthropic_api_url\"] = get_from_dict_or_env(\n values,\n \"anthropic_api_url\",\n \"ANTHROPIC_API_URL\",\n default=\"https://api.anthropic.com\",\n )\n try:\n import anthropic\n check_package_version(\"anthropic\", gte_version=\"0.3\")\n values[\"client\"] = anthropic.Anthropic(\n base_url=values[\"anthropic_api_url\"],\n api_key=values[\"anthropic_api_key\"],\n timeout=values[\"default_request_timeout\"],\n )\n values[\"async_client\"] = anthropic.AsyncAnthropic(\n base_url=values[\"anthropic_api_url\"],\n api_key=values[\"anthropic_api_key\"],\n timeout=values[\"default_request_timeout\"],\n )\n values[\"HUMAN_PROMPT\"] = anthropic.HUMAN_PROMPT\n values[\"AI_PROMPT\"] = anthropic.AI_PROMPT\n values[\"count_tokens\"] = values[\"client\"].count_tokens\n except ImportError:\n raise ImportError(\n \"Could not import anthropic python package. \"\n \"Please install it with `pip install anthropic`.\"\n )\n return values\n @property\n def _default_params(self) -> Mapping[str, Any]:\n \"\"\"Get the default parameters for calling Anthropic API.\"\"\"\n d = {\n \"max_tokens_to_sample\": self.max_tokens_to_sample,\n \"model\": self.model,\n }\n if self.temperature is not None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/anthropic.html"} {"id": "c6e457f5957e-2", "text": "\"model\": self.model,\n }\n if self.temperature is not None:\n d[\"temperature\"] = self.temperature\n if self.top_k is not None:\n d[\"top_k\"] = self.top_k\n if self.top_p is not None:\n d[\"top_p\"] = self.top_p\n return d\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{}, **self._default_params}\n def _get_anthropic_stop(self, stop: Optional[List[str]] = None) -> List[str]:\n if not self.HUMAN_PROMPT or not self.AI_PROMPT:\n raise NameError(\"Please ensure the anthropic package is loaded\")\n if stop is None:\n stop = []\n # Never want model to invent new turns of Human / Assistant dialog.\n stop.extend([self.HUMAN_PROMPT])\n return stop\n[docs]class Anthropic(LLM, _AnthropicCommon):\n r\"\"\"Wrapper around Anthropic's large language models.\n To use, you should have the ``anthropic`` python package installed, and the\n environment variable ``ANTHROPIC_API_KEY`` set with your API key, or pass\n it as a named parameter to the constructor.
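(Hedged editorial note) Credential resolution here goes through get_from_dict_or_env: an explicit constructor value wins, otherwise the named environment variable is read, otherwise the default applies. A minimal sketch:
.. code-block:: python

    from langchain.utils import get_from_dict_or_env

    url = get_from_dict_or_env(
        {},                      # no explicit value supplied
        "anthropic_api_url",
        "ANTHROPIC_API_URL",     # read from the environment if set
        default="https://api.anthropic.com",
    )
    # With neither a value nor the env var set, url is the default above.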
\n Example:\n .. code-block:: python\n import anthropic\n from langchain.llms import Anthropic\n model = Anthropic(model=\"\", anthropic_api_key=\"my-api-key\")\n # Simplest invocation, automatically wrapped with HUMAN_PROMPT\n # and AI_PROMPT.\n response = model(\"What are the biggest risks facing humanity?\")\n # Or if you want to use the chat mode, build a few-shot-prompt, or", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/anthropic.html"} {"id": "c6e457f5957e-3", "text": "# put words in the Assistant's mouth, use HUMAN_PROMPT and AI_PROMPT:\n raw_prompt = \"What are the biggest risks facing humanity?\"\n prompt = f\"{anthropic.HUMAN_PROMPT} {raw_prompt}{anthropic.AI_PROMPT}\"\n response = model(prompt)\n \"\"\"\n[docs] @root_validator()\n def raise_warning(cls, values: Dict) -> Dict:\n \"\"\"Raise warning that this class is deprecated.\"\"\"\n warnings.warn(\n \"This Anthropic LLM is deprecated. \"\n \"Please use `from langchain.chat_models import ChatAnthropic` instead\"\n )\n return values\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"anthropic-llm\"\n def _wrap_prompt(self, prompt: str) -> str:\n if not self.HUMAN_PROMPT or not self.AI_PROMPT:\n raise NameError(\"Please ensure the anthropic package is loaded\")\n if prompt.startswith(self.HUMAN_PROMPT):\n return prompt # Already wrapped.\n # Guard against common errors in specifying wrong number of newlines.\n corrected_prompt, n_subs = re.subn(r\"^\\n*Human:\", self.HUMAN_PROMPT, prompt)\n if n_subs == 1:\n return corrected_prompt\n # As a last resort, wrap the prompt ourselves to emulate instruct-style.\n return f\"{self.HUMAN_PROMPT} {prompt}{self.AI_PROMPT} Sure, here you go:\\n\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/anthropic.html"} {"id": "c6e457f5957e-4", "text": "**kwargs: Any,\n ) -> str:\n r\"\"\"Call out to Anthropic's completion endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. 
code-block:: python\n prompt = \"What are the biggest risks facing humanity?\"\n prompt = f\"\\n\\nHuman: {prompt}\\n\\nAssistant:\"\n response = model(prompt)\n \"\"\"\n stop = self._get_anthropic_stop(stop)\n params = {**self._default_params, **kwargs}\n if self.streaming:\n stream_resp = self.client.completions.create(\n prompt=self._wrap_prompt(prompt),\n stop_sequences=stop,\n stream=True,\n **params,\n )\n current_completion = \"\"\n for data in stream_resp:\n delta = data.completion\n current_completion += delta\n if run_manager:\n run_manager.on_llm_new_token(\n delta,\n )\n return current_completion\n response = self.client.completions.create(\n prompt=self._wrap_prompt(prompt),\n stop_sequences=stop,\n **params,\n )\n return response.completion\n async def _acall(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to Anthropic's completion endpoint asynchronously.\"\"\"\n stop = self._get_anthropic_stop(stop)\n params = {**self._default_params, **kwargs}\n if self.streaming:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/anthropic.html"} {"id": "c6e457f5957e-5", "text": "params = {**self._default_params, **kwargs}\n if self.streaming:\n stream_resp = await self.async_client.completions.create(\n prompt=self._wrap_prompt(prompt),\n stop_sequences=stop,\n stream=True,\n **params,\n )\n current_completion = \"\"\n async for data in stream_resp:\n delta = data.completion\n current_completion += delta\n if run_manager:\n await run_manager.on_llm_new_token(delta)\n return current_completion\n response = await self.async_client.completions.create(\n prompt=self._wrap_prompt(prompt),\n stop_sequences=stop,\n **params,\n )\n return response.completion\n[docs] def stream(self, prompt: str, stop: Optional[List[str]] = None) -> Generator:\n r\"\"\"Call Anthropic completion_stream and return the resulting generator.\n BETA: this is a beta feature while we figure out the right abstraction.\n Once that happens, this interface could change.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n A generator representing the stream of tokens from Anthropic.\n Example:\n .. 
code-block:: python\n prompt = \"Write a poem about a stream.\"\n prompt = f\"\\n\\nHuman: {prompt}\\n\\nAssistant:\"\n generator = anthropic.stream(prompt)\n for token in generator:\n yield token\n \"\"\"\n stop = self._get_anthropic_stop(stop)\n return self.client.completions.create(\n prompt=self._wrap_prompt(prompt),\n stop_sequences=stop,\n stream=True,\n **self._default_params,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/anthropic.html"} {"id": "c6e457f5957e-6", "text": "stream=True,\n **self._default_params,\n )\n[docs] def get_num_tokens(self, text: str) -> int:\n \"\"\"Calculate number of tokens.\"\"\"\n if not self.count_tokens:\n raise NameError(\"Please ensure the anthropic package is loaded\")\n return self.count_tokens(text)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/anthropic.html"} {"id": "bb61cfa107a6-0", "text": "Source code for langchain.llms.rwkv\n\"\"\"Wrapper for the RWKV model.\nBased on https://github.com/saharNooby/rwkv.cpp/blob/master/rwkv/chat_with_bot.py\n https://github.com/BlinkDL/ChatRWKV/blob/main/v2/chat.py\n\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional, Set\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\n[docs]class RWKV(LLM, BaseModel):\n r\"\"\"Wrapper around RWKV language models.\n To use, you should have the ``rwkv`` python package installed, the\n pre-trained model file, and the model's config information.\n Example:\n .. code-block:: python\n from langchain.llms import RWKV\n model = RWKV(model=\"./models/rwkv-3b-fp16.bin\", strategy=\"cpu fp32\")\n # Simplest invocation\n response = model(\"Once upon a time, \")\n \"\"\"\n model: str\n \"\"\"Path to the pre-trained RWKV model file.\"\"\"\n tokens_path: str\n \"\"\"Path to the RWKV tokens file.\"\"\"\n strategy: str = \"cpu fp32\"\n \"\"\"Token context window.\"\"\"\n rwkv_verbose: bool = True\n \"\"\"Print debug information.\"\"\"\n temperature: float = 1.0\n \"\"\"The temperature to use for sampling.\"\"\"\n top_p: float = 0.5\n \"\"\"The top-p value to use for sampling.\"\"\"\n penalty_alpha_frequency: float = 0.4\n \"\"\"Positive values penalize new tokens based on their existing frequency", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/rwkv.html"} {"id": "bb61cfa107a6-1", "text": "\"\"\"Positive values penalize new tokens based on their existing frequency\n in the text so far, decreasing the model's likelihood to repeat the same\n line verbatim..\"\"\"\n penalty_alpha_presence: float = 0.4\n \"\"\"Positive values penalize new tokens based on whether they appear\n in the text so far, increasing the model's likelihood to talk about\n new topics..\"\"\"\n CHUNK_LEN: int = 256\n \"\"\"Batch size for prompt processing.\"\"\"\n max_tokens_per_generation: int = 256\n \"\"\"Maximum number of tokens to generate.\"\"\"\n client: Any = None #: :meta private:\n tokenizer: Any = None #: :meta private:\n pipeline: Any = None #: :meta private:\n model_tokens: Any = None #: :meta private:\n model_state: Any = None #: :meta private:\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"verbose\": self.verbose,\n \"top_p\": self.top_p,\n \"temperature\": 
self.temperature,\n \"penalty_alpha_frequency\": self.penalty_alpha_frequency,\n \"penalty_alpha_presence\": self.penalty_alpha_presence,\n \"CHUNK_LEN\": self.CHUNK_LEN,\n \"max_tokens_per_generation\": self.max_tokens_per_generation,\n }\n @staticmethod\n def _rwkv_param_names() -> Set[str]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"verbose\",\n }\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/rwkv.html"} {"id": "bb61cfa107a6-2", "text": "def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the python package exists in the environment.\"\"\"\n try:\n import tokenizers\n except ImportError:\n raise ImportError(\n \"Could not import tokenizers python package. \"\n \"Please install it with `pip install tokenizers`.\"\n )\n try:\n from rwkv.model import RWKV as RWKVMODEL\n from rwkv.utils import PIPELINE\n values[\"tokenizer\"] = tokenizers.Tokenizer.from_file(values[\"tokens_path\"])\n rwkv_keys = cls._rwkv_param_names()\n model_kwargs = {k: v for k, v in values.items() if k in rwkv_keys}\n model_kwargs[\"verbose\"] = values[\"rwkv_verbose\"]\n values[\"client\"] = RWKVMODEL(\n values[\"model\"], strategy=values[\"strategy\"], **model_kwargs\n )\n values[\"pipeline\"] = PIPELINE(values[\"client\"], values[\"tokens_path\"])\n except ImportError:\n raise ValueError(\n \"Could not import rwkv python package. \"\n \"Please install it with `pip install rwkv`.\"\n )\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"model\": self.model,\n **self._default_params,\n **{k: v for k, v in self.__dict__.items() if k in RWKV._rwkv_param_names()},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return the type of llm.\"\"\"\n return \"rwkv\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/rwkv.html"} {"id": "bb61cfa107a6-3", "text": "\"\"\"Return the type of llm.\"\"\"\n return \"rwkv\"\n[docs] def run_rnn(self, _tokens: List[str], newline_adj: int = 0) -> Any:\n AVOID_REPEAT_TOKENS = []\n AVOID_REPEAT = \"\uff0c\uff1a\uff1f\uff01\"\n for i in AVOID_REPEAT:\n dd = self.pipeline.encode(i)\n assert len(dd) == 1\n AVOID_REPEAT_TOKENS += dd\n tokens = [int(x) for x in _tokens]\n self.model_tokens += tokens\n out: Any = None\n while len(tokens) > 0:\n out, self.model_state = self.client.forward(\n tokens[: self.CHUNK_LEN], self.model_state\n )\n tokens = tokens[self.CHUNK_LEN :]\n END_OF_LINE = 187\n out[END_OF_LINE] += newline_adj # adjust \\n probability\n if self.model_tokens[-1] in AVOID_REPEAT_TOKENS:\n out[self.model_tokens[-1]] = -999999999\n return out\n[docs] def rwkv_generate(self, prompt: str) -> str:\n self.model_state = None\n self.model_tokens = []\n logits = self.run_rnn(self.tokenizer.encode(prompt).ids)\n begin = len(self.model_tokens)\n out_last = begin\n occurrence: Dict = {}\n decoded = \"\"\n for i in range(self.max_tokens_per_generation):\n for n in occurrence:\n logits[n] -= (\n self.penalty_alpha_presence\n + occurrence[n] * self.penalty_alpha_frequency\n )\n token = self.pipeline.sample_logits(\n logits, temperature=self.temperature, top_p=self.top_p\n )\n END_OF_TEXT = 0\n if token == END_OF_TEXT:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/rwkv.html"} {"id": "bb61cfa107a6-4", "text": ")\n END_OF_TEXT = 0\n if token == END_OF_TEXT:\n break\n if token not in 
occurrence:\n occurrence[token] = 1\n else:\n occurrence[token] += 1\n logits = self.run_rnn([token])\n xxx = self.tokenizer.decode(self.model_tokens[out_last:])\n if \"\\ufffd\" not in xxx: # avoid utf-8 display issues\n decoded += xxx\n out_last = begin + i + 1\n if i >= self.max_tokens_per_generation - 100:\n break\n return decoded\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n r\"\"\"RWKV generation.\n Args:\n prompt: The prompt to pass into the model.\n stop: A list of strings to stop generation when encountered.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n prompt = \"Once upon a time, \"\n response = model(prompt)\n \"\"\"\n text = self.rwkv_generate(prompt)\n if stop is not None:\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/rwkv.html"} {"id": "d05951f9c5f4-0", "text": "Source code for langchain.llms.stochasticai\n\"\"\"Wrapper around StochasticAI APIs.\"\"\"\nimport logging\nimport time\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class StochasticAI(LLM):\n \"\"\"Wrapper around StochasticAI large language models.\n To use, you should have the environment variable ``STOCHASTICAI_API_KEY``\n set with your API key.\n Example:\n .. code-block:: python\n from langchain.llms import StochasticAI\n stochasticai = StochasticAI(api_url=\"\")\n \"\"\"\n api_url: str = \"\"\n \"\"\"The model API URL to use.\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not\n explicitly specified.\"\"\"\n stochasticai_api_key: Optional[str] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/stochasticai.html"} {"id": "d05951f9c5f4-1", "text": "if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"{field_name} was transferred to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the API key exists in the environment.\"\"\"\n stochasticai_api_key = get_from_dict_or_env(\n values, \"stochasticai_api_key\", \"STOCHASTICAI_API_KEY\"\n )\n values[\"stochasticai_api_key\"] = stochasticai_api_key\n return values
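    # (Hedged editorial aside) StochasticAI._call below follows a submit-then-poll
    # pattern: POST the prompt, then GET responseUrl every 0.5s until "completion"
    # is non-null. In outline, with the requests calls abbreviated:
    #     response_post = requests.post(url=self.api_url, json=..., headers=...)
    #     while completion is None:          # poll until the job finishes
    #         completion = requests.get(response_post.json()["data"]["responseUrl"],
    #                                   headers=...).json()["data"].get("completion")
    #         time.sleep(0.5)
    #     text = completion[0]               # arrives as a one-element list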
parameters.\"\"\"\n return {\n **{\"endpoint_url\": self.api_url},\n **{\"model_kwargs\": self.model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"stochasticai\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to StochasticAI's complete endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/stochasticai.html"} {"id": "d05951f9c5f4-2", "text": "The string generated by the model.\n Example:\n .. code-block:: python\n response = StochasticAI(\"Tell me a joke.\")\n \"\"\"\n params = self.model_kwargs or {}\n params = {**params, **kwargs}\n response_post = requests.post(\n url=self.api_url,\n json={\"prompt\": prompt, \"params\": params},\n headers={\n \"apiKey\": f\"{self.stochasticai_api_key}\",\n \"Accept\": \"application/json\",\n \"Content-Type\": \"application/json\",\n },\n )\n response_post.raise_for_status()\n response_post_json = response_post.json()\n completed = False\n while not completed:\n response_get = requests.get(\n url=response_post_json[\"data\"][\"responseUrl\"],\n headers={\n \"apiKey\": f\"{self.stochasticai_api_key}\",\n \"Accept\": \"application/json\",\n \"Content-Type\": \"application/json\",\n },\n )\n response_get.raise_for_status()\n response_get_json = response_get.json()[\"data\"]\n text = response_get_json.get(\"completion\")\n completed = text is not None\n time.sleep(0.5)\n text = text[0]\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the model parameters\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/stochasticai.html"} {"id": "78436241d304-0", "text": "Source code for langchain.llms.base\n\"\"\"Base interface for large language models to expose.\"\"\"\nfrom __future__ import annotations\nimport asyncio\nimport inspect\nimport json\nimport logging\nimport warnings\nfrom abc import ABC, abstractmethod\nfrom pathlib import Path\nfrom typing import (\n Any,\n Callable,\n Dict,\n List,\n Mapping,\n Optional,\n Sequence,\n Tuple,\n Type,\n Union,\n)\nimport yaml\nfrom pydantic import Field, root_validator, validator\nfrom tenacity import (\n before_sleep_log,\n retry,\n retry_base,\n retry_if_exception_type,\n stop_after_attempt,\n wait_exponential,\n)\nimport langchain\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.callbacks.manager import (\n AsyncCallbackManager,\n AsyncCallbackManagerForLLMRun,\n CallbackManager,\n CallbackManagerForLLMRun,\n Callbacks,\n)\nfrom langchain.load.dump import dumpd\nfrom langchain.schema import (\n Generation,\n LLMResult,\n PromptValue,\n RunInfo,\n)\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.schema.messages import AIMessage, BaseMessage, get_buffer_string\nlogger = logging.getLogger(__name__)\ndef _get_verbosity() -> bool:\n return langchain.verbose\n[docs]def create_base_retry_decorator(\n error_types: List[Type[BaseException]], max_retries: int = 1\n) -> Callable[[Any], Any]:\n \"\"\"Create a retry decorator for a given LLM and provided list of error types.\"\"\"\n min_seconds = 4\n max_seconds = 10\n # Wait 2^x * 1 
second between each retry starting with", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/base.html"} {"id": "78436241d304-1", "text": "# Wait 2^x * 1 second between each retry starting with\n # 4 seconds, then up to 10 seconds, then 10 seconds afterwards\n retry_instance: \"retry_base\" = retry_if_exception_type(error_types[0])\n for error in error_types[1:]:\n retry_instance = retry_instance | retry_if_exception_type(error)\n return retry(\n reraise=True,\n stop=stop_after_attempt(max_retries),\n wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),\n retry=retry_instance,\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )\n[docs]def get_prompts(\n params: Dict[str, Any], prompts: List[str]\n) -> Tuple[Dict[int, List], str, List[int], List[str]]:\n \"\"\"Get prompts that are already cached.\"\"\"\n llm_string = str(sorted([(k, v) for k, v in params.items()]))\n missing_prompts = []\n missing_prompt_idxs = []\n existing_prompts = {}\n for i, prompt in enumerate(prompts):\n if langchain.llm_cache is not None:\n cache_val = langchain.llm_cache.lookup(prompt, llm_string)\n if isinstance(cache_val, list):\n existing_prompts[i] = cache_val\n else:\n missing_prompts.append(prompt)\n missing_prompt_idxs.append(i)\n return existing_prompts, llm_string, missing_prompt_idxs, missing_prompts\n[docs]def update_cache(\n existing_prompts: Dict[int, List],\n llm_string: str,\n missing_prompt_idxs: List[int],\n new_results: LLMResult,\n prompts: List[str],\n) -> Optional[dict]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/base.html"} {"id": "78436241d304-2", "text": "prompts: List[str],\n) -> Optional[dict]:\n \"\"\"Update the cache and get the LLM output.\"\"\"\n for i, result in enumerate(new_results.generations):\n existing_prompts[missing_prompt_idxs[i]] = result\n prompt = prompts[missing_prompt_idxs[i]]\n if langchain.llm_cache is not None:\n langchain.llm_cache.update(prompt, llm_string, result)\n llm_output = new_results.llm_output\n return llm_output\n[docs]class BaseLLM(BaseLanguageModel, ABC):\n \"\"\"LLM wrapper should take in a prompt and return a string.\"\"\"\n cache: Optional[bool] = None\n verbose: bool = Field(default_factory=_get_verbosity)\n \"\"\"Whether to print out response text.\"\"\"\n callbacks: Callbacks = Field(default=None, exclude=True)\n callback_manager: Optional[BaseCallbackManager] = Field(default=None, exclude=True)\n tags: Optional[List[str]] = Field(default=None, exclude=True)\n \"\"\"Tags to add to the run trace.\"\"\"\n metadata: Optional[Dict[str, Any]] = Field(default=None, exclude=True)\n \"\"\"Metadata to add to the run trace.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] @root_validator()\n def raise_deprecation(cls, values: Dict) -> Dict:\n \"\"\"Raise deprecation warning if callback_manager is used.\"\"\"\n if values.get(\"callback_manager\") is not None:\n warnings.warn(\n \"callback_manager is deprecated. 
Please use callbacks instead.\",\n DeprecationWarning,\n )\n values[\"callbacks\"] = values.pop(\"callback_manager\", None)\n return values", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/base.html"} {"id": "78436241d304-3", "text": "values[\"callbacks\"] = values.pop(\"callback_manager\", None)\n return values\n[docs] @validator(\"verbose\", pre=True, always=True)\n def set_verbose(cls, verbose: Optional[bool]) -> bool:\n \"\"\"If verbose is None, set it.\n This allows users to pass in None as verbose to access the global setting.\n \"\"\"\n if verbose is None:\n return _get_verbosity()\n else:\n return verbose\n @abstractmethod\n def _generate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> LLMResult:\n \"\"\"Run the LLM on the given prompts.\"\"\"\n @abstractmethod\n async def _agenerate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> LLMResult:\n \"\"\"Run the LLM on the given prompts.\"\"\"\n[docs] def generate_prompt(\n self,\n prompts: List[PromptValue],\n stop: Optional[List[str]] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> LLMResult:\n prompt_strings = [p.to_string() for p in prompts]\n return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)\n[docs] async def agenerate_prompt(\n self,\n prompts: List[PromptValue],\n stop: Optional[List[str]] = None,\n callbacks: Callbacks = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/base.html"} {"id": "78436241d304-4", "text": "stop: Optional[List[str]] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> LLMResult:\n prompt_strings = [p.to_string() for p in prompts]\n return await self.agenerate(\n prompt_strings, stop=stop, callbacks=callbacks, **kwargs\n )\n def _generate_helper(\n self,\n prompts: List[str],\n stop: Optional[List[str]],\n run_managers: List[CallbackManagerForLLMRun],\n new_arg_supported: bool,\n **kwargs: Any,\n ) -> LLMResult:\n try:\n output = (\n self._generate(\n prompts,\n stop=stop,\n # TODO: support multiple run managers\n run_manager=run_managers[0] if run_managers else None,\n **kwargs,\n )\n if new_arg_supported\n else self._generate(prompts, stop=stop)\n )\n except (KeyboardInterrupt, Exception) as e:\n for run_manager in run_managers:\n run_manager.on_llm_error(e)\n raise e\n flattened_outputs = output.flatten()\n for manager, flattened_output in zip(run_managers, flattened_outputs):\n manager.on_llm_end(flattened_output)\n if run_managers:\n output.run = [\n RunInfo(run_id=run_manager.run_id) for run_manager in run_managers\n ]\n return output\n[docs] def generate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n callbacks: Callbacks = None,\n *,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/base.html"} {"id": "78436241d304-5", "text": "metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> LLMResult:\n \"\"\"Run the LLM on the given prompt and input.\"\"\"\n if not isinstance(prompts, list):\n raise ValueError(\n \"Argument 'prompts' is expected to be of type List[str], received\"\n f\" argument of type {type(prompts)}.\"\n )\n params = self.dict()\n params[\"stop\"] = stop\n options = {\"stop\": stop}\n (\n existing_prompts,\n llm_string,\n 
missing_prompt_idxs,\n missing_prompts,\n ) = get_prompts(params, prompts)\n disregard_cache = self.cache is not None and not self.cache\n callback_manager = CallbackManager.configure(\n callbacks,\n self.callbacks,\n self.verbose,\n tags,\n self.tags,\n metadata,\n self.metadata,\n )\n new_arg_supported = inspect.signature(self._generate).parameters.get(\n \"run_manager\"\n )\n if langchain.llm_cache is None or disregard_cache:\n if self.cache is not None and self.cache:\n raise ValueError(\n \"Asked to cache, but no cache found at `langchain.cache`.\"\n )\n run_managers = callback_manager.on_llm_start(\n dumpd(self), prompts, invocation_params=params, options=options\n )\n output = self._generate_helper(\n prompts, stop, run_managers, bool(new_arg_supported), **kwargs\n )\n return output\n if len(missing_prompts) > 0:\n run_managers = callback_manager.on_llm_start(\n dumpd(self), missing_prompts, invocation_params=params, options=options\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/base.html"} {"id": "78436241d304-6", "text": "dumpd(self), missing_prompts, invocation_params=params, options=options\n )\n new_results = self._generate_helper(\n missing_prompts, stop, run_managers, bool(new_arg_supported), **kwargs\n )\n llm_output = update_cache(\n existing_prompts, llm_string, missing_prompt_idxs, new_results, prompts\n )\n run_info = (\n [RunInfo(run_id=run_manager.run_id) for run_manager in run_managers]\n if run_managers\n else None\n )\n else:\n llm_output = {}\n run_info = None\n generations = [existing_prompts[i] for i in range(len(prompts))]\n return LLMResult(generations=generations, llm_output=llm_output, run=run_info)\n async def _agenerate_helper(\n self,\n prompts: List[str],\n stop: Optional[List[str]],\n run_managers: List[AsyncCallbackManagerForLLMRun],\n new_arg_supported: bool,\n **kwargs: Any,\n ) -> LLMResult:\n try:\n output = (\n await self._agenerate(\n prompts,\n stop=stop,\n run_manager=run_managers[0] if run_managers else None,\n **kwargs,\n )\n if new_arg_supported\n else await self._agenerate(prompts, stop=stop)\n )\n except (KeyboardInterrupt, Exception) as e:\n await asyncio.gather(\n *[run_manager.on_llm_error(e) for run_manager in run_managers]\n )\n raise e\n flattened_outputs = output.flatten()\n await asyncio.gather(\n *[", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/base.html"} {"id": "78436241d304-7", "text": "flattened_outputs = output.flatten()\n await asyncio.gather(\n *[\n run_manager.on_llm_end(flattened_output)\n for run_manager, flattened_output in zip(\n run_managers, flattened_outputs\n )\n ]\n )\n if run_managers:\n output.run = [\n RunInfo(run_id=run_manager.run_id) for run_manager in run_managers\n ]\n return output\n[docs] async def agenerate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n callbacks: Callbacks = None,\n *,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> LLMResult:\n \"\"\"Run the LLM on the given prompt and input.\"\"\"\n params = self.dict()\n params[\"stop\"] = stop\n options = {\"stop\": stop}\n (\n existing_prompts,\n llm_string,\n missing_prompt_idxs,\n missing_prompts,\n ) = get_prompts(params, prompts)\n disregard_cache = self.cache is not None and not self.cache\n callback_manager = AsyncCallbackManager.configure(\n callbacks,\n self.callbacks,\n self.verbose,\n tags,\n self.tags,\n metadata,\n self.metadata,\n )\n new_arg_supported = 
inspect.signature(self._agenerate).parameters.get(\n \"run_manager\"\n )\n if langchain.llm_cache is None or disregard_cache:\n if self.cache is not None and self.cache:\n raise ValueError(\n \"Asked to cache, but no cache found at `langchain.cache`.\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/base.html"} {"id": "78436241d304-8", "text": ")\n run_managers = await callback_manager.on_llm_start(\n dumpd(self), prompts, invocation_params=params, options=options\n )\n output = await self._agenerate_helper(\n prompts, stop, run_managers, bool(new_arg_supported), **kwargs\n )\n return output\n if len(missing_prompts) > 0:\n run_managers = await callback_manager.on_llm_start(\n dumpd(self), missing_prompts, invocation_params=params, options=options\n )\n new_results = await self._agenerate_helper(\n missing_prompts, stop, run_managers, bool(new_arg_supported), **kwargs\n )\n llm_output = update_cache(\n existing_prompts, llm_string, missing_prompt_idxs, new_results, prompts\n )\n run_info = (\n [RunInfo(run_id=run_manager.run_id) for run_manager in run_managers]\n if run_managers\n else None\n )\n else:\n llm_output = {}\n run_info = None\n generations = [existing_prompts[i] for i in range(len(prompts))]\n return LLMResult(generations=generations, llm_output=llm_output, run=run_info)\n[docs] def __call__(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n callbacks: Callbacks = None,\n *,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Check Cache and run the LLM on the given prompt and input.\"\"\"\n if not isinstance(prompt, str):\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/base.html"} {"id": "78436241d304-9", "text": "if not isinstance(prompt, str):\n raise ValueError(\n \"Argument `prompt` is expected to be a string. Instead found \"\n f\"{type(prompt)}. 
If you want to run the LLM on multiple prompts, use \"\n \"`generate` instead.\"\n )\n return (\n self.generate(\n [prompt],\n stop=stop,\n callbacks=callbacks,\n tags=tags,\n metadata=metadata,\n **kwargs,\n )\n .generations[0][0]\n .text\n )\n async def _call_async(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n callbacks: Callbacks = None,\n *,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Check Cache and run the LLM on the given prompt and input.\"\"\"\n result = await self.agenerate(\n [prompt],\n stop=stop,\n callbacks=callbacks,\n tags=tags,\n metadata=metadata,\n **kwargs,\n )\n return result.generations[0][0].text\n[docs] def predict(\n self, text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any\n ) -> str:\n if stop is None:\n _stop = None\n else:\n _stop = list(stop)\n return self(text, stop=_stop, **kwargs)\n[docs] def predict_messages(\n self,\n messages: List[BaseMessage],\n *,\n stop: Optional[Sequence[str]] = None,\n **kwargs: Any,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/base.html"} {"id": "78436241d304-10", "text": "stop: Optional[Sequence[str]] = None,\n **kwargs: Any,\n ) -> BaseMessage:\n text = get_buffer_string(messages)\n if stop is None:\n _stop = None\n else:\n _stop = list(stop)\n content = self(text, stop=_stop, **kwargs)\n return AIMessage(content=content)\n[docs] async def apredict(\n self, text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any\n ) -> str:\n if stop is None:\n _stop = None\n else:\n _stop = list(stop)\n return await self._call_async(text, stop=_stop, **kwargs)\n[docs] async def apredict_messages(\n self,\n messages: List[BaseMessage],\n *,\n stop: Optional[Sequence[str]] = None,\n **kwargs: Any,\n ) -> BaseMessage:\n text = get_buffer_string(messages)\n if stop is None:\n _stop = None\n else:\n _stop = list(stop)\n content = await self._call_async(text, stop=_stop, **kwargs)\n return AIMessage(content=content)\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {}\n def __str__(self) -> str:\n \"\"\"Get a string representation of the object for printing.\"\"\"\n cls_name = f\"\\033[1m{self.__class__.__name__}\\033[0m\"\n return f\"{cls_name}\\nParams: {self._identifying_params}\"\n @property\n @abstractmethod\n def _llm_type(self) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/base.html"} {"id": "78436241d304-11", "text": "@property\n @abstractmethod\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n[docs] def dict(self, **kwargs: Any) -> Dict:\n \"\"\"Return a dictionary of the LLM.\"\"\"\n starter_dict = dict(self._identifying_params)\n starter_dict[\"_type\"] = self._llm_type\n return starter_dict\n[docs] def save(self, file_path: Union[Path, str]) -> None:\n \"\"\"Save the LLM.\n Args:\n file_path: Path to file to save the LLM to.\n Example:\n .. 
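code-block:: python

    # (Hedged editorial sketch) the suffix dispatch implemented below: ".json"
    # serializes the dict with json.dump, ".yaml" with yaml.dump, and any other
    # suffix raises ValueError. The file name is illustrative.
    llm.save(file_path="llm.json")   # writes _identifying_params plus "_type"

Example:
.. 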
code-block:: python\n llm.save(file_path=\"path/llm.yaml\")\n \"\"\"\n # Convert file to Path object.\n if isinstance(file_path, str):\n save_path = Path(file_path)\n else:\n save_path = file_path\n directory_path = save_path.parent\n directory_path.mkdir(parents=True, exist_ok=True)\n # Fetch dictionary to save\n prompt_dict = self.dict()\n if save_path.suffix == \".json\":\n with open(file_path, \"w\") as f:\n json.dump(prompt_dict, f, indent=4)\n elif save_path.suffix == \".yaml\":\n with open(file_path, \"w\") as f:\n yaml.dump(prompt_dict, f, default_flow_style=False)\n else:\n raise ValueError(f\"{save_path} must be json or yaml\")\n[docs]class LLM(BaseLLM):\n \"\"\"LLM class that expects subclasses to implement a simpler call method.\n The purpose of this class is to expose a simpler interface for working\n with LLMs, rather than expect the user to implement the full _generate method.\n \"\"\"\n @abstractmethod", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/base.html"} {"id": "78436241d304-12", "text": "\"\"\"\n @abstractmethod\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Run the LLM on the given prompt and input.\"\"\"\n async def _acall(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Run the LLM on the given prompt and input.\"\"\"\n raise NotImplementedError(\"Async generation not implemented for this LLM.\")\n def _generate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> LLMResult:\n \"\"\"Run the LLM on the given prompt and input.\"\"\"\n # TODO: add caching here.\n generations = []\n new_arg_supported = inspect.signature(self._call).parameters.get(\"run_manager\")\n for prompt in prompts:\n text = (\n self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)\n if new_arg_supported\n else self._call(prompt, stop=stop, **kwargs)\n )\n generations.append([Generation(text=text)])\n return LLMResult(generations=generations)\n async def _agenerate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/base.html"} {"id": "78436241d304-13", "text": "run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> LLMResult:\n \"\"\"Run the LLM on the given prompt and input.\"\"\"\n generations = []\n new_arg_supported = inspect.signature(self._acall).parameters.get(\"run_manager\")\n for prompt in prompts:\n text = (\n await self._acall(prompt, stop=stop, run_manager=run_manager, **kwargs)\n if new_arg_supported\n else await self._acall(prompt, stop=stop, **kwargs)\n )\n generations.append([Generation(text=text)])\n return LLMResult(generations=generations)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/base.html"} {"id": "75977413d37f-0", "text": "Source code for langchain.llms.utils\n\"\"\"Common utility functions for working with LLM APIs.\"\"\"\nimport re\nfrom typing import List\n[docs]def enforce_stop_tokens(text: str, stop: List[str]) -> str:\n \"\"\"Cut off the text as soon as any stop words occur.\"\"\"\n return re.split(\"|\".join(stop), text)[0]
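# (Hedged editorial demo) enforce_stop_tokens cuts at the first match of any
# stop sequence; note the stop words are joined into a regex alternation, so
# regex metacharacters in stop sequences would need escaping before use:
#     >>> enforce_stop_tokens("Hello\nObservation: done", ["Observation:"])
#     'Hello\n'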
"https://api.python.langchain.com/en/latest/_modules/langchain/llms/utils.html"} {"id": "be1b69dbcf80-0", "text": "Source code for langchain.llms.cerebriumai\n\"\"\"Wrapper around CerebriumAI API.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class CerebriumAI(LLM):\n \"\"\"Wrapper around CerebriumAI large language models.\n To use, you should have the ``cerebrium`` python package installed, and the\n environment variable ``CEREBRIUMAI_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. code-block:: python\n from langchain.llms import CerebriumAI\n cerebrium = CerebriumAI(endpoint_url=\"\")\n \"\"\"\n endpoint_url: str = \"\"\n \"\"\"model endpoint to use\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not\n explicitly specified.\"\"\"\n cerebriumai_api_key: Optional[str] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic config.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/cerebriumai.html"} {"id": "be1b69dbcf80-1", "text": "all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"{field_name} was transfered to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n cerebriumai_api_key = get_from_dict_or_env(\n values, \"cerebriumai_api_key\", \"CEREBRIUMAI_API_KEY\"\n )\n values[\"cerebriumai_api_key\"] = cerebriumai_api_key\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"endpoint_url\": self.endpoint_url},\n **{\"model_kwargs\": self.model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"cerebriumai\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call to CerebriumAI endpoint.\"\"\"\n try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/cerebriumai.html"} {"id": "be1b69dbcf80-2", "text": "\"\"\"Call to CerebriumAI endpoint.\"\"\"\n try:\n from cerebrium import model_api_request\n except ImportError:\n raise ValueError(\n \"Could not import cerebrium python package. 
\"\n \"Please install it with `pip install cerebrium`.\"\n )\n params = self.model_kwargs or {}\n response = model_api_request(\n self.endpoint_url,\n {\"prompt\": prompt, **params, **kwargs},\n self.cerebriumai_api_key,\n )\n text = response[\"data\"][\"result\"]\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the model parameters\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/cerebriumai.html"} {"id": "679b781d0f23-0", "text": "Source code for langchain.llms.bananadev\n\"\"\"Wrapper around Banana API.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class Banana(LLM):\n \"\"\"Wrapper around Banana large language models.\n To use, you should have the ``banana-dev`` python package installed,\n and the environment variable ``BANANA_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. code-block:: python\n from langchain.llms import Banana\n banana = Banana(model_key=\"\")\n \"\"\"\n model_key: str = \"\"\n \"\"\"model endpoint to use\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not\n explicitly specified.\"\"\"\n banana_api_key: Optional[str] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic config.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/bananadev.html"} {"id": "679b781d0f23-1", "text": "if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"{field_name} was transfered to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n banana_api_key = get_from_dict_or_env(\n values, \"banana_api_key\", \"BANANA_API_KEY\"\n )\n values[\"banana_api_key\"] = banana_api_key\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"model_key\": self.model_key},\n **{\"model_kwargs\": self.model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"bananadev\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call to Banana endpoint.\"\"\"\n try:\n import 
banana_dev as banana\n except ImportError:\n raise ImportError(\n \"Could not import banana-dev python package. \"\n \"Please install it with `pip install banana-dev`.\"\n )\n params = self.model_kwargs or {}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/bananadev.html"} {"id": "679b781d0f23-2", "text": ")\n params = self.model_kwargs or {}\n params = {**params, **kwargs}\n api_key = self.banana_api_key\n model_key = self.model_key\n model_inputs = {\n # A JSON payload specific to your model.\n \"prompt\": prompt,\n **params,\n }\n response = banana.run(api_key, model_key, model_inputs)\n try:\n text = response[\"modelOutputs\"][0][\"output\"]\n except (KeyError, TypeError):\n returned = response[\"modelOutputs\"][0]\n raise ValueError(\n \"Response should be of schema: {'output': 'text'}.\"\n f\"\\nResponse was: {returned}\"\n \"\\nTo fix this:\"\n \"\\n- fork the source repo of the Banana model\"\n \"\\n- modify app.py to return the above schema\"\n \"\\n- deploy that as a custom repo\"\n )\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the model parameters\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/bananadev.html"} {"id": "bba37b27546a-0", "text": "Source code for langchain.llms.amazon_api_gateway\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nclass ContentHandlerAmazonAPIGateway:\n \"\"\"Adapter class to prepare the inputs from LangChain to a format\n that the LLM model expects. Also provides a helper function to extract\n the generated text from the model response.\"\"\"\n @classmethod\n def transform_input(\n cls, prompt: str, model_kwargs: Dict[str, Any]\n ) -> Dict[str, Any]:\n return {\"inputs\": prompt, \"parameters\": model_kwargs}\n @classmethod\n def transform_output(cls, response: Any) -> str:\n return response.json()[0][\"generated_text\"]\n[docs]class AmazonAPIGateway(LLM):\n \"\"\"Wrapper around a custom Amazon API Gateway.\"\"\"\n api_url: str\n \"\"\"API Gateway URL\"\"\"\n headers: Optional[Dict] = None\n \"\"\"API Gateway HTTP Headers to send, e.g. 
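``CerebriumAI`` and ``Banana`` share the same ``build_extra`` pre-validator: any constructor keyword that is not a declared field is folded into ``model_kwargs`` (with a warning), and supplying the same key both directly and inside ``model_kwargs`` raises. A sketch of that behavior, assuming ``BANANA_API_KEY`` is set in the environment and the model key is a placeholder:

.. code-block:: python

    from langchain.llms import Banana

    # `max_length` is not a declared field, so build_extra moves it into
    # model_kwargs and logs a warning.
    llm = Banana(model_key="your-model-key", max_length=128)
    assert llm.model_kwargs == {"max_length": 128}

    # Supplying the same key twice raises ValueError:
    # Banana(model_key="k", max_length=1, model_kwargs={"max_length": 2})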
for authentication\"\"\"\n model_kwargs: Optional[Dict] = None\n \"\"\"Keyword arguments to pass to the model.\"\"\"\n content_handler: ContentHandlerAmazonAPIGateway = ContentHandlerAmazonAPIGateway()\n \"\"\"The content handler instance that provides the input and\n output transform functions to handle formats between the LLM\n and the endpoint.\n \"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n _model_kwargs = self.model_kwargs or {}\n return {", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/amazon_api_gateway.html"} {"id": "bba37b27546a-1", "text": "_model_kwargs = self.model_kwargs or {}\n return {\n **{\"api_url\": self.api_url, \"headers\": self.headers},\n **{\"model_kwargs\": _model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"amazon_api_gateway\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to Amazon API Gateway model.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = llm(\"Tell me a joke.\")\n \"\"\"\n _model_kwargs = self.model_kwargs or {}\n payload = self.content_handler.transform_input(prompt, _model_kwargs)\n try:\n response = requests.post(\n self.api_url,\n headers=self.headers,\n json=payload,\n )\n text = self.content_handler.transform_output(response)\n except Exception as error:\n raise ValueError(f\"Error raised by the service: {error}\")\n if stop is not None:\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/amazon_api_gateway.html"} {"id": "ee50309a8576-0", "text": "Source code for langchain.llms.huggingface_endpoint\n\"\"\"Wrapper around HuggingFace APIs.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nVALID_TASKS = (\"text2text-generation\", \"text-generation\", \"summarization\")\n[docs]class HuggingFaceEndpoint(LLM):\n \"\"\"Wrapper around HuggingFaceHub Inference Endpoints.\n To use, you should have the ``huggingface_hub`` python package installed, and the\n environment variable ``HUGGINGFACEHUB_API_TOKEN`` set with your API token, or pass\n it as a named parameter to the constructor.\n Only supports `text-generation`, `text2text-generation` and `summarization` for now.\n Example:\n .. 
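Because ``AmazonAPIGateway`` delegates payload shaping to its ``content_handler``, an endpoint with a different request/response schema can be supported by subclassing ``ContentHandlerAmazonAPIGateway``. A hedged sketch; the ``text``/``generated`` keys and the URL are illustrative, not part of the library:

.. code-block:: python

    from typing import Any, Dict

    from langchain.llms.amazon_api_gateway import (
        AmazonAPIGateway,
        ContentHandlerAmazonAPIGateway,
    )

    class MyContentHandler(ContentHandlerAmazonAPIGateway):
        """Handler for an endpoint taking {"text": ...} and returning {"generated": ...}."""

        @classmethod
        def transform_input(
            cls, prompt: str, model_kwargs: Dict[str, Any]
        ) -> Dict[str, Any]:
            return {"text": prompt, **model_kwargs}

        @classmethod
        def transform_output(cls, response: Any) -> str:
            return response.json()["generated"]

    llm = AmazonAPIGateway(
        api_url="https://example.execute-api.us-east-1.amazonaws.com/prod",
        content_handler=MyContentHandler(),
    )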
code-block:: python\n from langchain.llms import HuggingFaceEndpoint\n endpoint_url = (\n \"https://abcdefghijklmnop.us-east-1.aws.endpoints.huggingface.cloud\"\n )\n hf = HuggingFaceEndpoint(\n endpoint_url=endpoint_url,\n huggingfacehub_api_token=\"my-api-key\"\n )\n \"\"\"\n endpoint_url: str = \"\"\n \"\"\"Endpoint URL to use.\"\"\"\n task: Optional[str] = None\n \"\"\"Task to call the model with.\n Should be a task that returns `generated_text` or `summary_text`.\"\"\"\n model_kwargs: Optional[dict] = None\n \"\"\"Key word arguments to pass to the model.\"\"\"\n huggingfacehub_api_token: Optional[str] = None\n[docs] class Config:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_endpoint.html"} {"id": "ee50309a8576-1", "text": "[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n huggingfacehub_api_token = get_from_dict_or_env(\n values, \"huggingfacehub_api_token\", \"HUGGINGFACEHUB_API_TOKEN\"\n )\n try:\n from huggingface_hub.hf_api import HfApi\n try:\n HfApi(\n endpoint=\"https://huggingface.co\", # Can be a Private Hub endpoint.\n token=huggingfacehub_api_token,\n ).whoami()\n except Exception as e:\n raise ValueError(\n \"Could not authenticate with huggingface_hub. \"\n \"Please check your API token.\"\n ) from e\n except ImportError:\n raise ValueError(\n \"Could not import huggingface_hub python package. \"\n \"Please install it with `pip install huggingface_hub`.\"\n )\n values[\"huggingfacehub_api_token\"] = huggingfacehub_api_token\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n _model_kwargs = self.model_kwargs or {}\n return {\n **{\"endpoint_url\": self.endpoint_url, \"task\": self.task},\n **{\"model_kwargs\": _model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"huggingface_endpoint\"\n def _call(\n self,\n prompt: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_endpoint.html"} {"id": "ee50309a8576-2", "text": "def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to HuggingFace Hub's inference endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. 
code-block:: python\n response = hf(\"Tell me a joke.\")\n \"\"\"\n _model_kwargs = self.model_kwargs or {}\n # Build the request payload from model kwargs and call-time kwargs.\n params = {**_model_kwargs, **kwargs}\n parameter_payload = {\"inputs\": prompt, \"parameters\": params}\n # HTTP headers for authorization\n headers = {\n \"Authorization\": f\"Bearer {self.huggingfacehub_api_token}\",\n \"Content-Type\": \"application/json\",\n }\n # send request\n try:\n response = requests.post(\n self.endpoint_url, headers=headers, json=parameter_payload\n )\n except requests.exceptions.RequestException as e:\n raise ValueError(f\"Error raised by inference endpoint: {e}\")\n generated_text = response.json()\n if \"error\" in generated_text:\n raise ValueError(\n f\"Error raised by inference API: {generated_text['error']}\"\n )\n if self.task == \"text-generation\":\n # The text-generation response includes the prompt, so slice it off.\n text = generated_text[0][\"generated_text\"][len(prompt) :]\n elif self.task == \"text2text-generation\":\n text = generated_text[0][\"generated_text\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_endpoint.html"} {"id": "ee50309a8576-3", "text": "text = generated_text[0][\"generated_text\"]\n elif self.task == \"summarization\":\n text = generated_text[0][\"summary_text\"]\n else:\n raise ValueError(\n f\"Got invalid task {self.task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n if stop is not None:\n # This is a bit hacky, but I can't figure out a better way to enforce\n # stop tokens when making calls to huggingface_hub.\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_endpoint.html"} {"id": "bf62576e7a6e-0", "text": "Source code for langchain.llms.predictionguard\n\"\"\"Wrapper around Prediction Guard APIs.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class PredictionGuard(LLM):\n \"\"\"Wrapper around Prediction Guard large language models.\n To use, you should have the ``predictionguard`` python package installed, and the\n environment variable ``PREDICTIONGUARD_TOKEN`` set with your access token, or pass\n it as a named parameter to the constructor. To use Prediction Guard's API along\n with OpenAI models, set the environment variable ``OPENAI_API_KEY`` with your\n OpenAI API key as well.\n Example:\n .. 
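The ``text-generation`` branch above slices ``len(prompt)`` characters off the front because such endpoints echo the prompt before the completion, while ``text2text-generation`` and ``summarization`` endpoints return only new text. The slicing itself is easy to sanity-check without an endpoint:

.. code-block:: python

    prompt = "Q: What is 2 + 2?\nA:"
    # Shape of a text-generation response: the prompt is echoed back.
    generated_text = [{"generated_text": prompt + " 4"}]
    text = generated_text[0]["generated_text"][len(prompt):]
    assert text == " 4"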
code-block:: python\n pgllm = PredictionGuard(model=\"MPT-7B-Instruct\",\n token=\"my-access-token\",\n output={\n \"type\": \"boolean\"\n })\n \"\"\"\n client: Any #: :meta private:\n model: Optional[str] = \"MPT-7B-Instruct\"\n \"\"\"Model name to use.\"\"\"\n output: Optional[Dict[str, Any]] = None\n \"\"\"The output type or structure for controlling the LLM output.\"\"\"\n max_tokens: int = 256\n \"\"\"Denotes the number of tokens to predict per generation.\"\"\"\n temperature: float = 0.75\n \"\"\"A non-negative float that tunes the degree of randomness in generation.\"\"\"\n token: Optional[str] = None\n \"\"\"Your Prediction Guard access token.\"\"\"\n stop: Optional[List[str]] = None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/predictionguard.html"} {"id": "bf62576e7a6e-1", "text": "\"\"\"Your Prediction Guard access token.\"\"\"\n stop: Optional[List[str]] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the access token and python package exists in environment.\"\"\"\n token = get_from_dict_or_env(values, \"token\", \"PREDICTIONGUARD_TOKEN\")\n try:\n import predictionguard as pg\n values[\"client\"] = pg.Client(token=token)\n except ImportError:\n raise ImportError(\n \"Could not import predictionguard python package. \"\n \"Please install it with `pip install predictionguard`.\"\n )\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling the Prediction Guard API.\"\"\"\n return {\n \"max_tokens\": self.max_tokens,\n \"temperature\": self.temperature,\n }\n @property\n def _identifying_params(self) -> Dict[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model\": self.model}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"predictionguard\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to Prediction Guard's model API.\n Args:\n prompt: The prompt to pass into the model.\n Returns:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/predictionguard.html"} {"id": "bf62576e7a6e-2", "text": "Args:\n prompt: The prompt to pass into the model.\n Returns:\n The string generated by the model.\n Example:\n .. 
code-block:: python\n response = pgllm(\"Tell me a joke.\")\n \"\"\"\n import predictionguard as pg\n params = self._default_params\n if self.stop is not None and stop is not None:\n raise ValueError(\"`stop` found in both the input and default params.\")\n elif self.stop is not None:\n params[\"stop_sequences\"] = self.stop\n else:\n params[\"stop_sequences\"] = stop\n response = pg.Completion.create(\n model=self.model,\n prompt=prompt,\n output=self.output,\n temperature=params[\"temperature\"],\n max_tokens=params[\"max_tokens\"],\n **kwargs,\n )\n text = response[\"choices\"][0][\"text\"]\n # If stop tokens are provided, Prediction Guard's endpoint returns them.\n # In order to make this consistent with other endpoints, we strip them.\n if stop is not None or self.stop is not None:\n text = enforce_stop_tokens(text, params[\"stop_sequences\"])\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/predictionguard.html"} {"id": "f3e11bba4588-0", "text": "Source code for langchain.llms.databricks\nimport os\nfrom abc import ABC, abstractmethod\nfrom typing import Any, Callable, Dict, List, Optional\nimport requests\nfrom pydantic import BaseModel, Extra, Field, PrivateAttr, root_validator, validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\n__all__ = [\"Databricks\"]\nclass _DatabricksClientBase(BaseModel, ABC):\n \"\"\"A base JSON API client that talks to Databricks.\"\"\"\n api_url: str\n api_token: str\n def post_raw(self, request: Any) -> Any:\n headers = {\"Authorization\": f\"Bearer {self.api_token}\"}\n response = requests.post(self.api_url, headers=headers, json=request)\n # TODO: error handling and automatic retries\n if not response.ok:\n raise ValueError(f\"HTTP {response.status_code} error: {response.text}\")\n return response.json()\n @abstractmethod\n def post(self, request: Any) -> Any:\n ...\nclass _DatabricksServingEndpointClient(_DatabricksClientBase):\n \"\"\"An API client that talks to a Databricks serving endpoint.\"\"\"\n host: str\n endpoint_name: str\n @root_validator(pre=True)\n def set_api_url(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n if \"api_url\" not in values:\n host = values[\"host\"]\n endpoint_name = values[\"endpoint_name\"]\n api_url = f\"https://{host}/serving-endpoints/{endpoint_name}/invocations\"\n values[\"api_url\"] = api_url\n return values\n def post(self, request: Any) -> Any:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/databricks.html"} {"id": "f3e11bba4588-1", "text": "return values\n def post(self, request: Any) -> Any:\n # See https://docs.databricks.com/machine-learning/model-serving/score-model-serving-endpoints.html\n wrapped_request = {\"dataframe_records\": [request]}\n response = self.post_raw(wrapped_request)[\"predictions\"]\n # For a single-record query, the result is not a list.\n if isinstance(response, list):\n response = response[0]\n return response\nclass _DatabricksClusterDriverProxyClient(_DatabricksClientBase):\n \"\"\"An API client that talks to a Databricks cluster driver proxy app.\"\"\"\n host: str\n cluster_id: str\n cluster_driver_port: str\n @root_validator(pre=True)\n def set_api_url(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n if \"api_url\" not in values:\n host = values[\"host\"]\n cluster_id = values[\"cluster_id\"]\n port = values[\"cluster_driver_port\"]\n api_url = f\"https://{host}/driver-proxy-api/o/0/{cluster_id}/{port}\"\n values[\"api_url\"] = api_url\n 
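As with several other wrappers on this page, ``PredictionGuard`` treats instance-level and call-time stop sequences as mutually exclusive. A usage sketch, assuming ``PREDICTIONGUARD_TOKEN`` is set and the ``predictionguard`` package is installed:

.. code-block:: python

    from langchain.llms import PredictionGuard

    pgllm = PredictionGuard(model="MPT-7B-Instruct", stop=["\n\n"])
    pgllm("Tell me a joke.")                 # uses the instance-level stop sequences
    # pgllm("Tell me a joke.", stop=["##"])  # would raise ValueError: both set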
return values\n def post(self, request: Any) -> Any:\n return self.post_raw(request)\n[docs]def get_repl_context() -> Any:\n \"\"\"Gets the notebook REPL context if running inside a Databricks notebook.\n Raises an error otherwise.\n \"\"\"\n try:\n from dbruntime.databricks_repl_context import get_context\n return get_context()\n except ImportError:\n raise ValueError(\n \"Cannot access dbruntime, not running inside a Databricks notebook.\"\n )\n[docs]def get_default_host() -> str:\n \"\"\"Gets the default Databricks workspace hostname.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/databricks.html"} {"id": "f3e11bba4588-2", "text": "\"\"\"Gets the default Databricks workspace hostname.\n Raises an error if the hostname cannot be automatically determined.\n \"\"\"\n host = os.getenv(\"DATABRICKS_HOST\")\n if not host:\n try:\n host = get_repl_context().browserHostName\n if not host:\n raise ValueError(\"context doesn't contain browserHostName.\")\n except Exception as e:\n raise ValueError(\n \"host was not set and cannot be automatically inferred. Set \"\n f\"environment variable 'DATABRICKS_HOST'. Received error: {e}\"\n )\n # TODO: support Databricks CLI profile\n # str.lstrip strips a set of characters rather than a prefix, so remove\n # the URL scheme explicitly before trimming any trailing slash.\n for scheme in (\"https://\", \"http://\"):\n if host.startswith(scheme):\n host = host[len(scheme) :]\n host = host.rstrip(\"/\")\n return host\n[docs]def get_default_api_token() -> str:\n \"\"\"Gets the default Databricks personal access token.\n Raises an error if the token cannot be automatically determined.\n \"\"\"\n if api_token := os.getenv(\"DATABRICKS_TOKEN\"):\n return api_token\n try:\n api_token = get_repl_context().apiToken\n if not api_token:\n raise ValueError(\"context doesn't contain apiToken.\")\n except Exception as e:\n raise ValueError(\n \"api_token was not set and cannot be automatically inferred. Set \"\n f\"environment variable 'DATABRICKS_TOKEN'. 
Received error: {e}\"\n )\n # TODO: support Databricks CLI profile\n return api_token\n[docs]class Databricks(LLM):\n \"\"\"LLM wrapper around a Databricks serving endpoint or a cluster driver proxy app.\n It supports two endpoint types:\n * **Serving endpoint** (recommended for both production and development).", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/databricks.html"} {"id": "f3e11bba4588-3", "text": "* **Serving endpoint** (recommended for both production and development).\n We assume that an LLM was registered and deployed to a serving endpoint.\n To wrap it as an LLM you must have \"Can Query\" permission to the endpoint.\n Set ``endpoint_name`` accordingly and do not set ``cluster_id`` and\n ``cluster_driver_port``.\n The expected model signature is:\n * inputs::\n [{\"name\": \"prompt\", \"type\": \"string\"},\n {\"name\": \"stop\", \"type\": \"list[string]\"}]\n * outputs: ``[{\"type\": \"string\"}]``\n * **Cluster driver proxy app** (recommended for interactive development).\n One can load an LLM on a Databricks interactive cluster and start a local HTTP\n server on the driver node to serve the model at ``/`` using HTTP POST method\n with JSON input/output.\n Please use a port number between ``[3000, 8000]`` and let the server listen to\n the driver IP address or simply ``0.0.0.0`` instead of localhost only.\n To wrap it as an LLM you must have \"Can Attach To\" permission to the cluster.\n Set ``cluster_id`` and ``cluster_driver_port`` and do not set ``endpoint_name``.\n The expected server schema (using JSON schema) is:\n * inputs::\n {\"type\": \"object\",\n \"properties\": {\n \"prompt\": {\"type\": \"string\"},\n \"stop\": {\"type\": \"array\", \"items\": {\"type\": \"string\"}}},\n \"required\": [\"prompt\"]}\n * outputs: ``{\"type\": \"string\"}``\n If the endpoint model signature is different or you want to set extra params,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/databricks.html"} {"id": "f3e11bba4588-4", "text": "If the endpoint model signature is different or you want to set extra params,\n you can use `transform_input_fn` and `transform_output_fn` to apply necessary\n transformations before and after the query.\n \"\"\"\n host: str = Field(default_factory=get_default_host)\n \"\"\"Databricks workspace hostname.\n If not provided, the default value is determined by\n * the ``DATABRICKS_HOST`` environment variable if present, or\n * the hostname of the current Databricks workspace if running inside\n a Databricks notebook attached to an interactive cluster in \"single user\"\n or \"no isolation shared\" mode.\n \"\"\"\n api_token: str = Field(default_factory=get_default_api_token)\n \"\"\"Databricks personal access token.\n If not provided, the default value is determined by\n * the ``DATABRICKS_TOKEN`` environment variable if present, or\n * an automatically generated temporary token if running inside a Databricks\n notebook attached to an interactive cluster in \"single user\" or\n \"no isolation shared\" mode.\n \"\"\"\n endpoint_name: Optional[str] = None\n \"\"\"Name of the model serving endpoint.\n You must specify the endpoint name to connect to a model serving endpoint.\n You must not set both ``endpoint_name`` and ``cluster_id``.\n \"\"\"\n cluster_id: Optional[str] = None\n \"\"\"ID of the cluster if connecting to a cluster driver proxy app.\n If neither ``endpoint_name`` nor ``cluster_id`` is provided and the code runs\n inside a Databricks notebook attached to an interactive 
cluster in \"single user\"\n or \"no isolation shared\" mode, the current cluster ID is used as default.\n You must not set both ``endpoint_name`` and ``cluster_id``.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/databricks.html"} {"id": "f3e11bba4588-5", "text": "You must not set both ``endpoint_name`` and ``cluster_id``.\n \"\"\"\n cluster_driver_port: Optional[str] = None\n \"\"\"The port number used by the HTTP server running on the cluster driver node.\n The server should listen on the driver IP address or simply ``0.0.0.0`` to connect.\n We recommend the server using a port number between ``[3000, 8000]``.\n \"\"\"\n model_kwargs: Optional[Dict[str, Any]] = None\n \"\"\"Extra parameters to pass to the endpoint.\"\"\"\n transform_input_fn: Optional[Callable] = None\n \"\"\"A function that transforms ``{prompt, stop, **kwargs}`` into a JSON-compatible\n request object that the endpoint accepts.\n For example, you can apply a prompt template to the input prompt.\n \"\"\"\n transform_output_fn: Optional[Callable[..., str]] = None\n \"\"\"A function that transforms the output from the endpoint to the generated text.\n \"\"\"\n _client: _DatabricksClientBase = PrivateAttr()\n[docs] class Config:\n extra = Extra.forbid\n underscore_attrs_are_private = True\n[docs] @validator(\"cluster_id\", always=True)\n def set_cluster_id(cls, v: Any, values: Dict[str, Any]) -> Optional[str]:\n if v and values[\"endpoint_name\"]:\n raise ValueError(\"Cannot set both endpoint_name and cluster_id.\")\n elif values[\"endpoint_name\"]:\n return None\n elif v:\n return v\n else:\n try:\n if v := get_repl_context().clusterId:\n return v\n raise ValueError(\"Context doesn't contain clusterId.\")\n except Exception as e:\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/databricks.html"} {"id": "f3e11bba4588-6", "text": "except Exception as e:\n raise ValueError(\n \"Neither endpoint_name nor cluster_id was set. \"\n \"And the cluster_id cannot be automatically determined. 
Received\"\n f\" error: {e}\"\n )\n[docs] @validator(\"cluster_driver_port\", always=True)\n def set_cluster_driver_port(cls, v: Any, values: Dict[str, Any]) -> Optional[str]:\n if v and values[\"endpoint_name\"]:\n raise ValueError(\"Cannot set both endpoint_name and cluster_driver_port.\")\n elif values[\"endpoint_name\"]:\n return None\n elif v is None:\n raise ValueError(\n \"Must set cluster_driver_port to connect to a cluster driver.\"\n )\n elif int(v) <= 0:\n raise ValueError(f\"Invalid cluster_driver_port: {v}\")\n else:\n return v\n[docs] @validator(\"model_kwargs\", always=True)\n def set_model_kwargs(cls, v: Optional[Dict[str, Any]]) -> Optional[Dict[str, Any]]:\n if v:\n assert \"prompt\" not in v, \"model_kwargs must not contain key 'prompt'\"\n assert \"stop\" not in v, \"model_kwargs must not contain key 'stop'\"\n return v\n def __init__(self, **data: Any):\n super().__init__(**data)\n if self.endpoint_name:\n self._client = _DatabricksServingEndpointClient(\n host=self.host,\n api_token=self.api_token,\n endpoint_name=self.endpoint_name,\n )\n elif self.cluster_id and self.cluster_driver_port:\n self._client = _DatabricksClusterDriverProxyClient(\n host=self.host,\n api_token=self.api_token,\n cluster_id=self.cluster_id,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/databricks.html"} {"id": "f3e11bba4588-7", "text": "api_token=self.api_token,\n cluster_id=self.cluster_id,\n cluster_driver_port=self.cluster_driver_port,\n )\n else:\n raise ValueError(\n \"Must specify either endpoint_name or cluster_id/cluster_driver_port.\"\n )\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"databricks\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Queries the LLM endpoint with the given prompt and stop sequence.\"\"\"\n # TODO: support callbacks\n request = {\"prompt\": prompt, \"stop\": stop}\n request.update(kwargs)\n if self.model_kwargs:\n request.update(self.model_kwargs)\n if self.transform_input_fn:\n request = self.transform_input_fn(**request)\n response = self._client.post(request)\n if self.transform_output_fn:\n response = self.transform_output_fn(response)\n return response", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/databricks.html"} {"id": "25c8e02e7fb5-0", "text": "Source code for langchain.llms.self_hosted\n\"\"\"Run model inference on self-hosted remote hardware.\"\"\"\nimport importlib.util\nimport logging\nimport pickle\nfrom typing import Any, Callable, List, Mapping, Optional\nfrom pydantic import Extra\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nlogger = logging.getLogger(__name__)\ndef _generate_text(\n pipeline: Any,\n prompt: str,\n *args: Any,\n stop: Optional[List[str]] = None,\n **kwargs: Any,\n) -> str:\n \"\"\"Inference function to send to the remote hardware.\n Accepts a pipeline callable (or, more likely,\n a key pointing to the model on the cluster's object store)\n and returns text predictions for each document\n in the batch.\n \"\"\"\n text = pipeline(prompt, *args, **kwargs)\n if stop is not None:\n text = enforce_stop_tokens(text, stop)\n return text\ndef _send_pipeline_to_device(pipeline: Any, device: int) -> Any:\n \"\"\"Send a pipeline to a device on the cluster.\"\"\"\n if isinstance(pipeline, str):\n with 
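Tying the two client paths together: ``endpoint_name`` alone selects the serving-endpoint client, while ``cluster_id`` plus ``cluster_driver_port`` select the driver proxy client, and ``transform_input_fn`` can adapt a differing model signature. A hedged sketch; the endpoint, cluster, and port values are placeholders:

.. code-block:: python

    from langchain.llms import Databricks

    # Serving endpoint; host and api_token default from the environment
    # or the notebook context.
    llm = Databricks(endpoint_name="dolly")

    # Cluster driver proxy app, with a prompt template applied on the way in.
    def transform_input(**request):
        request["prompt"] = f"Instruction: {request['prompt']}\nResponse:\n"
        return request

    proxy_llm = Databricks(
        cluster_id="0000-000000-xxxxxxxx",
        cluster_driver_port="7777",
        transform_input_fn=transform_input,
    )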
open(pipeline, \"rb\") as f:\n pipeline = pickle.load(f)\n if importlib.util.find_spec(\"torch\") is not None:\n import torch\n cuda_device_count = torch.cuda.device_count()\n if device < -1 or (device >= cuda_device_count):\n raise ValueError(\n f\"Got device=={device}, \"\n f\"device is required to be within [-1, {cuda_device_count})\"\n )\n if device < 0 and cuda_device_count > 0:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/self_hosted.html"} {"id": "25c8e02e7fb5-1", "text": ")\n if device < 0 and cuda_device_count > 0:\n logger.warning(\n \"Device has %d GPUs available. \"\n \"Provide device={deviceId} to `from_model_id` to use available \"\n \"GPUs for execution. deviceId is -1 for CPU and \"\n \"can be a positive integer associated with CUDA device id.\",\n cuda_device_count,\n )\n pipeline.device = torch.device(device)\n pipeline.model = pipeline.model.to(pipeline.device)\n return pipeline\n[docs]class SelfHostedPipeline(LLM):\n \"\"\"Run model inference on self-hosted remote hardware.\n Supported hardware includes auto-launched instances on AWS, GCP, Azure,\n and Lambda, as well as servers specified\n by IP address and SSH credentials (such as on-prem, or another\n cloud like Paperspace, Coreweave, etc.).\n To use, you should have the ``runhouse`` python package installed.\n Example for custom pipeline and inference functions:\n .. code-block:: python\n from langchain.llms import SelfHostedPipeline\n from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\n import runhouse as rh\n def load_pipeline():\n tokenizer = AutoTokenizer.from_pretrained(\"gpt2\")\n model = AutoModelForCausalLM.from_pretrained(\"gpt2\")\n return pipeline(\n \"text-generation\", model=model, tokenizer=tokenizer,\n max_new_tokens=10\n )\n def inference_fn(pipeline, prompt, stop=None):\n return pipeline(prompt)[0][\"generated_text\"]\n gpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\n llm = SelfHostedPipeline(\n model_load_fn=load_pipeline,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/self_hosted.html"} {"id": "25c8e02e7fb5-2", "text": "llm = SelfHostedPipeline(\n model_load_fn=load_pipeline,\n hardware=gpu,\n model_reqs=[\"./\", \"torch\", \"transformers\"], inference_fn=inference_fn\n )\n Example for <2GB model (can be serialized and sent directly to the server):\n .. code-block:: python\n from langchain.llms import SelfHostedPipeline\n import runhouse as rh\n gpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\n my_model = ...\n llm = SelfHostedPipeline.from_pipeline(\n pipeline=my_model,\n hardware=gpu,\n model_reqs=[\"./\", \"torch\", \"transformers\"],\n )\n Example passing model path for larger models:\n .. 
code-block:: python\n from langchain.llms import SelfHostedPipeline\n import runhouse as rh\n import pickle\n from transformers import pipeline\n gpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\n generator = pipeline(model=\"gpt2\")\n rh.blob(pickle.dumps(generator), path=\"models/pipeline.pkl\"\n ).save().to(gpu, path=\"models\")\n llm = SelfHostedPipeline.from_pipeline(\n pipeline=\"models/pipeline.pkl\",\n hardware=gpu,\n model_reqs=[\"./\", \"torch\", \"transformers\"],\n )\n \"\"\"\n pipeline_ref: Any #: :meta private:\n client: Any #: :meta private:\n inference_fn: Callable = _generate_text #: :meta private:\n \"\"\"Inference function to send to the remote hardware.\"\"\"\n hardware: Any\n \"\"\"Remote hardware to send the inference function to.\"\"\"\n model_load_fn: Callable\n \"\"\"Function to load the model remotely on the server.\"\"\"\n load_fn_kwargs: Optional[dict] = None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/self_hosted.html"} {"id": "25c8e02e7fb5-3", "text": "load_fn_kwargs: Optional[dict] = None\n \"\"\"Keyword arguments to pass to the model load function.\"\"\"\n model_reqs: List[str] = [\"./\", \"torch\"]\n \"\"\"Requirements to install on the hardware to run inference on the model.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n def __init__(self, **kwargs: Any):\n \"\"\"Init the pipeline with an auxiliary function.\n The load function must be in global scope to be imported\n and run on the server, i.e. in a module and not a REPL or closure.\n Then, initialize the remote inference function.\n \"\"\"\n super().__init__(**kwargs)\n try:\n import runhouse as rh\n except ImportError:\n raise ImportError(\n \"Could not import runhouse python package. \"\n \"Please install it with `pip install runhouse`.\"\n )\n remote_load_fn = rh.function(fn=self.model_load_fn).to(\n self.hardware, reqs=self.model_reqs\n )\n _load_fn_kwargs = self.load_fn_kwargs or {}\n self.pipeline_ref = remote_load_fn.remote(**_load_fn_kwargs)\n self.client = rh.function(fn=self.inference_fn).to(\n self.hardware, reqs=self.model_reqs\n )\n[docs] @classmethod\n def from_pipeline(\n cls,\n pipeline: Any,\n hardware: Any,\n model_reqs: Optional[List[str]] = None,\n device: int = 0,\n **kwargs: Any,\n ) -> LLM:\n \"\"\"Init the SelfHostedPipeline from a pipeline object or string.\"\"\"\n if not isinstance(pipeline, str):\n logger.warning(\n \"Serializing pipeline to send to remote hardware. \"\n \"Note, it can be quite slow \"\n \"to serialize and send large models with each execution. 
\"\n \"Consider sending the pipeline \"\n \"to the cluster and passing the path to the pipeline instead.\"\n )\n load_fn_kwargs = {\"pipeline\": pipeline, \"device\": device}\n return cls(\n load_fn_kwargs=load_fn_kwargs,\n model_load_fn=_send_pipeline_to_device,\n hardware=hardware,\n model_reqs=[\"transformers\", \"torch\"] + (model_reqs or []),\n **kwargs,\n )\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"hardware\": self.hardware},\n }\n @property\n def _llm_type(self) -> str:\n return \"self_hosted_llm\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n return self.client(\n pipeline=self.pipeline_ref, prompt=prompt, stop=stop, **kwargs\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/self_hosted.html"} {"id": "03752bdce78c-0", "text": "Source code for langchain.llms.textgen\n\"\"\"Wrapper around text-generation-webui.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Optional\nimport requests\nfrom pydantic import Field\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nlogger = logging.getLogger(__name__)\n[docs]class TextGen(LLM):\n \"\"\"Wrapper around the text-generation-webui model.\n To use, you should have the text-generation-webui installed, a model loaded,\n and --api added as a command-line option.\n Suggested installation: use the one-click installer for your OS:\n https://github.com/oobabooga/text-generation-webui#one-click-installers\n Parameters below taken from text-generation-webui api example:\n https://github.com/oobabooga/text-generation-webui/blob/main/api-examples/api-example.py\n Example:\n .. code-block:: python\n from langchain.llms import TextGen\n llm = TextGen(model_url=\"http://localhost:8500\")\n \"\"\"\n model_url: str\n \"\"\"The full URL to the textgen webui including http[s]://host:port \"\"\"\n preset: Optional[str] = None\n \"\"\"The preset to use in the textgen webui \"\"\"\n max_new_tokens: Optional[int] = 250\n \"\"\"The maximum number of tokens to generate.\"\"\"\n do_sample: bool = Field(True, alias=\"do_sample\")\n \"\"\"Do sample\"\"\"\n temperature: Optional[float] = 1.3\n \"\"\"Primary factor to control randomness of outputs. 0 = deterministic\n (only the most likely token is used). Higher value = more randomness.\"\"\"\n top_p: Optional[float] = 0.1", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/textgen.html"} {"id": "03752bdce78c-1", "text": "top_p: Optional[float] = 0.1\n \"\"\"If not set to 1, select tokens with probabilities adding up to less than this\n number. Higher value = higher range of possible random results.\"\"\"\n typical_p: Optional[float] = 1\n \"\"\"If not set to 1, select only tokens that are at least this much more likely to\n appear than random tokens, given the prior text.\"\"\"\n epsilon_cutoff: Optional[float] = 0 # In units of 1e-4\n \"\"\"Epsilon cutoff\"\"\"\n eta_cutoff: Optional[float] = 0 # In units of 1e-4\n \"\"\"ETA cutoff\"\"\"\n repetition_penalty: Optional[float] = 1.18\n \"\"\"Exponential penalty factor for repeating prior tokens. 
1 means no penalty,\n higher value = less repetition, lower value = more repetition.\"\"\"\n top_k: Optional[float] = 40\n \"\"\"Similar to top_p, but select instead only the top_k most likely tokens.\n Higher value = higher range of possible random results.\"\"\"\n min_length: Optional[int] = 0\n \"\"\"Minimum generation length in tokens.\"\"\"\n no_repeat_ngram_size: Optional[int] = 0\n \"\"\"If not set to 0, specifies the length of token sets that are completely blocked\n from repeating at all. Higher values = blocks larger phrases,\n lower values = blocks words or letters from repeating.\n Only 0 or high values are a good idea in most cases.\"\"\"\n num_beams: Optional[int] = 1\n \"\"\"Number of beams\"\"\"\n penalty_alpha: Optional[float] = 0\n \"\"\"Penalty Alpha\"\"\"\n length_penalty: Optional[float] = 1\n \"\"\"Length Penalty\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/textgen.html"} {"id": "03752bdce78c-2", "text": "length_penalty: Optional[float] = 1\n \"\"\"Length Penalty\"\"\"\n early_stopping: bool = Field(False, alias=\"early_stopping\")\n \"\"\"Early stopping\"\"\"\n seed: int = Field(-1, alias=\"seed\")\n \"\"\"Seed (-1 for random)\"\"\"\n add_bos_token: bool = Field(True, alias=\"add_bos_token\")\n \"\"\"Add the bos_token to the beginning of prompts.\n Disabling this can make the replies more creative.\"\"\"\n truncation_length: Optional[int] = 2048\n \"\"\"Truncate the prompt up to this length. The leftmost tokens are removed if\n the prompt exceeds this length. Most models require this to be at most 2048.\"\"\"\n ban_eos_token: bool = Field(False, alias=\"ban_eos_token\")\n \"\"\"Ban the eos_token. Forces the model to never end the generation prematurely.\"\"\"\n skip_special_tokens: bool = Field(True, alias=\"skip_special_tokens\")\n \"\"\"Skip special tokens. 
Some specific models need this unset.\"\"\"\n stopping_strings: Optional[List[str]] = []\n \"\"\"A list of strings to stop generation when encountered.\"\"\"\n streaming: bool = False\n \"\"\"Whether to stream the results, token by token (currently unimplemented).\"\"\"\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling textgen.\"\"\"\n return {\n \"max_new_tokens\": self.max_new_tokens,\n \"do_sample\": self.do_sample,\n \"temperature\": self.temperature,\n \"top_p\": self.top_p,\n \"typical_p\": self.typical_p,\n \"epsilon_cutoff\": self.epsilon_cutoff,\n \"eta_cutoff\": self.eta_cutoff,\n \"repetition_penalty\": self.repetition_penalty,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/textgen.html"} {"id": "03752bdce78c-3", "text": "\"repetition_penalty\": self.repetition_penalty,\n \"top_k\": self.top_k,\n \"min_length\": self.min_length,\n \"no_repeat_ngram_size\": self.no_repeat_ngram_size,\n \"num_beams\": self.num_beams,\n \"penalty_alpha\": self.penalty_alpha,\n \"length_penalty\": self.length_penalty,\n \"early_stopping\": self.early_stopping,\n \"seed\": self.seed,\n \"add_bos_token\": self.add_bos_token,\n \"truncation_length\": self.truncation_length,\n \"ban_eos_token\": self.ban_eos_token,\n \"skip_special_tokens\": self.skip_special_tokens,\n \"stopping_strings\": self.stopping_strings,\n }\n @property\n def _identifying_params(self) -> Dict[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model_url\": self.model_url}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"textgen\"\n def _get_parameters(self, stop: Optional[List[str]] = None) -> Dict[str, Any]:\n \"\"\"\n Performs a sanity check and prepares the parameters in the format needed by textgen.\n Args:\n stop (Optional[List[str]]): List of stop sequences for textgen.\n Returns:\n Dictionary containing the combined parameters.\n \"\"\"\n # Raise error if stop sequences are in both input and default params\n if self.stopping_strings and stop is not None:\n raise ValueError(\"`stop` found in both the input and default params.\")\n if self.preset is None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/textgen.html"} {"id": "03752bdce78c-4", "text": "if self.preset is None:\n params = self._default_params\n else:\n params = {\"preset\": self.preset}\n # Use the configured stopping strings, else the call-time stop list, else [].\n params[\"stop\"] = self.stopping_strings or stop or []\n return params\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call the textgen web API and return the output.\n Args:\n prompt: The prompt to use for generation.\n stop: A list of strings to stop generation when encountered.\n Returns:\n The generated text.\n Example:\n .. 
code-block:: python\n from langchain.llms import TextGen\n llm = TextGen(model_url=\"http://localhost:5000\")\n llm(\"Write a story about llamas.\")\n \"\"\"\n if self.streaming:\n raise ValueError(\"`streaming` option currently unsupported.\")\n url = f\"{self.model_url}/api/v1/generate\"\n params = self._get_parameters(stop)\n request = params.copy()\n request[\"prompt\"] = prompt\n response = requests.post(url, json=request)\n if response.status_code == 200:\n result = response.json()[\"results\"][0][\"text\"]\n print(prompt + result)\n else:\n print(f\"ERROR: response: {response}\")\n result = \"\"\n return result", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/textgen.html"} {"id": "45855c2d913d-0", "text": "Source code for langchain.llms.ai21\n\"\"\"Wrapper around AI21 APIs.\"\"\"\nfrom typing import Any, Dict, List, Optional\nimport requests\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.utils import get_from_dict_or_env\n[docs]class AI21PenaltyData(BaseModel):\n \"\"\"Parameters for AI21 penalty data.\"\"\"\n scale: int = 0\n applyToWhitespaces: bool = True\n applyToPunctuations: bool = True\n applyToNumbers: bool = True\n applyToStopwords: bool = True\n applyToEmojis: bool = True\n[docs]class AI21(LLM):\n \"\"\"Wrapper around AI21 large language models.\n To use, you should have the environment variable ``AI21_API_KEY``\n set with your API key.\n Example:\n .. code-block:: python\n from langchain.llms import AI21\n ai21 = AI21(model=\"j2-jumbo-instruct\")\n \"\"\"\n model: str = \"j2-jumbo-instruct\"\n \"\"\"Model name to use.\"\"\"\n temperature: float = 0.7\n \"\"\"What sampling temperature to use.\"\"\"\n maxTokens: int = 256\n \"\"\"The maximum number of tokens to generate in the completion.\"\"\"\n minTokens: int = 0\n \"\"\"The minimum number of tokens to generate in the completion.\"\"\"\n topP: float = 1.0\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n presencePenalty: AI21PenaltyData = AI21PenaltyData()\n \"\"\"Penalizes repeated tokens.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/ai21.html"} {"id": "45855c2d913d-1", "text": "\"\"\"Penalizes repeated tokens.\"\"\"\n countPenalty: AI21PenaltyData = AI21PenaltyData()\n \"\"\"Penalizes repeated tokens according to count.\"\"\"\n frequencyPenalty: AI21PenaltyData = AI21PenaltyData()\n \"\"\"Penalizes repeated tokens according to frequency.\"\"\"\n numResults: int = 1\n \"\"\"How many completions to generate for each prompt.\"\"\"\n logitBias: Optional[Dict[str, float]] = None\n \"\"\"Adjust the probability of specific tokens being generated.\"\"\"\n ai21_api_key: Optional[str] = None\n stop: Optional[List[str]] = None\n base_url: Optional[str] = None\n \"\"\"Base url to use, if None decides based on model name.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key exists in environment.\"\"\"\n ai21_api_key = get_from_dict_or_env(values, \"ai21_api_key\", \"AI21_API_KEY\")\n values[\"ai21_api_key\"] = ai21_api_key\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling AI21 API.\"\"\"\n return {\n \"temperature\": self.temperature,\n \"maxTokens\": self.maxTokens,\n 
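A usage sketch for ``TextGen`` consistent with ``_get_parameters`` above (the URL is a placeholder): instance-level ``stopping_strings`` and call-time ``stop`` are mutually exclusive, and setting ``preset`` replaces all of the individual sampling parameters:

.. code-block:: python

    from langchain.llms import TextGen

    llm = TextGen(model_url="http://localhost:5000", stopping_strings=["###"])
    llm("Write a haiku about llamas.")       # uses the instance stop strings
    # llm("Write a haiku.", stop=["\n\n"])   # would raise ValueError: both set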
\"minTokens\": self.minTokens,\n \"topP\": self.topP,\n \"presencePenalty\": self.presencePenalty.dict(),\n \"countPenalty\": self.countPenalty.dict(),\n \"frequencyPenalty\": self.frequencyPenalty.dict(),\n \"numResults\": self.numResults,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/ai21.html"} {"id": "45855c2d913d-2", "text": "\"numResults\": self.numResults,\n \"logitBias\": self.logitBias,\n }\n @property\n def _identifying_params(self) -> Dict[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model\": self.model}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"ai21\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to AI21's complete endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = ai21(\"Tell me a joke.\")\n \"\"\"\n if self.stop is not None and stop is not None:\n raise ValueError(\"`stop` found in both the input and default params.\")\n elif self.stop is not None:\n stop = self.stop\n elif stop is None:\n stop = []\n if self.base_url is not None:\n base_url = self.base_url\n else:\n if self.model in (\"j1-grande-instruct\",):\n base_url = \"https://api.ai21.com/studio/v1/experimental\"\n else:\n base_url = \"https://api.ai21.com/studio/v1\"\n params = {**self._default_params, **kwargs}\n response = requests.post(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/ai21.html"} {"id": "45855c2d913d-3", "text": "response = requests.post(\n url=f\"{base_url}/{self.model}/complete\",\n headers={\"Authorization\": f\"Bearer {self.ai21_api_key}\"},\n json={\"prompt\": prompt, \"stopSequences\": stop, **params},\n )\n if response.status_code != 200:\n optional_detail = response.json().get(\"error\")\n raise ValueError(\n f\"AI21 /complete call failed with status code {response.status_code}.\"\n f\" Details: {optional_detail}\"\n )\n response_json = response.json()\n return response_json[\"completions\"][0][\"data\"][\"text\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/ai21.html"} {"id": "e6454f177e15-0", "text": "Source code for langchain.llms.loading\n\"\"\"Base interface for loading large language models apis.\"\"\"\nimport json\nfrom pathlib import Path\nfrom typing import Union\nimport yaml\nfrom langchain.llms import type_to_cls_dict\nfrom langchain.llms.base import BaseLLM\n[docs]def load_llm_from_config(config: dict) -> BaseLLM:\n \"\"\"Load LLM from Config Dict.\"\"\"\n if \"_type\" not in config:\n raise ValueError(\"Must specify an LLM Type in config\")\n config_type = config.pop(\"_type\")\n if config_type not in type_to_cls_dict:\n raise ValueError(f\"Loading {config_type} LLM not supported\")\n llm_cls = type_to_cls_dict[config_type]\n return llm_cls(**config)\n[docs]def load_llm(file: Union[str, Path]) -> BaseLLM:\n \"\"\"Load LLM from file.\"\"\"\n # Convert file to Path object.\n if isinstance(file, str):\n file_path = Path(file)\n else:\n file_path = file\n # Load from either json or yaml.\n if file_path.suffix == \".json\":\n with open(file_path) as f:\n config = json.load(f)\n elif file_path.suffix == \".yaml\":\n with open(file_path, \"r\") as f:\n config = yaml.safe_load(f)\n else:\n raise 
ValueError(\"File type must be json or yaml\")\n # Load the LLM from the config now.\n return load_llm_from_config(config)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/loading.html"} {"id": "14b06dccf963-0", "text": "Source code for langchain.llms.mosaicml\n\"\"\"Wrapper around MosaicML APIs.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nINSTRUCTION_KEY = \"### Instruction:\"\nRESPONSE_KEY = \"### Response:\"\nINTRO_BLURB = (\n \"Below is an instruction that describes a task. \"\n \"Write a response that appropriately completes the request.\"\n)\nPROMPT_FOR_GENERATION_FORMAT = \"\"\"{intro}\n{instruction_key}\n{instruction}\n{response_key}\n\"\"\".format(\n intro=INTRO_BLURB,\n instruction_key=INSTRUCTION_KEY,\n instruction=\"{instruction}\",\n response_key=RESPONSE_KEY,\n)\n[docs]class MosaicML(LLM):\n \"\"\"Wrapper around MosaicML's LLM inference service.\n To use, you should have the\n environment variable ``MOSAICML_API_TOKEN`` set with your API token, or pass\n it as a named parameter to the constructor.\n Example:\n .. code-block:: python\n from langchain.llms import MosaicML\n endpoint_url = (\n \"https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict\"\n )\n mosaic_llm = MosaicML(\n endpoint_url=endpoint_url,\n mosaicml_api_token=\"my-api-key\"\n )\n \"\"\"\n endpoint_url: str = (", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/mosaicml.html"} {"id": "14b06dccf963-1", "text": ")\n \"\"\"\n endpoint_url: str = (\n \"https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict\"\n )\n \"\"\"Endpoint URL to use.\"\"\"\n inject_instruction_format: bool = False\n \"\"\"Whether to inject the instruction format into the prompt.\"\"\"\n model_kwargs: Optional[dict] = None\n \"\"\"Key word arguments to pass to the model.\"\"\"\n retry_sleep: float = 1.0\n \"\"\"How long to try sleeping for if a rate limit is encountered\"\"\"\n mosaicml_api_token: Optional[str] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n mosaicml_api_token = get_from_dict_or_env(\n values, \"mosaicml_api_token\", \"MOSAICML_API_TOKEN\"\n )\n values[\"mosaicml_api_token\"] = mosaicml_api_token\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n _model_kwargs = self.model_kwargs or {}\n return {\n **{\"endpoint_url\": self.endpoint_url},\n **{\"model_kwargs\": _model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"mosaic\"\n def _transform_prompt(self, prompt: str) -> str:\n \"\"\"Transform prompt.\"\"\"\n if self.inject_instruction_format:\n prompt = PROMPT_FOR_GENERATION_FORMAT.format(\n instruction=prompt,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/mosaicml.html"} {"id": "14b06dccf963-2", "text": "prompt = PROMPT_FOR_GENERATION_FORMAT.format(\n instruction=prompt,\n )\n return prompt\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: 
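Two points worth illustrating from the AI21 and loading listings: the AI21 penalty fields take structured ``AI21PenaltyData`` values rather than bare floats, and ``load_llm`` is the inverse of the ``save`` method shown at the top of this page (``save`` writes ``llm.dict()``, including its ``_type`` key, and ``load_llm`` dispatches on that key through ``type_to_cls_dict``). A round-trip sketch, assuming ``AI21_API_KEY`` is set in the environment:

.. code-block:: python

    from langchain.llms import AI21
    from langchain.llms.ai21 import AI21PenaltyData
    from langchain.llms.loading import load_llm

    ai21 = AI21(
        model="j2-jumbo-instruct",
        temperature=0.3,
        # Penalize repeated tokens, but leave whitespace and punctuation alone.
        presencePenalty=AI21PenaltyData(
            scale=1, applyToWhitespaces=False, applyToPunctuations=False
        ),
    )
    ai21.save("ai21.yaml")            # serializes llm.dict(), including _type: ai21
    restored = load_llm("ai21.yaml")
    assert restored.temperature == 0.3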
Optional[CallbackManagerForLLMRun] = None,\n is_retry: bool = False,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to a MosaicML LLM inference endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = mosaic_llm(\"Tell me a joke.\")\n \"\"\"\n _model_kwargs = self.model_kwargs or {}\n prompt = self._transform_prompt(prompt)\n payload = {\"inputs\": [prompt]}\n payload.update(_model_kwargs)\n payload.update(kwargs)\n # HTTP headers for authorization\n headers = {\n \"Authorization\": f\"{self.mosaicml_api_token}\",\n \"Content-Type\": \"application/json\",\n }\n # send request\n try:\n response = requests.post(self.endpoint_url, headers=headers, json=payload)\n except requests.exceptions.RequestException as e:\n raise ValueError(f\"Error raised by inference endpoint: {e}\")\n try:\n parsed_response = response.json()\n if \"error\" in parsed_response:\n # if we get rate limited, sleep for retry_sleep seconds and retry once\n if (\n not is_retry\n and \"rate limit exceeded\" in parsed_response[\"error\"].lower()\n ):\n import time\n time.sleep(self.retry_sleep)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/mosaicml.html"} {"id": "14b06dccf963-3", "text": "):\n import time\n time.sleep(self.retry_sleep)\n return self._call(prompt, stop, run_manager, is_retry=True)\n raise ValueError(\n f\"Error raised by inference API: {parsed_response['error']}\"\n )\n # The inference API has changed a couple of times, so we add some handling\n # to be robust to multiple response formats.\n if isinstance(parsed_response, dict):\n output_keys = [\"data\", \"output\", \"outputs\"]\n for key in output_keys:\n if key in parsed_response:\n output_item = parsed_response[key]\n break\n else:\n raise ValueError(\n f\"No valid key ({', '.join(output_keys)}) in response:\"\n f\" {parsed_response}\"\n )\n if isinstance(output_item, list):\n text = output_item[0]\n else:\n text = output_item\n elif isinstance(parsed_response, list):\n first_item = parsed_response[0]\n if isinstance(first_item, str):\n text = first_item\n elif isinstance(first_item, dict):\n if \"output\" in first_item:\n text = first_item[\"output\"]\n else:\n raise ValueError(\n f\"No key data or output in response: {parsed_response}\"\n )\n else:\n raise ValueError(f\"Unexpected response format: {parsed_response}\")\n else:\n raise ValueError(f\"Unexpected response type: {parsed_response}\")\n text = text[len(prompt) :]\n except requests.exceptions.JSONDecodeError as e:\n raise ValueError(\n f\"Error raised by inference API: {e}.\\nResponse: {response.text}\"\n )\n # TODO: replace when MosaicML supports custom stop tokens natively\n if stop is not None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/mosaicml.html"} {"id": "14b06dccf963-4", "text": "if stop is not None:\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/mosaicml.html"} {"id": "4f2a4d339a1b-0", "text": "Source code for langchain.llms.baseten\n\"\"\"Wrapper around Baseten deployed model API.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nlogger = logging.getLogger(__name__)\n[docs]class Baseten(LLM):\n \"\"\"Use your Baseten models in 
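The effect of MosaicML's ``inject_instruction_format`` can be inspected without calling the endpoint, since ``_transform_prompt`` only applies the module-level template:

.. code-block:: python

    from langchain.llms.mosaicml import PROMPT_FOR_GENERATION_FORMAT

    print(PROMPT_FOR_GENERATION_FORMAT.format(instruction="Tell me a joke."))
    # Below is an instruction that describes a task. Write a response that
    # appropriately completes the request.
    # ### Instruction:
    # Tell me a joke.
    # ### Response: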
Langchain\n To use, you should have the ``baseten`` python package installed,\n and run ``baseten.login()`` with your Baseten API key.\n The required ``model`` param can be either a model id or model\n version id. Using a model version ID will result in\n slightly faster invocation.\n Any other model parameters can also\n be passed in with the format input={model_param: value, ...}\n The Baseten model must accept a dictionary of input with the key\n \"prompt\" and return a dictionary with a key \"data\" which maps\n to a list of response strings.\n Example:\n .. code-block:: python\n from langchain.llms import Baseten\n my_model = Baseten(model=\"MODEL_ID\")\n output = my_model(\"prompt\")\n \"\"\"\n model: str\n input: Dict[str, Any] = Field(default_factory=dict)\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"model_kwargs\": self.model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of model.\"\"\"\n return \"baseten\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/baseten.html"} {"id": "4f2a4d339a1b-1", "text": "\"\"\"Return type of model.\"\"\"\n return \"baseten\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call to Baseten deployed model endpoint.\"\"\"\n try:\n import baseten\n except ImportError as exc:\n raise ValueError(\n \"Could not import Baseten Python package. \"\n \"Please install it with `pip install baseten`.\"\n ) from exc\n # get the model and version\n try:\n model = baseten.deployed_model_version_id(self.model)\n response = model.predict({\"prompt\": prompt})\n except baseten.common.core.ApiError:\n model = baseten.deployed_model_id(self.model)\n response = model.predict({\"prompt\": prompt})\n return \"\".join(response)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/baseten.html"} {"id": "13bfb85f78e3-0", "text": "Source code for langchain.llms.ctransformers\n\"\"\"Wrapper around the C Transformers library.\"\"\"\nfrom typing import Any, Dict, Optional, Sequence\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\n[docs]class CTransformers(LLM):\n \"\"\"Wrapper around the C Transformers LLM interface.\n To use, you should have the ``ctransformers`` python package installed.\n See https://github.com/marella/ctransformers\n Example:\n .. 
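To exercise the Baseten wrapper just defined, something like the following should work (IDs and keys are placeholders); note that `_call` first tries the string as a model *version* ID and falls back to a model ID on an `ApiError`:

.. code-block:: python

    import baseten
    from langchain.llms import Baseten

    baseten.login("my-baseten-api-key")  # placeholder key

    # A model version ID avoids the ApiError fallback and is slightly faster.
    llm = Baseten(model="MODEL_VERSION_ID")
    print(llm("What is the capital of France?"))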
code-block:: python\n from langchain.llms import CTransformers\n llm = CTransformers(model=\"/path/to/ggml-gpt-2.bin\", model_type=\"gpt2\")\n \"\"\"\n client: Any #: :meta private:\n model: str\n \"\"\"The path to a model file or directory or the name of a Hugging Face Hub\n model repo.\"\"\"\n model_type: Optional[str] = None\n \"\"\"The model type.\"\"\"\n model_file: Optional[str] = None\n \"\"\"The name of the model file in repo or directory.\"\"\"\n config: Optional[Dict[str, Any]] = None\n \"\"\"The config parameters.\n See https://github.com/marella/ctransformers#config\"\"\"\n lib: Optional[str] = None\n \"\"\"The path to a shared library or one of `avx2`, `avx`, `basic`.\"\"\"\n @property\n def _identifying_params(self) -> Dict[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"model\": self.model,\n \"model_type\": self.model_type,\n \"model_file\": self.model_file,\n \"config\": self.config,\n }\n @property", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/ctransformers.html"} {"id": "13bfb85f78e3-1", "text": "\"config\": self.config,\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"ctransformers\"\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that ``ctransformers`` package is installed.\"\"\"\n try:\n from ctransformers import AutoModelForCausalLM\n except ImportError:\n raise ImportError(\n \"Could not import `ctransformers` package. \"\n \"Please install it with `pip install ctransformers`\"\n )\n config = values[\"config\"] or {}\n values[\"client\"] = AutoModelForCausalLM.from_pretrained(\n values[\"model\"],\n model_type=values[\"model_type\"],\n model_file=values[\"model_file\"],\n lib=values[\"lib\"],\n **config,\n )\n return values\n def _call(\n self,\n prompt: str,\n stop: Optional[Sequence[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Generate text from a prompt.\n Args:\n prompt: The prompt to generate text from.\n stop: A list of sequences to stop generation when encountered.\n Returns:\n The generated text.\n Example:\n .. 
code-block:: python\n response = llm(\"Tell me a joke.\")\n \"\"\"\n text = []\n _run_manager = run_manager or CallbackManagerForLLMRun.get_noop_manager()\n for chunk in self.client(prompt, stop=stop, stream=True):\n text.append(chunk)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/ctransformers.html"} {"id": "13bfb85f78e3-2", "text": "text.append(chunk)\n _run_manager.on_llm_new_token(chunk, verbose=self.verbose)\n return \"\".join(text)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/ctransformers.html"} {"id": "0cbc52b0f259-0", "text": "Source code for langchain.llms.fake\n\"\"\"Fake LLM wrapper for testing purposes.\"\"\"\nfrom typing import Any, List, Mapping, Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.llms.base import LLM\n[docs]class FakeListLLM(LLM):\n \"\"\"Fake LLM wrapper for testing purposes.\"\"\"\n responses: List\n i: int = 0\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"fake-list\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Return next response\"\"\"\n response = self.responses[self.i]\n self.i += 1\n return response\n async def _acall(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Return next response\"\"\"\n response = self.responses[self.i]\n self.i += 1\n return response\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n return {\"responses\": self.responses}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/fake.html"} {"id": "037d5d0c678d-0", "text": "Source code for langchain.llms.pipelineai\n\"\"\"Wrapper around Pipeline Cloud API.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import BaseModel, Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class PipelineAI(LLM, BaseModel):\n \"\"\"Wrapper around PipelineAI large language models.\n To use, you should have the ``pipeline-ai`` python package installed,\n and the environment variable ``PIPELINE_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. 
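Two quick sketches for the wrappers above. Because `CTransformers._call` always iterates the model in streaming mode and reports each chunk through `on_llm_new_token`, attaching a streaming callback prints tokens as they are generated; the Hub repo name below is an assumption. `FakeListLLM` simply replays canned responses, which makes chain tests deterministic:

.. code-block:: python

    from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
    from langchain.llms import CTransformers
    from langchain.llms.fake import FakeListLLM

    # Streaming GGML inference; each chunk reaches the callback immediately.
    llm = CTransformers(
        model="marella/gpt-2-ggml",  # assumed Hub repo of GGML weights
        callbacks=[StreamingStdOutCallbackHandler()],
    )
    llm("AI is going to")

    # Deterministic stand-in for tests: responses are returned in order.
    fake = FakeListLLM(responses=["first answer", "second answer"])
    assert fake("any prompt") == "first answer"
    assert fake("another prompt") == "second answer"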
code-block:: python\n from langchain import PipelineAI\n pipeline = PipelineAI(pipeline_key=\"\")\n \"\"\"\n pipeline_key: str = \"\"\n \"\"\"The id or tag of the target pipeline\"\"\"\n pipeline_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any pipeline parameters valid for `create` call not\n explicitly specified.\"\"\"\n pipeline_api_key: Optional[str] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic config.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"pipeline_kwargs\", {})\n for field_name in list(values):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/pipelineai.html"} {"id": "037d5d0c678d-1", "text": "extra = values.get(\"pipeline_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"{field_name} was transfered to pipeline_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"pipeline_kwargs\"] = extra\n return values\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n pipeline_api_key = get_from_dict_or_env(\n values, \"pipeline_api_key\", \"PIPELINE_API_KEY\"\n )\n values[\"pipeline_api_key\"] = pipeline_api_key\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"pipeline_key\": self.pipeline_key},\n **{\"pipeline_kwargs\": self.pipeline_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"pipeline_ai\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call to Pipeline Cloud endpoint.\"\"\"\n try:\n from pipeline import PipelineCloud\n except ImportError:\n raise ValueError(\n \"Could not import pipeline-ai python package. \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/pipelineai.html"} {"id": "037d5d0c678d-2", "text": "raise ValueError(\n \"Could not import pipeline-ai python package. 
\"\n \"Please install it with `pip install pipeline-ai`.\"\n )\n client = PipelineCloud(token=self.pipeline_api_key)\n params = self.pipeline_kwargs or {}\n params = {**params, **kwargs}\n run = client.run_pipeline(self.pipeline_key, [prompt, params])\n try:\n text = run.result_preview[0][0]\n except AttributeError:\n raise AttributeError(\n f\"A pipeline run should have a `result_preview` attribute.\"\n f\"Run was: {run}\"\n )\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the pipeline parameters\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/pipelineai.html"} {"id": "2fea90e0d948-0", "text": "Source code for langchain.llms.replicate\n\"\"\"Wrapper around Replicate API.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class Replicate(LLM):\n \"\"\"Wrapper around Replicate models.\n To use, you should have the ``replicate`` python package installed,\n and the environment variable ``REPLICATE_API_TOKEN`` set with your API token.\n You can find your token here: https://replicate.com/account\n The model param is required, but any other model parameters can also\n be passed in with the format input={model_param: value, ...}\n Example:\n .. code-block:: python\n from langchain.llms import Replicate\n replicate = Replicate(model=\"stability-ai/stable-diffusion: \\\n 27b93a2413e7f36cd83da926f365628\\\n 0b2931564ff050bf9575f1fdf9bcd7478\",\n input={\"image_dimensions\": \"512x512\"})\n \"\"\"\n model: str\n input: Dict[str, Any] = Field(default_factory=dict)\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n replicate_api_token: Optional[str] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic config.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/replicate.html"} {"id": "2fea90e0d948-1", "text": "def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"{field_name} was transfered to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n replicate_api_token = get_from_dict_or_env(\n values, \"REPLICATE_API_TOKEN\", \"REPLICATE_API_TOKEN\"\n )\n values[\"replicate_api_token\"] = replicate_api_token\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"model\": self.model,\n **{\"model_kwargs\": self.model_kwargs},\n }\n @property\n def 
_llm_type(self) -> str:\n \"\"\"Return type of model.\"\"\"\n return \"replicate\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/replicate.html"} {"id": "2fea90e0d948-2", "text": "**kwargs: Any,\n ) -> str:\n \"\"\"Call to replicate endpoint.\"\"\"\n try:\n import replicate as replicate_python\n except ImportError:\n raise ImportError(\n \"Could not import replicate python package. \"\n \"Please install it with `pip install replicate`.\"\n )\n # get the model and version\n model_str, version_str = self.model.split(\":\")\n model = replicate_python.models.get(model_str)\n version = model.versions.get(version_str)\n # sort through the openapi schema to get the name of the first input\n input_properties = sorted(\n version.openapi_schema[\"components\"][\"schemas\"][\"Input\"][\n \"properties\"\n ].items(),\n key=lambda item: item[1].get(\"x-order\", 0),\n )\n first_input_name = input_properties[0][0]\n inputs = {first_input_name: prompt, **self.input}\n iterator = replicate_python.run(self.model, input={**inputs, **kwargs})\n return \"\".join([output for output in iterator])", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/replicate.html"} {"id": "e0cfc6dc3421-0", "text": "Source code for langchain.llms.aleph_alpha\n\"\"\"Wrapper around Aleph Alpha APIs.\"\"\"\nfrom typing import Any, Dict, List, Optional, Sequence\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\n[docs]class AlephAlpha(LLM):\n \"\"\"Wrapper around Aleph Alpha large language models.\n To use, you should have the ``aleph_alpha_client`` python package installed, and the\n environment variable ``ALEPH_ALPHA_API_KEY`` set with your API key, or pass\n it as a named parameter to the constructor.\n Parameters are explained more in depth here:\n https://github.com/Aleph-Alpha/aleph-alpha-client/blob/c14b7dd2b4325c7da0d6a119f6e76385800e097b/aleph_alpha_client/completion.py#L10\n Example:\n .. 
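For the Replicate wrapper, the model string must be of the form `owner/name:version`, since `_call` splits on the colon to resolve the version and then inspects the version's OpenAPI schema to find the prompt input. A sketch with an illustrative model reference (substitute a current version hash):

.. code-block:: python

    import os
    from langchain.llms import Replicate

    os.environ["REPLICATE_API_TOKEN"] = "my-replicate-token"  # placeholder

    llm = Replicate(
        # Illustrative "owner/name:version" reference; the hash will vary.
        model=(
            "replicate/dolly-v2-12b:"
            "ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5"
        ),
        input={"max_length": 200},  # model parameters beyond the prompt
    )
    print(llm("Explain the transformer architecture in one sentence."))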
code-block:: python\n from langchain.llms import AlephAlpha\n aleph_alpha = AlephAlpha(aleph_alpha_api_key=\"my-api-key\")\n \"\"\"\n client: Any #: :meta private:\n model: Optional[str] = \"luminous-base\"\n \"\"\"Model name to use.\"\"\"\n maximum_tokens: int = 64\n \"\"\"The maximum number of tokens to be generated.\"\"\"\n temperature: float = 0.0\n \"\"\"A non-negative float that tunes the degree of randomness in generation.\"\"\"\n top_k: int = 0\n \"\"\"Number of most likely tokens to consider at each step.\"\"\"\n top_p: float = 0.0\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/aleph_alpha.html"} {"id": "e0cfc6dc3421-1", "text": "\"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n presence_penalty: float = 0.0\n \"\"\"Penalizes repeated tokens.\"\"\"\n frequency_penalty: float = 0.0\n \"\"\"Penalizes repeated tokens according to frequency.\"\"\"\n repetition_penalties_include_prompt: Optional[bool] = False\n \"\"\"Flag deciding whether presence penalty or frequency penalty are\n updated from the prompt.\"\"\"\n use_multiplicative_presence_penalty: Optional[bool] = False\n \"\"\"Flag deciding whether presence penalty is applied\n multiplicatively (True) or additively (False).\"\"\"\n penalty_bias: Optional[str] = None\n \"\"\"Penalty bias for the completion.\"\"\"\n penalty_exceptions: Optional[List[str]] = None\n \"\"\"List of strings that may be generated without penalty,\n regardless of other penalty settings\"\"\"\n penalty_exceptions_include_stop_sequences: Optional[bool] = None\n \"\"\"Should stop_sequences be included in penalty_exceptions.\"\"\"\n best_of: Optional[int] = None\n \"\"\"returns the one with the \"best of\" results\n (highest log probability per token)\n \"\"\"\n n: int = 1\n \"\"\"How many completions to generate for each prompt.\"\"\"\n logit_bias: Optional[Dict[int, float]] = None\n \"\"\"The logit bias allows to influence the likelihood of generating tokens.\"\"\"\n log_probs: Optional[int] = None\n \"\"\"Number of top log probabilities to be returned for each generated token.\"\"\"\n tokens: Optional[bool] = False\n \"\"\"return tokens of completion.\"\"\"\n disable_optimizations: Optional[bool] = False\n minimum_tokens: Optional[int] = 0\n \"\"\"Generate at least this number of tokens.\"\"\"\n echo: bool = False\n \"\"\"Echo the prompt in the completion.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/aleph_alpha.html"} {"id": "e0cfc6dc3421-2", "text": "echo: bool = False\n \"\"\"Echo the prompt in the completion.\"\"\"\n use_multiplicative_frequency_penalty: bool = False\n sequence_penalty: float = 0.0\n sequence_penalty_min_length: int = 2\n use_multiplicative_sequence_penalty: bool = False\n completion_bias_inclusion: Optional[Sequence[str]] = None\n completion_bias_inclusion_first_token_only: bool = False\n completion_bias_exclusion: Optional[Sequence[str]] = None\n completion_bias_exclusion_first_token_only: bool = False\n \"\"\"Only consider the first token for the completion_bias_exclusion.\"\"\"\n contextual_control_threshold: Optional[float] = None\n \"\"\"If set to None, attention control parameters only apply to those tokens that have\n explicitly been set in the request.\n If set to a non-None value, control parameters are also applied to similar tokens.\n \"\"\"\n control_log_additive: Optional[bool] = True\n \"\"\"True: apply control by adding the log(control_factor) to 
attention scores.\n False: (attention_scores - - attention_scores.min(-1)) * control_factor\n \"\"\"\n repetition_penalties_include_completion: bool = True\n \"\"\"Flag deciding whether presence penalty or frequency penalty\n are updated from the completion.\"\"\"\n raw_completion: bool = False\n \"\"\"Force the raw completion of the model to be returned.\"\"\"\n aleph_alpha_api_key: Optional[str] = None\n \"\"\"API key for Aleph Alpha API.\"\"\"\n stop_sequences: Optional[List[str]] = None\n \"\"\"Stop sequences to use.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/aleph_alpha.html"} {"id": "e0cfc6dc3421-3", "text": "def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n aleph_alpha_api_key = get_from_dict_or_env(\n values, \"aleph_alpha_api_key\", \"ALEPH_ALPHA_API_KEY\"\n )\n try:\n import aleph_alpha_client\n values[\"client\"] = aleph_alpha_client.Client(token=aleph_alpha_api_key)\n except ImportError:\n raise ImportError(\n \"Could not import aleph_alpha_client python package. \"\n \"Please install it with `pip install aleph_alpha_client`.\"\n )\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling the Aleph Alpha API.\"\"\"\n return {\n \"maximum_tokens\": self.maximum_tokens,\n \"temperature\": self.temperature,\n \"top_k\": self.top_k,\n \"top_p\": self.top_p,\n \"presence_penalty\": self.presence_penalty,\n \"frequency_penalty\": self.frequency_penalty,\n \"n\": self.n,\n \"repetition_penalties_include_prompt\": self.repetition_penalties_include_prompt, # noqa: E501\n \"use_multiplicative_presence_penalty\": self.use_multiplicative_presence_penalty, # noqa: E501\n \"penalty_bias\": self.penalty_bias,\n \"penalty_exceptions\": self.penalty_exceptions,\n \"penalty_exceptions_include_stop_sequences\": self.penalty_exceptions_include_stop_sequences, # noqa: E501\n \"best_of\": self.best_of,\n \"logit_bias\": self.logit_bias,\n \"log_probs\": self.log_probs,\n \"tokens\": self.tokens,\n \"disable_optimizations\": self.disable_optimizations,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/aleph_alpha.html"} {"id": "e0cfc6dc3421-4", "text": "\"disable_optimizations\": self.disable_optimizations,\n \"minimum_tokens\": self.minimum_tokens,\n \"echo\": self.echo,\n \"use_multiplicative_frequency_penalty\": self.use_multiplicative_frequency_penalty, # noqa: E501\n \"sequence_penalty\": self.sequence_penalty,\n \"sequence_penalty_min_length\": self.sequence_penalty_min_length,\n \"use_multiplicative_sequence_penalty\": self.use_multiplicative_sequence_penalty, # noqa: E501\n \"completion_bias_inclusion\": self.completion_bias_inclusion,\n \"completion_bias_inclusion_first_token_only\": self.completion_bias_inclusion_first_token_only, # noqa: E501\n \"completion_bias_exclusion\": self.completion_bias_exclusion,\n \"completion_bias_exclusion_first_token_only\": self.completion_bias_exclusion_first_token_only, # noqa: E501\n \"contextual_control_threshold\": self.contextual_control_threshold,\n \"control_log_additive\": self.control_log_additive,\n \"repetition_penalties_include_completion\": self.repetition_penalties_include_completion, # noqa: E501\n \"raw_completion\": self.raw_completion,\n }\n 
@property\n def _identifying_params(self) -> Dict[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model\": self.model}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"aleph_alpha\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to Aleph Alpha's completion endpoint.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/aleph_alpha.html"} {"id": "e0cfc6dc3421-5", "text": "\"\"\"Call out to Aleph Alpha's completion endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = aleph_alpha(\"Tell me a joke.\")\n \"\"\"\n from aleph_alpha_client import CompletionRequest, Prompt\n params = self._default_params\n if self.stop_sequences is not None and stop is not None:\n raise ValueError(\n \"stop sequences found in both the input and default params.\"\n )\n elif self.stop_sequences is not None:\n params[\"stop_sequences\"] = self.stop_sequences\n else:\n params[\"stop_sequences\"] = stop\n params = {**params, **kwargs}\n request = CompletionRequest(prompt=Prompt.from_text(prompt), **params)\n response = self.client.complete(model=self.model, request=request)\n text = response.completions[0].completion\n # If stop tokens are provided, Aleph Alpha's endpoint returns them.\n # In order to make this consistent with other endpoints, we strip them.\n if stop is not None or self.stop_sequences is not None:\n text = enforce_stop_tokens(text, params[\"stop_sequences\"])\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/aleph_alpha.html"} {"id": "1a4b601fa940-0", "text": "Source code for langchain.llms.huggingface_text_gen_inference\n\"\"\"Wrapper around Huggingface text generation inference API.\"\"\"\nfrom functools import partial\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.llms.base import LLM\n[docs]class HuggingFaceTextGenInference(LLM):\n \"\"\"\n HuggingFace text generation inference API.\n This class is a wrapper around the HuggingFace text generation inference API.\n It is used to generate text from a given prompt.\n Attributes:\n - max_new_tokens: The maximum number of tokens to generate.\n - top_k: The number of top-k tokens to consider when generating text.\n - top_p: The cumulative probability threshold for generating text.\n - typical_p: The typical probability threshold for generating text.\n - temperature: The temperature to use when generating text.\n - repetition_penalty: The repetition penalty to use when generating text.\n - stop_sequences: A list of stop sequences to use when generating text.\n - seed: The seed to use when generating text.\n - inference_server_url: The URL of the inference server to use.\n - timeout: The timeout value in seconds to use while connecting to inference server.\n - server_kwargs: The keyword arguments to pass to the inference server.\n - client: The client object used to communicate with the inference server.\n - async_client: The async client object used to communicate with the server.\n Methods:\n - _call: Generates text based on a given 
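Tying the Aleph Alpha pieces together: stop sequences may come from the constructor or from the call, but `_call` raises a `ValueError` if both are supplied. A minimal sketch:

.. code-block:: python

    from langchain.llms import AlephAlpha

    llm = AlephAlpha(
        model="luminous-extended",
        maximum_tokens=32,
        temperature=0.0,
        stop_sequences=["Q:"],
        aleph_alpha_api_key="my-api-key",  # or set ALEPH_ALPHA_API_KEY
    )

    # Passing stop=[...] here as well would raise ValueError, because stop
    # sequences were already fixed on the instance above.
    print(llm("Q: What is AI?\nA:"))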
prompt and stop sequences.\n - _acall: Async generates text based on a given prompt and stop sequences.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_text_gen_inference.html"} {"id": "1a4b601fa940-1", "text": "- _acall: Async generates text based on a given prompt and stop sequences.\n - _llm_type: Returns the type of LLM.\n \"\"\"\n \"\"\"\n Example:\n .. code-block:: python\n # Basic Example (no streaming)\n llm = HuggingFaceTextGenInference(\n inference_server_url = \"http://localhost:8010/\",\n max_new_tokens = 512,\n top_k = 10,\n top_p = 0.95,\n typical_p = 0.95,\n temperature = 0.01,\n repetition_penalty = 1.03,\n )\n print(llm(\"What is Deep Learning?\"))\n \n # Streaming response example\n from langchain.callbacks import streaming_stdout\n \n callbacks = [streaming_stdout.StreamingStdOutCallbackHandler()]\n llm = HuggingFaceTextGenInference(\n inference_server_url = \"http://localhost:8010/\",\n max_new_tokens = 512,\n top_k = 10,\n top_p = 0.95,\n typical_p = 0.95,\n temperature = 0.01,\n repetition_penalty = 1.03,\n callbacks = callbacks,\n stream = True\n )\n print(llm(\"What is Deep Learning?\"))\n \n \"\"\"\n max_new_tokens: int = 512\n top_k: Optional[int] = None\n top_p: Optional[float] = 0.95\n typical_p: Optional[float] = 0.95\n temperature: float = 0.8\n repetition_penalty: Optional[float] = None\n stop_sequences: List[str] = Field(default_factory=list)\n seed: Optional[int] = None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_text_gen_inference.html"} {"id": "1a4b601fa940-2", "text": "seed: Optional[int] = None\n inference_server_url: str = \"\"\n timeout: int = 120\n server_kwargs: Dict[str, Any] = Field(default_factory=dict)\n stream: bool = False\n client: Any\n async_client: Any\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that python package exists in environment.\"\"\"\n try:\n import text_generation\n values[\"client\"] = text_generation.Client(\n values[\"inference_server_url\"],\n timeout=values[\"timeout\"],\n **values[\"server_kwargs\"],\n )\n values[\"async_client\"] = text_generation.AsyncClient(\n values[\"inference_server_url\"],\n timeout=values[\"timeout\"],\n **values[\"server_kwargs\"],\n )\n except ImportError:\n raise ImportError(\n \"Could not import text_generation python package. 
\"\n \"Please install it with `pip install text_generation`.\"\n )\n return values\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"huggingface_textgen_inference\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n if stop is None:\n stop = self.stop_sequences\n else:\n stop += self.stop_sequences\n if not self.stream:\n res = self.client.generate(\n prompt,\n stop_sequences=stop,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_text_gen_inference.html"} {"id": "1a4b601fa940-3", "text": "res = self.client.generate(\n prompt,\n stop_sequences=stop,\n max_new_tokens=self.max_new_tokens,\n top_k=self.top_k,\n top_p=self.top_p,\n typical_p=self.typical_p,\n temperature=self.temperature,\n repetition_penalty=self.repetition_penalty,\n seed=self.seed,\n **kwargs,\n )\n # remove stop sequences from the end of the generated text\n for stop_seq in stop:\n if stop_seq in res.generated_text:\n res.generated_text = res.generated_text[\n : res.generated_text.index(stop_seq)\n ]\n text = res.generated_text\n else:\n text_callback = None\n if run_manager:\n text_callback = partial(\n run_manager.on_llm_new_token, verbose=self.verbose\n )\n params = {\n \"stop_sequences\": stop,\n \"max_new_tokens\": self.max_new_tokens,\n \"top_k\": self.top_k,\n \"top_p\": self.top_p,\n \"typical_p\": self.typical_p,\n \"temperature\": self.temperature,\n \"repetition_penalty\": self.repetition_penalty,\n \"seed\": self.seed,\n }\n text = \"\"\n for res in self.client.generate_stream(prompt, **params):\n token = res.token\n is_stop = False\n for stop_seq in stop:\n if stop_seq in token.text:\n is_stop = True\n break\n if is_stop:\n break\n if not token.special:\n if text_callback:\n text_callback(token.text)\n text += token.text\n return text\n async def _acall(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_text_gen_inference.html"} {"id": "1a4b601fa940-4", "text": "prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n if stop is None:\n stop = self.stop_sequences\n else:\n stop += self.stop_sequences\n if not self.stream:\n res = await self.async_client.generate(\n prompt,\n stop_sequences=stop,\n max_new_tokens=self.max_new_tokens,\n top_k=self.top_k,\n top_p=self.top_p,\n typical_p=self.typical_p,\n temperature=self.temperature,\n repetition_penalty=self.repetition_penalty,\n seed=self.seed,\n **kwargs,\n )\n # remove stop sequences from the end of the generated text\n for stop_seq in stop:\n if stop_seq in res.generated_text:\n res.generated_text = res.generated_text[\n : res.generated_text.index(stop_seq)\n ]\n text: str = res.generated_text\n else:\n text_callback = None\n if run_manager:\n text_callback = partial(\n run_manager.on_llm_new_token, verbose=self.verbose\n )\n params = {\n **{\n \"stop_sequences\": stop,\n \"max_new_tokens\": self.max_new_tokens,\n \"top_k\": self.top_k,\n \"top_p\": self.top_p,\n \"typical_p\": self.typical_p,\n \"temperature\": self.temperature,\n \"repetition_penalty\": self.repetition_penalty,\n \"seed\": self.seed,\n },\n **kwargs,\n }\n text = \"\"\n async for res in self.async_client.generate_stream(prompt, **params):\n token = res.token\n is_stop = False", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_text_gen_inference.html"} {"id": "1a4b601fa940-5", "text": "token = res.token\n is_stop = False\n for stop_seq in stop:\n if stop_seq in token.text:\n is_stop = True\n break\n if is_stop:\n break\n if not token.special:\n if text_callback:\n await text_callback(token.text)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_text_gen_inference.html"} {"id": "7b93f51e8493-0", "text": "Source code for langchain.llms.writer\n\"\"\"Wrapper around Writer APIs.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\n[docs]class Writer(LLM):\n \"\"\"Wrapper around Writer large language models.\n To use, you should have the environment variable ``WRITER_API_KEY`` and\n ``WRITER_ORG_ID`` set with your API key and organization ID respectively.\n Example:\n .. code-block:: python\n from langchain import Writer\n writer = Writer(model_id=\"palmyra-base\")\n \"\"\"\n writer_org_id: Optional[str] = None\n \"\"\"Writer organization ID.\"\"\"\n model_id: str = \"palmyra-instruct\"\n \"\"\"Model name to use.\"\"\"\n min_tokens: Optional[int] = None\n \"\"\"Minimum number of tokens to generate.\"\"\"\n max_tokens: Optional[int] = None\n \"\"\"Maximum number of tokens to generate.\"\"\"\n temperature: Optional[float] = None\n \"\"\"What sampling temperature to use.\"\"\"\n top_p: Optional[float] = None\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n stop: Optional[List[str]] = None\n \"\"\"Sequences when completion generation will stop.\"\"\"\n presence_penalty: Optional[float] = None\n \"\"\"Penalizes repeated tokens regardless of frequency.\"\"\"\n repetition_penalty: Optional[float] = None\n \"\"\"Penalizes repeated tokens according to frequency.\"\"\"\n best_of: Optional[int] = None\n \"\"\"Generates this many completions server-side and returns the \"best\".\"\"\"\n logprobs: bool = False", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/writer.html"} {"id": "7b93f51e8493-1", "text": "logprobs: bool = False\n \"\"\"Whether to return log probabilities.\"\"\"\n n: Optional[int] = None\n \"\"\"How many completions to generate.\"\"\"\n writer_api_key: Optional[str] = None\n \"\"\"Writer API key.\"\"\"\n base_url: Optional[str] = None\n \"\"\"Base url to use, if None decides based on model name.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and organization id exist in environment.\"\"\"\n writer_api_key = get_from_dict_or_env(\n values, \"writer_api_key\", \"WRITER_API_KEY\"\n )\n values[\"writer_api_key\"] = writer_api_key\n writer_org_id = get_from_dict_or_env(values, \"writer_org_id\", \"WRITER_ORG_ID\")\n values[\"writer_org_id\"] = writer_org_id\n return values\n @property\n def _default_params(self) -> Mapping[str, Any]:\n \"\"\"Get the default parameters for calling Writer API.\"\"\"\n return {\n \"minTokens\": self.min_tokens,\n \"maxTokens\": self.max_tokens,\n \"temperature\": self.temperature,\n \"topP\": self.top_p,\n \"stop\": self.stop,\n \"presencePenalty\": 
self.presence_penalty,\n \"repetitionPenalty\": self.repetition_penalty,\n \"bestOf\": self.best_of,\n \"logprobs\": self.logprobs,\n \"n\": self.n,\n }\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/writer.html"} {"id": "7b93f51e8493-2", "text": "\"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"model_id\": self.model_id, \"writer_org_id\": self.writer_org_id},\n **self._default_params,\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"writer\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to Writer's completions endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = Writer(\"Tell me a joke.\")\n \"\"\"\n if self.base_url is not None:\n base_url = self.base_url\n else:\n base_url = (\n \"https://enterprise-api.writer.com/llm\"\n f\"/organization/{self.writer_org_id}\"\n f\"/model/{self.model_id}/completions\"\n )\n params = {**self._default_params, **kwargs}\n response = requests.post(\n url=base_url,\n headers={\n \"Authorization\": f\"{self.writer_api_key}\",\n \"Content-Type\": \"application/json\",\n \"Accept\": \"application/json\",\n },\n json={\"prompt\": prompt, **params},\n )\n text = response.text\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the model parameters", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/writer.html"} {"id": "7b93f51e8493-3", "text": "# are not enforced by the model parameters\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/writer.html"} {"id": "7bdd494030a5-0", "text": "Source code for langchain.llms.gooseai\n\"\"\"Wrapper around GooseAI API.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class GooseAI(LLM):\n \"\"\"Wrapper around OpenAI large language models.\n To use, you should have the ``openai`` python package installed, and the\n environment variable ``GOOSEAI_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the openai.create call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. 
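For the Writer wrapper, credentials come from `WRITER_API_KEY` and `WRITER_ORG_ID` (or the corresponding constructor arguments), and the request URL is assembled from the organization and model IDs unless `base_url` overrides it. A minimal sketch with placeholder credentials:

.. code-block:: python

    from langchain import Writer

    llm = Writer(
        model_id="palmyra-instruct",
        max_tokens=100,
        temperature=0.7,
        writer_api_key="my-writer-key",  # or set WRITER_API_KEY
        writer_org_id="my-org-id",       # or set WRITER_ORG_ID
    )
    print(llm("Write a tagline for an ice cream shop.", stop=["\n\n"]))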
code-block:: python\n from langchain.llms import GooseAI\n gooseai = GooseAI(model_name=\"gpt-neo-20b\")\n \"\"\"\n client: Any\n model_name: str = \"gpt-neo-20b\"\n \"\"\"Model name to use\"\"\"\n temperature: float = 0.7\n \"\"\"What sampling temperature to use\"\"\"\n max_tokens: int = 256\n \"\"\"The maximum number of tokens to generate in the completion.\n -1 returns as many tokens as possible given the prompt and\n the models maximal context size.\"\"\"\n top_p: float = 1\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n min_tokens: int = 1\n \"\"\"The minimum number of tokens to generate in the completion.\"\"\"\n frequency_penalty: float = 0\n \"\"\"Penalizes repeated tokens according to frequency.\"\"\"\n presence_penalty: float = 0\n \"\"\"Penalizes repeated tokens.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/gooseai.html"} {"id": "7bdd494030a5-1", "text": "presence_penalty: float = 0\n \"\"\"Penalizes repeated tokens.\"\"\"\n n: int = 1\n \"\"\"How many completions to generate for each prompt.\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not explicitly specified.\"\"\"\n logit_bias: Optional[Dict[str, float]] = Field(default_factory=dict)\n \"\"\"Adjust the probability of specific tokens being generated.\"\"\"\n gooseai_api_key: Optional[str] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic config.\"\"\"\n extra = Extra.ignore\n[docs] @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"WARNING! {field_name} is not default parameter.\n {field_name} was transfered to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n gooseai_api_key = get_from_dict_or_env(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/gooseai.html"} {"id": "7bdd494030a5-2", "text": "gooseai_api_key = get_from_dict_or_env(\n values, \"gooseai_api_key\", \"GOOSEAI_API_KEY\"\n )\n try:\n import openai\n openai.api_key = gooseai_api_key\n openai.api_base = \"https://api.goose.ai/v1\"\n values[\"client\"] = openai.Completion\n except ImportError:\n raise ImportError(\n \"Could not import openai python package. 
\"\n \"Please install it with `pip install openai`.\"\n )\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling GooseAI API.\"\"\"\n normal_params = {\n \"temperature\": self.temperature,\n \"max_tokens\": self.max_tokens,\n \"top_p\": self.top_p,\n \"min_tokens\": self.min_tokens,\n \"frequency_penalty\": self.frequency_penalty,\n \"presence_penalty\": self.presence_penalty,\n \"n\": self.n,\n \"logit_bias\": self.logit_bias,\n }\n return {**normal_params, **self.model_kwargs}\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model_name\": self.model_name}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"gooseai\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/gooseai.html"} {"id": "7bdd494030a5-3", "text": "**kwargs: Any,\n ) -> str:\n \"\"\"Call the GooseAI API.\"\"\"\n params = self._default_params\n if stop is not None:\n if \"stop\" in params:\n raise ValueError(\"`stop` found in both the input and default params.\")\n params[\"stop\"] = stop\n params = {**params, **kwargs}\n response = self.client.create(engine=self.model_name, prompt=prompt, **params)\n text = response.choices[0].text\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/gooseai.html"} {"id": "3d7425c47baf-0", "text": "Source code for langchain.llms.beam\n\"\"\"Wrapper around Beam API.\"\"\"\nimport base64\nimport json\nimport logging\nimport subprocess\nimport textwrap\nimport time\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\nDEFAULT_NUM_TRIES = 10\nDEFAULT_SLEEP_TIME = 4\n[docs]class Beam(LLM):\n \"\"\"Wrapper around Beam API for gpt2 large language model.\n To use, you should have the ``beam-sdk`` python package installed,\n and the environment variable ``BEAM_CLIENT_ID`` set with your client id\n and ``BEAM_CLIENT_SECRET`` set with your client secret. Information on how\n to get these is available here: https://docs.beam.cloud/account/api-keys.\n The wrapper can then be called as follows, where the name, cpu, memory, gpu,\n python version, and python packages can be updated accordingly. Once deployed,\n the instance can be called.\n Example:\n .. 
code-block:: python\n llm = Beam(model_name=\"gpt2\",\n name=\"langchain-gpt2\",\n cpu=8,\n memory=\"32Gi\",\n gpu=\"A10G\",\n python_version=\"python3.8\",\n python_packages=[\n \"diffusers[torch]>=0.10\",\n \"transformers\",\n \"torch\",\n \"pillow\",\n \"accelerate\",\n \"safetensors\",\n \"xformers\",],\n max_length=50)\n llm._deploy()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/beam.html"} {"id": "3d7425c47baf-1", "text": "max_length=50)\n llm._deploy()\n call_result = llm._call(input)\n \"\"\"\n model_name: str = \"\"\n name: str = \"\"\n cpu: str = \"\"\n memory: str = \"\"\n gpu: str = \"\"\n python_version: str = \"\"\n python_packages: List[str] = []\n max_length: str = \"\"\n url: str = \"\"\n \"\"\"model endpoint to use\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not\n explicitly specified.\"\"\"\n beam_client_id: str = \"\"\n beam_client_secret: str = \"\"\n app_id: Optional[str] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic config.\"\"\"\n extra = Extra.forbid\n[docs] @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"{field_name} was transfered to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/beam.html"} {"id": "3d7425c47baf-2", "text": "def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n beam_client_id = get_from_dict_or_env(\n values, \"beam_client_id\", \"BEAM_CLIENT_ID\"\n )\n beam_client_secret = get_from_dict_or_env(\n values, \"beam_client_secret\", \"BEAM_CLIENT_SECRET\"\n )\n values[\"beam_client_id\"] = beam_client_id\n values[\"beam_client_secret\"] = beam_client_secret\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"model_name\": self.model_name,\n \"name\": self.name,\n \"cpu\": self.cpu,\n \"memory\": self.memory,\n \"gpu\": self.gpu,\n \"python_version\": self.python_version,\n \"python_packages\": self.python_packages,\n \"max_length\": self.max_length,\n \"model_kwargs\": self.model_kwargs,\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"beam\"\n[docs] def app_creation(self) -> None:\n \"\"\"Creates a Python file which will contain your Beam app definition.\"\"\"\n script = textwrap.dedent(\n \"\"\"\\\n import beam\n # The environment your code will run on\n app = beam.App(\n name=\"{name}\",\n cpu={cpu},\n memory=\"{memory}\",\n gpu=\"{gpu}\",\n python_version=\"{python_version}\",\n python_packages={python_packages},\n )\n app.Trigger.RestAPI(\n inputs={{\"prompt\": beam.Types.String(), \"max_length\": beam.Types.String()}},", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/llms/beam.html"} {"id": "3d7425c47baf-3", "text": "inputs={{\"prompt\": beam.Types.String(), \"max_length\": beam.Types.String()}},\n outputs={{\"text\": beam.Types.String()}},\n handler=\"run.py:beam_langchain\",\n )\n \"\"\"\n )\n script_name = \"app.py\"\n with open(script_name, \"w\") as file:\n file.write(\n script.format(\n name=self.name,\n cpu=self.cpu,\n memory=self.memory,\n gpu=self.gpu,\n python_version=self.python_version,\n python_packages=self.python_packages,\n )\n )\n[docs] def run_creation(self) -> None:\n \"\"\"Creates a Python file which will be deployed on beam.\"\"\"\n script = textwrap.dedent(\n \"\"\"\n import os\n import transformers\n from transformers import GPT2LMHeadModel, GPT2Tokenizer\n model_name = \"{model_name}\"\n def beam_langchain(**inputs):\n prompt = inputs[\"prompt\"]\n length = inputs[\"max_length\"]\n tokenizer = GPT2Tokenizer.from_pretrained(model_name)\n model = GPT2LMHeadModel.from_pretrained(model_name)\n encodedPrompt = tokenizer.encode(prompt, return_tensors='pt')\n outputs = model.generate(encodedPrompt, max_length=int(length),\n do_sample=True, pad_token_id=tokenizer.eos_token_id)\n output = tokenizer.decode(outputs[0], skip_special_tokens=True)\n print(output)\n return {{\"text\": output}}\n \"\"\"\n )\n script_name = \"run.py\"\n with open(script_name, \"w\") as file:\n file.write(script.format(model_name=self.model_name))\n def _deploy(self) -> str:\n \"\"\"Call to Beam.\"\"\"\n try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/beam.html"} {"id": "3d7425c47baf-4", "text": "\"\"\"Call to Beam.\"\"\"\n try:\n import beam # type: ignore\n if beam.__path__ == \"\":\n raise ImportError\n except ImportError:\n raise ImportError(\n \"Could not import beam python package. \"\n \"Please install it with `curl \"\n \"https://raw.githubusercontent.com/slai-labs\"\n \"/get-beam/main/get-beam.sh -sSfL | sh`.\"\n )\n self.app_creation()\n self.run_creation()\n process = subprocess.run(\n \"beam deploy app.py\", shell=True, capture_output=True, text=True\n )\n if process.returncode == 0:\n output = process.stdout\n logger.info(output)\n lines = output.split(\"\\n\")\n for line in lines:\n if line.startswith(\" i Send requests to: https://apps.beam.cloud/\"):\n self.app_id = line.split(\"/\")[-1]\n self.url = line.split(\":\")[1].strip()\n return self.app_id\n raise ValueError(\n f\"\"\"Failed to retrieve the appID from the deployment output.\n Deployment output: {output}\"\"\"\n )\n else:\n raise ValueError(f\"Deployment failed. 
Error: {process.stderr}\")\n @property\n def authorization(self) -> str:\n if self.beam_client_id:\n credential_str = self.beam_client_id + \":\" + self.beam_client_secret\n else:\n credential_str = self.beam_client_secret\n return base64.b64encode(credential_str.encode()).decode()\n def _call(\n self,\n prompt: str,\n stop: Optional[list] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/beam.html"} {"id": "3d7425c47baf-5", "text": "**kwargs: Any,\n ) -> str:\n \"\"\"Call to Beam.\"\"\"\n url = \"https://apps.beam.cloud/\" + self.app_id if self.app_id else self.url\n payload = {\"prompt\": prompt, \"max_length\": self.max_length}\n payload.update(kwargs)\n headers = {\n \"Accept\": \"*/*\",\n \"Accept-Encoding\": \"gzip, deflate\",\n \"Authorization\": \"Basic \" + self.authorization,\n \"Connection\": \"keep-alive\",\n \"Content-Type\": \"application/json\",\n }\n for _ in range(DEFAULT_NUM_TRIES):\n request = requests.post(url, headers=headers, data=json.dumps(payload))\n if request.status_code == 200:\n return request.json()[\"text\"]\n time.sleep(DEFAULT_SLEEP_TIME)\n logger.warning(\"Unable to successfully call model.\")\n return \"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/beam.html"} {"id": "3a026e68e6f6-0", "text": "Source code for langchain.llms.human\nfrom typing import Any, Callable, List, Mapping, Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\ndef _display_prompt(prompt: str) -> None:\n \"\"\"Displays the given prompt to the user.\"\"\"\n print(f\"\\n{prompt}\")\ndef _collect_user_input(\n separator: Optional[str] = None, stop: Optional[List[str]] = None\n) -> str:\n \"\"\"Collects and returns user input as a single string.\"\"\"\n separator = separator or \"\\n\"\n lines = []\n while True:\n line = input()\n if not line:\n break\n lines.append(line)\n if stop and any(seq in line for seq in stop):\n break\n # Combine all lines into a single string\n multi_line_input = separator.join(lines)\n return multi_line_input\n[docs]class HumanInputLLM(LLM):\n \"\"\"\n A LLM wrapper which returns user input as the response.\n \"\"\"\n input_func: Callable = Field(default_factory=lambda: _collect_user_input)\n prompt_func: Callable[[str], None] = Field(default_factory=lambda: _display_prompt)\n separator: str = \"\\n\"\n input_kwargs: Mapping[str, Any] = {}\n prompt_kwargs: Mapping[str, Any] = {}\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"\n Returns an empty dictionary as there are no identifying parameters.\n \"\"\"\n return {}\n @property\n def _llm_type(self) -> str:\n \"\"\"Returns the type of LLM.\"\"\"\n return \"human-input\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/human.html"} {"id": "3a026e68e6f6-1", "text": "\"\"\"Returns the type of LLM.\"\"\"\n return \"human-input\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"\n Displays the prompt to the user and returns their input as a response.\n Args:\n prompt (str): The prompt to be displayed to the user.\n stop (Optional[List[str]]): A list of stop strings.\n run_manager (Optional[CallbackManagerForLLMRun]): Currently not used.\n Returns:\n str: The 
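End-to-end, the Beam wrapper first writes `app.py` and `run.py`, shells out to `beam deploy`, and only then can `_call` POST prompts to the deployed REST endpoint. A sketch mirroring the class docstring above (resource values are illustrative):

.. code-block:: python

    from langchain.llms import Beam

    llm = Beam(
        model_name="gpt2",
        name="langchain-gpt2",
        cpu=8,                      # coerced to str by pydantic
        memory="32Gi",
        gpu="A10G",
        python_version="python3.8",
        python_packages=["transformers", "torch"],
        max_length="50",
    )

    # Writes app.py / run.py, runs `beam deploy app.py`, records the app_id.
    llm._deploy()

    # Retries up to DEFAULT_NUM_TRIES times, sleeping DEFAULT_SLEEP_TIME
    # seconds between attempts, as shown in _call above.
    print(llm._call("Running machine learning on a remote server"))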
user's input as a response.\n \"\"\"\n self.prompt_func(prompt, **self.prompt_kwargs)\n user_input = self.input_func(\n separator=self.separator, stop=stop, **self.input_kwargs\n )\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the human themselves\n user_input = enforce_stop_tokens(user_input, stop)\n return user_input", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/human.html"} {"id": "66004b84b19f-0", "text": "Source code for langchain.llms.openllm\n\"\"\"Wrapper around OpenLLM APIs.\"\"\"\nfrom __future__ import annotations\nimport copy\nimport json\nimport logging\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Dict,\n List,\n Literal,\n Optional,\n TypedDict,\n Union,\n overload,\n)\nfrom pydantic import PrivateAttr\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.llms.base import LLM\nif TYPE_CHECKING:\n import openllm\nServerType = Literal[\"http\", \"grpc\"]\n[docs]class IdentifyingParams(TypedDict):\n \"\"\"Parameters for identifying a model as a typed dict.\"\"\"\n model_name: str\n model_id: Optional[str]\n server_url: Optional[str]\n server_type: Optional[ServerType]\n embedded: bool\n llm_kwargs: Dict[str, Any]\nlogger = logging.getLogger(__name__)\n[docs]class OpenLLM(LLM):\n \"\"\"Wrapper for accessing OpenLLM, supporting both in-process model\n instance and remote OpenLLM servers.\n To use, you should have the openllm library installed:\n .. code-block:: bash\n pip install openllm\n Learn more at: https://github.com/bentoml/openllm\n Example running an LLM model locally managed by OpenLLM:\n .. code-block:: python\n from langchain.llms import OpenLLM\n llm = OpenLLM(\n model_name='flan-t5',\n model_id='google/flan-t5-large',\n )\n llm(\"What is the difference between a duck and a goose?\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openllm.html"} {"id": "66004b84b19f-1", "text": ")\n llm(\"What is the difference between a duck and a goose?\")\n For all available supported models, you can run 'openllm models'.\n If you have a OpenLLM server running, you can also use it remotely:\n .. code-block:: python\n from langchain.llms import OpenLLM\n llm = OpenLLM(server_url='http://localhost:3000')\n llm(\"What is the difference between a duck and a goose?\")\n \"\"\"\n model_name: Optional[str] = None\n \"\"\"Model name to use. See 'openllm models' for all available models.\"\"\"\n model_id: Optional[str] = None\n \"\"\"Model Id to use. If not provided, will use the default model for the model name.\n See 'openllm models' for all available model variants.\"\"\"\n server_url: Optional[str] = None\n \"\"\"Optional server URL that currently runs a LLMServer with 'openllm start'.\"\"\"\n server_type: ServerType = \"http\"\n \"\"\"Optional server type. Either 'http' or 'grpc'.\"\"\"\n embedded: bool = True\n \"\"\"Initialize this LLM instance in current process by default. 
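`HumanInputLLM` is handy for debugging agents by playing the model yourself: the prompt is printed, multi-line input is read until an empty line (or a stop sequence), and stop tokens are enforced on the result. A sketch with a purely illustrative custom prompt printer:

.. code-block:: python

    from langchain.llms.human import HumanInputLLM

    llm = HumanInputLLM(
        # Illustrative replacement for the default _display_prompt.
        prompt_func=lambda prompt: print(f"\n=== PROMPT ===\n{prompt}\n=== END ==="),
    )
    answer = llm("What should the 'model' reply?")  # blocks on stdin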
Should\n only be set to False when used in conjunction with BentoML Service.\"\"\"\n llm_kwargs: Dict[str, Any]\n \"\"\"Keyword arguments to be passed to openllm.LLM\"\"\"\n _runner: Optional[openllm.LLMRunner] = PrivateAttr(default=None)\n _client: Union[\n openllm.client.HTTPClient, openllm.client.GrpcClient, None\n ] = PrivateAttr(default=None)\n[docs] class Config:\n extra = \"forbid\"\n @overload\n def __init__(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openllm.html"} {"id": "66004b84b19f-2", "text": "@overload\n def __init__(\n self,\n model_name: Optional[str] = ...,\n *,\n model_id: Optional[str] = ...,\n embedded: Literal[True, False] = ...,\n **llm_kwargs: Any,\n ) -> None:\n ...\n @overload\n def __init__(\n self,\n *,\n server_url: str = ...,\n server_type: Literal[\"grpc\", \"http\"] = ...,\n **llm_kwargs: Any,\n ) -> None:\n ...\n def __init__(\n self,\n model_name: Optional[str] = None,\n *,\n model_id: Optional[str] = None,\n server_url: Optional[str] = None,\n server_type: Literal[\"grpc\", \"http\"] = \"http\",\n embedded: bool = True,\n **llm_kwargs: Any,\n ):\n try:\n import openllm\n except ImportError as e:\n raise ImportError(\n \"Could not import openllm. Make sure to install it with \"\n \"'pip install openllm'.\"\n ) from e\n llm_kwargs = llm_kwargs or {}\n if server_url is not None:\n logger.debug(\"'server_url' is provided, returning an openllm.Client\")\n assert (\n model_id is None and model_name is None\n ), \"'server_url' and {'model_id', 'model_name'} are mutually exclusive\"\n client_cls = (\n openllm.client.HTTPClient\n if server_type == \"http\"\n else openllm.client.GrpcClient\n )\n client = client_cls(server_url)\n super().__init__(\n **{\n \"server_url\": server_url,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openllm.html"} {"id": "66004b84b19f-3", "text": "super().__init__(\n **{\n \"server_url\": server_url,\n \"server_type\": server_type,\n \"llm_kwargs\": llm_kwargs,\n }\n )\n self._runner = None # type: ignore\n self._client = client\n else:\n assert model_name is not None, \"Must provide 'model_name' or 'server_url'\"\n # since the LLMs are relatively huge, we don't actually want to convert the\n # Runner with embedded when running the server. Instead, we will only set\n # the init_local here so that LangChain users can still use the LLM\n # in-process. For BentoML users, setting embedded=False is the expected\n # behaviour to invoke the runners remotely.\n # We also need to enable ensure_available to download and set up the model.\n runner = openllm.Runner(\n model_name=model_name,\n model_id=model_id,\n init_local=embedded,\n ensure_available=True,\n **llm_kwargs,\n )\n super().__init__(\n **{\n \"model_name\": model_name,\n \"model_id\": model_id,\n \"embedded\": embedded,\n \"llm_kwargs\": llm_kwargs,\n }\n )\n self._client = None # type: ignore\n self._runner = runner\n @property\n def runner(self) -> openllm.LLMRunner:\n \"\"\"\n Get the underlying openllm.LLMRunner instance for integration with BentoML.\n Example:\n .. 
code-block:: python\n llm = OpenLLM(\n model_name='flan-t5',\n model_id='google/flan-t5-large',\n embedded=False,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openllm.html"} {"id": "66004b84b19f-4", "text": "model_id='google/flan-t5-large',\n embedded=False,\n )\n tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)\n agent = initialize_agent(\n tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION\n )\n svc = bentoml.Service(\"langchain-openllm\", runners=[llm.runner])\n @svc.api(input=Text(), output=Text())\n def chat(input_text: str):\n return agent.run(input_text)\n \"\"\"\n if self._runner is None:\n raise ValueError(\"OpenLLM must be initialized locally with 'model_name'\")\n return self._runner\n @property\n def _identifying_params(self) -> IdentifyingParams:\n \"\"\"Get the identifying parameters.\"\"\"\n if self._client is not None:\n self.llm_kwargs.update(self._client.configuration)\n model_name = self._client.model_name\n model_id = self._client.model_id\n else:\n if self._runner is None:\n raise ValueError(\"Runner must be initialized.\")\n model_name = self.model_name\n model_id = self.model_id\n try:\n self.llm_kwargs.update(\n json.loads(self._runner.identifying_params[\"configuration\"])\n )\n except (TypeError, json.JSONDecodeError):\n pass\n return IdentifyingParams(\n server_url=self.server_url,\n server_type=self.server_type,\n embedded=self.embedded,\n llm_kwargs=self.llm_kwargs,\n model_name=model_name,\n model_id=model_id,\n )\n @property\n def _llm_type(self) -> str:\n return \"openllm_client\" if self._client else \"openllm\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openllm.html"} {"id": "66004b84b19f-5", "text": "return \"openllm_client\" if self._client else \"openllm\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: CallbackManagerForLLMRun | None = None,\n **kwargs: Any,\n ) -> str:\n try:\n import openllm\n except ImportError as e:\n raise ImportError(\n \"Could not import openllm. Make sure to install it with \"\n \"'pip install openllm'.\"\n ) from e\n copied = copy.deepcopy(self.llm_kwargs)\n copied.update(kwargs)\n config = openllm.AutoConfig.for_model(\n self._identifying_params[\"model_name\"], **copied\n )\n if self._client:\n return self._client.query(prompt, **config.model_dump(flatten=True))\n else:\n assert self._runner is not None\n return self._runner(prompt, **config.model_dump(flatten=True))\n async def _acall(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n try:\n import openllm\n except ImportError as e:\n raise ImportError(\n \"Could not import openllm. 
Make sure to install it with \"\n \"'pip install openllm'.\"\n ) from e\n copied = copy.deepcopy(self.llm_kwargs)\n copied.update(kwargs)\n config = openllm.AutoConfig.for_model(\n self._identifying_params[\"model_name\"], **copied\n )\n if self._client:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openllm.html"} {"id": "66004b84b19f-6", "text": ")\n if self._client:\n return await self._client.acall(\n \"generate\", prompt, **config.model_dump(flatten=True)\n )\n else:\n assert self._runner is not None\n (\n prompt,\n generate_kwargs,\n postprocess_kwargs,\n ) = self._runner.llm.sanitize_parameters(prompt, **kwargs)\n generated_result = await self._runner.generate.async_run(\n prompt, **generate_kwargs\n )\n return self._runner.llm.postprocess_generate(\n prompt, generated_result, **postprocess_kwargs\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openllm.html"} {"id": "d29ea987bbcf-0", "text": "Source code for langchain.agents.load_tools\n# flake8: noqa\n\"\"\"Load tools.\"\"\"\nimport warnings\nfrom typing import Any, Dict, List, Optional, Callable, Tuple\nfrom mypy_extensions import Arg, KwArg\nfrom langchain.agents.tools import Tool\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.callbacks.manager import Callbacks\nfrom langchain.chains.api import news_docs, open_meteo_docs, podcast_docs, tmdb_docs\nfrom langchain.chains.api.base import APIChain\nfrom langchain.chains.llm_math.base import LLMMathChain\nfrom langchain.chains.pal.base import PALChain\nfrom langchain.requests import TextRequestsWrapper\nfrom langchain.tools.arxiv.tool import ArxivQueryRun\nfrom langchain.tools.pubmed.tool import PubmedQueryRun\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.bing_search.tool import BingSearchRun\nfrom langchain.tools.ddg_search.tool import DuckDuckGoSearchRun\nfrom langchain.tools.google_search.tool import GoogleSearchResults, GoogleSearchRun\nfrom langchain.tools.metaphor_search.tool import MetaphorSearchResults\nfrom langchain.tools.google_serper.tool import GoogleSerperResults, GoogleSerperRun\nfrom langchain.tools.graphql.tool import BaseGraphQLTool\nfrom langchain.tools.human.tool import HumanInputRun\nfrom langchain.tools.python.tool import PythonREPLTool\nfrom langchain.tools.requests.tool import (\n RequestsDeleteTool,\n RequestsGetTool,\n RequestsPatchTool,\n RequestsPostTool,\n RequestsPutTool,\n)\nfrom langchain.tools.scenexplain.tool import SceneXplainTool\nfrom langchain.tools.searx_search.tool import SearxSearchResults, SearxSearchRun\nfrom langchain.tools.shell.tool import ShellTool", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"} {"id": "d29ea987bbcf-1", "text": "from langchain.tools.shell.tool import ShellTool\nfrom langchain.tools.sleep.tool import SleepTool\nfrom langchain.tools.wikipedia.tool import WikipediaQueryRun\nfrom langchain.tools.wolfram_alpha.tool import WolframAlphaQueryRun\nfrom langchain.tools.openweathermap.tool import OpenWeatherMapQueryRun\nfrom langchain.tools.dataforseo_api_search import DataForSeoAPISearchRun\nfrom langchain.tools.dataforseo_api_search import DataForSeoAPISearchResults\nfrom langchain.utilities import ArxivAPIWrapper\nfrom langchain.utilities import PubMedAPIWrapper\nfrom langchain.utilities.bing_search import BingSearchAPIWrapper\nfrom langchain.utilities.duckduckgo_search import 
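One detail of the `_call`/`_acall` bodies above is worth calling out: the instance-level `llm_kwargs` are deep-copied and then overlaid with the per-call `**kwargs` before being handed to `openllm.AutoConfig.for_model`, so a single call can override generation settings without mutating the instance. A minimal standalone sketch of that merge, using plain dicts with no openllm dependency:

.. code-block:: python

    import copy
    from typing import Any, Dict

    def merge_call_config(
        instance_kwargs: Dict[str, Any], **call_overrides: Any
    ) -> Dict[str, Any]:
        # Deep-copy first so nested values in the instance config can never
        # be mutated by a one-off call.
        merged = copy.deepcopy(instance_kwargs)
        merged.update(call_overrides)
        return merged

    base = {"temperature": 0.2, "generation": {"max_new_tokens": 128}}
    one_off = merge_call_config(base, temperature=0.9)
    assert base["temperature"] == 0.2  # the instance config is untouched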
DuckDuckGoSearchAPIWrapper\nfrom langchain.utilities.google_search import GoogleSearchAPIWrapper\nfrom langchain.utilities.google_serper import GoogleSerperAPIWrapper\nfrom langchain.utilities.metaphor_search import MetaphorSearchAPIWrapper\nfrom langchain.utilities.awslambda import LambdaWrapper\nfrom langchain.utilities.graphql import GraphQLAPIWrapper\nfrom langchain.utilities.searx_search import SearxSearchWrapper\nfrom langchain.utilities.serpapi import SerpAPIWrapper\nfrom langchain.utilities.twilio import TwilioAPIWrapper\nfrom langchain.utilities.wikipedia import WikipediaAPIWrapper\nfrom langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper\nfrom langchain.utilities.openweathermap import OpenWeatherMapAPIWrapper\nfrom langchain.utilities.dataforseo_api_search import DataForSeoAPIWrapper\ndef _get_python_repl() -> BaseTool:\n return PythonREPLTool()\ndef _get_tools_requests_get() -> BaseTool:\n return RequestsGetTool(requests_wrapper=TextRequestsWrapper())\ndef _get_tools_requests_post() -> BaseTool:\n return RequestsPostTool(requests_wrapper=TextRequestsWrapper())\ndef _get_tools_requests_patch() -> BaseTool:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"} {"id": "d29ea987bbcf-2", "text": "def _get_tools_requests_patch() -> BaseTool:\n return RequestsPatchTool(requests_wrapper=TextRequestsWrapper())\ndef _get_tools_requests_put() -> BaseTool:\n return RequestsPutTool(requests_wrapper=TextRequestsWrapper())\ndef _get_tools_requests_delete() -> BaseTool:\n return RequestsDeleteTool(requests_wrapper=TextRequestsWrapper())\ndef _get_terminal() -> BaseTool:\n return ShellTool()\ndef _get_sleep() -> BaseTool:\n return SleepTool()\n_BASE_TOOLS: Dict[str, Callable[[], BaseTool]] = {\n \"python_repl\": _get_python_repl,\n \"requests\": _get_tools_requests_get, # preserved for backwards compatibility\n \"requests_get\": _get_tools_requests_get,\n \"requests_post\": _get_tools_requests_post,\n \"requests_patch\": _get_tools_requests_patch,\n \"requests_put\": _get_tools_requests_put,\n \"requests_delete\": _get_tools_requests_delete,\n \"terminal\": _get_terminal,\n \"sleep\": _get_sleep,\n}\ndef _get_pal_math(llm: BaseLanguageModel) -> BaseTool:\n return Tool(\n name=\"PAL-MATH\",\n description=\"A language model that is really good at solving complex word math problems. Input should be a fully worded hard word math problem.\",\n func=PALChain.from_math_prompt(llm).run,\n )\ndef _get_pal_colored_objects(llm: BaseLanguageModel) -> BaseTool:\n return Tool(\n name=\"PAL-COLOR-OBJ\",\n description=\"A language model that is really good at reasoning about position and the color attributes of objects. Input should be a fully worded hard reasoning problem. 
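The `_BASE_TOOLS` registry above maps public tool names to zero-argument factories, so every lookup produces a fresh tool instance with no model or credentials required. A short sketch of consuming these names through `load_tools`, which is defined later in this module (the particular names are just an illustration):

.. code-block:: python

    from langchain.agents import load_tools

    # Base tools need no LLM and no API keys; each name resolves to a
    # zero-argument factory in _BASE_TOOLS.
    tools = load_tools(["terminal", "sleep", "requests_get"])
    print([tool.name for tool in tools])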
Make sure to include all information about the objects AND the final question you want to answer.\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"} {"id": "d29ea987bbcf-3", "text": "func=PALChain.from_colored_object_prompt(llm).run,\n )\ndef _get_llm_math(llm: BaseLanguageModel) -> BaseTool:\n return Tool(\n name=\"Calculator\",\n description=\"Useful for when you need to answer questions about math.\",\n func=LLMMathChain.from_llm(llm=llm).run,\n coroutine=LLMMathChain.from_llm(llm=llm).arun,\n )\ndef _get_open_meteo_api(llm: BaseLanguageModel) -> BaseTool:\n chain = APIChain.from_llm_and_api_docs(llm, open_meteo_docs.OPEN_METEO_DOCS)\n return Tool(\n name=\"Open Meteo API\",\n description=\"Useful for when you want to get weather information from the OpenMeteo API. The input should be a question in natural language that this API can answer.\",\n func=chain.run,\n )\n_LLM_TOOLS: Dict[str, Callable[[BaseLanguageModel], BaseTool]] = {\n \"pal-math\": _get_pal_math,\n \"pal-colored-objects\": _get_pal_colored_objects,\n \"llm-math\": _get_llm_math,\n \"open-meteo-api\": _get_open_meteo_api,\n}\ndef _get_news_api(llm: BaseLanguageModel, **kwargs: Any) -> BaseTool:\n news_api_key = kwargs[\"news_api_key\"]\n chain = APIChain.from_llm_and_api_docs(\n llm, news_docs.NEWS_DOCS, headers={\"X-Api-Key\": news_api_key}\n )\n return Tool(\n name=\"News API\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"} {"id": "d29ea987bbcf-4", "text": ")\n return Tool(\n name=\"News API\",\n description=\"Use this when you want to get information about the top headlines of current news stories. The input should be a question in natural language that this API can answer.\",\n func=chain.run,\n )\ndef _get_tmdb_api(llm: BaseLanguageModel, **kwargs: Any) -> BaseTool:\n tmdb_bearer_token = kwargs[\"tmdb_bearer_token\"]\n chain = APIChain.from_llm_and_api_docs(\n llm,\n tmdb_docs.TMDB_DOCS,\n headers={\"Authorization\": f\"Bearer {tmdb_bearer_token}\"},\n )\n return Tool(\n name=\"TMDB API\",\n description=\"Useful for when you want to get information from The Movie Database. The input should be a question in natural language that this API can answer.\",\n func=chain.run,\n )\ndef _get_podcast_api(llm: BaseLanguageModel, **kwargs: Any) -> BaseTool:\n listen_api_key = kwargs[\"listen_api_key\"]\n chain = APIChain.from_llm_and_api_docs(\n llm,\n podcast_docs.PODCAST_DOCS,\n headers={\"X-ListenAPI-Key\": listen_api_key},\n )\n return Tool(\n name=\"Podcast API\",\n description=\"Use the Listen Notes Podcast API to search all podcasts or episodes. 
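Factories in `_LLM_TOOLS`, by contrast, each require a language model, which is why `load_tools` raises a `ValueError` when one of these names is requested without an `llm`. A hedged usage sketch; the OpenAI model is an illustrative choice, not something this module mandates:

.. code-block:: python

    from langchain.agents import load_tools
    from langchain.llms import OpenAI

    llm = OpenAI(temperature=0)
    # "llm-math" wraps an LLMMathChain around the supplied model; omitting
    # llm would raise "Tool llm-math requires an LLM to be provided".
    tools = load_tools(["llm-math"], llm=llm)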
The input should be a question in natural language that this API can answer.\",\n func=chain.run,\n )\ndef _get_lambda_api(**kwargs: Any) -> BaseTool:\n return Tool(\n name=kwargs[\"awslambda_tool_name\"],\n description=kwargs[\"awslambda_tool_description\"],\n func=LambdaWrapper(**kwargs).run,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"} {"id": "d29ea987bbcf-5", "text": "func=LambdaWrapper(**kwargs).run,\n )\ndef _get_wolfram_alpha(**kwargs: Any) -> BaseTool:\n return WolframAlphaQueryRun(api_wrapper=WolframAlphaAPIWrapper(**kwargs))\ndef _get_google_search(**kwargs: Any) -> BaseTool:\n return GoogleSearchRun(api_wrapper=GoogleSearchAPIWrapper(**kwargs))\ndef _get_wikipedia(**kwargs: Any) -> BaseTool:\n return WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper(**kwargs))\ndef _get_arxiv(**kwargs: Any) -> BaseTool:\n return ArxivQueryRun(api_wrapper=ArxivAPIWrapper(**kwargs))\ndef _get_pupmed(**kwargs: Any) -> BaseTool:\n return PubmedQueryRun(api_wrapper=PubMedAPIWrapper(**kwargs))\ndef _get_google_serper(**kwargs: Any) -> BaseTool:\n return GoogleSerperRun(api_wrapper=GoogleSerperAPIWrapper(**kwargs))\ndef _get_google_serper_results_json(**kwargs: Any) -> BaseTool:\n return GoogleSerperResults(api_wrapper=GoogleSerperAPIWrapper(**kwargs))\ndef _get_google_search_results_json(**kwargs: Any) -> BaseTool:\n return GoogleSearchResults(api_wrapper=GoogleSearchAPIWrapper(**kwargs))\ndef _get_serpapi(**kwargs: Any) -> BaseTool:\n return Tool(\n name=\"Search\",\n description=\"A search engine. Useful for when you need to answer questions about current events. Input should be a search query.\",\n func=SerpAPIWrapper(**kwargs).run,\n coroutine=SerpAPIWrapper(**kwargs).arun,\n )\ndef _get_twilio(**kwargs: Any) -> BaseTool:\n return Tool(\n name=\"Text Message\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"} {"id": "d29ea987bbcf-6", "text": "return Tool(\n name=\"Text Message\",\n description=\"Useful for when you need to send a text message to a provided phone number.\",\n func=TwilioAPIWrapper(**kwargs).run,\n )\ndef _get_searx_search(**kwargs: Any) -> BaseTool:\n return SearxSearchRun(wrapper=SearxSearchWrapper(**kwargs))\ndef _get_searx_search_results_json(**kwargs: Any) -> BaseTool:\n wrapper_kwargs = {k: v for k, v in kwargs.items() if k != \"num_results\"}\n return SearxSearchResults(wrapper=SearxSearchWrapper(**wrapper_kwargs), **kwargs)\ndef _get_bing_search(**kwargs: Any) -> BaseTool:\n return BingSearchRun(api_wrapper=BingSearchAPIWrapper(**kwargs))\ndef _get_metaphor_search(**kwargs: Any) -> BaseTool:\n return MetaphorSearchResults(api_wrapper=MetaphorSearchAPIWrapper(**kwargs))\ndef _get_ddg_search(**kwargs: Any) -> BaseTool:\n return DuckDuckGoSearchRun(api_wrapper=DuckDuckGoSearchAPIWrapper(**kwargs))\ndef _get_human_tool(**kwargs: Any) -> BaseTool:\n return HumanInputRun(**kwargs)\ndef _get_scenexplain(**kwargs: Any) -> BaseTool:\n return SceneXplainTool(**kwargs)\ndef _get_graphql_tool(**kwargs: Any) -> BaseTool:\n graphql_endpoint = kwargs[\"graphql_endpoint\"]\n wrapper = GraphQLAPIWrapper(graphql_endpoint=graphql_endpoint)\n return BaseGraphQLTool(graphql_wrapper=wrapper)\ndef _get_openweathermap(**kwargs: Any) -> BaseTool:\n return OpenWeatherMapQueryRun(api_wrapper=OpenWeatherMapAPIWrapper(**kwargs))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"} {"id": "d29ea987bbcf-7", "text": "return 
OpenWeatherMapQueryRun(api_wrapper=OpenWeatherMapAPIWrapper(**kwargs))\ndef _get_dataforseo_api_search(**kwargs: Any) -> BaseTool:\n return DataForSeoAPISearchRun(api_wrapper=DataForSeoAPIWrapper(**kwargs))\ndef _get_dataforseo_api_search_json(**kwargs: Any) -> BaseTool:\n return DataForSeoAPISearchResults(api_wrapper=DataForSeoAPIWrapper(**kwargs))\n_EXTRA_LLM_TOOLS: Dict[\n str,\n Tuple[Callable[[Arg(BaseLanguageModel, \"llm\"), KwArg(Any)], BaseTool], List[str]],\n] = {\n \"news-api\": (_get_news_api, [\"news_api_key\"]),\n \"tmdb-api\": (_get_tmdb_api, [\"tmdb_bearer_token\"]),\n \"podcast-api\": (_get_podcast_api, [\"listen_api_key\"]),\n}\n_EXTRA_OPTIONAL_TOOLS: Dict[str, Tuple[Callable[[KwArg(Any)], BaseTool], List[str]]] = {\n \"wolfram-alpha\": (_get_wolfram_alpha, [\"wolfram_alpha_appid\"]),\n \"google-search\": (_get_google_search, [\"google_api_key\", \"google_cse_id\"]),\n \"google-search-results-json\": (\n _get_google_search_results_json,\n [\"google_api_key\", \"google_cse_id\", \"num_results\"],\n ),\n \"searx-search-results-json\": (\n _get_searx_search_results_json,\n [\"searx_host\", \"engines\", \"num_results\", \"aiosession\"],\n ),\n \"bing-search\": (_get_bing_search, [\"bing_subscription_key\", \"bing_search_url\"]),", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"} {"id": "d29ea987bbcf-8", "text": "\"metaphor-search\": (_get_metaphor_search, [\"metaphor_api_key\"]),\n \"ddg-search\": (_get_ddg_search, []),\n \"google-serper\": (_get_google_serper, [\"serper_api_key\", \"aiosession\"]),\n \"google-serper-results-json\": (\n _get_google_serper_results_json,\n [\"serper_api_key\", \"aiosession\"],\n ),\n \"serpapi\": (_get_serpapi, [\"serpapi_api_key\", \"aiosession\"]),\n \"twilio\": (_get_twilio, [\"account_sid\", \"auth_token\", \"from_number\"]),\n \"searx-search\": (_get_searx_search, [\"searx_host\", \"engines\", \"aiosession\"]),\n \"wikipedia\": (_get_wikipedia, [\"top_k_results\", \"lang\"]),\n \"arxiv\": (\n _get_arxiv,\n [\"top_k_results\", \"load_max_docs\", \"load_all_available_meta\"],\n ),\n \"pupmed\": (\n _get_pupmed,\n [\"top_k_results\", \"load_max_docs\", \"load_all_available_meta\"],\n ),\n \"human\": (_get_human_tool, [\"prompt_func\", \"input_func\"]),\n \"awslambda\": (\n _get_lambda_api,\n [\"awslambda_tool_name\", \"awslambda_tool_description\", \"function_name\"],\n ),\n \"sceneXplain\": (_get_scenexplain, []),\n \"graphql\": (_get_graphql_tool, [\"graphql_endpoint\"]),\n \"openweathermap-api\": (_get_openweathermap, [\"openweathermap_api_key\"]),\n \"dataforseo-api-search\": (\n _get_dataforseo_api_search,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"} {"id": "d29ea987bbcf-9", "text": "\"dataforseo-api-search\": (\n _get_dataforseo_api_search,\n [\"api_login\", \"api_password\", \"aiosession\"],\n ),\n \"dataforseo-api-search-json\": (\n _get_dataforseo_api_search_json,\n [\"api_login\", \"api_password\", \"aiosession\"],\n ),\n}\ndef _handle_callbacks(\n callback_manager: Optional[BaseCallbackManager], callbacks: Callbacks\n) -> Callbacks:\n if callback_manager is not None:\n warnings.warn(\n \"callback_manager is deprecated. 
Please use callbacks instead.\",\n DeprecationWarning,\n )\n if callbacks is not None:\n raise ValueError(\n \"Cannot specify both callback_manager and callbacks arguments.\"\n )\n return callback_manager\n return callbacks\n[docs]def load_huggingface_tool(\n task_or_repo_id: str,\n model_repo_id: Optional[str] = None,\n token: Optional[str] = None,\n remote: bool = False,\n **kwargs: Any,\n) -> BaseTool:\n \"\"\"Loads a tool from the HuggingFace Hub.\n Args:\n task_or_repo_id: Task or model repo id.\n model_repo_id: Optional model repo id.\n token: Optional token.\n remote: Optional remote. Defaults to False.\n **kwargs:\n Returns:\n A tool.\n \"\"\"\n try:\n from transformers import load_tool\n except ImportError:\n raise ImportError(\n \"HuggingFace tools require the libraries `transformers>=4.29.0`\"\n \" and `huggingface_hub>=0.14.1` to be installed.\"\n \" Please install it with\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"} {"id": "d29ea987bbcf-10", "text": "\" Please install it with\"\n \" `pip install --upgrade transformers huggingface_hub`.\"\n )\n hf_tool = load_tool(\n task_or_repo_id,\n model_repo_id=model_repo_id,\n token=token,\n remote=remote,\n **kwargs,\n )\n outputs = hf_tool.outputs\n if set(outputs) != {\"text\"}:\n raise NotImplementedError(\"Multimodal outputs not supported yet.\")\n inputs = hf_tool.inputs\n if set(inputs) != {\"text\"}:\n raise NotImplementedError(\"Multimodal inputs not supported yet.\")\n return Tool.from_function(\n hf_tool.__call__, name=hf_tool.name, description=hf_tool.description\n )\n[docs]def load_tools(\n tool_names: List[str],\n llm: Optional[BaseLanguageModel] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n) -> List[BaseTool]:\n \"\"\"Load tools based on their name.\n Args:\n tool_names: name of tools to load.\n llm: Optional language model, may be needed to initialize certain tools.\n callbacks: Optional callback manager or list of callback handlers.\n If not provided, default global callback manager will be used.\n Returns:\n List of tools.\n \"\"\"\n tools = []\n callbacks = _handle_callbacks(\n callback_manager=kwargs.get(\"callback_manager\"), callbacks=callbacks\n )\n for name in tool_names:\n if name == \"requests\":\n warnings.warn(\n \"tool name `requests` is deprecated - \"\n \"please use `requests_all` or specify the requests method\"\n )\n if name == \"requests_all\":\n # expand requests into various methods", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"} {"id": "d29ea987bbcf-11", "text": ")\n if name == \"requests_all\":\n # expand requests into various methods\n requests_method_tools = [\n _tool for _tool in _BASE_TOOLS if _tool.startswith(\"requests_\")\n ]\n tool_names.extend(requests_method_tools)\n elif name in _BASE_TOOLS:\n tools.append(_BASE_TOOLS[name]())\n elif name in _LLM_TOOLS:\n if llm is None:\n raise ValueError(f\"Tool {name} requires an LLM to be provided\")\n tool = _LLM_TOOLS[name](llm)\n tools.append(tool)\n elif name in _EXTRA_LLM_TOOLS:\n if llm is None:\n raise ValueError(f\"Tool {name} requires an LLM to be provided\")\n _get_llm_tool_func, extra_keys = _EXTRA_LLM_TOOLS[name]\n missing_keys = set(extra_keys).difference(kwargs)\n if missing_keys:\n raise ValueError(\n f\"Tool {name} requires some parameters that were not \"\n f\"provided: {missing_keys}\"\n )\n sub_kwargs = {k: kwargs[k] for k in extra_keys}\n tool = _get_llm_tool_func(llm=llm, **sub_kwargs)\n tools.append(tool)\n elif 
name in _EXTRA_OPTIONAL_TOOLS:\n _get_tool_func, extra_keys = _EXTRA_OPTIONAL_TOOLS[name]\n sub_kwargs = {k: kwargs[k] for k in extra_keys if k in kwargs}\n tool = _get_tool_func(**sub_kwargs)\n tools.append(tool)\n else:\n raise ValueError(f\"Got unknown tool {name}\")\n if callbacks is not None:\n for tool in tools:\n tool.callbacks = callbacks", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"} {"id": "d29ea987bbcf-12", "text": "for tool in tools:\n tool.callbacks = callbacks\n return tools\n[docs]def get_all_tool_names() -> List[str]:\n \"\"\"Get a list of all possible tool names.\"\"\"\n return (\n list(_BASE_TOOLS)\n + list(_EXTRA_OPTIONAL_TOOLS)\n + list(_EXTRA_LLM_TOOLS)\n + list(_LLM_TOOLS)\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"} {"id": "cd513288ff8b-0", "text": "Source code for langchain.agents.schema\nfrom typing import Any, Dict, List, Tuple\nfrom langchain.prompts.chat import ChatPromptTemplate\nfrom langchain.schema import AgentAction\n[docs]class AgentScratchPadChatPromptTemplate(ChatPromptTemplate):\n def _construct_agent_scratchpad(\n self, intermediate_steps: List[Tuple[AgentAction, str]]\n ) -> str:\n if len(intermediate_steps) == 0:\n return \"\"\n thoughts = \"\"\n for action, observation in intermediate_steps:\n thoughts += action.log\n thoughts += f\"\\nObservation: {observation}\\nThought: \"\n return (\n f\"This was your previous work \"\n f\"(but I haven't seen any of it! I only see what \"\n f\"you return as final answer):\\n{thoughts}\"\n )\n def _merge_partial_and_user_variables(self, **kwargs: Any) -> Dict[str, Any]:\n intermediate_steps = kwargs.pop(\"intermediate_steps\")\n kwargs[\"agent_scratchpad\"] = self._construct_agent_scratchpad(\n intermediate_steps\n )\n return kwargs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/schema.html"} {"id": "ab832e1b270c-0", "text": "Source code for langchain.agents.tools\n\"\"\"Interface for tools.\"\"\"\nfrom typing import Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool, Tool, tool\n[docs]class InvalidTool(BaseTool):\n \"\"\"Tool that is run when invalid tool name is encountered by agent.\"\"\"\n name = \"invalid_tool\"\n description = \"Called when tool name is invalid.\"\n def _run(\n self, tool_name: str, run_manager: Optional[CallbackManagerForToolRun] = None\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return f\"{tool_name} is not a valid tool, try another one.\"\n async def _arun(\n self,\n tool_name: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n return f\"{tool_name} is not a valid tool, try another one.\"\n__all__ = [\"InvalidTool\", \"BaseTool\", \"tool\", \"Tool\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/tools.html"} {"id": "17e1587a43a6-0", "text": "Source code for langchain.agents.utils\nfrom typing import Sequence\nfrom langchain.tools.base import BaseTool\n[docs]def validate_tools_single_input(class_name: str, tools: Sequence[BaseTool]) -> None:\n \"\"\"Validate tools for single input.\"\"\"\n for tool in tools:\n if not tool.is_single_input:\n raise ValueError(\n f\"{class_name} does not support multi-input tool {tool.name}.\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/utils.html"} {"id": 
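Taken together, `load_tools` resolves each requested name against `_BASE_TOOLS`, `_LLM_TOOLS`, `_EXTRA_LLM_TOOLS`, and `_EXTRA_OPTIONAL_TOOLS`, forwarding only the extra keyword arguments each factory declares and failing fast on unknown names. A combined sketch; the key values are placeholders:

.. code-block:: python

    from langchain.agents import get_all_tool_names, load_tools
    from langchain.llms import OpenAI

    llm = OpenAI(temperature=0)
    tools = load_tools(
        ["llm-math", "wikipedia", "news-api"],
        llm=llm,                  # required by "llm-math" and "news-api"
        top_k_results=3,          # optional extra consumed by "wikipedia"
        news_api_key="YOUR_KEY",  # placeholder; "news-api" requires this key
    )
    # Unknown names raise ValueError("Got unknown tool ...").
    assert "llm-math" in get_all_tool_names()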
"0fd6bd36d2bb-0", "text": "Source code for langchain.agents.agent_types\nfrom enum import Enum\n[docs]class AgentType(str, Enum):\n \"\"\"Enumerator with the Agent types.\"\"\"\n ZERO_SHOT_REACT_DESCRIPTION = \"zero-shot-react-description\"\n REACT_DOCSTORE = \"react-docstore\"\n SELF_ASK_WITH_SEARCH = \"self-ask-with-search\"\n CONVERSATIONAL_REACT_DESCRIPTION = \"conversational-react-description\"\n CHAT_ZERO_SHOT_REACT_DESCRIPTION = \"chat-zero-shot-react-description\"\n CHAT_CONVERSATIONAL_REACT_DESCRIPTION = \"chat-conversational-react-description\"\n STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION = (\n \"structured-chat-zero-shot-react-description\"\n )\n OPENAI_FUNCTIONS = \"openai-functions\"\n OPENAI_MULTI_FUNCTIONS = \"openai-multi-functions\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_types.html"} {"id": "04c091dfe984-0", "text": "Source code for langchain.agents.agent\n\"\"\"Chain that takes in an input and produces an action and action input.\"\"\"\nfrom __future__ import annotations\nimport asyncio\nimport json\nimport logging\nimport time\nfrom abc import abstractmethod\nfrom pathlib import Path\nfrom typing import Any, Callable, Dict, List, Optional, Sequence, Tuple, Union\nimport yaml\nfrom pydantic import BaseModel, root_validator\nfrom langchain.agents.agent_types import AgentType\nfrom langchain.agents.tools import InvalidTool\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n AsyncCallbackManagerForToolRun,\n CallbackManagerForChainRun,\n CallbackManagerForToolRun,\n Callbacks,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.input import get_color_mapping\nfrom langchain.prompts.few_shot import FewShotPromptTemplate\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.schema import (\n AgentAction,\n AgentFinish,\n BaseOutputParser,\n BasePromptTemplate,\n OutputParserException,\n)\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.schema.messages import BaseMessage\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.asyncio import asyncio_timeout\nlogger = logging.getLogger(__name__)\n[docs]class BaseSingleActionAgent(BaseModel):\n \"\"\"Base Agent class.\"\"\"\n @property\n def return_values(self) -> List[str]:\n \"\"\"Return values of the agent.\"\"\"\n return [\"output\"]\n[docs] def get_allowed_tools(self) -> Optional[List[str]]:\n return None\n[docs] @abstractmethod\n def plan(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} {"id": "04c091dfe984-1", "text": "return None\n[docs] @abstractmethod\n def plan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[AgentAction, AgentFinish]:\n \"\"\"Given input, decided what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n callbacks: Callbacks to run.\n **kwargs: User inputs.\n Returns:\n Action specifying what tool to use.\n \"\"\"\n[docs] @abstractmethod\n async def aplan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[AgentAction, AgentFinish]:\n \"\"\"Given input, decided what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n callbacks: Callbacks to run.\n **kwargs: User inputs.\n 
Returns:\n Action specifying what tool to use.\n \"\"\"\n @property\n @abstractmethod\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n[docs] def return_stopped_response(\n self,\n early_stopping_method: str,\n intermediate_steps: List[Tuple[AgentAction, str]],\n **kwargs: Any,\n ) -> AgentFinish:\n \"\"\"Return response when agent has been stopped due to max iterations.\"\"\"\n if early_stopping_method == \"force\":\n # `force` just returns a constant string\n return AgentFinish(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} {"id": "04c091dfe984-2", "text": "# `force` just returns a constant string\n return AgentFinish(\n {\"output\": \"Agent stopped due to iteration limit or time limit.\"}, \"\"\n )\n else:\n raise ValueError(\n f\"Got unsupported early_stopping_method `{early_stopping_method}`\"\n )\n[docs] @classmethod\n def from_llm_and_tools(\n cls,\n llm: BaseLanguageModel,\n tools: Sequence[BaseTool],\n callback_manager: Optional[BaseCallbackManager] = None,\n **kwargs: Any,\n ) -> BaseSingleActionAgent:\n raise NotImplementedError\n @property\n def _agent_type(self) -> str:\n \"\"\"Return identifier of agent type.\"\"\"\n raise NotImplementedError\n[docs] def dict(self, **kwargs: Any) -> Dict:\n \"\"\"Return dictionary representation of agent.\"\"\"\n _dict = super().dict()\n _type = self._agent_type\n if isinstance(_type, AgentType):\n _dict[\"_type\"] = str(_type.value)\n else:\n _dict[\"_type\"] = _type\n return _dict\n[docs] def save(self, file_path: Union[Path, str]) -> None:\n \"\"\"Save the agent.\n Args:\n file_path: Path to file to save the agent to.\n Example:\n .. code-block:: python\n # If working with agent executor\n agent.agent.save(file_path=\"path/agent.yaml\")\n \"\"\"\n # Convert file to Path object.\n if isinstance(file_path, str):\n save_path = Path(file_path)\n else:\n save_path = file_path\n directory_path = save_path.parent\n directory_path.mkdir(parents=True, exist_ok=True)\n # Fetch dictionary to save", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} {"id": "04c091dfe984-3", "text": "directory_path.mkdir(parents=True, exist_ok=True)\n # Fetch dictionary to save\n agent_dict = self.dict()\n if save_path.suffix == \".json\":\n with open(file_path, \"w\") as f:\n json.dump(agent_dict, f, indent=4)\n elif save_path.suffix == \".yaml\":\n with open(file_path, \"w\") as f:\n yaml.dump(agent_dict, f, default_flow_style=False)\n else:\n raise ValueError(f\"{save_path} must be json or yaml\")\n[docs] def tool_run_logging_kwargs(self) -> Dict:\n return {}\n[docs]class BaseMultiActionAgent(BaseModel):\n \"\"\"Base Agent class.\"\"\"\n @property\n def return_values(self) -> List[str]:\n \"\"\"Return values of the agent.\"\"\"\n return [\"output\"]\n[docs] def get_allowed_tools(self) -> Optional[List[str]]:\n return None\n[docs] @abstractmethod\n def plan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[List[AgentAction], AgentFinish]:\n \"\"\"Given input, decide what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n callbacks: Callbacks to run.\n **kwargs: User inputs.\n Returns:\n Actions specifying what tool to use.\n \"\"\"\n[docs] @abstractmethod\n async def aplan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[List[AgentAction], 
AgentFinish]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} {"id": "04c091dfe984-4", "text": "**kwargs: Any,\n ) -> Union[List[AgentAction], AgentFinish]:\n \"\"\"Given input, decide what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n callbacks: Callbacks to run.\n **kwargs: User inputs.\n Returns:\n Actions specifying what tool to use.\n \"\"\"\n @property\n @abstractmethod\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n[docs] def return_stopped_response(\n self,\n early_stopping_method: str,\n intermediate_steps: List[Tuple[AgentAction, str]],\n **kwargs: Any,\n ) -> AgentFinish:\n \"\"\"Return response when agent has been stopped due to max iterations.\"\"\"\n if early_stopping_method == \"force\":\n # `force` just returns a constant string\n return AgentFinish({\"output\": \"Agent stopped due to max iterations.\"}, \"\")\n else:\n raise ValueError(\n f\"Got unsupported early_stopping_method `{early_stopping_method}`\"\n )\n @property\n def _agent_type(self) -> str:\n \"\"\"Return identifier of agent type.\"\"\"\n raise NotImplementedError\n[docs] def dict(self, **kwargs: Any) -> Dict:\n \"\"\"Return dictionary representation of agent.\"\"\"\n _dict = super().dict()\n _dict[\"_type\"] = str(self._agent_type)\n return _dict\n[docs] def save(self, file_path: Union[Path, str]) -> None:\n \"\"\"Save the agent.\n Args:\n file_path: Path to file to save the agent to.\n Example:\n .. code-block:: python", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} {"id": "04c091dfe984-5", "text": "Example:\n .. code-block:: python\n # If working with agent executor\n agent.agent.save(file_path=\"path/agent.yaml\")\n \"\"\"\n # Convert file to Path object.\n if isinstance(file_path, str):\n save_path = Path(file_path)\n else:\n save_path = file_path\n directory_path = save_path.parent\n directory_path.mkdir(parents=True, exist_ok=True)\n # Fetch dictionary to save\n agent_dict = self.dict()\n if save_path.suffix == \".json\":\n with open(file_path, \"w\") as f:\n json.dump(agent_dict, f, indent=4)\n elif save_path.suffix == \".yaml\":\n with open(file_path, \"w\") as f:\n yaml.dump(agent_dict, f, default_flow_style=False)\n else:\n raise ValueError(f\"{save_path} must be json or yaml\")\n[docs] def tool_run_logging_kwargs(self) -> Dict:\n return {}\n[docs]class AgentOutputParser(BaseOutputParser):\n[docs] @abstractmethod\n def parse(self, text: str) -> Union[AgentAction, AgentFinish]:\n \"\"\"Parse text into agent action/finish.\"\"\"\n[docs]class LLMSingleActionAgent(BaseSingleActionAgent):\n llm_chain: LLMChain\n output_parser: AgentOutputParser\n stop: List[str]\n @property\n def input_keys(self) -> List[str]:\n return list(set(self.llm_chain.input_keys) - {\"intermediate_steps\"})\n[docs] def dict(self, **kwargs: Any) -> Dict:\n \"\"\"Return dictionary representation of agent.\"\"\"\n _dict = super().dict()\n del _dict[\"output_parser\"]\n return _dict\n[docs] def plan(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} {"id": "04c091dfe984-6", "text": "return _dict\n[docs] def plan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[AgentAction, AgentFinish]:\n \"\"\"Given input, decide what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n 
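`BaseSingleActionAgent` and `BaseMultiActionAgent` above pin down the minimal surface a custom agent must implement: `plan`, `aplan`, and `input_keys`. A toy subclass sketch, under the assumption that a tool named "Search" exists in the executor's tool list:

.. code-block:: python

    from typing import Any, List, Tuple, Union

    from langchain.agents.agent import BaseSingleActionAgent
    from langchain.callbacks.manager import Callbacks
    from langchain.schema import AgentAction, AgentFinish

    class FirstToolAgent(BaseSingleActionAgent):
        """Toy agent: call one fixed tool once, then finish."""

        tool_name: str = "Search"  # assumed to match a provided tool

        @property
        def input_keys(self) -> List[str]:
            return ["input"]

        def plan(
            self,
            intermediate_steps: List[Tuple[AgentAction, str]],
            callbacks: Callbacks = None,
            **kwargs: Any,
        ) -> Union[AgentAction, AgentFinish]:
            if intermediate_steps:
                # One observation is enough; surface it as the final answer.
                return AgentFinish({"output": intermediate_steps[-1][1]}, "")
            return AgentAction(self.tool_name, kwargs["input"], "")

        async def aplan(
            self,
            intermediate_steps: List[Tuple[AgentAction, str]],
            callbacks: Callbacks = None,
            **kwargs: Any,
        ) -> Union[AgentAction, AgentFinish]:
            return self.plan(intermediate_steps, callbacks=callbacks, **kwargs)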
callbacks: Callbacks to run.\n **kwargs: User inputs.\n Returns:\n Action specifying what tool to use.\n \"\"\"\n output = self.llm_chain.run(\n intermediate_steps=intermediate_steps,\n stop=self.stop,\n callbacks=callbacks,\n **kwargs,\n )\n return self.output_parser.parse(output)\n[docs] async def aplan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[AgentAction, AgentFinish]:\n \"\"\"Given input, decide what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n callbacks: Callbacks to run.\n **kwargs: User inputs.\n Returns:\n Action specifying what tool to use.\n \"\"\"\n output = await self.llm_chain.arun(\n intermediate_steps=intermediate_steps,\n stop=self.stop,\n callbacks=callbacks,\n **kwargs,\n )\n return self.output_parser.parse(output)\n[docs] def tool_run_logging_kwargs(self) -> Dict:\n return {\n \"llm_prefix\": \"\",\n \"observation_prefix\": \"\" if len(self.stop) == 0 else self.stop[0],\n }", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} {"id": "04c091dfe984-7", "text": "}\n[docs]class Agent(BaseSingleActionAgent):\n \"\"\"Class responsible for calling the language model and deciding the action.\n This is driven by an LLMChain. The prompt in the LLMChain MUST include\n a variable called \"agent_scratchpad\" where the agent can put its\n intermediary work.\n \"\"\"\n llm_chain: LLMChain\n output_parser: AgentOutputParser\n allowed_tools: Optional[List[str]] = None\n[docs] def dict(self, **kwargs: Any) -> Dict:\n \"\"\"Return dictionary representation of agent.\"\"\"\n _dict = super().dict()\n del _dict[\"output_parser\"]\n return _dict\n[docs] def get_allowed_tools(self) -> Optional[List[str]]:\n return self.allowed_tools\n @property\n def return_values(self) -> List[str]:\n return [\"output\"]\n def _fix_text(self, text: str) -> str:\n \"\"\"Fix the text.\"\"\"\n raise ValueError(\"fix_text not implemented for this agent.\")\n @property\n def _stop(self) -> List[str]:\n return [\n f\"\\n{self.observation_prefix.rstrip()}\",\n f\"\\n\\t{self.observation_prefix.rstrip()}\",\n ]\n def _construct_scratchpad(\n self, intermediate_steps: List[Tuple[AgentAction, str]]\n ) -> Union[str, List[BaseMessage]]:\n \"\"\"Construct the scratchpad that lets the agent continue its thought process.\"\"\"\n thoughts = \"\"\n for action, observation in intermediate_steps:\n thoughts += action.log\n thoughts += f\"\\n{self.observation_prefix}{observation}\\n{self.llm_prefix}\"\n return thoughts\n[docs] def plan(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} {"id": "04c091dfe984-8", "text": "return thoughts\n[docs] def plan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[AgentAction, AgentFinish]:\n \"\"\"Given input, decide what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n callbacks: Callbacks to run.\n **kwargs: User inputs.\n Returns:\n Action specifying what tool to use.\n \"\"\"\n full_inputs = self.get_full_inputs(intermediate_steps, **kwargs)\n full_output = self.llm_chain.predict(callbacks=callbacks, **full_inputs)\n return self.output_parser.parse(full_output)\n[docs] async def aplan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[AgentAction, AgentFinish]:\n 
\"\"\"Given input, decided what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n callbacks: Callbacks to run.\n **kwargs: User inputs.\n Returns:\n Action specifying what tool to use.\n \"\"\"\n full_inputs = self.get_full_inputs(intermediate_steps, **kwargs)\n full_output = await self.llm_chain.apredict(callbacks=callbacks, **full_inputs)\n return self.output_parser.parse(full_output)\n[docs] def get_full_inputs(\n self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any\n ) -> Dict[str, Any]:\n \"\"\"Create the full inputs for the LLMChain from intermediate steps.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} {"id": "04c091dfe984-9", "text": "\"\"\"Create the full inputs for the LLMChain from intermediate steps.\"\"\"\n thoughts = self._construct_scratchpad(intermediate_steps)\n new_inputs = {\"agent_scratchpad\": thoughts, \"stop\": self._stop}\n full_inputs = {**kwargs, **new_inputs}\n return full_inputs\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n return list(set(self.llm_chain.input_keys) - {\"agent_scratchpad\"})\n[docs] @root_validator()\n def validate_prompt(cls, values: Dict) -> Dict:\n \"\"\"Validate that prompt matches format.\"\"\"\n prompt = values[\"llm_chain\"].prompt\n if \"agent_scratchpad\" not in prompt.input_variables:\n logger.warning(\n \"`agent_scratchpad` should be a variable in prompt.input_variables.\"\n \" Did not find it, so adding it at the end.\"\n )\n prompt.input_variables.append(\"agent_scratchpad\")\n if isinstance(prompt, PromptTemplate):\n prompt.template += \"\\n{agent_scratchpad}\"\n elif isinstance(prompt, FewShotPromptTemplate):\n prompt.suffix += \"\\n{agent_scratchpad}\"\n else:\n raise ValueError(f\"Got unexpected prompt type {type(prompt)}\")\n return values\n @property\n @abstractmethod\n def observation_prefix(self) -> str:\n \"\"\"Prefix to append the observation with.\"\"\"\n @property\n @abstractmethod\n def llm_prefix(self) -> str:\n \"\"\"Prefix to append the LLM call with.\"\"\"\n[docs] @classmethod\n @abstractmethod\n def create_prompt(cls, tools: Sequence[BaseTool]) -> BasePromptTemplate:\n \"\"\"Create a prompt for this class.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} {"id": "04c091dfe984-10", "text": "\"\"\"Create a prompt for this class.\"\"\"\n @classmethod\n def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:\n \"\"\"Validate that appropriate tools are passed in.\"\"\"\n pass\n @classmethod\n @abstractmethod\n def _get_default_output_parser(cls, **kwargs: Any) -> AgentOutputParser:\n \"\"\"Get default output parser for this class.\"\"\"\n[docs] @classmethod\n def from_llm_and_tools(\n cls,\n llm: BaseLanguageModel,\n tools: Sequence[BaseTool],\n callback_manager: Optional[BaseCallbackManager] = None,\n output_parser: Optional[AgentOutputParser] = None,\n **kwargs: Any,\n ) -> Agent:\n \"\"\"Construct an agent from an LLM and tools.\"\"\"\n cls._validate_tools(tools)\n llm_chain = LLMChain(\n llm=llm,\n prompt=cls.create_prompt(tools),\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n _output_parser = output_parser or cls._get_default_output_parser()\n return cls(\n llm_chain=llm_chain,\n allowed_tools=tool_names,\n output_parser=_output_parser,\n **kwargs,\n )\n[docs] def return_stopped_response(\n self,\n early_stopping_method: str,\n 
intermediate_steps: List[Tuple[AgentAction, str]],\n **kwargs: Any,\n ) -> AgentFinish:\n \"\"\"Return response when agent has been stopped due to max iterations.\"\"\"\n if early_stopping_method == \"force\":\n # `force` just returns a constant string\n return AgentFinish(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} {"id": "04c091dfe984-11", "text": "# `force` just returns a constant string\n return AgentFinish(\n {\"output\": \"Agent stopped due to iteration limit or time limit.\"}, \"\"\n )\n elif early_stopping_method == \"generate\":\n # Generate does one final forward pass\n thoughts = \"\"\n for action, observation in intermediate_steps:\n thoughts += action.log\n thoughts += (\n f\"\\n{self.observation_prefix}{observation}\\n{self.llm_prefix}\"\n )\n # Adding to the previous steps, we now tell the LLM to make a final pred\n thoughts += (\n \"\\n\\nI now need to return a final answer based on the previous steps:\"\n )\n new_inputs = {\"agent_scratchpad\": thoughts, \"stop\": self._stop}\n full_inputs = {**kwargs, **new_inputs}\n full_output = self.llm_chain.predict(**full_inputs)\n # We try to extract a final answer\n parsed_output = self.output_parser.parse(full_output)\n if isinstance(parsed_output, AgentFinish):\n # If we can extract, we send the correct stuff\n return parsed_output\n else:\n # If we can extract, but the tool is not the final tool,\n # we just return the full output\n return AgentFinish({\"output\": full_output}, full_output)\n else:\n raise ValueError(\n \"early_stopping_method should be one of `force` or `generate`, \"\n f\"got {early_stopping_method}\"\n )\n[docs] def tool_run_logging_kwargs(self) -> Dict:\n return {\n \"llm_prefix\": self.llm_prefix,\n \"observation_prefix\": self.observation_prefix,\n }\n[docs]class ExceptionTool(BaseTool):\n name = \"_Exception\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} {"id": "04c091dfe984-12", "text": "}\n[docs]class ExceptionTool(BaseTool):\n name = \"_Exception\"\n description = \"Exception tool\"\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n return query\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n return query\n[docs]class AgentExecutor(Chain):\n \"\"\"Consists of an agent using tools.\"\"\"\n agent: Union[BaseSingleActionAgent, BaseMultiActionAgent]\n \"\"\"The agent to run for creating a plan and determining actions\n to take at each step of the execution loop.\"\"\"\n tools: Sequence[BaseTool]\n \"\"\"The valid tools the agent can call.\"\"\"\n return_intermediate_steps: bool = False\n \"\"\"Whether to return the agent's trajectory of intermediate steps\n at the end in addition to the final output.\"\"\"\n max_iterations: Optional[int] = 15\n \"\"\"The maximum number of steps to take before ending the execution\n loop.\n \n Setting to 'None' could lead to an infinite loop.\"\"\"\n max_execution_time: Optional[float] = None\n \"\"\"The maximum amount of wall clock time to spend in the execution\n loop.\n \"\"\"\n early_stopping_method: str = \"force\"\n \"\"\"The method to use for early stopping if the agent never\n returns `AgentFinish`. 
Either 'force' or 'generate'.\n `\"force\"` returns a string saying that it stopped because it met a\n time or iteration limit.\n \n `\"generate\"` calls the agent's LLM Chain one final time to generate", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} {"id": "04c091dfe984-13", "text": "`\"generate\"` calls the agent's LLM Chain one final time to generate\n a final answer based on the previous steps.\n \"\"\"\n handle_parsing_errors: Union[\n bool, str, Callable[[OutputParserException], str]\n ] = False\n \"\"\"How to handle errors raised by the agent's output parser.\n Defaults to `False`, which raises the error.\n If `True`, the error will be sent back to the LLM as an observation.\n If a string, the string itself will be sent to the LLM as an observation.\n If a callable function, the function will be called with the exception\n as an argument, and the result of that function will be passed to the agent\n as an observation.\n \"\"\"\n[docs] @classmethod\n def from_agent_and_tools(\n cls,\n agent: Union[BaseSingleActionAgent, BaseMultiActionAgent],\n tools: Sequence[BaseTool],\n callback_manager: Optional[BaseCallbackManager] = None,\n **kwargs: Any,\n ) -> AgentExecutor:\n \"\"\"Create from agent and tools.\"\"\"\n return cls(\n agent=agent, tools=tools, callback_manager=callback_manager, **kwargs\n )\n[docs] @root_validator()\n def validate_tools(cls, values: Dict) -> Dict:\n \"\"\"Validate that tools are compatible with agent.\"\"\"\n agent = values[\"agent\"]\n tools = values[\"tools\"]\n allowed_tools = agent.get_allowed_tools()\n if allowed_tools is not None:\n if set(allowed_tools) != set([tool.name for tool in tools]):\n raise ValueError(\n f\"Allowed tools ({allowed_tools}) different than \"
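The `handle_parsing_errors` field documented above accepts three shapes: a bool (`True` sends the parser error back to the LLM as an observation), a fixed string, or a callable that receives the `OutputParserException`. A hedged sketch of the callable form, assuming `initialize_agent` forwards extra keyword arguments to the executor as it does in this version of the library:

.. code-block:: python

    from langchain.agents import AgentType, initialize_agent, load_tools
    from langchain.llms import OpenAI

    llm = OpenAI(temperature=0)
    tools = load_tools(["llm-math"], llm=llm)

    agent = initialize_agent(
        tools,
        llm,
        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
        # Equivalent alternatives: handle_parsing_errors=True, or a fixed
        # string such as "Reformat as Action/Action Input".
        handle_parsing_errors=lambda e: f"Could not parse output: {e}"[:200],
    )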
\"\n \"If you are trying to save the agent, please use the \"\n \"`.save_agent(...)`\"\n )\n[docs] def save_agent(self, file_path: Union[Path, str]) -> None:\n \"\"\"Save the underlying agent.\"\"\"\n return self.agent.save(file_path)\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n return self.agent.input_keys\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the singular output key.\n :meta private:\n \"\"\"\n if self.return_intermediate_steps:\n return self.agent.return_values + [\"intermediate_steps\"]\n else:\n return self.agent.return_values\n[docs] def lookup_tool(self, name: str) -> BaseTool:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} {"id": "04c091dfe984-15", "text": "[docs] def lookup_tool(self, name: str) -> BaseTool:\n \"\"\"Lookup tool by name.\"\"\"\n return {tool.name: tool for tool in self.tools}[name]\n def _should_continue(self, iterations: int, time_elapsed: float) -> bool:\n if self.max_iterations is not None and iterations >= self.max_iterations:\n return False\n if (\n self.max_execution_time is not None\n and time_elapsed >= self.max_execution_time\n ):\n return False\n return True\n def _return(\n self,\n output: AgentFinish,\n intermediate_steps: list,\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n if run_manager:\n run_manager.on_agent_finish(output, color=\"green\", verbose=self.verbose)\n final_output = output.return_values\n if self.return_intermediate_steps:\n final_output[\"intermediate_steps\"] = intermediate_steps\n return final_output\n async def _areturn(\n self,\n output: AgentFinish,\n intermediate_steps: list,\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n if run_manager:\n await run_manager.on_agent_finish(\n output, color=\"green\", verbose=self.verbose\n )\n final_output = output.return_values\n if self.return_intermediate_steps:\n final_output[\"intermediate_steps\"] = intermediate_steps\n return final_output\n def _take_next_step(\n self,\n name_to_tool_map: Dict[str, BaseTool],\n color_mapping: Dict[str, str],\n inputs: Dict[str, str],\n intermediate_steps: List[Tuple[AgentAction, str]],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} {"id": "04c091dfe984-16", "text": "intermediate_steps: List[Tuple[AgentAction, str]],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:\n \"\"\"Take a single step in the thought-action-observation loop.\n Override this to take control of how the agent makes and acts on choices.\n \"\"\"\n try:\n # Call the LLM to see what to do.\n output = self.agent.plan(\n intermediate_steps,\n callbacks=run_manager.get_child() if run_manager else None,\n **inputs,\n )\n except OutputParserException as e:\n if isinstance(self.handle_parsing_errors, bool):\n raise_error = not self.handle_parsing_errors\n else:\n raise_error = False\n if raise_error:\n raise e\n text = str(e)\n if isinstance(self.handle_parsing_errors, bool):\n if e.send_to_llm:\n observation = str(e.observation)\n text = str(e.llm_output)\n else:\n observation = \"Invalid or incomplete response\"\n elif isinstance(self.handle_parsing_errors, str):\n observation = self.handle_parsing_errors\n elif callable(self.handle_parsing_errors):\n observation = self.handle_parsing_errors(e)\n else:\n raise ValueError(\"Got unexpected type of 
`handle_parsing_errors`\")\n output = AgentAction(\"_Exception\", observation, text)\n if run_manager:\n run_manager.on_agent_action(output, color=\"green\")\n tool_run_kwargs = self.agent.tool_run_logging_kwargs()\n observation = ExceptionTool().run(\n output.tool_input,\n verbose=self.verbose,\n color=None,\n callbacks=run_manager.get_child() if run_manager else None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} {"id": "04c091dfe984-17", "text": "color=None,\n callbacks=run_manager.get_child() if run_manager else None,\n **tool_run_kwargs,\n )\n return [(output, observation)]\n # If the tool chosen is the finishing tool, then we end and return.\n if isinstance(output, AgentFinish):\n return output\n actions: List[AgentAction]\n if isinstance(output, AgentAction):\n actions = [output]\n else:\n actions = output\n result = []\n for agent_action in actions:\n if run_manager:\n run_manager.on_agent_action(agent_action, color=\"green\")\n # Otherwise we lookup the tool\n if agent_action.tool in name_to_tool_map:\n tool = name_to_tool_map[agent_action.tool]\n return_direct = tool.return_direct\n color = color_mapping[agent_action.tool]\n tool_run_kwargs = self.agent.tool_run_logging_kwargs()\n if return_direct:\n tool_run_kwargs[\"llm_prefix\"] = \"\"\n # We then call the tool on the tool input to get an observation\n observation = tool.run(\n agent_action.tool_input,\n verbose=self.verbose,\n color=color,\n callbacks=run_manager.get_child() if run_manager else None,\n **tool_run_kwargs,\n )\n else:\n tool_run_kwargs = self.agent.tool_run_logging_kwargs()\n observation = InvalidTool().run(\n agent_action.tool,\n verbose=self.verbose,\n color=None,\n callbacks=run_manager.get_child() if run_manager else None,\n **tool_run_kwargs,\n )\n result.append((agent_action, observation))\n return result\n async def _atake_next_step(\n self,\n name_to_tool_map: Dict[str, BaseTool],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} {"id": "04c091dfe984-18", "text": "self,\n name_to_tool_map: Dict[str, BaseTool],\n color_mapping: Dict[str, str],\n inputs: Dict[str, str],\n intermediate_steps: List[Tuple[AgentAction, str]],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:\n \"\"\"Take a single step in the thought-action-observation loop.\n Override this to take control of how the agent makes and acts on choices.\n \"\"\"\n try:\n # Call the LLM to see what to do.\n output = await self.agent.aplan(\n intermediate_steps,\n callbacks=run_manager.get_child() if run_manager else None,\n **inputs,\n )\n except OutputParserException as e:\n if isinstance(self.handle_parsing_errors, bool):\n raise_error = not self.handle_parsing_errors\n else:\n raise_error = False\n if raise_error:\n raise e\n text = str(e)\n if isinstance(self.handle_parsing_errors, bool):\n if e.send_to_llm:\n observation = str(e.observation)\n text = str(e.llm_output)\n else:\n observation = \"Invalid or incomplete response\"\n elif isinstance(self.handle_parsing_errors, str):\n observation = self.handle_parsing_errors\n elif callable(self.handle_parsing_errors):\n observation = self.handle_parsing_errors(e)\n else:\n raise ValueError(\"Got unexpected type of `handle_parsing_errors`\")\n output = AgentAction(\"_Exception\", observation, text)\n tool_run_kwargs = self.agent.tool_run_logging_kwargs()\n observation = await ExceptionTool().arun(\n output.tool_input,\n 
verbose=self.verbose,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} {"id": "04c091dfe984-19", "text": "output.tool_input,\n verbose=self.verbose,\n color=None,\n callbacks=run_manager.get_child() if run_manager else None,\n **tool_run_kwargs,\n )\n return [(output, observation)]\n # If the tool chosen is the finishing tool, then we end and return.\n if isinstance(output, AgentFinish):\n return output\n actions: List[AgentAction]\n if isinstance(output, AgentAction):\n actions = [output]\n else:\n actions = output\n async def _aperform_agent_action(\n agent_action: AgentAction,\n ) -> Tuple[AgentAction, str]:\n if run_manager:\n await run_manager.on_agent_action(\n agent_action, verbose=self.verbose, color=\"green\"\n )\n # Otherwise we lookup the tool\n if agent_action.tool in name_to_tool_map:\n tool = name_to_tool_map[agent_action.tool]\n return_direct = tool.return_direct\n color = color_mapping[agent_action.tool]\n tool_run_kwargs = self.agent.tool_run_logging_kwargs()\n if return_direct:\n tool_run_kwargs[\"llm_prefix\"] = \"\"\n # We then call the tool on the tool input to get an observation\n observation = await tool.arun(\n agent_action.tool_input,\n verbose=self.verbose,\n color=color,\n callbacks=run_manager.get_child() if run_manager else None,\n **tool_run_kwargs,\n )\n else:\n tool_run_kwargs = self.agent.tool_run_logging_kwargs()\n observation = await InvalidTool().arun(\n agent_action.tool,\n verbose=self.verbose,\n color=None,\n callbacks=run_manager.get_child() if run_manager else None,\n **tool_run_kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} {"id": "04c091dfe984-20", "text": "**tool_run_kwargs,\n )\n return agent_action, observation\n # Use asyncio.gather to run multiple tool.arun() calls concurrently\n result = await asyncio.gather(\n *[_aperform_agent_action(agent_action) for agent_action in actions]\n )\n return list(result)\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"Run text through and get agent response.\"\"\"\n # Construct a mapping of tool name to tool for easy lookup\n name_to_tool_map = {tool.name: tool for tool in self.tools}\n # We construct a mapping from each tool to a color, used for logging.\n color_mapping = get_color_mapping(\n [tool.name for tool in self.tools], excluded_colors=[\"green\", \"red\"]\n )\n intermediate_steps: List[Tuple[AgentAction, str]] = []\n # Let's start tracking the number of iterations and time elapsed\n iterations = 0\n time_elapsed = 0.0\n start_time = time.time()\n # We now enter the agent loop (until it returns something).\n while self._should_continue(iterations, time_elapsed):\n next_step_output = self._take_next_step(\n name_to_tool_map,\n color_mapping,\n inputs,\n intermediate_steps,\n run_manager=run_manager,\n )\n if isinstance(next_step_output, AgentFinish):\n return self._return(\n next_step_output, intermediate_steps, run_manager=run_manager\n )\n intermediate_steps.extend(next_step_output)\n if len(next_step_output) == 1:\n next_step_action = next_step_output[0]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} {"id": "04c091dfe984-21", "text": "next_step_action = next_step_output[0]\n # See if tool should return directly\n tool_return = self._get_tool_return(next_step_action)\n if tool_return is not None:\n return self._return(\n tool_return, intermediate_steps, 
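Both `_take_next_step` and `_atake_next_step` branch on `handle_parsing_errors`, which may be a bool, a fixed string, or a callable. A hedged sketch of the callable form, again assuming `agent` and `tools` exist:

.. code-block:: python

    from langchain.agents import AgentExecutor
    from langchain.schema import OutputParserException

    def on_parse_error(error: OutputParserException) -> str:
        # The returned string becomes the observation attached to the
        # synthetic "_Exception" action, so the LLM sees it next turn.
        return f"Could not parse your reply ({error}); use the expected format."

    executor = AgentExecutor.from_agent_and_tools(
        agent=agent,
        tools=tools,
        handle_parsing_errors=on_parse_error,  # True or a plain string also work
    )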
run_manager=run_manager\n )\n iterations += 1\n time_elapsed = time.time() - start_time\n output = self.agent.return_stopped_response(\n self.early_stopping_method, intermediate_steps, **inputs\n )\n return self._return(output, intermediate_steps, run_manager=run_manager)\n async def _acall(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n \"\"\"Run text through and get agent response.\"\"\"\n # Construct a mapping of tool name to tool for easy lookup\n name_to_tool_map = {tool.name: tool for tool in self.tools}\n # We construct a mapping from each tool to a color, used for logging.\n color_mapping = get_color_mapping(\n [tool.name for tool in self.tools], excluded_colors=[\"green\"]\n )\n intermediate_steps: List[Tuple[AgentAction, str]] = []\n # Let's start tracking the number of iterations and time elapsed\n iterations = 0\n time_elapsed = 0.0\n start_time = time.time()\n # We now enter the agent loop (until it returns something).\n async with asyncio_timeout(self.max_execution_time):\n try:\n while self._should_continue(iterations, time_elapsed):\n next_step_output = await self._atake_next_step(\n name_to_tool_map,\n color_mapping,\n inputs,\n intermediate_steps,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} {"id": "04c091dfe984-22", "text": "color_mapping,\n inputs,\n intermediate_steps,\n run_manager=run_manager,\n )\n if isinstance(next_step_output, AgentFinish):\n return await self._areturn(\n next_step_output,\n intermediate_steps,\n run_manager=run_manager,\n )\n intermediate_steps.extend(next_step_output)\n if len(next_step_output) == 1:\n next_step_action = next_step_output[0]\n # See if tool should return directly\n tool_return = self._get_tool_return(next_step_action)\n if tool_return is not None:\n return await self._areturn(\n tool_return, intermediate_steps, run_manager=run_manager\n )\n iterations += 1\n time_elapsed = time.time() - start_time\n output = self.agent.return_stopped_response(\n self.early_stopping_method, intermediate_steps, **inputs\n )\n return await self._areturn(\n output, intermediate_steps, run_manager=run_manager\n )\n except TimeoutError:\n # stop early when interrupted by the async timeout\n output = self.agent.return_stopped_response(\n self.early_stopping_method, intermediate_steps, **inputs\n )\n return await self._areturn(\n output, intermediate_steps, run_manager=run_manager\n )\n def _get_tool_return(\n self, next_step_output: Tuple[AgentAction, str]\n ) -> Optional[AgentFinish]:\n \"\"\"Check if the tool is a returning tool.\"\"\"\n agent_action, observation = next_step_output\n name_to_tool_map = {tool.name: tool for tool in self.tools}\n # Invalid tools won't be in the map, so we return False.\n if agent_action.tool in name_to_tool_map:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} {"id": "04c091dfe984-23", "text": "if agent_action.tool in name_to_tool_map:\n if name_to_tool_map[agent_action.tool].return_direct:\n return AgentFinish(\n {self.agent.return_values[0]: observation},\n \"\",\n )\n return None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} {"id": "e2416bbe157c-0", "text": "Source code for langchain.agents.loading\n\"\"\"Functionality for loading agents.\"\"\"\nimport json\nimport logging\nfrom pathlib import Path\nfrom typing import Any, List, Optional, Union\nimport yaml\nfrom langchain.agents.agent import 
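`_should_continue` and `return_stopped_response` implement the iteration and wall-clock budget used by both `_call` and `_acall`. A configuration sketch with the field names shown in this class; note that "generate" is only meaningful for the classic `Agent` subclasses, which can compose a final answer from the scratchpad:

.. code-block:: python

    from langchain.agents import AgentExecutor

    # Sketch only: `agent` and `tools` are assumed to exist already.
    executor = AgentExecutor.from_agent_and_tools(
        agent=agent,
        tools=tools,
        max_iterations=5,                  # checked by _should_continue before each step
        max_execution_time=30.0,           # seconds; _acall additionally uses asyncio_timeout
        early_stopping_method="generate",  # "force" (the default) returns a canned stop answer
    )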
BaseMultiActionAgent, BaseSingleActionAgent\nfrom langchain.agents.tools import Tool\nfrom langchain.agents.types import AGENT_TO_CLASS\nfrom langchain.chains.loading import load_chain, load_chain_from_config\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.utilities.loading import try_load_from_hub\nlogger = logging.getLogger(__file__)\nURL_BASE = \"https://raw.githubusercontent.com/hwchase17/langchain-hub/master/agents/\"\ndef _load_agent_from_tools(\n config: dict, llm: BaseLanguageModel, tools: List[Tool], **kwargs: Any\n) -> Union[BaseSingleActionAgent, BaseMultiActionAgent]:\n config_type = config.pop(\"_type\")\n if config_type not in AGENT_TO_CLASS:\n raise ValueError(f\"Loading {config_type} agent not supported\")\n agent_cls = AGENT_TO_CLASS[config_type]\n combined_config = {**config, **kwargs}\n return agent_cls.from_llm_and_tools(llm, tools, **combined_config)\n[docs]def load_agent_from_config(\n config: dict,\n llm: Optional[BaseLanguageModel] = None,\n tools: Optional[List[Tool]] = None,\n **kwargs: Any,\n) -> Union[BaseSingleActionAgent, BaseMultiActionAgent]:\n \"\"\"Load an agent from a config dict.\"\"\"\n if \"_type\" not in config:\n raise ValueError(\"Must specify an agent type in the config\")\n load_from_tools = config.pop(\"load_from_llm_and_tools\", False)\n if load_from_tools:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/loading.html"} {"id": "e2416bbe157c-1", "text": "if load_from_tools:\n if llm is None:\n raise ValueError(\n \"If `load_from_llm_and_tools` is set to True, \"\n \"then LLM must be provided\"\n )\n if tools is None:\n raise ValueError(\n \"If `load_from_llm_and_tools` is set to True, \"\n \"then tools must be provided\"\n )\n return _load_agent_from_tools(config, llm, tools, **kwargs)\n config_type = config.pop(\"_type\")\n if config_type not in AGENT_TO_CLASS:\n raise ValueError(f\"Loading {config_type} agent not supported\")\n agent_cls = AGENT_TO_CLASS[config_type]\n if \"llm_chain\" in config:\n config[\"llm_chain\"] = load_chain_from_config(config.pop(\"llm_chain\"))\n elif \"llm_chain_path\" in config:\n config[\"llm_chain\"] = load_chain(config.pop(\"llm_chain_path\"))\n else:\n raise ValueError(\"One of `llm_chain` and `llm_chain_path` should be specified.\")\n if \"output_parser\" in config:\n logger.warning(\n \"Currently loading output parsers on agent is not supported, \"\n \"will just use the default one.\"\n )\n del config[\"output_parser\"]\n combined_config = {**config, **kwargs}\n return agent_cls(**combined_config) # type: ignore\n[docs]def load_agent(\n path: Union[str, Path], **kwargs: Any\n) -> Union[BaseSingleActionAgent, BaseMultiActionAgent]:\n \"\"\"Unified method for loading an agent from LangChainHub or the local filesystem.\"\"\"\n if hub_result := try_load_from_hub(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/loading.html"} {"id": "e2416bbe157c-2", "text": "if hub_result := try_load_from_hub(\n path, _load_agent_from_file, \"agents\", {\"json\", \"yaml\"}\n ):\n return hub_result\n else:\n return _load_agent_from_file(path, **kwargs)\ndef _load_agent_from_file(\n file: Union[str, Path], **kwargs: Any\n) -> Union[BaseSingleActionAgent, BaseMultiActionAgent]:\n \"\"\"Load agent from file.\"\"\"\n # Convert file to Path object.\n if isinstance(file, str):\n file_path = Path(file)\n else:\n file_path = file\n # Load from either json or yaml.\n if file_path.suffix == \".json\":\n with open(file_path) as f:\n config = json.load(f)\n elif 
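`_load_agent_from_file` accepts `.json` or `.yaml` files, so a save/load round trip pairs naturally with `AgentExecutor.save_agent` shown earlier. A hedged sketch (the file name is hypothetical, and `executor`/`tools` are assumed to exist):

.. code-block:: python

    from langchain.agents import AgentExecutor, load_agent

    # Sketch only: `executor` is an existing AgentExecutor and `tools` its tools.
    executor.save_agent("my_agent.json")  # serializes just the inner agent
    agent = load_agent("my_agent.json")   # rebuilds it, including its llm_chain
    restored = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools)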
file_path.suffix == \".yaml\":\n with open(file_path, \"r\") as f:\n config = yaml.safe_load(f)\n else:\n raise ValueError(\"File type must be json or yaml\")\n # Load the agent from the config now.\n return load_agent_from_config(config, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/loading.html"} {"id": "c373ca0cb0d6-0", "text": "Source code for langchain.agents.initialize\n\"\"\"Load agent.\"\"\"\nfrom typing import Any, Optional, Sequence\nfrom langchain.agents.agent import AgentExecutor\nfrom langchain.agents.agent_types import AgentType\nfrom langchain.agents.loading import AGENT_TO_CLASS, load_agent\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.tools.base import BaseTool\n[docs]def initialize_agent(\n tools: Sequence[BaseTool],\n llm: BaseLanguageModel,\n agent: Optional[AgentType] = None,\n callback_manager: Optional[BaseCallbackManager] = None,\n agent_path: Optional[str] = None,\n agent_kwargs: Optional[dict] = None,\n *,\n tags: Optional[Sequence[str]] = None,\n **kwargs: Any,\n) -> AgentExecutor:\n \"\"\"Load an agent executor given tools and LLM.\n Args:\n tools: List of tools this agent has access to.\n llm: Language model to use as the agent.\n agent: Agent type to use. If None and agent_path is also None, will default to\n AgentType.ZERO_SHOT_REACT_DESCRIPTION.\n callback_manager: CallbackManager to use. Global callback manager is used if\n not provided. Defaults to None.\n agent_path: Path to serialized agent to use.\n agent_kwargs: Additional key word arguments to pass to the underlying agent\n tags: Tags to apply to the traced runs.\n **kwargs: Additional key word arguments passed to the agent executor\n Returns:\n An agent executor\n \"\"\"\n tags_ = list(tags) if tags else []\n if agent is None and agent_path is None:\n agent = AgentType.ZERO_SHOT_REACT_DESCRIPTION", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/initialize.html"} {"id": "c373ca0cb0d6-1", "text": "agent = AgentType.ZERO_SHOT_REACT_DESCRIPTION\n if agent is not None and agent_path is not None:\n raise ValueError(\n \"Both `agent` and `agent_path` are specified, \"\n \"but at most only one should be.\"\n )\n if agent is not None:\n if agent not in AGENT_TO_CLASS:\n raise ValueError(\n f\"Got unknown agent type: {agent}. 
\"\n f\"Valid types are: {AGENT_TO_CLASS.keys()}.\"\n )\n tags_.append(agent.value if isinstance(agent, AgentType) else agent)\n agent_cls = AGENT_TO_CLASS[agent]\n agent_kwargs = agent_kwargs or {}\n agent_obj = agent_cls.from_llm_and_tools(\n llm, tools, callback_manager=callback_manager, **agent_kwargs\n )\n elif agent_path is not None:\n agent_obj = load_agent(\n agent_path, llm=llm, tools=tools, callback_manager=callback_manager\n )\n try:\n # TODO: Add tags from the serialized object directly.\n tags_.append(agent_obj._agent_type)\n except NotImplementedError:\n pass\n else:\n raise ValueError(\n \"Somehow both `agent` and `agent_path` are None, \"\n \"this should never happen.\"\n )\n return AgentExecutor.from_agent_and_tools(\n agent=agent_obj,\n tools=tools,\n callback_manager=callback_manager,\n tags=tags_,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/initialize.html"} {"id": "0b8b459c8e4c-0", "text": "Source code for langchain.agents.openai_functions_agent.base\n\"\"\"Module implements an agent that uses OpenAI's APIs function enabled API.\"\"\"\nimport json\nfrom dataclasses import dataclass\nfrom json import JSONDecodeError\nfrom typing import Any, List, Optional, Sequence, Tuple, Union\nfrom pydantic import root_validator\nfrom langchain.agents import BaseSingleActionAgent\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.callbacks.manager import Callbacks\nfrom langchain.chat_models.openai import ChatOpenAI\nfrom langchain.prompts.chat import (\n BaseMessagePromptTemplate,\n ChatPromptTemplate,\n HumanMessagePromptTemplate,\n MessagesPlaceholder,\n)\nfrom langchain.schema import (\n AgentAction,\n AgentFinish,\n BasePromptTemplate,\n OutputParserException,\n)\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.schema.messages import (\n AIMessage,\n BaseMessage,\n FunctionMessage,\n SystemMessage,\n)\nfrom langchain.tools import BaseTool\nfrom langchain.tools.convert_to_openai import format_tool_to_openai_function\n@dataclass\nclass _FunctionsAgentAction(AgentAction):\n message_log: List[BaseMessage]\ndef _convert_agent_action_to_messages(\n agent_action: AgentAction, observation: str\n) -> List[BaseMessage]:\n \"\"\"Convert an agent action to a message.\n This code is used to reconstruct the original AI message from the agent action.\n Args:\n agent_action: Agent action to convert.\n Returns:\n AIMessage that corresponds to the original tool invocation.\n \"\"\"\n if isinstance(agent_action, _FunctionsAgentAction):\n return agent_action.message_log + [\n _create_function_message(agent_action, observation)\n ]\n else:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/openai_functions_agent/base.html"} {"id": "0b8b459c8e4c-1", "text": "_create_function_message(agent_action, observation)\n ]\n else:\n return [AIMessage(content=agent_action.log)]\ndef _create_function_message(\n agent_action: AgentAction, observation: str\n) -> FunctionMessage:\n \"\"\"Convert agent action and observation into a function message.\n Args:\n agent_action: the tool invocation request from the agent\n observation: the result of the tool invocation\n Returns:\n FunctionMessage that corresponds to the original tool invocation\n \"\"\"\n if not isinstance(observation, str):\n try:\n content = json.dumps(observation, ensure_ascii=False)\n except Exception:\n content = str(observation)\n else:\n content = observation\n return FunctionMessage(\n name=agent_action.tool,\n 
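Putting `initialize_agent` to work looks roughly like this (`tools` is an assumed list of `BaseTool` instances):

.. code-block:: python

    from langchain.agents import AgentType, initialize_agent
    from langchain.llms import OpenAI

    executor = initialize_agent(
        tools,
        OpenAI(temperature=0),
        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,  # also the default when unset
        tags=["demo"],  # merged with the agent-type tag, per the body above
        verbose=True,
    )
    executor.run("What is 2 + 2?")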
content=content,\n )\ndef _format_intermediate_steps(\n intermediate_steps: List[Tuple[AgentAction, str]],\n) -> List[BaseMessage]:\n \"\"\"Format intermediate steps.\n Args:\n intermediate_steps: Steps the LLM has taken to date, along with observations\n Returns:\n list of messages to send to the LLM for the next prediction\n \"\"\"\n messages = []\n for intermediate_step in intermediate_steps:\n agent_action, observation = intermediate_step\n messages.extend(_convert_agent_action_to_messages(agent_action, observation))\n return messages\ndef _parse_ai_message(message: BaseMessage) -> Union[AgentAction, AgentFinish]:\n \"\"\"Parse an AI message.\"\"\"\n if not isinstance(message, AIMessage):\n raise TypeError(f\"Expected an AI message got {type(message)}\")\n function_call = message.additional_kwargs.get(\"function_call\", {})\n if function_call:\n function_name = function_call[\"name\"]\n try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/openai_functions_agent/base.html"} {"id": "0b8b459c8e4c-2", "text": "if function_call:\n function_name = function_call[\"name\"]\n try:\n _tool_input = json.loads(function_call[\"arguments\"])\n except JSONDecodeError:\n raise OutputParserException(\n f\"Could not parse tool input: {function_call} because \"\n f\"the `arguments` is not valid JSON.\"\n )\n # HACK HACK HACK:\n # The code that encodes tool input into Open AI uses a special variable\n # name called `__arg1` to handle old style tools that do not expose a\n # schema and expect a single string argument as an input.\n # We unpack the argument here if it exists.\n # Open AI does not support passing in a JSON array as an argument.\n if \"__arg1\" in _tool_input:\n tool_input = _tool_input[\"__arg1\"]\n else:\n tool_input = _tool_input\n content_msg = \"responded: {content}\\n\" if message.content else \"\\n\"\n return _FunctionsAgentAction(\n tool=function_name,\n tool_input=tool_input,\n log=f\"\\nInvoking: `{function_name}` with `{tool_input}`\\n{content_msg}\\n\",\n message_log=[message],\n )\n return AgentFinish(return_values={\"output\": message.content}, log=message.content)\n[docs]class OpenAIFunctionsAgent(BaseSingleActionAgent):\n \"\"\"An Agent driven by OpenAIs function powered API.\n Args:\n llm: This should be an instance of ChatOpenAI, specifically a model\n that supports using `functions`.\n tools: The tools this agent has access to.\n prompt: The prompt for this agent, should support agent_scratchpad as one", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/openai_functions_agent/base.html"} {"id": "0b8b459c8e4c-3", "text": "prompt: The prompt for this agent, should support agent_scratchpad as one\n of the variables. 
For an easy way to construct this prompt, use\n `OpenAIFunctionsAgent.create_prompt(...)`\n \"\"\"\n llm: BaseLanguageModel\n tools: Sequence[BaseTool]\n prompt: BasePromptTemplate\n[docs] def get_allowed_tools(self) -> List[str]:\n \"\"\"Get allowed tools.\"\"\"\n return list([t.name for t in self.tools])\n[docs] @root_validator\n def validate_llm(cls, values: dict) -> dict:\n if not isinstance(values[\"llm\"], ChatOpenAI):\n raise ValueError(\"Only supported with ChatOpenAI models.\")\n return values\n[docs] @root_validator\n def validate_prompt(cls, values: dict) -> dict:\n prompt: BasePromptTemplate = values[\"prompt\"]\n if \"agent_scratchpad\" not in prompt.input_variables:\n raise ValueError(\n \"`agent_scratchpad` should be one of the variables in the prompt, \"\n f\"got {prompt.input_variables}\"\n )\n return values\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Get input keys. Input refers to user input here.\"\"\"\n return [\"input\"]\n @property\n def functions(self) -> List[dict]:\n return [dict(format_tool_to_openai_function(t)) for t in self.tools]\n[docs] def plan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[AgentAction, AgentFinish]:\n \"\"\"Given input, decide what to do.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/openai_functions_agent/base.html"} {"id": "0b8b459c8e4c-4", "text": "\"\"\"Given input, decide what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date, along with observations\n **kwargs: User inputs.\n Returns:\n Action specifying what tool to use.\n \"\"\"\n agent_scratchpad = _format_intermediate_steps(intermediate_steps)\n selected_inputs = {\n k: kwargs[k] for k in self.prompt.input_variables if k != \"agent_scratchpad\"\n }\n full_inputs = dict(**selected_inputs, agent_scratchpad=agent_scratchpad)\n prompt = self.prompt.format_prompt(**full_inputs)\n messages = prompt.to_messages()\n predicted_message = self.llm.predict_messages(\n messages, functions=self.functions, callbacks=callbacks\n )\n agent_decision = _parse_ai_message(predicted_message)\n return agent_decision\n[docs] async def aplan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[AgentAction, AgentFinish]:\n \"\"\"Given input, decide what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n **kwargs: User inputs.\n Returns:\n Action specifying what tool to use.\n \"\"\"\n agent_scratchpad = _format_intermediate_steps(intermediate_steps)\n selected_inputs = {\n k: kwargs[k] for k in self.prompt.input_variables if k != \"agent_scratchpad\"\n }\n full_inputs = dict(**selected_inputs, agent_scratchpad=agent_scratchpad)\n prompt = self.prompt.format_prompt(**full_inputs)\n messages = prompt.to_messages()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/openai_functions_agent/base.html"} {"id": "0b8b459c8e4c-5", "text": "prompt = self.prompt.format_prompt(**full_inputs)\n messages = prompt.to_messages()\n predicted_message = await self.llm.apredict_messages(\n messages, functions=self.functions, callbacks=callbacks\n )\n agent_decision = _parse_ai_message(predicted_message)\n return agent_decision\n[docs] @classmethod\n def create_prompt(\n cls,\n system_message: Optional[SystemMessage] = SystemMessage(\n content=\"You are a helpful AI assistant.\"\n ),\n extra_prompt_messages: 
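A usage sketch for `OpenAIFunctionsAgent`; its `from_llm_and_tools` constructor appears further below, `tools` is assumed, and `validate_llm` requires a `ChatOpenAI` model:

.. code-block:: python

    from langchain.agents import AgentExecutor
    from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent
    from langchain.chat_models import ChatOpenAI

    llm = ChatOpenAI(temperature=0)  # must be a function-calling-capable model
    agent = OpenAIFunctionsAgent.from_llm_and_tools(llm, tools)
    executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools)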
Optional[List[BaseMessagePromptTemplate]] = None,\n ) -> BasePromptTemplate:\n \"\"\"Create prompt for this agent.\n Args:\n system_message: Message to use as the system message that will be the\n first in the prompt.\n extra_prompt_messages: Prompt messages that will be placed between the\n system message and the new human input.\n Returns:\n A prompt template to pass into this agent.\n \"\"\"\n _prompts = extra_prompt_messages or []\n messages: List[Union[BaseMessagePromptTemplate, BaseMessage]]\n if system_message:\n messages = [system_message]\n else:\n messages = []\n messages.extend(\n [\n *_prompts,\n HumanMessagePromptTemplate.from_template(\"{input}\"),\n MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n ]\n )\n return ChatPromptTemplate(messages=messages)\n[docs] @classmethod\n def from_llm_and_tools(\n cls,\n llm: BaseLanguageModel,\n tools: Sequence[BaseTool],\n callback_manager: Optional[BaseCallbackManager] = None,\n extra_prompt_messages: Optional[List[BaseMessagePromptTemplate]] = None,\n system_message: Optional[SystemMessage] = SystemMessage(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/openai_functions_agent/base.html"} {"id": "0b8b459c8e4c-6", "text": "system_message: Optional[SystemMessage] = SystemMessage(\n content=\"You are a helpful AI assistant.\"\n ),\n **kwargs: Any,\n ) -> BaseSingleActionAgent:\n \"\"\"Construct an agent from an LLM and tools.\"\"\"\n if not isinstance(llm, ChatOpenAI):\n raise ValueError(\"Only supported with ChatOpenAI models.\")\n prompt = cls.create_prompt(\n extra_prompt_messages=extra_prompt_messages,\n system_message=system_message,\n )\n return cls(\n llm=llm,\n prompt=prompt,\n tools=tools,\n callback_manager=callback_manager,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/openai_functions_agent/base.html"} {"id": "edad93404720-0", "text": "Source code for langchain.agents.openai_functions_multi_agent.base\n\"\"\"Module implements an agent that uses OpenAI's APIs function enabled API.\"\"\"\nimport json\nfrom dataclasses import dataclass\nfrom json import JSONDecodeError\nfrom typing import Any, List, Optional, Sequence, Tuple, Union\nfrom pydantic import root_validator\nfrom langchain.agents import BaseMultiActionAgent\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.callbacks.manager import Callbacks\nfrom langchain.chat_models.openai import ChatOpenAI\nfrom langchain.prompts.chat import (\n BaseMessagePromptTemplate,\n ChatPromptTemplate,\n HumanMessagePromptTemplate,\n MessagesPlaceholder,\n)\nfrom langchain.schema import (\n AgentAction,\n AgentFinish,\n BasePromptTemplate,\n OutputParserException,\n)\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.schema.messages import (\n AIMessage,\n BaseMessage,\n FunctionMessage,\n SystemMessage,\n)\nfrom langchain.tools import BaseTool\n@dataclass\nclass _FunctionsAgentAction(AgentAction):\n message_log: List[BaseMessage]\ndef _convert_agent_action_to_messages(\n agent_action: AgentAction, observation: str\n) -> List[BaseMessage]:\n \"\"\"Convert an agent action to a message.\n This code is used to reconstruct the original AI message from the agent action.\n Args:\n agent_action: Agent action to convert.\n Returns:\n AIMessage that corresponds to the original tool invocation.\n \"\"\"\n if isinstance(agent_action, _FunctionsAgentAction):\n return agent_action.message_log + [\n _create_function_message(agent_action, observation)\n ]\n 
else:\n return [AIMessage(content=agent_action.log)]\ndef _create_function_message(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/openai_functions_multi_agent/base.html"} {"id": "edad93404720-1", "text": "return [AIMessage(content=agent_action.log)]\ndef _create_function_message(\n agent_action: AgentAction, observation: str\n) -> FunctionMessage:\n \"\"\"Convert agent action and observation into a function message.\n Args:\n agent_action: the tool invocation request from the agent\n observation: the result of the tool invocation\n Returns:\n FunctionMessage that corresponds to the original tool invocation\n \"\"\"\n if not isinstance(observation, str):\n try:\n content = json.dumps(observation, ensure_ascii=False)\n except Exception:\n content = str(observation)\n else:\n content = observation\n return FunctionMessage(\n name=agent_action.tool,\n content=content,\n )\ndef _format_intermediate_steps(\n intermediate_steps: List[Tuple[AgentAction, str]],\n) -> List[BaseMessage]:\n \"\"\"Format intermediate steps.\n Args:\n intermediate_steps: Steps the LLM has taken to date, along with observations\n Returns:\n list of messages to send to the LLM for the next prediction\n \"\"\"\n messages = []\n for intermediate_step in intermediate_steps:\n agent_action, observation = intermediate_step\n messages.extend(_convert_agent_action_to_messages(agent_action, observation))\n return messages\ndef _parse_ai_message(message: BaseMessage) -> Union[List[AgentAction], AgentFinish]:\n \"\"\"Parse an AI message.\"\"\"\n if not isinstance(message, AIMessage):\n raise TypeError(f\"Expected an AI message got {type(message)}\")\n function_call = message.additional_kwargs.get(\"function_call\", {})\n if function_call:\n try:\n tools = json.loads(function_call[\"arguments\"])[\"actions\"]\n except JSONDecodeError:\n raise OutputParserException(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/openai_functions_multi_agent/base.html"} {"id": "edad93404720-2", "text": "except JSONDecodeError:\n raise OutputParserException(\n f\"Could not parse tool input: {function_call} because \"\n f\"the `arguments` is not valid JSON.\"\n )\n final_tools: List[AgentAction] = []\n for tool_schema in tools:\n _tool_input = tool_schema[\"action\"]\n function_name = tool_schema[\"action_name\"]\n # HACK HACK HACK:\n # The code that encodes tool input into Open AI uses a special variable\n # name called `__arg1` to handle old style tools that do not expose a\n # schema and expect a single string argument as an input.\n # We unpack the argument here if it exists.\n # Open AI does not support passing in a JSON array as an argument.\n if \"__arg1\" in _tool_input:\n tool_input = _tool_input[\"__arg1\"]\n else:\n tool_input = _tool_input\n content_msg = \"responded: {content}\\n\" if message.content else \"\\n\"\n log = f\"\\nInvoking: `{function_name}` with `{tool_input}`\\n{content_msg}\\n\"\n _tool = _FunctionsAgentAction(\n tool=function_name,\n tool_input=tool_input,\n log=log,\n message_log=[message],\n )\n final_tools.append(_tool)\n return final_tools\n return AgentFinish(return_values={\"output\": message.content}, log=message.content)\n[docs]class OpenAIMultiFunctionsAgent(BaseMultiActionAgent):\n \"\"\"An Agent driven by OpenAIs function powered API.\n Args:\n llm: This should be an instance of ChatOpenAI, specifically a model\n that supports using `functions`.\n tools: The tools this agent has access to.", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/agents/openai_functions_multi_agent/base.html"} {"id": "edad93404720-3", "text": "that supports using `functions`.\n tools: The tools this agent has access to.\n prompt: The prompt for this agent, should support agent_scratchpad as one\n of the variables. For an easy way to construct this prompt, use\n `OpenAIMultiFunctionsAgent.create_prompt(...)`\n \"\"\"\n llm: BaseLanguageModel\n tools: Sequence[BaseTool]\n prompt: BasePromptTemplate\n[docs] def get_allowed_tools(self) -> List[str]:\n \"\"\"Get allowed tools.\"\"\"\n return [t.name for t in self.tools]\n[docs] @root_validator\n def validate_llm(cls, values: dict) -> dict:\n if not isinstance(values[\"llm\"], ChatOpenAI):\n raise ValueError(\"Only supported with ChatOpenAI models.\")\n return values\n[docs] @root_validator\n def validate_prompt(cls, values: dict) -> dict:\n prompt: BasePromptTemplate = values[\"prompt\"]\n if \"agent_scratchpad\" not in prompt.input_variables:\n raise ValueError(\n \"`agent_scratchpad` should be one of the variables in the prompt, \"\n f\"got {prompt.input_variables}\"\n )\n return values\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Get input keys. Input refers to user input here.\"\"\"\n return [\"input\"]\n @property\n def functions(self) -> List[dict]:\n enum_vals = [t.name for t in self.tools]\n tool_selection = {\n # OpenAI functions returns a single tool invocation\n # Here we force the single tool invocation it returns to\n # itself be a list of tool invocations. We do this by constructing\n # a new tool that has one argument which is a list of tools", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/openai_functions_multi_agent/base.html"} {"id": "edad93404720-4", "text": "# a new tool that has one argument which is a list of tools\n # to use.\n \"name\": \"tool_selection\",\n \"description\": \"A list of actions to take.\",\n \"parameters\": {\n \"title\": \"tool_selection\",\n \"description\": \"A list of actions to take.\",\n \"type\": \"object\",\n \"properties\": {\n \"actions\": {\n \"title\": \"actions\",\n \"type\": \"array\",\n \"items\": {\n # This is a custom item which bundles the action_name\n # and the action. We do this because some actions\n # could have the same schema, and without this there\n # is no way to differentiate them.\n \"title\": \"tool_call\",\n \"type\": \"object\",\n \"properties\": {\n # This is the name of the action to take\n \"action_name\": {\n \"title\": \"action_name\",\n \"enum\": enum_vals,\n \"type\": \"string\",\n \"description\": (\n \"Name of the action to take. 
The name \"\n \"provided here should match up with the \"\n \"parameters for the action below.\"\n ),\n },\n # This is the action to take.\n \"action\": {\n \"title\": \"Action\",\n \"anyOf\": [\n {\n \"title\": t.name,\n \"type\": \"object\",\n \"properties\": t.args,\n }\n for t in self.tools\n ],\n },\n },\n \"required\": [\"action_name\", \"action\"],\n },\n }\n },\n \"required\": [\"actions\"],\n },\n }\n return [tool_selection]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/openai_functions_multi_agent/base.html"} {"id": "edad93404720-5", "text": "},\n }\n return [tool_selection]\n[docs] def plan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[List[AgentAction], AgentFinish]:\n \"\"\"Given input, decided what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date, along with observations\n **kwargs: User inputs.\n Returns:\n Action specifying what tool to use.\n \"\"\"\n agent_scratchpad = _format_intermediate_steps(intermediate_steps)\n selected_inputs = {\n k: kwargs[k] for k in self.prompt.input_variables if k != \"agent_scratchpad\"\n }\n full_inputs = dict(**selected_inputs, agent_scratchpad=agent_scratchpad)\n prompt = self.prompt.format_prompt(**full_inputs)\n messages = prompt.to_messages()\n predicted_message = self.llm.predict_messages(\n messages, functions=self.functions, callbacks=callbacks\n )\n agent_decision = _parse_ai_message(predicted_message)\n return agent_decision\n[docs] async def aplan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[List[AgentAction], AgentFinish]:\n \"\"\"Given input, decided what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n **kwargs: User inputs.\n Returns:\n Action specifying what tool to use.\n \"\"\"\n agent_scratchpad = _format_intermediate_steps(intermediate_steps)\n selected_inputs = {", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/openai_functions_multi_agent/base.html"} {"id": "edad93404720-6", "text": "selected_inputs = {\n k: kwargs[k] for k in self.prompt.input_variables if k != \"agent_scratchpad\"\n }\n full_inputs = dict(**selected_inputs, agent_scratchpad=agent_scratchpad)\n prompt = self.prompt.format_prompt(**full_inputs)\n messages = prompt.to_messages()\n predicted_message = await self.llm.apredict_messages(\n messages, functions=self.functions, callbacks=callbacks\n )\n agent_decision = _parse_ai_message(predicted_message)\n return agent_decision\n[docs] @classmethod\n def create_prompt(\n cls,\n system_message: Optional[SystemMessage] = SystemMessage(\n content=\"You are a helpful AI assistant.\"\n ),\n extra_prompt_messages: Optional[List[BaseMessagePromptTemplate]] = None,\n ) -> BasePromptTemplate:\n \"\"\"Create prompt for this agent.\n Args:\n system_message: Message to use as the system message that will be the\n first in the prompt.\n extra_prompt_messages: Prompt messages that will be placed between the\n system message and the new human input.\n Returns:\n A prompt template to pass into this agent.\n \"\"\"\n _prompts = extra_prompt_messages or []\n messages: List[Union[BaseMessagePromptTemplate, BaseMessage]]\n if system_message:\n messages = [system_message]\n else:\n messages = []\n messages.extend(\n [\n *_prompts,\n HumanMessagePromptTemplate.from_template(\"{input}\"),\n 
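Usage mirrors the single-action variant; the difference is that `plan` may return several `AgentAction`s, which `AgentExecutor._atake_next_step` runs concurrently via `asyncio.gather`. A hedged sketch (`tools` assumed):

.. code-block:: python

    from langchain.agents import AgentExecutor
    from langchain.agents.openai_functions_multi_agent.base import (
        OpenAIMultiFunctionsAgent,
    )
    from langchain.chat_models import ChatOpenAI

    agent = OpenAIMultiFunctionsAgent.from_llm_and_tools(
        ChatOpenAI(temperature=0), tools
    )
    executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools)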
MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n ]\n )\n return ChatPromptTemplate(messages=messages)\n[docs] @classmethod\n def from_llm_and_tools(\n cls,\n llm: BaseLanguageModel,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/openai_functions_multi_agent/base.html"} {"id": "edad93404720-7", "text": "cls,\n llm: BaseLanguageModel,\n tools: Sequence[BaseTool],\n callback_manager: Optional[BaseCallbackManager] = None,\n extra_prompt_messages: Optional[List[BaseMessagePromptTemplate]] = None,\n system_message: Optional[SystemMessage] = SystemMessage(\n content=\"You are a helpful AI assistant.\"\n ),\n **kwargs: Any,\n ) -> BaseMultiActionAgent:\n \"\"\"Construct an agent from an LLM and tools.\"\"\"\n prompt = cls.create_prompt(\n extra_prompt_messages=extra_prompt_messages,\n system_message=system_message,\n )\n return cls(\n llm=llm,\n prompt=prompt,\n tools=tools,\n callback_manager=callback_manager,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/openai_functions_multi_agent/base.html"} {"id": "75959f08b87b-0", "text": "Source code for langchain.agents.react.base\n\"\"\"Chain that implements the ReAct paper from https://arxiv.org/pdf/2210.03629.pdf.\"\"\"\nfrom typing import Any, List, Optional, Sequence\nfrom pydantic import Field\nfrom langchain.agents.agent import Agent, AgentExecutor, AgentOutputParser\nfrom langchain.agents.agent_types import AgentType\nfrom langchain.agents.react.output_parser import ReActOutputParser\nfrom langchain.agents.react.textworld_prompt import TEXTWORLD_PROMPT\nfrom langchain.agents.react.wiki_prompt import WIKI_PROMPT\nfrom langchain.agents.tools import Tool\nfrom langchain.agents.utils import validate_tools_single_input\nfrom langchain.docstore.base import Docstore\nfrom langchain.docstore.document import Document\nfrom langchain.schema import BasePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.tools.base import BaseTool\n[docs]class ReActDocstoreAgent(Agent):\n \"\"\"Agent for the ReAct chain.\"\"\"\n output_parser: AgentOutputParser = Field(default_factory=ReActOutputParser)\n @classmethod\n def _get_default_output_parser(cls, **kwargs: Any) -> AgentOutputParser:\n return ReActOutputParser()\n @property\n def _agent_type(self) -> str:\n \"\"\"Return Identifier of agent type.\"\"\"\n return AgentType.REACT_DOCSTORE\n[docs] @classmethod\n def create_prompt(cls, tools: Sequence[BaseTool]) -> BasePromptTemplate:\n \"\"\"Return default prompt.\"\"\"\n return WIKI_PROMPT\n @classmethod\n def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:\n validate_tools_single_input(cls.__name__, tools)\n super()._validate_tools(tools)\n if len(tools) != 2:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/react/base.html"} {"id": "75959f08b87b-1", "text": "super()._validate_tools(tools)\n if len(tools) != 2:\n raise ValueError(f\"Exactly two tools must be specified, but got {tools}\")\n tool_names = {tool.name for tool in tools}\n if tool_names != {\"Lookup\", \"Search\"}:\n raise ValueError(\n f\"Tool names should be Lookup and Search, got {tool_names}\"\n )\n @property\n def observation_prefix(self) -> str:\n \"\"\"Prefix to append the observation with.\"\"\"\n return \"Observation: \"\n @property\n def _stop(self) -> List[str]:\n return [\"\\nObservation:\"]\n @property\n def llm_prefix(self) -> str:\n \"\"\"Prefix to append the LLM call with.\"\"\"\n return \"Thought:\"\nclass 
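`_validate_tools` above insists on exactly two tools named Search and Lookup; the `DocstoreExplorer` helper defined next supplies matching callables. A wiring sketch, assuming the optional `wikipedia` package is installed for the Wikipedia docstore:

.. code-block:: python

    from langchain.agents import AgentType, Tool, initialize_agent
    from langchain.agents.react.base import DocstoreExplorer
    from langchain.docstore.wikipedia import Wikipedia
    from langchain.llms import OpenAI

    explorer = DocstoreExplorer(Wikipedia())
    tools = [
        # The validator requires exactly these two tool names.
        Tool(name="Search", func=explorer.search, description="Search the docstore."),
        Tool(name="Lookup", func=explorer.lookup, description="Look up a term."),
    ]
    agent = initialize_agent(tools, OpenAI(temperature=0), agent=AgentType.REACT_DOCSTORE)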
DocstoreExplorer:\n \"\"\"Class to assist with exploration of a document store.\"\"\"\n def __init__(self, docstore: Docstore):\n \"\"\"Initialize with a docstore, and set initial document to None.\"\"\"\n self.docstore = docstore\n self.document: Optional[Document] = None\n self.lookup_str = \"\"\n self.lookup_index = 0\n def search(self, term: str) -> str:\n \"\"\"Search for a term in the docstore, and if found save.\"\"\"\n result = self.docstore.search(term)\n if isinstance(result, Document):\n self.document = result\n return self._summary\n else:\n self.document = None\n return result\n def lookup(self, term: str) -> str:\n \"\"\"Lookup a term in document (if saved).\"\"\"\n if self.document is None:\n raise ValueError(\"Cannot lookup without a successful search first\")\n if term.lower() != self.lookup_str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/react/base.html"} {"id": "75959f08b87b-2", "text": "if term.lower() != self.lookup_str:\n self.lookup_str = term.lower()\n self.lookup_index = 0\n else:\n self.lookup_index += 1\n lookups = [p for p in self._paragraphs if self.lookup_str in p.lower()]\n if len(lookups) == 0:\n return \"No Results\"\n elif self.lookup_index >= len(lookups):\n return \"No More Results\"\n else:\n result_prefix = f\"(Result {self.lookup_index + 1}/{len(lookups)})\"\n return f\"{result_prefix} {lookups[self.lookup_index]}\"\n @property\n def _summary(self) -> str:\n return self._paragraphs[0]\n @property\n def _paragraphs(self) -> List[str]:\n if self.document is None:\n raise ValueError(\"Cannot get paragraphs without a document\")\n return self.document.page_content.split(\"\\n\\n\")\n[docs]class ReActTextWorldAgent(ReActDocstoreAgent):\n \"\"\"Agent for the ReAct TextWorld chain.\"\"\"\n[docs] @classmethod\n def create_prompt(cls, tools: Sequence[BaseTool]) -> BasePromptTemplate:\n \"\"\"Return default prompt.\"\"\"\n return TEXTWORLD_PROMPT\n @classmethod\n def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:\n validate_tools_single_input(cls.__name__, tools)\n super()._validate_tools(tools)\n if len(tools) != 1:\n raise ValueError(f\"Exactly one tool must be specified, but got {tools}\")\n tool_names = {tool.name for tool in tools}\n if tool_names != {\"Play\"}:\n raise ValueError(f\"Tool name should be Play, got {tool_names}\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/react/base.html"} {"id": "75959f08b87b-3", "text": "raise ValueError(f\"Tool name should be Play, got {tool_names}\")\n[docs]class ReActChain(AgentExecutor):\n \"\"\"Chain that implements the ReAct paper.\n Example:\n .. 
code-block:: python\n from langchain import ReActChain, OpenAI\n from langchain.docstore.wikipedia import Wikipedia\n react = ReActChain(llm=OpenAI(), docstore=Wikipedia())\n \"\"\"\n def __init__(self, llm: BaseLanguageModel, docstore: Docstore, **kwargs: Any):\n \"\"\"Initialize with the LLM and a docstore.\"\"\"\n docstore_explorer = DocstoreExplorer(docstore)\n tools = [\n Tool(\n name=\"Search\",\n func=docstore_explorer.search,\n description=\"Search for a term in the docstore.\",\n ),\n Tool(\n name=\"Lookup\",\n func=docstore_explorer.lookup,\n description=\"Lookup a term in the docstore.\",\n ),\n ]\n agent = ReActDocstoreAgent.from_llm_and_tools(llm, tools)\n super().__init__(agent=agent, tools=tools, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/react/base.html"} {"id": "0bed9809ea18-0", "text": "Source code for langchain.agents.react.output_parser\nimport re\nfrom typing import Union\nfrom langchain.agents.agent import AgentOutputParser\nfrom langchain.schema import AgentAction, AgentFinish, OutputParserException\n[docs]class ReActOutputParser(AgentOutputParser):\n[docs] def parse(self, text: str) -> Union[AgentAction, AgentFinish]:\n action_prefix = \"Action: \"\n if not text.strip().split(\"\\n\")[-1].startswith(action_prefix):\n raise OutputParserException(f\"Could not parse LLM Output: {text}\")\n action_block = text.strip().split(\"\\n\")[-1]\n action_str = action_block[len(action_prefix) :]\n # Parse out the action and the directive.\n re_matches = re.search(r\"(.*?)\\[(.*?)\\]\", action_str)\n if re_matches is None:\n raise OutputParserException(\n f\"Could not parse action directive: {action_str}\"\n )\n action, action_input = re_matches.group(1), re_matches.group(2)\n if action == \"Finish\":\n return AgentFinish({\"output\": action_input}, text)\n else:\n return AgentAction(action, action_input, text)\n @property\n def _type(self) -> str:\n return \"react\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/react/output_parser.html"} {"id": "bf9e472a8bc4-0", "text": "Source code for langchain.agents.structured_chat.base\nimport re\nfrom typing import Any, List, Optional, Sequence, Tuple\nfrom pydantic import Field\nfrom langchain.agents.agent import Agent, AgentOutputParser\nfrom langchain.agents.structured_chat.output_parser import (\n StructuredChatOutputParserWithRetries,\n)\nfrom langchain.agents.structured_chat.prompt import FORMAT_INSTRUCTIONS, PREFIX, SUFFIX\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\nfrom langchain.prompts.chat import (\n ChatPromptTemplate,\n HumanMessagePromptTemplate,\n SystemMessagePromptTemplate,\n)\nfrom langchain.schema import AgentAction, BasePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.tools import BaseTool\nHUMAN_MESSAGE_TEMPLATE = \"{input}\\n\\n{agent_scratchpad}\"\n[docs]class StructuredChatAgent(Agent):\n output_parser: AgentOutputParser = Field(\n default_factory=StructuredChatOutputParserWithRetries\n )\n @property\n def observation_prefix(self) -> str:\n \"\"\"Prefix to append the observation with.\"\"\"\n return \"Observation: \"\n @property\n def llm_prefix(self) -> str:\n \"\"\"Prefix to append the llm call with.\"\"\"\n return \"Thought:\"\n def _construct_scratchpad(\n self, intermediate_steps: List[Tuple[AgentAction, str]]\n ) -> str:\n agent_scratchpad = super()._construct_scratchpad(intermediate_steps)\n if not isinstance(agent_scratchpad, str):\n raise ValueError(\"agent_scratchpad should be of type string.\")\n if 
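`ReActOutputParser.parse` above only inspects the final line, which must have the form `Action: tool[input]`; `Finish` terminates the run. Two deterministic examples:

.. code-block:: python

    from langchain.agents.react.output_parser import ReActOutputParser

    parser = ReActOutputParser()
    step = parser.parse("Thought: I should search.\nAction: Search[LangChain]")
    # -> AgentAction(tool="Search", tool_input="LangChain", log=<full text>)
    done = parser.parse("Thought: that settles it.\nAction: Finish[42]")
    # -> AgentFinish({"output": "42"}, <full text>)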
agent_scratchpad:\n return (\n f\"This was your previous work \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/structured_chat/base.html"} {"id": "bf9e472a8bc4-1", "text": "return (\n f\"This was your previous work \"\n f\"(but I haven't seen any of it! I only see what \"\n f\"you return as final answer):\\n{agent_scratchpad}\"\n )\n else:\n return agent_scratchpad\n @classmethod\n def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:\n pass\n @classmethod\n def _get_default_output_parser(\n cls, llm: Optional[BaseLanguageModel] = None, **kwargs: Any\n ) -> AgentOutputParser:\n return StructuredChatOutputParserWithRetries.from_llm(llm=llm)\n @property\n def _stop(self) -> List[str]:\n return [\"Observation:\"]\n[docs] @classmethod\n def create_prompt(\n cls,\n tools: Sequence[BaseTool],\n prefix: str = PREFIX,\n suffix: str = SUFFIX,\n human_message_template: str = HUMAN_MESSAGE_TEMPLATE,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n input_variables: Optional[List[str]] = None,\n memory_prompts: Optional[List[BasePromptTemplate]] = None,\n ) -> BasePromptTemplate:\n tool_strings = []\n for tool in tools:\n args_schema = re.sub(\"}\", \"}}}}\", re.sub(\"{\", \"{{{{\", str(tool.args)))\n tool_strings.append(f\"{tool.name}: {tool.description}, args: {args_schema}\")\n formatted_tools = \"\\n\".join(tool_strings)\n tool_names = \", \".join([tool.name for tool in tools])\n format_instructions = format_instructions.format(tool_names=tool_names)\n template = \"\\n\\n\".join([prefix, formatted_tools, format_instructions, suffix])", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/structured_chat/base.html"} {"id": "bf9e472a8bc4-2", "text": "template = \"\\n\\n\".join([prefix, formatted_tools, format_instructions, suffix])\n if input_variables is None:\n input_variables = [\"input\", \"agent_scratchpad\"]\n _memory_prompts = memory_prompts or []\n messages = [\n SystemMessagePromptTemplate.from_template(template),\n *_memory_prompts,\n HumanMessagePromptTemplate.from_template(human_message_template),\n ]\n return ChatPromptTemplate(input_variables=input_variables, messages=messages)\n[docs] @classmethod\n def from_llm_and_tools(\n cls,\n llm: BaseLanguageModel,\n tools: Sequence[BaseTool],\n callback_manager: Optional[BaseCallbackManager] = None,\n output_parser: Optional[AgentOutputParser] = None,\n prefix: str = PREFIX,\n suffix: str = SUFFIX,\n human_message_template: str = HUMAN_MESSAGE_TEMPLATE,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n input_variables: Optional[List[str]] = None,\n memory_prompts: Optional[List[BasePromptTemplate]] = None,\n **kwargs: Any,\n ) -> Agent:\n \"\"\"Construct an agent from an LLM and tools.\"\"\"\n cls._validate_tools(tools)\n prompt = cls.create_prompt(\n tools,\n prefix=prefix,\n suffix=suffix,\n human_message_template=human_message_template,\n format_instructions=format_instructions,\n input_variables=input_variables,\n memory_prompts=memory_prompts,\n )\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/structured_chat/base.html"} {"id": "bf9e472a8bc4-3", "text": ")\n tool_names = [tool.name for tool in tools]\n _output_parser = output_parser or cls._get_default_output_parser(llm=llm)\n return cls(\n llm_chain=llm_chain,\n allowed_tools=tool_names,\n output_parser=_output_parser,\n **kwargs,\n )\n 
@property\n def _agent_type(self) -> str:\n raise ValueError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/structured_chat/base.html"} {"id": "c90a6b26c859-0", "text": "Source code for langchain.agents.structured_chat.output_parser\nfrom __future__ import annotations\nimport json\nimport logging\nimport re\nfrom typing import Optional, Union\nfrom pydantic import Field\nfrom langchain.agents.agent import AgentOutputParser\nfrom langchain.agents.structured_chat.prompt import FORMAT_INSTRUCTIONS\nfrom langchain.output_parsers import OutputFixingParser\nfrom langchain.schema import AgentAction, AgentFinish, OutputParserException\nfrom langchain.schema.language_model import BaseLanguageModel\nlogger = logging.getLogger(__name__)\n[docs]class StructuredChatOutputParser(AgentOutputParser):\n[docs] def get_format_instructions(self) -> str:\n return FORMAT_INSTRUCTIONS\n[docs] def parse(self, text: str) -> Union[AgentAction, AgentFinish]:\n try:\n action_match = re.search(r\"```(.*?)```?\", text, re.DOTALL)\n if action_match is not None:\n response = json.loads(action_match.group(1).strip(), strict=False)\n if isinstance(response, list):\n # gpt turbo frequently ignores the directive to emit a single action\n logger.warning(\"Got multiple action responses: %s\", response)\n response = response[0]\n if response[\"action\"] == \"Final Answer\":\n return AgentFinish({\"output\": response[\"action_input\"]}, text)\n else:\n return AgentAction(\n response[\"action\"], response.get(\"action_input\", {}), text\n )\n else:\n return AgentFinish({\"output\": text}, text)\n except Exception as e:\n raise OutputParserException(f\"Could not parse LLM output: {text}\") from e\n @property\n def _type(self) -> str:\n return \"structured_chat\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/structured_chat/output_parser.html"} {"id": "c90a6b26c859-1", "text": "def _type(self) -> str:\n return \"structured_chat\"\n[docs]class StructuredChatOutputParserWithRetries(AgentOutputParser):\n base_parser: AgentOutputParser = Field(default_factory=StructuredChatOutputParser)\n output_fixing_parser: Optional[OutputFixingParser] = None\n[docs] def get_format_instructions(self) -> str:\n return FORMAT_INSTRUCTIONS\n[docs] def parse(self, text: str) -> Union[AgentAction, AgentFinish]:\n try:\n if self.output_fixing_parser is not None:\n parsed_obj: Union[\n AgentAction, AgentFinish\n ] = self.output_fixing_parser.parse(text)\n else:\n parsed_obj = self.base_parser.parse(text)\n return parsed_obj\n except Exception as e:\n raise OutputParserException(f\"Could not parse LLM output: {text}\") from e\n[docs] @classmethod\n def from_llm(\n cls,\n llm: Optional[BaseLanguageModel] = None,\n base_parser: Optional[StructuredChatOutputParser] = None,\n ) -> StructuredChatOutputParserWithRetries:\n if llm is not None:\n base_parser = base_parser or StructuredChatOutputParser()\n output_fixing_parser = OutputFixingParser.from_llm(\n llm=llm, parser=base_parser\n )\n return cls(output_fixing_parser=output_fixing_parser)\n elif base_parser is not None:\n return cls(base_parser=base_parser)\n else:\n return cls()\n @property\n def _type(self) -> str:\n return \"structured_chat_with_retries\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/structured_chat/output_parser.html"} {"id": "154f873b8f0f-0", "text": "Source code for langchain.agents.chat.base\nfrom typing import Any, List, Optional, Sequence, Tuple\nfrom pydantic import Field\nfrom 
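`StructuredChatOutputParser.parse` above extracts the first triple-backtick fenced blob and hands it to `json.loads`, so with the regex shown the fence should carry no language tag. A deterministic example:

.. code-block:: python

    from langchain.agents.structured_chat.output_parser import (
        StructuredChatOutputParser,
    )

    parser = StructuredChatOutputParser()
    text = 'Action:\n```\n{"action": "Final Answer", "action_input": "All done."}\n```'
    result = parser.parse(text)
    # -> AgentFinish({"output": "All done."}, text)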
langchain.agents.agent import Agent, AgentOutputParser\nfrom langchain.agents.chat.output_parser import ChatOutputParser\nfrom langchain.agents.chat.prompt import (\n FORMAT_INSTRUCTIONS,\n HUMAN_MESSAGE,\n SYSTEM_MESSAGE_PREFIX,\n SYSTEM_MESSAGE_SUFFIX,\n)\nfrom langchain.agents.utils import validate_tools_single_input\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\nfrom langchain.prompts.chat import (\n ChatPromptTemplate,\n HumanMessagePromptTemplate,\n SystemMessagePromptTemplate,\n)\nfrom langchain.schema import AgentAction, BasePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.tools.base import BaseTool\n[docs]class ChatAgent(Agent):\n output_parser: AgentOutputParser = Field(default_factory=ChatOutputParser)\n @property\n def observation_prefix(self) -> str:\n \"\"\"Prefix to append the observation with.\"\"\"\n return \"Observation: \"\n @property\n def llm_prefix(self) -> str:\n \"\"\"Prefix to append the llm call with.\"\"\"\n return \"Thought:\"\n def _construct_scratchpad(\n self, intermediate_steps: List[Tuple[AgentAction, str]]\n ) -> str:\n agent_scratchpad = super()._construct_scratchpad(intermediate_steps)\n if not isinstance(agent_scratchpad, str):\n raise ValueError(\"agent_scratchpad should be of type string.\")\n if agent_scratchpad:\n return (\n f\"This was your previous work \"\n f\"(but I haven't seen any of it! I only see what \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/chat/base.html"} {"id": "154f873b8f0f-1", "text": "f\"(but I haven't seen any of it! I only see what \"\n f\"you return as final answer):\\n{agent_scratchpad}\"\n )\n else:\n return agent_scratchpad\n @classmethod\n def _get_default_output_parser(cls, **kwargs: Any) -> AgentOutputParser:\n return ChatOutputParser()\n @classmethod\n def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:\n super()._validate_tools(tools)\n validate_tools_single_input(class_name=cls.__name__, tools=tools)\n @property\n def _stop(self) -> List[str]:\n return [\"Observation:\"]\n[docs] @classmethod\n def create_prompt(\n cls,\n tools: Sequence[BaseTool],\n system_message_prefix: str = SYSTEM_MESSAGE_PREFIX,\n system_message_suffix: str = SYSTEM_MESSAGE_SUFFIX,\n human_message: str = HUMAN_MESSAGE,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n input_variables: Optional[List[str]] = None,\n ) -> BasePromptTemplate:\n tool_strings = \"\\n\".join([f\"{tool.name}: {tool.description}\" for tool in tools])\n tool_names = \", \".join([tool.name for tool in tools])\n format_instructions = format_instructions.format(tool_names=tool_names)\n template = \"\\n\\n\".join(\n [\n system_message_prefix,\n tool_strings,\n format_instructions,\n system_message_suffix,\n ]\n )\n messages = [\n SystemMessagePromptTemplate.from_template(template),\n HumanMessagePromptTemplate.from_template(human_message),\n ]\n if input_variables is None:\n input_variables = [\"input\", \"agent_scratchpad\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/chat/base.html"} {"id": "154f873b8f0f-2", "text": "input_variables = [\"input\", \"agent_scratchpad\"]\n return ChatPromptTemplate(input_variables=input_variables, messages=messages)\n[docs] @classmethod\n def from_llm_and_tools(\n cls,\n llm: BaseLanguageModel,\n tools: Sequence[BaseTool],\n callback_manager: Optional[BaseCallbackManager] = None,\n output_parser: Optional[AgentOutputParser] = None,\n system_message_prefix: str 
= SYSTEM_MESSAGE_PREFIX,\n system_message_suffix: str = SYSTEM_MESSAGE_SUFFIX,\n human_message: str = HUMAN_MESSAGE,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n input_variables: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> Agent:\n \"\"\"Construct an agent from an LLM and tools.\"\"\"\n cls._validate_tools(tools)\n prompt = cls.create_prompt(\n tools,\n system_message_prefix=system_message_prefix,\n system_message_suffix=system_message_suffix,\n human_message=human_message,\n format_instructions=format_instructions,\n input_variables=input_variables,\n )\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n _output_parser = output_parser or cls._get_default_output_parser()\n return cls(\n llm_chain=llm_chain,\n allowed_tools=tool_names,\n output_parser=_output_parser,\n **kwargs,\n )\n @property\n def _agent_type(self) -> str:\n raise ValueError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/chat/base.html"} {"id": "f8199cdc026f-0", "text": "Source code for langchain.agents.chat.output_parser\nimport json\nfrom typing import Union\nfrom langchain.agents.agent import AgentOutputParser\nfrom langchain.agents.chat.prompt import FORMAT_INSTRUCTIONS\nfrom langchain.schema import AgentAction, AgentFinish, OutputParserException\nFINAL_ANSWER_ACTION = \"Final Answer:\"\n[docs]class ChatOutputParser(AgentOutputParser):\n[docs] def get_format_instructions(self) -> str:\n return FORMAT_INSTRUCTIONS\n[docs] def parse(self, text: str) -> Union[AgentAction, AgentFinish]:\n includes_answer = FINAL_ANSWER_ACTION in text\n try:\n action = text.split(\"```\")[1]\n response = json.loads(action.strip())\n includes_action = \"action\" in response\n if includes_answer and includes_action:\n raise OutputParserException(\n \"Parsing LLM output produced a final answer \"\n f\"and a parse-able action: {text}\"\n )\n return AgentAction(\n response[\"action\"], response.get(\"action_input\", {}), text\n )\n except Exception:\n if not includes_answer:\n raise OutputParserException(f\"Could not parse LLM output: {text}\")\n output = text.split(FINAL_ANSWER_ACTION)[-1].strip()\n return AgentFinish({\"output\": output}, text)\n @property\n def _type(self) -> str:\n return \"chat\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/chat/output_parser.html"} {"id": "22af021825a9-0", "text": "Source code for langchain.agents.agent_toolkits.base\n\"\"\"Toolkits for agents.\"\"\"\nfrom abc import ABC, abstractmethod\nfrom typing import List\nfrom pydantic import BaseModel\nfrom langchain.tools import BaseTool\n[docs]class BaseToolkit(BaseModel, ABC):\n \"\"\"Class representing a collection of related tools.\"\"\"\n[docs] @abstractmethod\n def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/base.html"} {"id": "16ce110f4edf-0", "text": "Source code for langchain.agents.agent_toolkits.sql.toolkit\n\"\"\"Toolkit for interacting with a SQL database.\"\"\"\nfrom typing import List\nfrom pydantic import Field\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.sql_database import SQLDatabase\nfrom langchain.tools import BaseTool\nfrom langchain.tools.sql_database.tool import (\n InfoSQLDatabaseTool,\n ListSQLDatabaseTool,\n QuerySQLCheckerTool,\n 
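`BaseToolkit` above is a plain pydantic model with one abstract method, so a custom toolkit only has to implement `get_tools`. A minimal sketch; the toolkit and tool here are hypothetical:

.. code-block:: python

    from typing import List

    from langchain.agents.agent_toolkits.base import BaseToolkit
    from langchain.tools import BaseTool, Tool

    class EchoToolkit(BaseToolkit):
        """Hypothetical toolkit bundling a single trivial tool."""

        def get_tools(self) -> List[BaseTool]:
            return [
                Tool(
                    name="echo",
                    func=lambda text: text,
                    description="Return the input unchanged.",
                )
            ]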
Source code for langchain.agents.agent_toolkits.sql.toolkit

"""Toolkit for interacting with a SQL database."""
from typing import List

from pydantic import Field

from langchain.agents.agent_toolkits.base import BaseToolkit
from langchain.schema.language_model import BaseLanguageModel
from langchain.sql_database import SQLDatabase
from langchain.tools import BaseTool
from langchain.tools.sql_database.tool import (
    InfoSQLDatabaseTool,
    ListSQLDatabaseTool,
    QuerySQLCheckerTool,
    QuerySQLDataBaseTool,
)


class SQLDatabaseToolkit(BaseToolkit):
    """Toolkit for interacting with SQL databases."""

    db: SQLDatabase = Field(exclude=True)
    llm: BaseLanguageModel = Field(exclude=True)

    @property
    def dialect(self) -> str:
        """Return string representation of dialect to use."""
        return self.db.dialect

    class Config:
        """Configuration for this pydantic object."""

        arbitrary_types_allowed = True

    def get_tools(self) -> List[BaseTool]:
        """Get the tools in the toolkit."""
        query_sql_database_tool_description = (
            "Input to this tool is a detailed and correct SQL query, output is a "
            "result from the database. If the query is not correct, an error message "
            "will be returned. If an error is returned, rewrite the query, check the "
            "query, and try again. If you encounter an issue with Unknown column "
            "'xxxx' in 'field list', use schema_sql_db to query the correct table "
            "fields."
        )
        info_sql_database_tool_description = (
            "Input to this tool is a comma-separated list of tables, output is the "
            "schema and sample rows for those tables. "
            "Be sure that the tables actually exist by calling list_tables_sql_db "
            "first! Example Input: 'table1, table2, table3'"
        )
        return [
            QuerySQLDataBaseTool(
                db=self.db, description=query_sql_database_tool_description
            ),
            InfoSQLDatabaseTool(
                db=self.db, description=info_sql_database_tool_description
            ),
            ListSQLDatabaseTool(db=self.db),
            QuerySQLCheckerTool(db=self.db, llm=self.llm),
        ]


Source code for langchain.agents.agent_toolkits.sql.base

"""SQL agent."""
from typing import Any, Dict, List, Optional

from langchain.agents.agent import AgentExecutor, BaseSingleActionAgent
from langchain.agents.agent_toolkits.sql.prompt import (
    SQL_FUNCTIONS_SUFFIX,
    SQL_PREFIX,
    SQL_SUFFIX,
)
from langchain.agents.agent_toolkits.sql.toolkit import SQLDatabaseToolkit
from langchain.agents.agent_types import AgentType
from langchain.agents.mrkl.base import ZeroShotAgent
from langchain.agents.mrkl.prompt import FORMAT_INSTRUCTIONS
from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent
from langchain.callbacks.base import BaseCallbackManager
from langchain.chains.llm import LLMChain
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
)
from langchain.schema.language_model import BaseLanguageModel
from langchain.schema.messages import AIMessage, SystemMessage


def create_sql_agent(
    llm: BaseLanguageModel,
    toolkit: SQLDatabaseToolkit,
    agent_type: AgentType = AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    callback_manager: Optional[BaseCallbackManager] = None,
    prefix: str = SQL_PREFIX,
    suffix: Optional[str] = None,
    format_instructions: str = FORMAT_INSTRUCTIONS,
    input_variables: Optional[List[str]] = None,
    top_k: int = 10,
    max_iterations: Optional[int] = 15,
    max_execution_time: Optional[float] = None,
    early_stopping_method: str = "force",
    verbose: bool = False,
    agent_executor_kwargs: Optional[Dict[str, Any]] = None,
    **kwargs: Dict[str, Any],
) -> AgentExecutor:
    """Construct a SQL agent from an LLM and tools."""
    tools = toolkit.get_tools()
    prefix = prefix.format(dialect=toolkit.dialect, top_k=top_k)
    agent: BaseSingleActionAgent
    if agent_type == AgentType.ZERO_SHOT_REACT_DESCRIPTION:
        prompt = ZeroShotAgent.create_prompt(
            tools,
            prefix=prefix,
            suffix=suffix or SQL_SUFFIX,
            format_instructions=format_instructions,
            input_variables=input_variables,
        )
        llm_chain = LLMChain(
            llm=llm,
            prompt=prompt,
            callback_manager=callback_manager,
        )
        tool_names = [tool.name for tool in tools]
        agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names, **kwargs)
    elif agent_type == AgentType.OPENAI_FUNCTIONS:
        messages = [
            SystemMessage(content=prefix),
            HumanMessagePromptTemplate.from_template("{input}"),
            AIMessage(content=suffix or SQL_FUNCTIONS_SUFFIX),
            MessagesPlaceholder(variable_name="agent_scratchpad"),
        ]
        input_variables = ["input", "agent_scratchpad"]
        _prompt = ChatPromptTemplate(input_variables=input_variables, messages=messages)
        agent = OpenAIFunctionsAgent(
            llm=llm,
            prompt=_prompt,
            tools=tools,
            callback_manager=callback_manager,
            **kwargs,
        )
    else:
        raise ValueError(f"Agent type {agent_type} not supported at the moment.")
    return AgentExecutor.from_agent_and_tools(
        agent=agent,
        tools=tools,
        callback_manager=callback_manager,
        verbose=verbose,
        max_iterations=max_iterations,
        max_execution_time=max_execution_time,
        early_stopping_method=early_stopping_method,
        **(agent_executor_kwargs or {}),
    )
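A minimal usage sketch for `create_sql_agent` (not from the source). The SQLite path `chinook.db` is a placeholder database, and an OpenAI API key is assumed:

from langchain.agents.agent_toolkits.sql.base import create_sql_agent
from langchain.agents.agent_toolkits.sql.toolkit import SQLDatabaseToolkit
from langchain.llms.openai import OpenAI
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///chinook.db")  # placeholder database
llm = OpenAI(temperature=0)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_executor = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
# agent_executor.run("How many tables are there, and what are they called?")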
"https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/sql/base.html"} {"id": "233de9e8d737-1", "text": "**kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a sql agent from an LLM and tools.\"\"\"\n tools = toolkit.get_tools()\n prefix = prefix.format(dialect=toolkit.dialect, top_k=top_k)\n agent: BaseSingleActionAgent\n if agent_type == AgentType.ZERO_SHOT_REACT_DESCRIPTION:\n prompt = ZeroShotAgent.create_prompt(\n tools,\n prefix=prefix,\n suffix=suffix or SQL_SUFFIX,\n format_instructions=format_instructions,\n input_variables=input_variables,\n )\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names, **kwargs)\n elif agent_type == AgentType.OPENAI_FUNCTIONS:\n messages = [\n SystemMessage(content=prefix),\n HumanMessagePromptTemplate.from_template(\"{input}\"),\n AIMessage(content=suffix or SQL_FUNCTIONS_SUFFIX),\n MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n ]\n input_variables = [\"input\", \"agent_scratchpad\"]\n _prompt = ChatPromptTemplate(input_variables=input_variables, messages=messages)\n agent = OpenAIFunctionsAgent(\n llm=llm,\n prompt=_prompt,\n tools=tools,\n callback_manager=callback_manager,\n **kwargs,\n )\n else:\n raise ValueError(f\"Agent type {agent_type} not supported at the moment.\")\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/sql/base.html"} {"id": "233de9e8d737-2", "text": "tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n max_iterations=max_iterations,\n max_execution_time=max_execution_time,\n early_stopping_method=early_stopping_method,\n **(agent_executor_kwargs or {}),\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/sql/base.html"} {"id": "ce78b9e4a2dd-0", "text": "Source code for langchain.agents.agent_toolkits.zapier.toolkit\n\"\"\"Zapier Toolkit.\"\"\"\nfrom typing import List\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.tools import BaseTool\nfrom langchain.tools.zapier.tool import ZapierNLARunAction\nfrom langchain.utilities.zapier import ZapierNLAWrapper\n[docs]class ZapierToolkit(BaseToolkit):\n \"\"\"Zapier Toolkit.\"\"\"\n tools: List[BaseTool] = []\n[docs] @classmethod\n def from_zapier_nla_wrapper(\n cls, zapier_nla_wrapper: ZapierNLAWrapper\n ) -> \"ZapierToolkit\":\n \"\"\"Create a toolkit from a ZapierNLAWrapper.\"\"\"\n actions = zapier_nla_wrapper.list()\n tools = [\n ZapierNLARunAction(\n action_id=action[\"id\"],\n zapier_description=action[\"description\"],\n params_schema=action[\"params\"],\n api_wrapper=zapier_nla_wrapper,\n )\n for action in actions\n ]\n return cls(tools=tools)\n[docs] @classmethod\n async def async_from_zapier_nla_wrapper(\n cls, zapier_nla_wrapper: ZapierNLAWrapper\n ) -> \"ZapierToolkit\":\n \"\"\"Create a toolkit from a ZapierNLAWrapper.\"\"\"\n actions = await zapier_nla_wrapper.alist()\n tools = [\n ZapierNLARunAction(\n action_id=action[\"id\"],\n zapier_description=action[\"description\"],\n params_schema=action[\"params\"],\n api_wrapper=zapier_nla_wrapper,\n )\n for action in actions\n ]\n return cls(tools=tools)", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/zapier/toolkit.html"} {"id": "ce78b9e4a2dd-1", "text": "for action in actions\n ]\n return cls(tools=tools)\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n return self.tools", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/zapier/toolkit.html"} {"id": "74d570657209-0", "text": "Source code for langchain.agents.agent_toolkits.jira.toolkit\n\"\"\"Jira Toolkit.\"\"\"\nfrom typing import List\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.tools import BaseTool\nfrom langchain.tools.jira.tool import JiraAction\nfrom langchain.utilities.jira import JiraAPIWrapper\n[docs]class JiraToolkit(BaseToolkit):\n \"\"\"Jira Toolkit.\"\"\"\n tools: List[BaseTool] = []\n[docs] @classmethod\n def from_jira_api_wrapper(cls, jira_api_wrapper: JiraAPIWrapper) -> \"JiraToolkit\":\n actions = jira_api_wrapper.list()\n tools = [\n JiraAction(\n name=action[\"name\"],\n description=action[\"description\"],\n mode=action[\"mode\"],\n api_wrapper=jira_api_wrapper,\n )\n for action in actions\n ]\n return cls(tools=tools)\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n return self.tools", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/jira/toolkit.html"} {"id": "4aaec2ec410f-0", "text": "Source code for langchain.agents.agent_toolkits.nla.toolkit\n\"\"\"Toolkit for interacting with API's using natural language.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, List, Optional, Sequence\nfrom pydantic import Field\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.agents.agent_toolkits.nla.tool import NLATool\nfrom langchain.requests import Requests\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.openapi.utils.openapi_utils import OpenAPISpec\nfrom langchain.tools.plugin import AIPlugin\n[docs]class NLAToolkit(BaseToolkit):\n \"\"\"Natural Language API Toolkit Definition.\"\"\"\n nla_tools: Sequence[NLATool] = Field(...)\n \"\"\"List of API Endpoint Tools.\"\"\"\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools for all the API operations.\"\"\"\n return list(self.nla_tools)\n @staticmethod\n def _get_http_operation_tools(\n llm: BaseLanguageModel,\n spec: OpenAPISpec,\n requests: Optional[Requests] = None,\n verbose: bool = False,\n **kwargs: Any,\n ) -> List[NLATool]:\n \"\"\"Get the tools for all the API operations.\"\"\"\n if not spec.paths:\n return []\n http_operation_tools = []\n for path in spec.paths:\n for method in spec.get_methods_for_path(path):\n endpoint_tool = NLATool.from_llm_and_method(\n llm=llm,\n path=path,\n method=method,\n spec=spec,\n requests=requests,\n verbose=verbose,\n **kwargs,\n )\n http_operation_tools.append(endpoint_tool)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/nla/toolkit.html"} {"id": "4aaec2ec410f-1", "text": "**kwargs,\n )\n http_operation_tools.append(endpoint_tool)\n return http_operation_tools\n[docs] @classmethod\n def from_llm_and_spec(\n cls,\n llm: BaseLanguageModel,\n spec: OpenAPISpec,\n requests: Optional[Requests] = None,\n verbose: bool = False,\n **kwargs: Any,\n ) -> NLAToolkit:\n \"\"\"Instantiate the toolkit by creating tools for each operation.\"\"\"\n http_operation_tools 
Source code for langchain.agents.agent_toolkits.nla.tool

"""Tool for interacting with a single API with a natural language definition."""
from typing import Any, Optional

from langchain.agents.tools import Tool
from langchain.chains.api.openapi.chain import OpenAPIEndpointChain
from langchain.requests import Requests
from langchain.schema.language_model import BaseLanguageModel
from langchain.tools.openapi.utils.api_models import APIOperation
from langchain.tools.openapi.utils.openapi_utils import OpenAPISpec


class NLATool(Tool):
    """Natural Language API Tool."""

    @classmethod
    def from_open_api_endpoint_chain(
        cls, chain: OpenAPIEndpointChain, api_title: str
    ) -> "NLATool":
        """Convert an endpoint chain to an API endpoint tool."""
        expanded_name = (
            f'{api_title.replace(" ", "_")}.{chain.api_operation.operation_id}'
        )
        description = (
            f"I'm an AI from {api_title}. Instruct what you want,"
            " and I'll assist via an API with description:"
            f" {chain.api_operation.description}"
        )
        return cls(name=expanded_name, func=chain.run, description=description)

    @classmethod
    def from_llm_and_method(
        cls,
        llm: BaseLanguageModel,
        path: str,
        method: str,
        spec: OpenAPISpec,
        requests: Optional[Requests] = None,
        verbose: bool = False,
        return_intermediate_steps: bool = False,
        **kwargs: Any,
    ) -> "NLATool":
        """Instantiate the tool from the specified path and method."""
        api_operation = APIOperation.from_openapi_spec(spec, path, method)
        chain = OpenAPIEndpointChain.from_api_operation(
            api_operation,
            llm,
            requests=requests,
            verbose=verbose,
            return_intermediate_steps=return_intermediate_steps,
            **kwargs,
        )
        return cls.from_open_api_endpoint_chain(chain, spec.info.title)
Source code for langchain.agents.agent_toolkits.office365.toolkit

from __future__ import annotations

from typing import TYPE_CHECKING, List

from pydantic import Field

from langchain.agents.agent_toolkits.base import BaseToolkit
from langchain.tools import BaseTool
from langchain.tools.office365.create_draft_message import O365CreateDraftMessage
from langchain.tools.office365.events_search import O365SearchEvents
from langchain.tools.office365.messages_search import O365SearchEmails
from langchain.tools.office365.send_event import O365SendEvent
from langchain.tools.office365.send_message import O365SendMessage
from langchain.tools.office365.utils import authenticate

if TYPE_CHECKING:
    from O365 import Account


class O365Toolkit(BaseToolkit):
    """Toolkit for interacting with Office365."""

    account: Account = Field(default_factory=authenticate)

    class Config:
        """Pydantic config."""

        arbitrary_types_allowed = True

    def get_tools(self) -> List[BaseTool]:
        """Get the tools in the toolkit."""
        return [
            O365SearchEvents(),
            O365CreateDraftMessage(),
            O365SearchEmails(),
            O365SendEvent(),
            O365SendMessage(),
        ]
Source code for langchain.agents.agent_toolkits.playwright.toolkit

"""Playwright web browser toolkit."""
from __future__ import annotations

from typing import TYPE_CHECKING, List, Optional, Type, cast

from pydantic import Extra, root_validator

from langchain.agents.agent_toolkits.base import BaseToolkit
from langchain.tools.base import BaseTool
from langchain.tools.playwright.base import (
    BaseBrowserTool,
    lazy_import_playwright_browsers,
)
from langchain.tools.playwright.click import ClickTool
from langchain.tools.playwright.current_page import CurrentWebPageTool
from langchain.tools.playwright.extract_hyperlinks import ExtractHyperlinksTool
from langchain.tools.playwright.extract_text import ExtractTextTool
from langchain.tools.playwright.get_elements import GetElementsTool
from langchain.tools.playwright.navigate import NavigateTool
from langchain.tools.playwright.navigate_back import NavigateBackTool

if TYPE_CHECKING:
    from playwright.async_api import Browser as AsyncBrowser
    from playwright.sync_api import Browser as SyncBrowser
else:
    try:
        # We do this so pydantic can resolve the types when instantiating
        from playwright.async_api import Browser as AsyncBrowser
        from playwright.sync_api import Browser as SyncBrowser
    except ImportError:
        pass


class PlayWrightBrowserToolkit(BaseToolkit):
    """Toolkit for web browser tools."""

    sync_browser: Optional["SyncBrowser"] = None
    async_browser: Optional["AsyncBrowser"] = None

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid
        arbitrary_types_allowed = True

    @root_validator
    def validate_imports_and_browser_provided(cls, values: dict) -> dict:
        """Check that the arguments are valid."""
        lazy_import_playwright_browsers()
        if values.get("async_browser") is None and values.get("sync_browser") is None:
            raise ValueError("Either async_browser or sync_browser must be specified.")
        return values

    def get_tools(self) -> List[BaseTool]:
        """Get the tools in the toolkit."""
        tool_classes: List[Type[BaseBrowserTool]] = [
            ClickTool,
            NavigateTool,
            NavigateBackTool,
            ExtractTextTool,
            ExtractHyperlinksTool,
            GetElementsTool,
            CurrentWebPageTool,
        ]
        tools = [
            tool_cls.from_browser(
                sync_browser=self.sync_browser, async_browser=self.async_browser
            )
            for tool_cls in tool_classes
        ]
        return cast(List[BaseTool], tools)

    @classmethod
    def from_browser(
        cls,
        sync_browser: Optional[SyncBrowser] = None,
        async_browser: Optional[AsyncBrowser] = None,
    ) -> PlayWrightBrowserToolkit:
        """Instantiate the toolkit."""
        # This is to raise a better error than the forward ref ones Pydantic would have
        lazy_import_playwright_browsers()
        return cls(sync_browser=sync_browser, async_browser=async_browser)
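A minimal sketch (not from the source), assuming the `playwright` package and its browser binaries are installed; `create_sync_playwright_browser` is a helper in `langchain.tools.playwright.utils` at the time of writing:

from langchain.agents.agent_toolkits.playwright.toolkit import PlayWrightBrowserToolkit
from langchain.tools.playwright.utils import create_sync_playwright_browser

browser = create_sync_playwright_browser()  # launches a headless browser
toolkit = PlayWrightBrowserToolkit.from_browser(sync_browser=browser)
tools = toolkit.get_tools()  # click, navigate, extract_text, ... bound to this browser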
Source code for langchain.agents.agent_toolkits.file_management.toolkit

"""Toolkit for interacting with the local filesystem."""
from __future__ import annotations

from typing import List, Optional

from pydantic import root_validator

from langchain.agents.agent_toolkits.base import BaseToolkit
from langchain.tools import BaseTool
from langchain.tools.file_management.copy import CopyFileTool
from langchain.tools.file_management.delete import DeleteFileTool
from langchain.tools.file_management.file_search import FileSearchTool
from langchain.tools.file_management.list_dir import ListDirectoryTool
from langchain.tools.file_management.move import MoveFileTool
from langchain.tools.file_management.read import ReadFileTool
from langchain.tools.file_management.write import WriteFileTool

_FILE_TOOLS = {
    tool_cls.__fields__["name"].default: tool_cls
    for tool_cls in [
        CopyFileTool,
        DeleteFileTool,
        FileSearchTool,
        MoveFileTool,
        ReadFileTool,
        WriteFileTool,
        ListDirectoryTool,
    ]
}


class FileManagementToolkit(BaseToolkit):
    """Toolkit for interacting with local files."""

    root_dir: Optional[str] = None
    """If specified, all file operations are made relative to root_dir."""
    selected_tools: Optional[List[str]] = None
    """If provided, only provide the selected tools. Defaults to all."""

    @root_validator
    def validate_tools(cls, values: dict) -> dict:
        selected_tools = values.get("selected_tools") or []
        for tool_name in selected_tools:
            if tool_name not in _FILE_TOOLS:
                raise ValueError(
                    f"File Tool of name {tool_name} not supported."
                    f" Permitted tools: {list(_FILE_TOOLS)}"
                )
        return values

    def get_tools(self) -> List[BaseTool]:
        """Get the tools in the toolkit."""
        allowed_tools = self.selected_tools or _FILE_TOOLS.keys()
        tools: List[BaseTool] = []
        for tool in allowed_tools:
            tool_cls = _FILE_TOOLS[tool]
            tools.append(tool_cls(root_dir=self.root_dir))  # type: ignore
        return tools


__all__ = ["FileManagementToolkit"]
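A minimal sketch (not from the source); the selected names match the tools' default `name` fields, and scoping `root_dir` to a temporary directory is a sensible default when experimenting:

import tempfile

from langchain.agents.agent_toolkits.file_management.toolkit import (
    FileManagementToolkit,
)

working_dir = tempfile.mkdtemp()
toolkit = FileManagementToolkit(
    root_dir=working_dir,  # all operations stay inside this directory
    selected_tools=["read_file", "write_file", "list_directory"],
)
read_tool, write_tool, list_tool = toolkit.get_tools()
write_tool.run({"file_path": "example.txt", "text": "Hello!"})
print(read_tool.run({"file_path": "example.txt"}))  # Hello!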
Source code for langchain.agents.agent_toolkits.azure_cognitive_services.toolkit

from __future__ import annotations

import sys
from typing import List

from langchain.agents.agent_toolkits.base import BaseToolkit
from langchain.tools.azure_cognitive_services import (
    AzureCogsFormRecognizerTool,
    AzureCogsImageAnalysisTool,
    AzureCogsSpeech2TextTool,
    AzureCogsText2SpeechTool,
)
from langchain.tools.base import BaseTool


class AzureCognitiveServicesToolkit(BaseToolkit):
    """Toolkit for Azure Cognitive Services."""

    def get_tools(self) -> List[BaseTool]:
        """Get the tools in the toolkit."""
        tools = [
            AzureCogsFormRecognizerTool(),
            AzureCogsSpeech2TextTool(),
            AzureCogsText2SpeechTool(),
        ]
        # TODO: Remove check once azure-ai-vision supports MacOS.
        if sys.platform.startswith("linux") or sys.platform.startswith("win"):
            tools.append(AzureCogsImageAnalysisTool())
        return tools


Source code for langchain.agents.agent_toolkits.json.toolkit

"""Toolkit for interacting with a JSON spec."""
from __future__ import annotations

from typing import List

from langchain.agents.agent_toolkits.base import BaseToolkit
from langchain.tools import BaseTool
from langchain.tools.json.tool import JsonGetValueTool, JsonListKeysTool, JsonSpec


class JsonToolkit(BaseToolkit):
    """Toolkit for interacting with a JSON spec."""

    spec: JsonSpec

    def get_tools(self) -> List[BaseTool]:
        """Get the tools in the toolkit."""
        return [
            JsonListKeysTool(spec=self.spec),
            JsonGetValueTool(spec=self.spec),
        ]


Source code for langchain.agents.agent_toolkits.json.base

"""JSON agent."""
from typing import Any, Dict, List, Optional

from langchain.agents.agent import AgentExecutor
from langchain.agents.agent_toolkits.json.prompt import JSON_PREFIX, JSON_SUFFIX
from langchain.agents.agent_toolkits.json.toolkit import JsonToolkit
from langchain.agents.mrkl.base import ZeroShotAgent
from langchain.agents.mrkl.prompt import FORMAT_INSTRUCTIONS
from langchain.callbacks.base import BaseCallbackManager
from langchain.chains.llm import LLMChain
from langchain.schema.language_model import BaseLanguageModel


def create_json_agent(
    llm: BaseLanguageModel,
    toolkit: JsonToolkit,
    callback_manager: Optional[BaseCallbackManager] = None,
    prefix: str = JSON_PREFIX,
    suffix: str = JSON_SUFFIX,
    format_instructions: str = FORMAT_INSTRUCTIONS,
    input_variables: Optional[List[str]] = None,
    verbose: bool = False,
    agent_executor_kwargs: Optional[Dict[str, Any]] = None,
    **kwargs: Dict[str, Any],
) -> AgentExecutor:
    """Construct a JSON agent from an LLM and tools."""
    tools = toolkit.get_tools()
    prompt = ZeroShotAgent.create_prompt(
        tools,
        prefix=prefix,
        suffix=suffix,
        format_instructions=format_instructions,
        input_variables=input_variables,
    )
    llm_chain = LLMChain(
        llm=llm,
        prompt=prompt,
        callback_manager=callback_manager,
    )
    tool_names = [tool.name for tool in tools]
    agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names, **kwargs)
    return AgentExecutor.from_agent_and_tools(
        agent=agent,
        tools=tools,
        callback_manager=callback_manager,
        verbose=verbose,
        **(agent_executor_kwargs or {}),
    )
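A minimal sketch for `create_json_agent` (not from the source). The file `openapi.yml` is a placeholder; any large JSON/YAML document works, and an OpenAI API key is assumed:

import yaml

from langchain.agents.agent_toolkits.json.base import create_json_agent
from langchain.agents.agent_toolkits.json.toolkit import JsonToolkit
from langchain.llms.openai import OpenAI
from langchain.tools.json.tool import JsonSpec

with open("openapi.yml") as f:  # placeholder file
    data = yaml.safe_load(f)
json_spec = JsonSpec(dict_=data, max_value_length=4000)
agent = create_json_agent(
    llm=OpenAI(temperature=0), toolkit=JsonToolkit(spec=json_spec), verbose=True
)
# agent.run("What are the required parameters of the /pets POST endpoint?")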
Source code for langchain.agents.agent_toolkits.gmail.toolkit

from __future__ import annotations

from typing import TYPE_CHECKING, List

from pydantic import Field

from langchain.agents.agent_toolkits.base import BaseToolkit
from langchain.tools import BaseTool
from langchain.tools.gmail.create_draft import GmailCreateDraft
from langchain.tools.gmail.get_message import GmailGetMessage
from langchain.tools.gmail.get_thread import GmailGetThread
from langchain.tools.gmail.search import GmailSearch
from langchain.tools.gmail.send_message import GmailSendMessage
from langchain.tools.gmail.utils import build_resource_service

if TYPE_CHECKING:
    # This is for linting and IDE typehints
    from googleapiclient.discovery import Resource
else:
    try:
        # We do this so pydantic can resolve the types when instantiating
        from googleapiclient.discovery import Resource
    except ImportError:
        pass

SCOPES = ["https://mail.google.com/"]


class GmailToolkit(BaseToolkit):
    """Toolkit for interacting with Gmail."""

    api_resource: Resource = Field(default_factory=build_resource_service)

    class Config:
        """Pydantic config."""

        arbitrary_types_allowed = True

    def get_tools(self) -> List[BaseTool]:
        """Get the tools in the toolkit."""
        return [
            GmailCreateDraft(api_resource=self.api_resource),
            GmailSendMessage(api_resource=self.api_resource),
            GmailSearch(api_resource=self.api_resource),
            GmailGetMessage(api_resource=self.api_resource),
            GmailGetThread(api_resource=self.api_resource),
        ]


Source code for langchain.agents.agent_toolkits.spark_sql.toolkit

"""Toolkit for interacting with Spark SQL."""
from typing import List

from pydantic import Field

from langchain.agents.agent_toolkits.base import BaseToolkit
from langchain.schema.language_model import BaseLanguageModel
from langchain.tools import BaseTool
from langchain.tools.spark_sql.tool import (
    InfoSparkSQLTool,
    ListSparkSQLTool,
    QueryCheckerTool,
    QuerySparkSQLTool,
)
from langchain.utilities.spark_sql import SparkSQL


class SparkSQLToolkit(BaseToolkit):
    """Toolkit for interacting with Spark SQL."""

    db: SparkSQL = Field(exclude=True)
    llm: BaseLanguageModel = Field(exclude=True)

    class Config:
        """Configuration for this pydantic object."""

        arbitrary_types_allowed = True

    def get_tools(self) -> List[BaseTool]:
        """Get the tools in the toolkit."""
        return [
            QuerySparkSQLTool(db=self.db),
            InfoSparkSQLTool(db=self.db),
            ListSparkSQLTool(db=self.db),
            QueryCheckerTool(db=self.db, llm=self.llm),
        ]


Source code for langchain.agents.agent_toolkits.spark_sql.base

"""Spark SQL agent."""
from typing import Any, Dict, List, Optional

from langchain.agents.agent import AgentExecutor
from langchain.agents.agent_toolkits.spark_sql.prompt import SQL_PREFIX, SQL_SUFFIX
from langchain.agents.agent_toolkits.spark_sql.toolkit import SparkSQLToolkit
from langchain.agents.mrkl.base import ZeroShotAgent
from langchain.agents.mrkl.prompt import FORMAT_INSTRUCTIONS
from langchain.callbacks.base import BaseCallbackManager
from langchain.chains.llm import LLMChain
from langchain.schema.language_model import BaseLanguageModel


def create_spark_sql_agent(
    llm: BaseLanguageModel,
    toolkit: SparkSQLToolkit,
    callback_manager: Optional[BaseCallbackManager] = None,
    prefix: str = SQL_PREFIX,
    suffix: str = SQL_SUFFIX,
    format_instructions: str = FORMAT_INSTRUCTIONS,
    input_variables: Optional[List[str]] = None,
    top_k: int = 10,
    max_iterations: Optional[int] = 15,
    max_execution_time: Optional[float] = None,
    early_stopping_method: str = "force",
    verbose: bool = False,
    agent_executor_kwargs: Optional[Dict[str, Any]] = None,
    **kwargs: Dict[str, Any],
) -> AgentExecutor:
    """Construct a Spark SQL agent from an LLM and tools."""
    tools = toolkit.get_tools()
    prefix = prefix.format(top_k=top_k)
    prompt = ZeroShotAgent.create_prompt(
        tools,
        prefix=prefix,
        suffix=suffix,
        format_instructions=format_instructions,
        input_variables=input_variables,
    )
    llm_chain = LLMChain(
        llm=llm,
        prompt=prompt,
        callback_manager=callback_manager,
    )
    tool_names = [tool.name for tool in tools]
    agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names, **kwargs)
    return AgentExecutor.from_agent_and_tools(
        agent=agent,
        tools=tools,
        callback_manager=callback_manager,
        verbose=verbose,
        max_iterations=max_iterations,
        max_execution_time=max_execution_time,
        early_stopping_method=early_stopping_method,
        **(agent_executor_kwargs or {}),
    )
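A minimal sketch for `create_spark_sql_agent` (not from the source). It assumes a running Spark session; the schema name is a placeholder, and `schema` is a field on the `SparkSQL` wrapper at the time of writing:

from langchain.agents.agent_toolkits.spark_sql.base import create_spark_sql_agent
from langchain.agents.agent_toolkits.spark_sql.toolkit import SparkSQLToolkit
from langchain.llms.openai import OpenAI
from langchain.utilities.spark_sql import SparkSQL

spark_sql = SparkSQL(schema="langchain_example")  # placeholder schema name
llm = OpenAI(temperature=0)
toolkit = SparkSQLToolkit(db=spark_sql, llm=llm)
agent = create_spark_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
# agent.run("Describe the titanic table")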
Source code for langchain.agents.agent_toolkits.pandas.base

"""Agent for working with pandas objects."""
from typing import Any, Dict, List, Optional, Tuple

from langchain.agents.agent import AgentExecutor, BaseSingleActionAgent
from langchain.agents.agent_toolkits.pandas.prompt import (
    FUNCTIONS_WITH_DF,
    FUNCTIONS_WITH_MULTI_DF,
    MULTI_DF_PREFIX,
    MULTI_DF_PREFIX_FUNCTIONS,
    PREFIX,
    PREFIX_FUNCTIONS,
    SUFFIX_NO_DF,
    SUFFIX_WITH_DF,
    SUFFIX_WITH_MULTI_DF,
)
from langchain.agents.mrkl.base import ZeroShotAgent
from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent
from langchain.agents.types import AgentType
from langchain.callbacks.base import BaseCallbackManager
from langchain.chains.llm import LLMChain
from langchain.schema import BasePromptTemplate
from langchain.schema.language_model import BaseLanguageModel
from langchain.schema.messages import SystemMessage
from langchain.tools.python.tool import PythonAstREPLTool


def _get_multi_prompt(
    dfs: List[Any],
    prefix: Optional[str] = None,
    suffix: Optional[str] = None,
    input_variables: Optional[List[str]] = None,
    include_df_in_prompt: Optional[bool] = True,
    number_of_head_rows: int = 5,
) -> Tuple[BasePromptTemplate, List[PythonAstREPLTool]]:
    num_dfs = len(dfs)
    if suffix is not None:
        suffix_to_use = suffix
        include_dfs_head = True
    elif include_df_in_prompt:
        suffix_to_use = SUFFIX_WITH_MULTI_DF
        include_dfs_head = True
    else:
        suffix_to_use = SUFFIX_NO_DF
        include_dfs_head = False
    if input_variables is None:
        input_variables = ["input", "agent_scratchpad", "num_dfs"]
        if include_dfs_head:
            input_variables += ["dfs_head"]
    if prefix is None:
        prefix = MULTI_DF_PREFIX
    df_locals = {}
    for i, dataframe in enumerate(dfs):
        df_locals[f"df{i + 1}"] = dataframe
    tools = [PythonAstREPLTool(locals=df_locals)]
    prompt = ZeroShotAgent.create_prompt(
        tools, prefix=prefix, suffix=suffix_to_use, input_variables=input_variables
    )
    partial_prompt = prompt.partial()
    if "dfs_head" in input_variables:
        dfs_head = "\n\n".join([d.head(number_of_head_rows).to_markdown() for d in dfs])
        partial_prompt = partial_prompt.partial(num_dfs=str(num_dfs), dfs_head=dfs_head)
    if "num_dfs" in input_variables:
        partial_prompt = partial_prompt.partial(num_dfs=str(num_dfs))
    return partial_prompt, tools


def _get_single_prompt(
    df: Any,
    prefix: Optional[str] = None,
    suffix: Optional[str] = None,
    input_variables: Optional[List[str]] = None,
    include_df_in_prompt: Optional[bool] = True,
    number_of_head_rows: int = 5,
) -> Tuple[BasePromptTemplate, List[PythonAstREPLTool]]:
    if suffix is not None:
        suffix_to_use = suffix
        include_df_head = True
    elif include_df_in_prompt:
        suffix_to_use = SUFFIX_WITH_DF
        include_df_head = True
    else:
        suffix_to_use = SUFFIX_NO_DF
        include_df_head = False
    if input_variables is None:
        input_variables = ["input", "agent_scratchpad"]
        if include_df_head:
            input_variables += ["df_head"]
    if prefix is None:
        prefix = PREFIX
    tools = [PythonAstREPLTool(locals={"df": df})]
    prompt = ZeroShotAgent.create_prompt(
        tools, prefix=prefix, suffix=suffix_to_use, input_variables=input_variables
    )
    partial_prompt = prompt.partial()
    if "df_head" in input_variables:
        partial_prompt = partial_prompt.partial(
            df_head=str(df.head(number_of_head_rows).to_markdown())
        )
    return partial_prompt, tools
def _get_prompt_and_tools(
    df: Any,
    prefix: Optional[str] = None,
    suffix: Optional[str] = None,
    input_variables: Optional[List[str]] = None,
    include_df_in_prompt: Optional[bool] = True,
    number_of_head_rows: int = 5,
) -> Tuple[BasePromptTemplate, List[PythonAstREPLTool]]:
    try:
        import pandas as pd
    except ImportError:
        raise ValueError(
            "pandas package not found, please install with `pip install pandas`"
        )
    if include_df_in_prompt is not None and suffix is not None:
        raise ValueError("If suffix is specified, include_df_in_prompt should not be.")
    if isinstance(df, list):
        for item in df:
            if not isinstance(item, pd.DataFrame):
                raise ValueError(f"Expected pandas object, got {type(df)}")
        return _get_multi_prompt(
            df,
            prefix=prefix,
            suffix=suffix,
            input_variables=input_variables,
            include_df_in_prompt=include_df_in_prompt,
            number_of_head_rows=number_of_head_rows,
        )
    else:
        if not isinstance(df, pd.DataFrame):
            raise ValueError(f"Expected pandas object, got {type(df)}")
        return _get_single_prompt(
            df,
            prefix=prefix,
            suffix=suffix,
            input_variables=input_variables,
            include_df_in_prompt=include_df_in_prompt,
            number_of_head_rows=number_of_head_rows,
        )


def _get_functions_single_prompt(
    df: Any,
    prefix: Optional[str] = None,
    suffix: Optional[str] = None,
    include_df_in_prompt: Optional[bool] = True,
    number_of_head_rows: int = 5,
) -> Tuple[BasePromptTemplate, List[PythonAstREPLTool]]:
    if suffix is not None:
        suffix_to_use = suffix
        if include_df_in_prompt:
            suffix_to_use = suffix_to_use.format(
                df_head=str(df.head(number_of_head_rows).to_markdown())
            )
    elif include_df_in_prompt:
        suffix_to_use = FUNCTIONS_WITH_DF.format(
            df_head=str(df.head(number_of_head_rows).to_markdown())
        )
    else:
        suffix_to_use = ""
    if prefix is None:
        prefix = PREFIX_FUNCTIONS
    tools = [PythonAstREPLTool(locals={"df": df})]
    system_message = SystemMessage(content=prefix + suffix_to_use)
    prompt = OpenAIFunctionsAgent.create_prompt(system_message=system_message)
    return prompt, tools


def _get_functions_multi_prompt(
    dfs: Any,
    prefix: Optional[str] = None,
    suffix: Optional[str] = None,
    include_df_in_prompt: Optional[bool] = True,
    number_of_head_rows: int = 5,
) -> Tuple[BasePromptTemplate, List[PythonAstREPLTool]]:
    if suffix is not None:
        suffix_to_use = suffix
        if include_df_in_prompt:
            dfs_head = "\n\n".join(
                [d.head(number_of_head_rows).to_markdown() for d in dfs]
            )
            suffix_to_use = suffix_to_use.format(
                dfs_head=dfs_head,
            )
    elif include_df_in_prompt:
        dfs_head = "\n\n".join([d.head(number_of_head_rows).to_markdown() for d in dfs])
        suffix_to_use = FUNCTIONS_WITH_MULTI_DF.format(
            dfs_head=dfs_head,
        )
    else:
        suffix_to_use = ""
    if prefix is None:
        prefix = MULTI_DF_PREFIX_FUNCTIONS
    prefix = prefix.format(num_dfs=str(len(dfs)))
    df_locals = {}
    for i, dataframe in enumerate(dfs):
        df_locals[f"df{i + 1}"] = dataframe
    tools = [PythonAstREPLTool(locals=df_locals)]
    system_message = SystemMessage(content=prefix + suffix_to_use)
    prompt = OpenAIFunctionsAgent.create_prompt(system_message=system_message)
    return prompt, tools


def _get_functions_prompt_and_tools(
    df: Any,
    prefix: Optional[str] = None,
    suffix: Optional[str] = None,
    input_variables: Optional[List[str]] = None,
    include_df_in_prompt: Optional[bool] = True,
    number_of_head_rows: int = 5,
) -> Tuple[BasePromptTemplate, List[PythonAstREPLTool]]:
    try:
        import pandas as pd
    except ImportError:
        raise ValueError(
            "pandas package not found, please install with `pip install pandas`"
        )
    if input_variables is not None:
        raise ValueError("`input_variables` is not supported at the moment.")
    if include_df_in_prompt is not None and suffix is not None:
        raise ValueError("If suffix is specified, include_df_in_prompt should not be.")
    if isinstance(df, list):
        for item in df:
            if not isinstance(item, pd.DataFrame):
                raise ValueError(f"Expected pandas object, got {type(df)}")
        return _get_functions_multi_prompt(
            df,
            prefix=prefix,
            suffix=suffix,
            include_df_in_prompt=include_df_in_prompt,
            number_of_head_rows=number_of_head_rows,
        )
    else:
        if not isinstance(df, pd.DataFrame):
            raise ValueError(f"Expected pandas object, got {type(df)}")
        return _get_functions_single_prompt(
            df,
            prefix=prefix,
            suffix=suffix,
            include_df_in_prompt=include_df_in_prompt,
            number_of_head_rows=number_of_head_rows,
        )


def create_pandas_dataframe_agent(
    llm: BaseLanguageModel,
    df: Any,
    agent_type: AgentType = AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    callback_manager: Optional[BaseCallbackManager] = None,
    prefix: Optional[str] = None,
    suffix: Optional[str] = None,
    input_variables: Optional[List[str]] = None,
    verbose: bool = False,
    return_intermediate_steps: bool = False,
    max_iterations: Optional[int] = 15,
    max_execution_time: Optional[float] = None,
    early_stopping_method: str = "force",
    agent_executor_kwargs: Optional[Dict[str, Any]] = None,
    include_df_in_prompt: Optional[bool] = True,
    number_of_head_rows: int = 5,
    **kwargs: Dict[str, Any],
) -> AgentExecutor:
    """Construct a pandas agent from an LLM and dataframe."""
    agent: BaseSingleActionAgent
    if agent_type == AgentType.ZERO_SHOT_REACT_DESCRIPTION:
        prompt, tools = _get_prompt_and_tools(
            df,
            prefix=prefix,
            suffix=suffix,
            input_variables=input_variables,
            include_df_in_prompt=include_df_in_prompt,
            number_of_head_rows=number_of_head_rows,
        )
        llm_chain = LLMChain(
            llm=llm,
            prompt=prompt,
            callback_manager=callback_manager,
        )
        tool_names = [tool.name for tool in tools]
        agent = ZeroShotAgent(
            llm_chain=llm_chain,
            allowed_tools=tool_names,
            callback_manager=callback_manager,
            **kwargs,
        )
    elif agent_type == AgentType.OPENAI_FUNCTIONS:
        _prompt, tools = _get_functions_prompt_and_tools(
            df,
            prefix=prefix,
            suffix=suffix,
            input_variables=input_variables,
            include_df_in_prompt=include_df_in_prompt,
            number_of_head_rows=number_of_head_rows,
        )
        agent = OpenAIFunctionsAgent(
            llm=llm,
            prompt=_prompt,
            tools=tools,
            callback_manager=callback_manager,
            **kwargs,
        )
    else:
        raise ValueError(f"Agent type {agent_type} not supported at the moment.")
    return AgentExecutor.from_agent_and_tools(
        agent=agent,
        tools=tools,
        callback_manager=callback_manager,
        verbose=verbose,
        return_intermediate_steps=return_intermediate_steps,
        max_iterations=max_iterations,
        max_execution_time=max_execution_time,
        early_stopping_method=early_stopping_method,
        **(agent_executor_kwargs or {}),
    )
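A minimal sketch for `create_pandas_dataframe_agent` (not from the source); `titanic.csv` is a placeholder dataset and an OpenAI API key is assumed:

import pandas as pd

from langchain.agents.agent_toolkits.pandas.base import create_pandas_dataframe_agent
from langchain.llms.openai import OpenAI

df = pd.read_csv("titanic.csv")  # placeholder dataset
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
# agent.run("How many rows are there?")
# Passing a list of frames switches to the multi-df prompt (df1, df2, ...):
# agent = create_pandas_dataframe_agent(OpenAI(temperature=0), [df, df.copy()])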
Source code for langchain.agents.agent_toolkits.powerbi.toolkit

"""Toolkit for interacting with a Power BI dataset."""
from typing import List, Optional, Union

from pydantic import Field

from langchain.agents.agent_toolkits.base import BaseToolkit
from langchain.callbacks.base import BaseCallbackManager
from langchain.chains.llm import LLMChain
from langchain.chat_models.base import BaseChatModel
from langchain.prompts import PromptTemplate
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)
from langchain.schema.language_model import BaseLanguageModel
from langchain.tools import BaseTool
from langchain.tools.powerbi.prompt import (
    QUESTION_TO_QUERY_BASE,
    SINGLE_QUESTION_TO_QUERY,
    USER_INPUT,
)
from langchain.tools.powerbi.tool import (
    InfoPowerBITool,
    ListPowerBITool,
    QueryPowerBITool,
)
from langchain.utilities.powerbi import PowerBIDataset


class PowerBIToolkit(BaseToolkit):
    """Toolkit for interacting with a Power BI dataset."""

    powerbi: PowerBIDataset = Field(exclude=True)
    llm: Union[BaseLanguageModel, BaseChatModel] = Field(exclude=True)
    examples: Optional[str] = None
    max_iterations: int = 5
    callback_manager: Optional[BaseCallbackManager] = None
    output_token_limit: Optional[int] = None
    tiktoken_model_name: Optional[str] = None

    class Config:
        """Configuration for this pydantic object."""

        arbitrary_types_allowed = True

    def get_tools(self) -> List[BaseTool]:
        """Get the tools in the toolkit."""
        return [
            QueryPowerBITool(
                llm_chain=self._get_chain(),
                powerbi=self.powerbi,
                examples=self.examples,
                max_iterations=self.max_iterations,
                output_token_limit=self.output_token_limit,
                tiktoken_model_name=self.tiktoken_model_name,
            ),
            InfoPowerBITool(powerbi=self.powerbi),
            ListPowerBITool(powerbi=self.powerbi),
        ]

    def _get_chain(self) -> LLMChain:
        """Construct the chain based on the callback manager and model type."""
        if isinstance(self.llm, BaseLanguageModel):
            return LLMChain(
                llm=self.llm,
                callback_manager=self.callback_manager
                if self.callback_manager
                else None,
                prompt=PromptTemplate(
                    template=SINGLE_QUESTION_TO_QUERY,
                    input_variables=["tool_input", "tables", "schemas", "examples"],
                ),
            )
        system_prompt = SystemMessagePromptTemplate(
            prompt=PromptTemplate(
                template=QUESTION_TO_QUERY_BASE,
                input_variables=["tables", "schemas", "examples"],
            )
        )
        human_prompt = HumanMessagePromptTemplate(
            prompt=PromptTemplate(
                template=USER_INPUT,
                input_variables=["tool_input"],
            )
        )
        return LLMChain(
            llm=self.llm,
            callback_manager=self.callback_manager if self.callback_manager else None,
            prompt=ChatPromptTemplate.from_messages([system_prompt, human_prompt]),
        )
input_variables=[\"tool_input\"],\n )\n )\n return LLMChain(\n llm=self.llm,\n callback_manager=self.callback_manager if self.callback_manager else None,\n prompt=ChatPromptTemplate.from_messages([system_prompt, human_prompt]),\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/powerbi/toolkit.html"} {"id": "de72b4901d1e-0", "text": "Source code for langchain.agents.agent_toolkits.powerbi.base\n\"\"\"Power BI agent.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom langchain.agents import AgentExecutor\nfrom langchain.agents.agent_toolkits.powerbi.prompt import (\n POWERBI_PREFIX,\n POWERBI_SUFFIX,\n)\nfrom langchain.agents.agent_toolkits.powerbi.toolkit import PowerBIToolkit\nfrom langchain.agents.mrkl.base import ZeroShotAgent\nfrom langchain.agents.mrkl.prompt import FORMAT_INSTRUCTIONS\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.utilities.powerbi import PowerBIDataset\n[docs]def create_pbi_agent(\n llm: BaseLanguageModel,\n toolkit: Optional[PowerBIToolkit] = None,\n powerbi: Optional[PowerBIDataset] = None,\n callback_manager: Optional[BaseCallbackManager] = None,\n prefix: str = POWERBI_PREFIX,\n suffix: str = POWERBI_SUFFIX,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n examples: Optional[str] = None,\n input_variables: Optional[List[str]] = None,\n top_k: int = 10,\n verbose: bool = False,\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a pbi agent from an LLM and tools.\"\"\"\n if toolkit is None:\n if powerbi is None:\n raise ValueError(\"Must provide either a toolkit or powerbi dataset\")\n toolkit = PowerBIToolkit(powerbi=powerbi, llm=llm, examples=examples)\n tools = toolkit.get_tools()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/powerbi/base.html"} {"id": "de72b4901d1e-1", "text": "tools = toolkit.get_tools()\n tables = powerbi.table_names if powerbi else toolkit.powerbi.table_names\n agent = ZeroShotAgent(\n llm_chain=LLMChain(\n llm=llm,\n prompt=ZeroShotAgent.create_prompt(\n tools,\n prefix=prefix.format(top_k=top_k).format(tables=tables),\n suffix=suffix,\n format_instructions=format_instructions,\n input_variables=input_variables,\n ),\n callback_manager=callback_manager, # type: ignore\n verbose=verbose,\n ),\n allowed_tools=[tool.name for tool in tools],\n **kwargs,\n )\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n **(agent_executor_kwargs or {}),\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/powerbi/base.html"} {"id": "6ebe0507f5fd-0", "text": "Source code for langchain.agents.agent_toolkits.powerbi.chat_base\n\"\"\"Power BI agent.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom langchain.agents import AgentExecutor\nfrom langchain.agents.agent import AgentOutputParser\nfrom langchain.agents.agent_toolkits.powerbi.prompt import (\n POWERBI_CHAT_PREFIX,\n POWERBI_CHAT_SUFFIX,\n)\nfrom langchain.agents.agent_toolkits.powerbi.toolkit import PowerBIToolkit\nfrom langchain.agents.conversational_chat.base import ConversationalChatAgent\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chat_models.base import BaseChatModel\nfrom langchain.memory import 
Source code for langchain.agents.agent_toolkits.powerbi.chat_base

"""Power BI agent."""
from typing import Any, Dict, List, Optional

from langchain.agents import AgentExecutor
from langchain.agents.agent import AgentOutputParser
from langchain.agents.agent_toolkits.powerbi.prompt import (
    POWERBI_CHAT_PREFIX,
    POWERBI_CHAT_SUFFIX,
)
from langchain.agents.agent_toolkits.powerbi.toolkit import PowerBIToolkit
from langchain.agents.conversational_chat.base import ConversationalChatAgent
from langchain.callbacks.base import BaseCallbackManager
from langchain.chat_models.base import BaseChatModel
from langchain.memory import ConversationBufferMemory
from langchain.memory.chat_memory import BaseChatMemory
from langchain.utilities.powerbi import PowerBIDataset


def create_pbi_chat_agent(
    llm: BaseChatModel,
    toolkit: Optional[PowerBIToolkit] = None,
    powerbi: Optional[PowerBIDataset] = None,
    callback_manager: Optional[BaseCallbackManager] = None,
    output_parser: Optional[AgentOutputParser] = None,
    prefix: str = POWERBI_CHAT_PREFIX,
    suffix: str = POWERBI_CHAT_SUFFIX,
    examples: Optional[str] = None,
    input_variables: Optional[List[str]] = None,
    memory: Optional[BaseChatMemory] = None,
    top_k: int = 10,
    verbose: bool = False,
    agent_executor_kwargs: Optional[Dict[str, Any]] = None,
    **kwargs: Dict[str, Any],
) -> AgentExecutor:
    """Construct a Power BI agent from a chat LLM and tools.

    If you supply only a Power BI dataset and no toolkit, the same LLM is used
    for both the agent and the toolkit.
    """
    if toolkit is None:
        if powerbi is None:
            raise ValueError("Must provide either a toolkit or powerbi dataset")
        toolkit = PowerBIToolkit(powerbi=powerbi, llm=llm, examples=examples)
    tools = toolkit.get_tools()
    tables = powerbi.table_names if powerbi else toolkit.powerbi.table_names
    agent = ConversationalChatAgent.from_llm_and_tools(
        llm=llm,
        tools=tools,
        system_message=prefix.format(top_k=top_k).format(tables=tables),
        human_message=suffix,
        input_variables=input_variables,
        callback_manager=callback_manager,
        output_parser=output_parser,
        verbose=verbose,
        **kwargs,
    )
    return AgentExecutor.from_agent_and_tools(
        agent=agent,
        tools=tools,
        callback_manager=callback_manager,
        memory=memory
        or ConversationBufferMemory(memory_key="chat_history", return_messages=True),
        verbose=verbose,
        **(agent_executor_kwargs or {}),
    )
Source code for langchain.agents.agent_toolkits.python.base

"""Python agent."""
from typing import Any, Dict, Optional

from langchain.agents.agent import AgentExecutor, BaseSingleActionAgent
from langchain.agents.agent_toolkits.python.prompt import PREFIX
from langchain.agents.mrkl.base import ZeroShotAgent
from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent
from langchain.agents.types import AgentType
from langchain.callbacks.base import BaseCallbackManager
from langchain.chains.llm import LLMChain
from langchain.schema.language_model import BaseLanguageModel
from langchain.schema.messages import SystemMessage
from langchain.tools.python.tool import PythonREPLTool


def create_python_agent(
    llm: BaseLanguageModel,
    tool: PythonREPLTool,
    agent_type: AgentType = AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    callback_manager: Optional[BaseCallbackManager] = None,
    verbose: bool = False,
    prefix: str = PREFIX,
    agent_executor_kwargs: Optional[Dict[str, Any]] = None,
    **kwargs: Dict[str, Any],
) -> AgentExecutor:
    """Construct a Python agent from an LLM and tool."""
    tools = [tool]
    agent: BaseSingleActionAgent
    if agent_type == AgentType.ZERO_SHOT_REACT_DESCRIPTION:
        prompt = ZeroShotAgent.create_prompt(tools, prefix=prefix)
        llm_chain = LLMChain(
            llm=llm,
            prompt=prompt,
            callback_manager=callback_manager,
        )
        tool_names = [tool.name for tool in tools]
        agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names, **kwargs)
    elif agent_type == AgentType.OPENAI_FUNCTIONS:
        system_message = SystemMessage(content=prefix)
        _prompt = OpenAIFunctionsAgent.create_prompt(system_message=system_message)
        agent = OpenAIFunctionsAgent(
            llm=llm,
            prompt=_prompt,
            tools=tools,
            callback_manager=callback_manager,
            **kwargs,
        )
    else:
        raise ValueError(f"Agent type {agent_type} not supported at the moment.")
    return AgentExecutor.from_agent_and_tools(
        agent=agent,
        tools=tools,
        callback_manager=callback_manager,
        verbose=verbose,
        **(agent_executor_kwargs or {}),
    )
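A minimal sketch for `create_python_agent` (not from the source); an OpenAI API key is assumed:

from langchain.agents.agent_toolkits.python.base import create_python_agent
from langchain.llms.openai import OpenAI
from langchain.tools.python.tool import PythonREPLTool

agent = create_python_agent(
    llm=OpenAI(temperature=0),
    tool=PythonREPLTool(),
    verbose=True,
)
# agent.run("What is the 10th Fibonacci number?")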
"https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/openapi/toolkit.html"} {"id": "b9fae257de57-0", "text": "Source code for langchain.agents.agent_toolkits.openapi.base\n\"\"\"OpenAPI spec agent.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom langchain.agents.agent import AgentExecutor\nfrom langchain.agents.agent_toolkits.openapi.prompt import (\n OPENAPI_PREFIX,\n OPENAPI_SUFFIX,\n)\nfrom langchain.agents.agent_toolkits.openapi.toolkit import OpenAPIToolkit\nfrom langchain.agents.mrkl.base import ZeroShotAgent\nfrom langchain.agents.mrkl.prompt import FORMAT_INSTRUCTIONS\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]def create_openapi_agent(\n llm: BaseLanguageModel,\n toolkit: OpenAPIToolkit,\n callback_manager: Optional[BaseCallbackManager] = None,\n prefix: str = OPENAPI_PREFIX,\n suffix: str = OPENAPI_SUFFIX,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n input_variables: Optional[List[str]] = None,\n max_iterations: Optional[int] = 15,\n max_execution_time: Optional[float] = None,\n early_stopping_method: str = \"force\",\n verbose: bool = False,\n return_intermediate_steps: bool = False,\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a json agent from an LLM and tools.\"\"\"\n tools = toolkit.get_tools()\n prompt = ZeroShotAgent.create_prompt(\n tools,\n prefix=prefix,\n suffix=suffix,\n format_instructions=format_instructions,\n input_variables=input_variables,\n )\n llm_chain = LLMChain(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/openapi/base.html"} {"id": "b9fae257de57-1", "text": "input_variables=input_variables,\n )\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names, **kwargs)\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n return_intermediate_steps=return_intermediate_steps,\n max_iterations=max_iterations,\n max_execution_time=max_execution_time,\n early_stopping_method=early_stopping_method,\n **(agent_executor_kwargs or {}),\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/openapi/base.html"} {"id": "58b836af744b-0", "text": "Source code for langchain.agents.agent_toolkits.openapi.planner\n\"\"\"Agent that interacts with OpenAPI APIs via a hierarchical planning approach.\"\"\"\nimport json\nimport re\nfrom functools import partial\nfrom typing import Any, Callable, Dict, List, Optional\nimport yaml\nfrom pydantic import Field\nfrom langchain.agents.agent import AgentExecutor\nfrom langchain.agents.agent_toolkits.openapi.planner_prompt import (\n API_CONTROLLER_PROMPT,\n API_CONTROLLER_TOOL_DESCRIPTION,\n API_CONTROLLER_TOOL_NAME,\n API_ORCHESTRATOR_PROMPT,\n API_PLANNER_PROMPT,\n API_PLANNER_TOOL_DESCRIPTION,\n API_PLANNER_TOOL_NAME,\n PARSING_DELETE_PROMPT,\n PARSING_GET_PROMPT,\n PARSING_PATCH_PROMPT,\n PARSING_POST_PROMPT,\n REQUESTS_DELETE_TOOL_DESCRIPTION,\n REQUESTS_GET_TOOL_DESCRIPTION,\n REQUESTS_PATCH_TOOL_DESCRIPTION,\n REQUESTS_POST_TOOL_DESCRIPTION,\n)\nfrom langchain.agents.agent_toolkits.openapi.spec import ReducedOpenAPISpec\nfrom langchain.agents.mrkl.base import 
ZeroShotAgent\nfrom langchain.agents.tools import Tool\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\nfrom langchain.llms.openai import OpenAI\nfrom langchain.memory import ReadOnlySharedMemory\nfrom langchain.prompts import PromptTemplate\nfrom langchain.requests import RequestsWrapper\nfrom langchain.schema import BasePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.requests.tool import BaseRequestsTool\n#\n# Requests tools with LLM-instructed extraction of truncated responses.\n#\n# Of course, truncating so bluntly may lose a lot of valuable\n# information in the response.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/openapi/planner.html"} {"id": "58b836af744b-1", "text": "# information in the response.\n# However, the goal for now is to have only a single inference step.\nMAX_RESPONSE_LENGTH = 5000\ndef _get_default_llm_chain(prompt: BasePromptTemplate) -> LLMChain:\n return LLMChain(\n llm=OpenAI(),\n prompt=prompt,\n )\ndef _get_default_llm_chain_factory(\n prompt: BasePromptTemplate,\n) -> Callable[[], LLMChain]:\n \"\"\"Returns a default LLMChain factory.\"\"\"\n return partial(_get_default_llm_chain, prompt)\n[docs]class RequestsGetToolWithParsing(BaseRequestsTool, BaseTool):\n name = \"requests_get\"\n description = REQUESTS_GET_TOOL_DESCRIPTION\n response_length: Optional[int] = MAX_RESPONSE_LENGTH\n llm_chain: LLMChain = Field(\n default_factory=_get_default_llm_chain_factory(PARSING_GET_PROMPT)\n )\n def _run(self, text: str) -> str:\n try:\n data = json.loads(text)\n except json.JSONDecodeError as e:\n raise e\n data_params = data.get(\"params\")\n response = self.requests_wrapper.get(data[\"url\"], params=data_params)\n response = response[: self.response_length]\n return self.llm_chain.predict(\n response=response, instructions=data[\"output_instructions\"]\n ).strip()\n async def _arun(self, text: str) -> str:\n raise NotImplementedError()\n[docs]class RequestsPostToolWithParsing(BaseRequestsTool, BaseTool):\n name = \"requests_post\"\n description = REQUESTS_POST_TOOL_DESCRIPTION\n response_length: Optional[int] = MAX_RESPONSE_LENGTH\n llm_chain: LLMChain = Field(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/openapi/planner.html"} {"id": "58b836af744b-2", "text": "llm_chain: LLMChain = Field(\n default_factory=_get_default_llm_chain_factory(PARSING_POST_PROMPT)\n )\n def _run(self, text: str) -> str:\n try:\n data = json.loads(text)\n except json.JSONDecodeError as e:\n raise e\n response = self.requests_wrapper.post(data[\"url\"], data[\"data\"])\n response = response[: self.response_length]\n return self.llm_chain.predict(\n response=response, instructions=data[\"output_instructions\"]\n ).strip()\n async def _arun(self, text: str) -> str:\n raise NotImplementedError()\n[docs]class RequestsPatchToolWithParsing(BaseRequestsTool, BaseTool):\n name = \"requests_patch\"\n description = REQUESTS_PATCH_TOOL_DESCRIPTION\n response_length: Optional[int] = MAX_RESPONSE_LENGTH\n llm_chain: LLMChain = Field(\n default_factory=_get_default_llm_chain_factory(PARSING_PATCH_PROMPT)\n )\n def _run(self, text: str) -> str:\n try:\n data = json.loads(text)\n except json.JSONDecodeError as e:\n raise e\n response = self.requests_wrapper.patch(data[\"url\"], data[\"data\"])\n response = response[: self.response_length]\n return 
self.llm_chain.predict(\n response=response, instructions=data[\"output_instructions\"]\n ).strip()\n async def _arun(self, text: str) -> str:\n raise NotImplementedError()\n[docs]class RequestsDeleteToolWithParsing(BaseRequestsTool, BaseTool):\n name = \"requests_delete\"\n description = REQUESTS_DELETE_TOOL_DESCRIPTION\n response_length: Optional[int] = MAX_RESPONSE_LENGTH\n llm_chain: LLMChain = Field(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/openapi/planner.html"} {"id": "58b836af744b-3", "text": "llm_chain: LLMChain = Field(\n default_factory=_get_default_llm_chain_factory(PARSING_DELETE_PROMPT)\n )\n def _run(self, text: str) -> str:\n try:\n data = json.loads(text)\n except json.JSONDecodeError as e:\n raise e\n response = self.requests_wrapper.delete(data[\"url\"])\n response = response[: self.response_length]\n return self.llm_chain.predict(\n response=response, instructions=data[\"output_instructions\"]\n ).strip()\n async def _arun(self, text: str) -> str:\n raise NotImplementedError()\n#\n# Orchestrator, planner, controller.\n#\ndef _create_api_planner_tool(\n api_spec: ReducedOpenAPISpec, llm: BaseLanguageModel\n) -> Tool:\n endpoint_descriptions = [\n f\"{name} {description}\" for name, description, _ in api_spec.endpoints\n ]\n prompt = PromptTemplate(\n template=API_PLANNER_PROMPT,\n input_variables=[\"query\"],\n partial_variables={\"endpoints\": \"- \" + \"- \".join(endpoint_descriptions)},\n )\n chain = LLMChain(llm=llm, prompt=prompt)\n tool = Tool(\n name=API_PLANNER_TOOL_NAME,\n description=API_PLANNER_TOOL_DESCRIPTION,\n func=chain.run,\n )\n return tool\ndef _create_api_controller_agent(\n api_url: str,\n api_docs: str,\n requests_wrapper: RequestsWrapper,\n llm: BaseLanguageModel,\n) -> AgentExecutor:\n get_llm_chain = LLMChain(llm=llm, prompt=PARSING_GET_PROMPT)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/openapi/planner.html"} {"id": "58b836af744b-4", "text": "post_llm_chain = LLMChain(llm=llm, prompt=PARSING_POST_PROMPT)\n tools: List[BaseTool] = [\n RequestsGetToolWithParsing(\n requests_wrapper=requests_wrapper, llm_chain=get_llm_chain\n ),\n RequestsPostToolWithParsing(\n requests_wrapper=requests_wrapper, llm_chain=post_llm_chain\n ),\n ]\n prompt = PromptTemplate(\n template=API_CONTROLLER_PROMPT,\n input_variables=[\"input\", \"agent_scratchpad\"],\n partial_variables={\n \"api_url\": api_url,\n \"api_docs\": api_docs,\n \"tool_names\": \", \".join([tool.name for tool in tools]),\n \"tool_descriptions\": \"\\n\".join(\n [f\"{tool.name}: {tool.description}\" for tool in tools]\n ),\n },\n )\n agent = ZeroShotAgent(\n llm_chain=LLMChain(llm=llm, prompt=prompt),\n allowed_tools=[tool.name for tool in tools],\n )\n return AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)\ndef _create_api_controller_tool(\n api_spec: ReducedOpenAPISpec,\n requests_wrapper: RequestsWrapper,\n llm: BaseLanguageModel,\n) -> Tool:\n \"\"\"Expose controller as a tool.\n The tool is invoked with a plan from the planner, and dynamically\n creates a controller agent with relevant documentation only to\n constrain the context.\n \"\"\"\n base_url = api_spec.servers[0][\"url\"] # TODO: do better.\n def _create_and_run_api_controller_agent(plan_str: str) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/openapi/planner.html"} {"id": "58b836af744b-5", "text": "def 
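Each *WithParsing tool above json-decodes its single string input: "url" (plus "params" or "data") drives the request, and "output_instructions" steers the extraction chain run over the truncated response. A sketch of that payload contract with a hypothetical endpoint; constructing the tool without an explicit llm_chain falls back to the default OpenAI-backed chain, so a key is assumed:

.. code-block:: python

    import json

    from langchain.agents.agent_toolkits.openapi.planner import RequestsGetToolWithParsing
    from langchain.requests import RequestsWrapper

    tool = RequestsGetToolWithParsing(requests_wrapper=RequestsWrapper())
    payload = {
        "url": "https://api.example.com/v1/items",  # hypothetical endpoint
        "params": {"limit": 10},
        "output_instructions": "Extract the ids of the returned items.",
    }
    # run() issues a real GET and post-processes the (truncated) body.
    result = tool.run(json.dumps(payload))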
_create_and_run_api_controller_agent(plan_str: str) -> str:\n pattern = r\"\\b(GET|POST|PATCH|DELETE)\\s+(/\\S+)*\"\n matches = re.findall(pattern, plan_str)\n endpoint_names = [\n \"{method} {route}\".format(method=method, route=route.split(\"?\")[0])\n for method, route in matches\n ]\n endpoint_docs_by_name = {name: docs for name, _, docs in api_spec.endpoints}\n docs_str = \"\"\n for endpoint_name in endpoint_names:\n docs = endpoint_docs_by_name.get(endpoint_name)\n if not docs:\n raise ValueError(f\"{endpoint_name} endpoint does not exist.\")\n docs_str += f\"== Docs for {endpoint_name} == \\n{yaml.dump(docs)}\\n\"\n agent = _create_api_controller_agent(base_url, docs_str, requests_wrapper, llm)\n return agent.run(plan_str)\n return Tool(\n name=API_CONTROLLER_TOOL_NAME,\n func=_create_and_run_api_controller_agent,\n description=API_CONTROLLER_TOOL_DESCRIPTION,\n )\n[docs]def create_openapi_agent(\n api_spec: ReducedOpenAPISpec,\n requests_wrapper: RequestsWrapper,\n llm: BaseLanguageModel,\n shared_memory: Optional[ReadOnlySharedMemory] = None,\n callback_manager: Optional[BaseCallbackManager] = None,\n verbose: bool = True,\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Instantiate API planner and controller for a given spec.\n Inject credentials via requests_wrapper.\n We use a top-level \"orchestrator\" agent to invoke the planner and controller,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/openapi/planner.html"} {"id": "58b836af744b-6", "text": "We use a top-level \"orchestrator\" agent to invoke the planner and controller,\n rather than a top-level planner\n that invokes a controller with its plan. This is to keep the planner simple.\n \"\"\"\n tools = [\n _create_api_planner_tool(api_spec, llm),\n _create_api_controller_tool(api_spec, requests_wrapper, llm),\n ]\n prompt = PromptTemplate(\n template=API_ORCHESTRATOR_PROMPT,\n input_variables=[\"input\", \"agent_scratchpad\"],\n partial_variables={\n \"tool_names\": \", \".join([tool.name for tool in tools]),\n \"tool_descriptions\": \"\\n\".join(\n [f\"{tool.name}: {tool.description}\" for tool in tools]\n ),\n },\n )\n agent = ZeroShotAgent(\n llm_chain=LLMChain(llm=llm, prompt=prompt, memory=shared_memory),\n allowed_tools=[tool.name for tool in tools],\n **kwargs,\n )\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n **(agent_executor_kwargs or {}),\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/openapi/planner.html"} {"id": "94929c9217d7-0", "text": "Source code for langchain.agents.agent_toolkits.openapi.spec\n\"\"\"Quick and dirty representation for OpenAPI specs.\"\"\"\nfrom dataclasses import dataclass\nfrom typing import Any, Dict, List, Tuple, Union\n[docs]def dereference_refs(spec_obj: dict, full_spec: dict) -> Union[dict, list]:\n \"\"\"Try to substitute $refs.\n The goal is to get the complete docs for each endpoint in context for now.\n In the few OpenAPI specs I studied, $refs referenced models\n (or in OpenAPI terms, components) and could be nested. 
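A hedged end-to-end sketch for the planner-based create_openapi_agent above; the spec file, bearer token, and query are placeholders. Per the docstring, credentials are injected via the requests wrapper rather than the prompt:

.. code-block:: python

    import yaml

    from langchain.agents.agent_toolkits.openapi import planner
    from langchain.agents.agent_toolkits.openapi.spec import reduce_openapi_spec
    from langchain.llms.openai import OpenAI
    from langchain.requests import RequestsWrapper

    with open("openapi.yaml") as f:  # placeholder spec file
        raw_spec = yaml.safe_load(f)
    api_spec = reduce_openapi_spec(raw_spec)

    # Auth lives on the wrapper, not in the prompt.
    requests_wrapper = RequestsWrapper(headers={"Authorization": "Bearer <token>"})

    agent = planner.create_openapi_agent(api_spec, requests_wrapper, OpenAI(temperature=0))
    agent.run("What endpoints are available, and what does each one do?")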
This code most\n likely misses lots of cases.\n \"\"\"\n def _retrieve_ref_path(path: str, full_spec: dict) -> dict:\n components = path.split(\"/\")\n if components[0] != \"#\":\n raise RuntimeError(\n \"All $refs I've seen so far are uri fragments (start with hash).\"\n )\n out = full_spec\n for component in components[1:]:\n out = out[component]\n return out\n def _dereference_refs(\n obj: Union[dict, list], stop: bool = False\n ) -> Union[dict, list]:\n if stop:\n return obj\n obj_out: Dict[str, Any] = {}\n if isinstance(obj, dict):\n for k, v in obj.items():\n if k == \"$ref\":\n # stop=True => don't dereference recursively.\n return _dereference_refs(\n _retrieve_ref_path(v, full_spec), stop=True\n )\n elif isinstance(v, list):\n obj_out[k] = [_dereference_refs(el) for el in v]\n elif isinstance(v, dict):\n obj_out[k] = _dereference_refs(v)\n else:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/openapi/spec.html"} {"id": "94929c9217d7-1", "text": "obj_out[k] = _dereference_refs(v)\n else:\n obj_out[k] = v\n return obj_out\n elif isinstance(obj, list):\n return [_dereference_refs(el) for el in obj]\n else:\n return obj\n return _dereference_refs(spec_obj)\n@dataclass(frozen=True)\nclass ReducedOpenAPISpec:\n servers: List[dict]\n description: str\n endpoints: List[Tuple[str, str, dict]]\n[docs]def reduce_openapi_spec(spec: dict, dereference: bool = True) -> ReducedOpenAPISpec:\n \"\"\"Simplify/distill/minify a spec somehow.\n I want a smaller target for retrieval and (more importantly)\n I want smaller results from retrieval.\n I was hoping https://openapi.tools/ would have some useful bits\n to this end, but doesn't seem so.\n \"\"\"\n # 1. Consider only get, post, patch, delete endpoints.\n endpoints = [\n (f\"{operation_name.upper()} {route}\", docs.get(\"description\"), docs)\n for route, operation in spec[\"paths\"].items()\n for operation_name, docs in operation.items()\n if operation_name in [\"get\", \"post\", \"patch\", \"delete\"]\n ]\n # 2. Replace any refs so that complete docs are retrieved.\n # Note: probably want to do this post-retrieval, it blows up the size of the spec.\n if dereference:\n endpoints = [\n (name, description, dereference_refs(docs, spec))\n for name, description, docs in endpoints\n ]\n # 3. Strip docs down to required request args + happy path response.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/openapi/spec.html"} {"id": "94929c9217d7-2", "text": "# 3. 
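To make dereference_refs concrete, a small self-contained example. Note the stop=True short-circuit: once a $ref is substituted, refs nested inside the substituted component are left as-is:

.. code-block:: python

    from langchain.agents.agent_toolkits.openapi.spec import dereference_refs

    # A toy spec fragment: the schema lives under components and is
    # referenced by a "#/"-style URI fragment.
    full_spec = {
        "components": {
            "schemas": {
                "Pet": {"type": "object", "properties": {"name": {"type": "string"}}}
            }
        }
    }
    node = {"schema": {"$ref": "#/components/schemas/Pet"}}

    print(dereference_refs(node, full_spec))
    # {'schema': {'type': 'object', 'properties': {'name': {'type': 'string'}}}}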
Strip docs down to required request args + happy path response.\n def reduce_endpoint_docs(docs: dict) -> dict:\n out = {}\n if docs.get(\"description\"):\n out[\"description\"] = docs.get(\"description\")\n if docs.get(\"parameters\"):\n out[\"parameters\"] = [\n parameter\n for parameter in docs.get(\"parameters\", [])\n if parameter.get(\"required\")\n ]\n if \"200\" in docs[\"responses\"]:\n out[\"responses\"] = docs[\"responses\"][\"200\"]\n return out\n endpoints = [\n (name, description, reduce_endpoint_docs(docs))\n for name, description, docs in endpoints\n ]\n return ReducedOpenAPISpec(\n servers=spec[\"servers\"],\n description=spec[\"info\"].get(\"description\", \"\"),\n endpoints=endpoints,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/openapi/spec.html"} {"id": "d74a7caac91d-0", "text": "Source code for langchain.agents.agent_toolkits.csv.base\n\"\"\"Agent for working with csv files.\"\"\"\nfrom typing import Any, List, Optional, Union\nfrom langchain.agents.agent import AgentExecutor\nfrom langchain.agents.agent_toolkits.pandas.base import create_pandas_dataframe_agent\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]def create_csv_agent(\n llm: BaseLanguageModel,\n path: Union[str, List[str]],\n pandas_kwargs: Optional[dict] = None,\n **kwargs: Any,\n) -> AgentExecutor:\n \"\"\"Create csv agent by loading to a dataframe and using pandas agent.\"\"\"\n try:\n import pandas as pd\n except ImportError:\n raise ValueError(\n \"pandas package not found, please install with `pip install pandas`\"\n )\n _kwargs = pandas_kwargs or {}\n if isinstance(path, str):\n df = pd.read_csv(path, **_kwargs)\n elif isinstance(path, list):\n df = []\n for item in path:\n if not isinstance(item, str):\n raise ValueError(f\"Expected str, got {type(item)}\")\n df.append(pd.read_csv(item, **_kwargs))\n else:\n raise ValueError(f\"Expected str or list, got {type(path)}\")\n return create_pandas_dataframe_agent(llm, df, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/csv/base.html"} {"id": "914bf3648362-0", "text": "Source code for langchain.agents.agent_toolkits.spark.base\n\"\"\"Agent for working with Spark DataFrames.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom langchain.agents.agent import AgentExecutor\nfrom langchain.agents.agent_toolkits.spark.prompt import PREFIX, SUFFIX\nfrom langchain.agents.mrkl.base import ZeroShotAgent\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\nfrom langchain.llms.base import BaseLLM\nfrom langchain.tools.python.tool import PythonAstREPLTool\ndef _validate_spark_df(df: Any) -> bool:\n try:\n from pyspark.sql import DataFrame as SparkLocalDataFrame\n return isinstance(df, SparkLocalDataFrame)\n except ImportError:\n return False\ndef _validate_spark_connect_df(df: Any) -> bool:\n try:\n from pyspark.sql.connect.dataframe import DataFrame as SparkConnectDataFrame\n return isinstance(df, SparkConnectDataFrame)\n except ImportError:\n return False\n[docs]def create_spark_dataframe_agent(\n llm: BaseLLM,\n df: Any,\n callback_manager: Optional[BaseCallbackManager] = None,\n prefix: str = PREFIX,\n suffix: str = SUFFIX,\n input_variables: Optional[List[str]] = None,\n verbose: bool = False,\n return_intermediate_steps: bool = False,\n max_iterations: Optional[int] = 15,\n max_execution_time: Optional[float] = None,\n early_stopping_method: str = \"force\",\n 
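A short usage sketch for create_csv_agent above; the file name is a placeholder, and pandas_kwargs is forwarded verbatim to pd.read_csv (useful for separators, encodings, and the like):

.. code-block:: python

    from langchain.agents.agent_toolkits.csv.base import create_csv_agent
    from langchain.llms.openai import OpenAI

    # A single path or a list of paths is accepted.
    agent = create_csv_agent(
        OpenAI(temperature=0),
        "titanic.csv",  # placeholder file
        pandas_kwargs={"sep": ","},
        verbose=True,
    )
    agent.run("How many rows does the table have?")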
agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a spark agent from an LLM and dataframe.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/spark/base.html"} {"id": "914bf3648362-1", "text": ") -> AgentExecutor:\n \"\"\"Construct a spark agent from an LLM and dataframe.\"\"\"\n if not _validate_spark_df(df) and not _validate_spark_connect_df(df):\n raise ValueError(\"Spark is not installed. run `pip install pyspark`.\")\n if input_variables is None:\n input_variables = [\"df\", \"input\", \"agent_scratchpad\"]\n tools = [PythonAstREPLTool(locals={\"df\": df})]\n prompt = ZeroShotAgent.create_prompt(\n tools, prefix=prefix, suffix=suffix, input_variables=input_variables\n )\n partial_prompt = prompt.partial(df=str(df.first()))\n llm_chain = LLMChain(\n llm=llm,\n prompt=partial_prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n agent = ZeroShotAgent(\n llm_chain=llm_chain,\n allowed_tools=tool_names,\n callback_manager=callback_manager,\n **kwargs,\n )\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n return_intermediate_steps=return_intermediate_steps,\n max_iterations=max_iterations,\n max_execution_time=max_execution_time,\n early_stopping_method=early_stopping_method,\n **(agent_executor_kwargs or {}),\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/spark/base.html"} {"id": "57d1b27b7c1a-0", "text": "Source code for langchain.agents.agent_toolkits.vectorstore.toolkit\n\"\"\"Toolkit for interacting with a vector store.\"\"\"\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.llms.openai import OpenAI\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.tools import BaseTool\nfrom langchain.tools.vectorstore.tool import (\n VectorStoreQATool,\n VectorStoreQAWithSourcesTool,\n)\nfrom langchain.vectorstores.base import VectorStore\n[docs]class VectorStoreInfo(BaseModel):\n \"\"\"Information about a vectorstore.\"\"\"\n vectorstore: VectorStore = Field(exclude=True)\n name: str\n description: str\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs]class VectorStoreToolkit(BaseToolkit):\n \"\"\"Toolkit for interacting with a vector store.\"\"\"\n vectorstore_info: VectorStoreInfo = Field(exclude=True)\n llm: BaseLanguageModel = Field(default_factory=lambda: OpenAI(temperature=0))\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n description = VectorStoreQATool.get_description(\n self.vectorstore_info.name, self.vectorstore_info.description\n )\n qa_tool = VectorStoreQATool(\n name=self.vectorstore_info.name,\n description=description,\n vectorstore=self.vectorstore_info.vectorstore,\n llm=self.llm,\n )\n description = VectorStoreQAWithSourcesTool.get_description(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/vectorstore/toolkit.html"} {"id": "57d1b27b7c1a-1", "text": ")\n description = VectorStoreQAWithSourcesTool.get_description(\n self.vectorstore_info.name, self.vectorstore_info.description\n )\n 
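A hedged sketch for create_spark_dataframe_agent above. The CSV path is a placeholder; the agent hands the dataframe to a Python REPL tool as df and seeds the prompt with str(df.first()) so the model sees the schema:

.. code-block:: python

    from pyspark.sql import SparkSession

    from langchain.agents.agent_toolkits.spark.base import create_spark_dataframe_agent
    from langchain.llms.openai import OpenAI

    spark = SparkSession.builder.getOrCreate()
    df = spark.read.csv("titanic.csv", header=True, inferSchema=True)  # placeholder

    agent = create_spark_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
    agent.run("How many rows are in df?")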
qa_with_sources_tool = VectorStoreQAWithSourcesTool(\n name=f\"{self.vectorstore_info.name}_with_sources\",\n description=description,\n vectorstore=self.vectorstore_info.vectorstore,\n llm=self.llm,\n )\n return [qa_tool, qa_with_sources_tool]\n[docs]class VectorStoreRouterToolkit(BaseToolkit):\n \"\"\"Toolkit for routing between vector stores.\"\"\"\n vectorstores: List[VectorStoreInfo] = Field(exclude=True)\n llm: BaseLanguageModel = Field(default_factory=lambda: OpenAI(temperature=0))\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n tools: List[BaseTool] = []\n for vectorstore_info in self.vectorstores:\n description = VectorStoreQATool.get_description(\n vectorstore_info.name, vectorstore_info.description\n )\n qa_tool = VectorStoreQATool(\n name=vectorstore_info.name,\n description=description,\n vectorstore=vectorstore_info.vectorstore,\n llm=self.llm,\n )\n tools.append(qa_tool)\n return tools", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/vectorstore/toolkit.html"} {"id": "3bcb0c205ac1-0", "text": "Source code for langchain.agents.agent_toolkits.vectorstore.base\n\"\"\"VectorStore agent.\"\"\"\nfrom typing import Any, Dict, Optional\nfrom langchain.agents.agent import AgentExecutor\nfrom langchain.agents.agent_toolkits.vectorstore.prompt import PREFIX, ROUTER_PREFIX\nfrom langchain.agents.agent_toolkits.vectorstore.toolkit import (\n VectorStoreRouterToolkit,\n VectorStoreToolkit,\n)\nfrom langchain.agents.mrkl.base import ZeroShotAgent\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]def create_vectorstore_agent(\n llm: BaseLanguageModel,\n toolkit: VectorStoreToolkit,\n callback_manager: Optional[BaseCallbackManager] = None,\n prefix: str = PREFIX,\n verbose: bool = False,\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a vectorstore agent from an LLM and tools.\"\"\"\n tools = toolkit.get_tools()\n prompt = ZeroShotAgent.create_prompt(tools, prefix=prefix)\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names, **kwargs)\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n **(agent_executor_kwargs or {}),\n )\n[docs]def create_vectorstore_router_agent(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/vectorstore/base.html"} {"id": "3bcb0c205ac1-1", "text": ")\n[docs]def create_vectorstore_router_agent(\n llm: BaseLanguageModel,\n toolkit: VectorStoreRouterToolkit,\n callback_manager: Optional[BaseCallbackManager] = None,\n prefix: str = ROUTER_PREFIX,\n verbose: bool = False,\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a vectorstore router agent from an LLM and tools.\"\"\"\n tools = toolkit.get_tools()\n prompt = ZeroShotAgent.create_prompt(tools, prefix=prefix)\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n agent = 
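Wiring VectorStoreInfo, VectorStoreToolkit, and create_vectorstore_agent together, a hedged sketch (assumes faiss-cpu is installed and an OpenAI key is configured; the corpus and question are placeholders):

.. code-block:: python

    from langchain.agents.agent_toolkits.vectorstore.base import create_vectorstore_agent
    from langchain.agents.agent_toolkits.vectorstore.toolkit import (
        VectorStoreInfo,
        VectorStoreToolkit,
    )
    from langchain.embeddings.openai import OpenAIEmbeddings
    from langchain.llms.openai import OpenAI
    from langchain.vectorstores import FAISS

    store = FAISS.from_texts(
        ["The state of the union is strong."],  # placeholder corpus
        OpenAIEmbeddings(),
    )
    info = VectorStoreInfo(
        vectorstore=store,
        name="state_of_union",
        description="the most recent state of the union address",
    )
    agent = create_vectorstore_agent(
        OpenAI(temperature=0), VectorStoreToolkit(vectorstore_info=info)
    )
    agent.run("What did the president say about the economy?")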
ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names, **kwargs)\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n **(agent_executor_kwargs or {}),\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/vectorstore/base.html"} {"id": "1ebe5276e281-0", "text": "Source code for langchain.agents.mrkl.base\n\"\"\"Attempt to implement MRKL systems as described in arxiv.org/pdf/2205.00445.pdf.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Callable, List, NamedTuple, Optional, Sequence\nfrom pydantic import Field\nfrom langchain.agents.agent import Agent, AgentExecutor, AgentOutputParser\nfrom langchain.agents.agent_types import AgentType\nfrom langchain.agents.mrkl.output_parser import MRKLOutputParser\nfrom langchain.agents.mrkl.prompt import FORMAT_INSTRUCTIONS, PREFIX, SUFFIX\nfrom langchain.agents.tools import Tool\nfrom langchain.agents.utils import validate_tools_single_input\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains import LLMChain\nfrom langchain.prompts import PromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.tools.base import BaseTool\n[docs]class ChainConfig(NamedTuple):\n \"\"\"Configuration for chain to use in MRKL system.\n Args:\n action_name: Name of the action.\n action: Action function to call.\n action_description: Description of the action.\n \"\"\"\n action_name: str\n action: Callable\n action_description: str\n[docs]class ZeroShotAgent(Agent):\n \"\"\"Agent for the MRKL chain.\"\"\"\n output_parser: AgentOutputParser = Field(default_factory=MRKLOutputParser)\n @classmethod\n def _get_default_output_parser(cls, **kwargs: Any) -> AgentOutputParser:\n return MRKLOutputParser()\n @property\n def _agent_type(self) -> str:\n \"\"\"Return Identifier of agent type.\"\"\"\n return AgentType.ZERO_SHOT_REACT_DESCRIPTION\n @property\n def observation_prefix(self) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/mrkl/base.html"} {"id": "1ebe5276e281-1", "text": "@property\n def observation_prefix(self) -> str:\n \"\"\"Prefix to append the observation with.\"\"\"\n return \"Observation: \"\n @property\n def llm_prefix(self) -> str:\n \"\"\"Prefix to append the llm call with.\"\"\"\n return \"Thought:\"\n[docs] @classmethod\n def create_prompt(\n cls,\n tools: Sequence[BaseTool],\n prefix: str = PREFIX,\n suffix: str = SUFFIX,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n input_variables: Optional[List[str]] = None,\n ) -> PromptTemplate:\n \"\"\"Create prompt in the style of the zero shot agent.\n Args:\n tools: List of tools the agent will have access to, used to format the\n prompt.\n prefix: String to put before the list of tools.\n suffix: String to put after the list of tools.\n input_variables: List of input variables the final prompt will expect.\n Returns:\n A PromptTemplate with the template assembled from the pieces here.\n \"\"\"\n tool_strings = \"\\n\".join([f\"{tool.name}: {tool.description}\" for tool in tools])\n tool_names = \", \".join([tool.name for tool in tools])\n format_instructions = format_instructions.format(tool_names=tool_names)\n template = \"\\n\\n\".join([prefix, tool_strings, format_instructions, suffix])\n if input_variables is None:\n input_variables = [\"input\", \"agent_scratchpad\"]\n return PromptTemplate(template=template, input_variables=input_variables)\n[docs] 
@classmethod\n def from_llm_and_tools(\n cls,\n llm: BaseLanguageModel,\n tools: Sequence[BaseTool],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/mrkl/base.html"} {"id": "1ebe5276e281-2", "text": "llm: BaseLanguageModel,\n tools: Sequence[BaseTool],\n callback_manager: Optional[BaseCallbackManager] = None,\n output_parser: Optional[AgentOutputParser] = None,\n prefix: str = PREFIX,\n suffix: str = SUFFIX,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n input_variables: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> Agent:\n \"\"\"Construct an agent from an LLM and tools.\"\"\"\n cls._validate_tools(tools)\n prompt = cls.create_prompt(\n tools,\n prefix=prefix,\n suffix=suffix,\n format_instructions=format_instructions,\n input_variables=input_variables,\n )\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n _output_parser = output_parser or cls._get_default_output_parser()\n return cls(\n llm_chain=llm_chain,\n allowed_tools=tool_names,\n output_parser=_output_parser,\n **kwargs,\n )\n @classmethod\n def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:\n validate_tools_single_input(cls.__name__, tools)\n if len(tools) == 0:\n raise ValueError(\n f\"Got no tools for {cls.__name__}. At least one tool must be provided.\"\n )\n for tool in tools:\n if tool.description is None:\n raise ValueError(\n f\"Got a tool {tool.name} without a description. For this agent, \"\n f\"a description must always be provided.\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/mrkl/base.html"} {"id": "1ebe5276e281-3", "text": "f\"a description must always be provided.\"\n )\n super()._validate_tools(tools)\n[docs]class MRKLChain(AgentExecutor):\n \"\"\"Chain that implements the MRKL system.\n Example:\n .. code-block:: python\n from langchain import OpenAI, MRKLChain\n from langchain.chains.mrkl.base import ChainConfig\n llm = OpenAI(temperature=0)\n chains = [...]\n mrkl = MRKLChain.from_chains(llm=llm, chains=chains)\n \"\"\"\n[docs] @classmethod\n def from_chains(\n cls, llm: BaseLanguageModel, chains: List[ChainConfig], **kwargs: Any\n ) -> AgentExecutor:\n \"\"\"User-friendly way to initialize the MRKL chain.\n This is intended to be an easy way to get up and running with the\n MRKL chain.\n Args:\n llm: The LLM to use as the agent LLM.\n chains: The chains the MRKL system has access to.\n **kwargs: parameters to be passed to initialization.\n Returns:\n An initialized MRKL chain.\n Example:\n .. 
code-block:: python\n from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, MRKLChain\n from langchain.chains.mrkl.base import ChainConfig\n llm = OpenAI(temperature=0)\n search = SerpAPIWrapper()\n llm_math_chain = LLMMathChain(llm=llm)\n chains = [\n ChainConfig(\n action_name = \"Search\",\n action=search.search,\n action_description=\"useful for searching\"\n ),\n ChainConfig(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/mrkl/base.html"} {"id": "1ebe5276e281-4", "text": "action_description=\"useful for searching\"\n ),\n ChainConfig(\n action_name=\"Calculator\",\n action=llm_math_chain.run,\n action_description=\"useful for doing math\"\n )\n ]\n mrkl = MRKLChain.from_chains(llm, chains)\n \"\"\"\n tools = [\n Tool(\n name=c.action_name,\n func=c.action,\n description=c.action_description,\n )\n for c in chains\n ]\n agent = ZeroShotAgent.from_llm_and_tools(llm, tools)\n return cls(agent=agent, tools=tools, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/mrkl/base.html"} {"id": "aceedeed4777-0", "text": "Source code for langchain.agents.mrkl.output_parser\nimport re\nfrom typing import Union\nfrom langchain.agents.agent import AgentOutputParser\nfrom langchain.agents.mrkl.prompt import FORMAT_INSTRUCTIONS\nfrom langchain.schema import AgentAction, AgentFinish, OutputParserException\nFINAL_ANSWER_ACTION = \"Final Answer:\"\n[docs]class MRKLOutputParser(AgentOutputParser):\n[docs] def get_format_instructions(self) -> str:\n return FORMAT_INSTRUCTIONS\n[docs] def parse(self, text: str) -> Union[AgentAction, AgentFinish]:\n includes_answer = FINAL_ANSWER_ACTION in text\n regex = (\n r\"Action\\s*\\d*\\s*:[\\s]*(.*?)[\\s]*Action\\s*\\d*\\s*Input\\s*\\d*\\s*:[\\s]*(.*)\"\n )\n action_match = re.search(regex, text, re.DOTALL)\n if action_match:\n if includes_answer:\n raise OutputParserException(\n \"Parsing LLM output produced both a final answer \"\n f\"and a parse-able action: {text}\"\n )\n action = action_match.group(1).strip()\n action_input = action_match.group(2)\n tool_input = action_input.strip(\" \")\n # ensure if its a well formed SQL query we don't remove any trailing \" chars\n if tool_input.startswith(\"SELECT \") is False:\n tool_input = tool_input.strip('\"')\n return AgentAction(action, tool_input, text)\n elif includes_answer:\n return AgentFinish(\n {\"output\": text.split(FINAL_ANSWER_ACTION)[-1].strip()}, text\n )\n if not re.search(r\"Action\\s*\\d*\\s*:[\\s]*(.*?)\", text, re.DOTALL):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/mrkl/output_parser.html"} {"id": "aceedeed4777-1", "text": "raise OutputParserException(\n f\"Could not parse LLM output: `{text}`\",\n observation=\"Invalid Format: Missing 'Action:' after 'Thought:'\",\n llm_output=text,\n send_to_llm=True,\n )\n elif not re.search(\n r\"[\\s]*Action\\s*\\d*\\s*Input\\s*\\d*\\s*:[\\s]*(.*)\", text, re.DOTALL\n ):\n raise OutputParserException(\n f\"Could not parse LLM output: `{text}`\",\n observation=\"Invalid Format:\"\n \" Missing 'Action Input:' after 'Action:'\",\n llm_output=text,\n send_to_llm=True,\n )\n else:\n raise OutputParserException(f\"Could not parse LLM output: `{text}`\")\n @property\n def _type(self) -> str:\n return \"mrkl\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/mrkl/output_parser.html"} {"id": "f1017149781c-0", "text": "Source code for langchain.agents.conversational_chat.base\n\"\"\"An agent designed to hold a 
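The MRKLOutputParser contract above can be exercised directly. A minimal sketch of both branches (tool call vs. final answer); the tool name and inputs are arbitrary:

.. code-block:: python

    from langchain.agents.mrkl.output_parser import MRKLOutputParser
    from langchain.schema import AgentAction, AgentFinish

    parser = MRKLOutputParser()

    step = parser.parse(
        "Thought: I should look this up.\n"
        "Action: Search\n"
        "Action Input: population of Paris"
    )
    assert isinstance(step, AgentAction) and step.tool == "Search"

    done = parser.parse("Final Answer: roughly 2.1 million")
    assert isinstance(done, AgentFinish)
    assert done.return_values["output"] == "roughly 2.1 million"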
conversation in addition to using tools.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, List, Optional, Sequence, Tuple\nfrom pydantic import Field\nfrom langchain.agents.agent import Agent, AgentOutputParser\nfrom langchain.agents.conversational_chat.output_parser import ConvoOutputParser\nfrom langchain.agents.conversational_chat.prompt import (\n PREFIX,\n SUFFIX,\n TEMPLATE_TOOL_RESPONSE,\n)\nfrom langchain.agents.utils import validate_tools_single_input\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains import LLMChain\nfrom langchain.prompts.chat import (\n ChatPromptTemplate,\n HumanMessagePromptTemplate,\n MessagesPlaceholder,\n SystemMessagePromptTemplate,\n)\nfrom langchain.schema import AgentAction, BaseOutputParser, BasePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.schema.messages import AIMessage, BaseMessage, HumanMessage\nfrom langchain.tools.base import BaseTool\n[docs]class ConversationalChatAgent(Agent):\n \"\"\"An agent designed to hold a conversation in addition to using tools.\"\"\"\n output_parser: AgentOutputParser = Field(default_factory=ConvoOutputParser)\n template_tool_response: str = TEMPLATE_TOOL_RESPONSE\n @classmethod\n def _get_default_output_parser(cls, **kwargs: Any) -> AgentOutputParser:\n return ConvoOutputParser()\n @property\n def _agent_type(self) -> str:\n raise NotImplementedError\n @property\n def observation_prefix(self) -> str:\n \"\"\"Prefix to append the observation with.\"\"\"\n return \"Observation: \"\n @property\n def llm_prefix(self) -> str:\n \"\"\"Prefix to append the llm call with.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/conversational_chat/base.html"} {"id": "f1017149781c-1", "text": "\"\"\"Prefix to append the llm call with.\"\"\"\n return \"Thought:\"\n @classmethod\n def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:\n super()._validate_tools(tools)\n validate_tools_single_input(cls.__name__, tools)\n[docs] @classmethod\n def create_prompt(\n cls,\n tools: Sequence[BaseTool],\n system_message: str = PREFIX,\n human_message: str = SUFFIX,\n input_variables: Optional[List[str]] = None,\n output_parser: Optional[BaseOutputParser] = None,\n ) -> BasePromptTemplate:\n tool_strings = \"\\n\".join(\n [f\"> {tool.name}: {tool.description}\" for tool in tools]\n )\n tool_names = \", \".join([tool.name for tool in tools])\n _output_parser = output_parser or cls._get_default_output_parser()\n format_instructions = human_message.format(\n format_instructions=_output_parser.get_format_instructions()\n )\n final_prompt = format_instructions.format(\n tool_names=tool_names, tools=tool_strings\n )\n if input_variables is None:\n input_variables = [\"input\", \"chat_history\", \"agent_scratchpad\"]\n messages = [\n SystemMessagePromptTemplate.from_template(system_message),\n MessagesPlaceholder(variable_name=\"chat_history\"),\n HumanMessagePromptTemplate.from_template(final_prompt),\n MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n ]\n return ChatPromptTemplate(input_variables=input_variables, messages=messages)\n def _construct_scratchpad(\n self, intermediate_steps: List[Tuple[AgentAction, str]]\n ) -> List[BaseMessage]:\n \"\"\"Construct the scratchpad that lets the agent continue its thought process.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/conversational_chat/base.html"} {"id": "f1017149781c-2", "text": "\"\"\"Construct the 
scratchpad that lets the agent continue its thought process.\"\"\"\n thoughts: List[BaseMessage] = []\n for action, observation in intermediate_steps:\n thoughts.append(AIMessage(content=action.log))\n human_message = HumanMessage(\n content=self.template_tool_response.format(observation=observation)\n )\n thoughts.append(human_message)\n return thoughts\n[docs] @classmethod\n def from_llm_and_tools(\n cls,\n llm: BaseLanguageModel,\n tools: Sequence[BaseTool],\n callback_manager: Optional[BaseCallbackManager] = None,\n output_parser: Optional[AgentOutputParser] = None,\n system_message: str = PREFIX,\n human_message: str = SUFFIX,\n input_variables: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> Agent:\n \"\"\"Construct an agent from an LLM and tools.\"\"\"\n cls._validate_tools(tools)\n _output_parser = output_parser or cls._get_default_output_parser()\n prompt = cls.create_prompt(\n tools,\n system_message=system_message,\n human_message=human_message,\n input_variables=input_variables,\n output_parser=_output_parser,\n )\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n return cls(\n llm_chain=llm_chain,\n allowed_tools=tool_names,\n output_parser=_output_parser,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/conversational_chat/base.html"} {"id": "1f219ce6ee43-0", "text": "Source code for langchain.agents.conversational_chat.output_parser\nfrom __future__ import annotations\nfrom typing import Union\nfrom langchain.agents import AgentOutputParser\nfrom langchain.agents.conversational_chat.prompt import FORMAT_INSTRUCTIONS\nfrom langchain.output_parsers.json import parse_json_markdown\nfrom langchain.schema import AgentAction, AgentFinish, OutputParserException\n[docs]class ConvoOutputParser(AgentOutputParser):\n[docs] def get_format_instructions(self) -> str:\n return FORMAT_INSTRUCTIONS\n[docs] def parse(self, text: str) -> Union[AgentAction, AgentFinish]:\n try:\n response = parse_json_markdown(text)\n action, action_input = response[\"action\"], response[\"action_input\"]\n if action == \"Final Answer\":\n return AgentFinish({\"output\": action_input}, text)\n else:\n return AgentAction(action, action_input, text)\n except Exception as e:\n raise OutputParserException(f\"Could not parse LLM output: {text}\") from e\n @property\n def _type(self) -> str:\n return \"conversational_chat\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/conversational_chat/output_parser.html"} {"id": "710faf9105c0-0", "text": "Source code for langchain.agents.self_ask_with_search.base\n\"\"\"Chain that does self-ask with search.\"\"\"\nfrom typing import Any, Sequence, Union\nfrom pydantic import Field\nfrom langchain.agents.agent import Agent, AgentExecutor, AgentOutputParser\nfrom langchain.agents.agent_types import AgentType\nfrom langchain.agents.self_ask_with_search.output_parser import SelfAskOutputParser\nfrom langchain.agents.self_ask_with_search.prompt import PROMPT\nfrom langchain.agents.tools import Tool\nfrom langchain.agents.utils import validate_tools_single_input\nfrom langchain.schema import BasePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.google_serper import GoogleSerperAPIWrapper\nfrom langchain.utilities.serpapi import SerpAPIWrapper\n[docs]class SelfAskWithSearchAgent(Agent):\n \"\"\"Agent for the 
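The conversational_chat ConvoOutputParser above expects the model to reply with a fenced JSON blob. A small sketch of both branches:

.. code-block:: python

    from langchain.agents.conversational_chat.output_parser import ConvoOutputParser
    from langchain.schema import AgentAction, AgentFinish

    parser = ConvoOutputParser()

    step = parser.parse(
        '```json\n{"action": "Search", "action_input": "weather in Berlin"}\n```'
    )
    assert isinstance(step, AgentAction) and step.tool == "Search"

    done = parser.parse(
        '```json\n{"action": "Final Answer", "action_input": "It is sunny."}\n```'
    )
    assert isinstance(done, AgentFinish)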
self-ask-with-search paper.\"\"\"\n output_parser: AgentOutputParser = Field(default_factory=SelfAskOutputParser)\n @classmethod\n def _get_default_output_parser(cls, **kwargs: Any) -> AgentOutputParser:\n return SelfAskOutputParser()\n @property\n def _agent_type(self) -> str:\n \"\"\"Return Identifier of agent type.\"\"\"\n return AgentType.SELF_ASK_WITH_SEARCH\n[docs] @classmethod\n def create_prompt(cls, tools: Sequence[BaseTool]) -> BasePromptTemplate:\n \"\"\"Prompt does not depend on tools.\"\"\"\n return PROMPT\n @classmethod\n def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:\n validate_tools_single_input(cls.__name__, tools)\n super()._validate_tools(tools)\n if len(tools) != 1:\n raise ValueError(f\"Exactly one tool must be specified, but got {tools}\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/self_ask_with_search/base.html"} {"id": "710faf9105c0-1", "text": "raise ValueError(f\"Exactly one tool must be specified, but got {tools}\")\n tool_names = {tool.name for tool in tools}\n if tool_names != {\"Intermediate Answer\"}:\n raise ValueError(\n f\"Tool name should be Intermediate Answer, got {tool_names}\"\n )\n @property\n def observation_prefix(self) -> str:\n \"\"\"Prefix to append the observation with.\"\"\"\n return \"Intermediate answer: \"\n @property\n def llm_prefix(self) -> str:\n \"\"\"Prefix to append the LLM call with.\"\"\"\n return \"\"\n[docs]class SelfAskWithSearchChain(AgentExecutor):\n \"\"\"Chain that does self-ask with search.\n Example:\n .. code-block:: python\n from langchain import SelfAskWithSearchChain, OpenAI, GoogleSerperAPIWrapper\n search_chain = GoogleSerperAPIWrapper()\n self_ask = SelfAskWithSearchChain(llm=OpenAI(), search_chain=search_chain)\n \"\"\"\n def __init__(\n self,\n llm: BaseLanguageModel,\n search_chain: Union[GoogleSerperAPIWrapper, SerpAPIWrapper],\n **kwargs: Any,\n ):\n \"\"\"Initialize with just an LLM and a search chain.\"\"\"\n search_tool = Tool(\n name=\"Intermediate Answer\",\n func=search_chain.run,\n coroutine=search_chain.arun,\n description=\"Search\",\n )\n agent = SelfAskWithSearchAgent.from_llm_and_tools(llm, [search_tool])\n super().__init__(agent=agent, tools=[search_tool], **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/self_ask_with_search/base.html"} {"id": "3500c204e560-0", "text": "Source code for langchain.agents.self_ask_with_search.output_parser\nfrom typing import Sequence, Union\nfrom langchain.agents.agent import AgentOutputParser\nfrom langchain.schema import AgentAction, AgentFinish, OutputParserException\n[docs]class SelfAskOutputParser(AgentOutputParser):\n followups: Sequence[str] = (\"Follow up:\", \"Followup:\")\n finish_string: str = \"So the final answer is: \"\n[docs] def parse(self, text: str) -> Union[AgentAction, AgentFinish]:\n last_line = text.split(\"\\n\")[-1]\n if not any([follow in last_line for follow in self.followups]):\n if self.finish_string not in last_line:\n raise OutputParserException(f\"Could not parse output: {text}\")\n return AgentFinish({\"output\": last_line[len(self.finish_string) :]}, text)\n after_colon = text.split(\":\")[-1].strip()\n return AgentAction(\"Intermediate Answer\", after_colon, text)\n @property\n def _type(self) -> str:\n return \"self_ask\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/self_ask_with_search/output_parser.html"} {"id": "1b57468612ae-0", "text": "Source code for langchain.agents.conversational.base\n\"\"\"An 
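The SelfAskOutputParser above keys off the last line of the model output: a follow-up marker becomes a call to the "Intermediate Answer" tool, and the finish string ends the run. A minimal sketch with arbitrary questions:

.. code-block:: python

    from langchain.agents.self_ask_with_search.output_parser import SelfAskOutputParser
    from langchain.schema import AgentAction, AgentFinish

    parser = SelfAskOutputParser()

    step = parser.parse("Yes.\nFollow up: Who won the 2015 Masters?")
    assert isinstance(step, AgentAction)
    assert step.tool == "Intermediate Answer"
    assert step.tool_input == "Who won the 2015 Masters?"

    done = parser.parse("So the final answer is: Jordan Spieth")
    assert isinstance(done, AgentFinish)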
agent designed to hold a conversation in addition to using tools.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, List, Optional, Sequence\nfrom pydantic import Field\nfrom langchain.agents.agent import Agent, AgentOutputParser\nfrom langchain.agents.agent_types import AgentType\nfrom langchain.agents.conversational.output_parser import ConvoOutputParser\nfrom langchain.agents.conversational.prompt import FORMAT_INSTRUCTIONS, PREFIX, SUFFIX\nfrom langchain.agents.utils import validate_tools_single_input\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains import LLMChain\nfrom langchain.prompts import PromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.tools.base import BaseTool\n[docs]class ConversationalAgent(Agent):\n \"\"\"An agent designed to hold a conversation in addition to using tools.\"\"\"\n ai_prefix: str = \"AI\"\n output_parser: AgentOutputParser = Field(default_factory=ConvoOutputParser)\n @classmethod\n def _get_default_output_parser(\n cls, ai_prefix: str = \"AI\", **kwargs: Any\n ) -> AgentOutputParser:\n return ConvoOutputParser(ai_prefix=ai_prefix)\n @property\n def _agent_type(self) -> str:\n \"\"\"Return Identifier of agent type.\"\"\"\n return AgentType.CONVERSATIONAL_REACT_DESCRIPTION\n @property\n def observation_prefix(self) -> str:\n \"\"\"Prefix to append the observation with.\"\"\"\n return \"Observation: \"\n @property\n def llm_prefix(self) -> str:\n \"\"\"Prefix to append the llm call with.\"\"\"\n return \"Thought:\"\n[docs] @classmethod\n def create_prompt(\n cls,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/conversational/base.html"} {"id": "1b57468612ae-1", "text": "[docs] @classmethod\n def create_prompt(\n cls,\n tools: Sequence[BaseTool],\n prefix: str = PREFIX,\n suffix: str = SUFFIX,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n ai_prefix: str = \"AI\",\n human_prefix: str = \"Human\",\n input_variables: Optional[List[str]] = None,\n ) -> PromptTemplate:\n \"\"\"Create prompt in the style of the zero shot agent.\n Args:\n tools: List of tools the agent will have access to, used to format the\n prompt.\n prefix: String to put before the list of tools.\n suffix: String to put after the list of tools.\n ai_prefix: String to use before AI output.\n human_prefix: String to use before human output.\n input_variables: List of input variables the final prompt will expect.\n Returns:\n A PromptTemplate with the template assembled from the pieces here.\n \"\"\"\n tool_strings = \"\\n\".join(\n [f\"> {tool.name}: {tool.description}\" for tool in tools]\n )\n tool_names = \", \".join([tool.name for tool in tools])\n format_instructions = format_instructions.format(\n tool_names=tool_names, ai_prefix=ai_prefix, human_prefix=human_prefix\n )\n template = \"\\n\\n\".join([prefix, tool_strings, format_instructions, suffix])\n if input_variables is None:\n input_variables = [\"input\", \"chat_history\", \"agent_scratchpad\"]\n return PromptTemplate(template=template, input_variables=input_variables)\n @classmethod\n def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:\n super()._validate_tools(tools)\n validate_tools_single_input(cls.__name__, tools)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/conversational/base.html"} {"id": "1b57468612ae-2", "text": "validate_tools_single_input(cls.__name__, tools)\n[docs] @classmethod\n def from_llm_and_tools(\n cls,\n llm: BaseLanguageModel,\n tools: 
Sequence[BaseTool],\n callback_manager: Optional[BaseCallbackManager] = None,\n output_parser: Optional[AgentOutputParser] = None,\n prefix: str = PREFIX,\n suffix: str = SUFFIX,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n ai_prefix: str = \"AI\",\n human_prefix: str = \"Human\",\n input_variables: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> Agent:\n \"\"\"Construct an agent from an LLM and tools.\"\"\"\n cls._validate_tools(tools)\n prompt = cls.create_prompt(\n tools,\n ai_prefix=ai_prefix,\n human_prefix=human_prefix,\n prefix=prefix,\n suffix=suffix,\n format_instructions=format_instructions,\n input_variables=input_variables,\n )\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n _output_parser = output_parser or cls._get_default_output_parser(\n ai_prefix=ai_prefix\n )\n return cls(\n llm_chain=llm_chain,\n allowed_tools=tool_names,\n ai_prefix=ai_prefix,\n output_parser=_output_parser,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/conversational/base.html"} {"id": "7796494cb544-0", "text": "Source code for langchain.agents.conversational.output_parser\nimport re\nfrom typing import Union\nfrom langchain.agents.agent import AgentOutputParser\nfrom langchain.agents.conversational.prompt import FORMAT_INSTRUCTIONS\nfrom langchain.schema import AgentAction, AgentFinish, OutputParserException\n[docs]class ConvoOutputParser(AgentOutputParser):\n ai_prefix: str = \"AI\"\n[docs] def get_format_instructions(self) -> str:\n return FORMAT_INSTRUCTIONS\n[docs] def parse(self, text: str) -> Union[AgentAction, AgentFinish]:\n if f\"{self.ai_prefix}:\" in text:\n return AgentFinish(\n {\"output\": text.split(f\"{self.ai_prefix}:\")[-1].strip()}, text\n )\n regex = r\"Action: (.*?)[\\n]*Action Input: (.*)\"\n match = re.search(regex, text)\n if not match:\n raise OutputParserException(f\"Could not parse LLM output: `{text}`\")\n action = match.group(1)\n action_input = match.group(2)\n return AgentAction(action.strip(), action_input.strip(\" \").strip('\"'), text)\n @property\n def _type(self) -> str:\n return \"conversational\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/conversational/output_parser.html"} {"id": "cbc0d7c62286-0", "text": "Source code for langchain.client.runner_utils\n\"\"\"Utilities for running language models or Chains over datasets.\"\"\"\nfrom __future__ import annotations\nimport asyncio\nimport functools\nimport logging\nfrom datetime import datetime\nfrom typing import (\n Any,\n Callable,\n Coroutine,\n Dict,\n Iterator,\n List,\n Optional,\n Sequence,\n Union,\n)\nfrom langchainplus_sdk import LangChainPlusClient, RunEvaluator\nfrom langchainplus_sdk.schemas import Example\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.callbacks.manager import Callbacks\nfrom langchain.callbacks.tracers.base import BaseTracer\nfrom langchain.callbacks.tracers.evaluation import EvaluatorCallbackHandler\nfrom langchain.callbacks.tracers.langchain import LangChainTracer\nfrom langchain.chains.base import Chain\nfrom langchain.chat_models.base import BaseChatModel\nfrom langchain.llms.base import BaseLLM\nfrom langchain.schema import (\n ChatResult,\n LLMResult,\n)\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.schema.messages import (\n BaseMessage,\n HumanMessage,\n get_buffer_string,\n messages_from_dict,\n)\nlogger = 
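The conversational ConvoOutputParser above treats anything after the ai_prefix as the final answer and otherwise expects an Action/Action Input pair. A small sketch:

.. code-block:: python

    from langchain.agents.conversational.output_parser import ConvoOutputParser
    from langchain.schema import AgentAction, AgentFinish

    parser = ConvoOutputParser(ai_prefix="AI")

    step = parser.parse(
        "Thought: Do I need to use a tool? Yes\n"
        "Action: Search\n"
        "Action Input: langchain docs"
    )
    assert isinstance(step, AgentAction) and step.tool == "Search"

    done = parser.parse("Do I need to use a tool? No\nAI: Hello! How can I help?")
    assert isinstance(done, AgentFinish)
    assert done.return_values["output"] == "Hello! How can I help?"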
logging.getLogger(__name__)\nMODEL_OR_CHAIN_FACTORY = Union[Callable[[], Chain], BaseLanguageModel]\n[docs]class InputFormatError(Exception):\n \"\"\"Raised when the input format is invalid.\"\"\"\ndef _get_prompts(inputs: Dict[str, Any]) -> List[str]:\n \"\"\"Get prompts from inputs.\n Args:\n inputs: The input dictionary.\n Returns:\n A list of prompts.\n Raises:\n InputFormatError: If the input format is invalid.\n \"\"\"\n if not inputs:\n raise InputFormatError(\"Inputs should not be empty.\")\n prompts = []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/client/runner_utils.html"} {"id": "cbc0d7c62286-1", "text": "raise InputFormatError(\"Inputs should not be empty.\")\n prompts = []\n if \"prompt\" in inputs:\n if not isinstance(inputs[\"prompt\"], str):\n raise InputFormatError(\n \"Expected string for 'prompt', got\"\n f\" {type(inputs['prompt']).__name__}\"\n )\n prompts = [inputs[\"prompt\"]]\n elif \"prompts\" in inputs:\n if not isinstance(inputs[\"prompts\"], list) or not all(\n isinstance(i, str) for i in inputs[\"prompts\"]\n ):\n raise InputFormatError(\n \"Expected list of strings for 'prompts',\"\n f\" got {type(inputs['prompts']).__name__}\"\n )\n prompts = inputs[\"prompts\"]\n elif len(inputs) == 1:\n prompt_ = next(iter(inputs.values()))\n if isinstance(prompt_, str):\n prompts = [prompt_]\n elif isinstance(prompt_, list) and all(isinstance(i, str) for i in prompt_):\n prompts = prompt_\n else:\n raise InputFormatError(f\"LLM Run expects string prompt input. Got {inputs}\")\n else:\n raise InputFormatError(\n f\"LLM Run expects 'prompt' or 'prompts' in inputs. Got {inputs}\"\n )\n return prompts\ndef _get_messages(inputs: Dict[str, Any]) -> List[List[BaseMessage]]:\n \"\"\"Get Chat Messages from inputs.\n Args:\n inputs: The input dictionary.\n Returns:\n A list of chat messages.\n Raises:\n InputFormatError: If the input format is invalid.\n \"\"\"\n if not inputs:\n raise InputFormatError(\"Inputs should not be empty.\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/client/runner_utils.html"} {"id": "cbc0d7c62286-2", "text": "if not inputs:\n raise InputFormatError(\"Inputs should not be empty.\")\n if \"messages\" in inputs:\n single_input = inputs[\"messages\"]\n elif len(inputs) == 1:\n single_input = next(iter(inputs.values()))\n else:\n raise InputFormatError(f\"Chat Run expects 'messages' in inputs. Got {inputs}\")\n if isinstance(single_input, list) and all(\n isinstance(i, dict) for i in single_input\n ):\n raw_messages = [single_input]\n elif isinstance(single_input, list) and all(\n isinstance(i, list) for i in single_input\n ):\n raw_messages = single_input\n else:\n raise InputFormatError(\n f\"Chat Run expects List[dict] or List[List[dict]] 'messages'\"\n f\" input. 
Got {inputs}\"\n )\n return [messages_from_dict(batch) for batch in raw_messages]\nasync def _arun_llm(\n llm: BaseLanguageModel,\n inputs: Dict[str, Any],\n *,\n tags: Optional[List[str]] = None,\n callbacks: Callbacks = None,\n input_mapper: Optional[Callable[[Dict], Any]] = None,\n) -> Union[LLMResult, ChatResult]:\n \"\"\"Asynchronously run the language model.\n Args:\n llm: The language model to run.\n inputs: The input dictionary.\n tags: Optional tags to add to the run.\n callbacks: Optional callbacks to use during the run.\n input_mapper: Optional function to map inputs to the expected format.\n Returns:\n The LLMResult or ChatResult.\n Raises:\n ValueError: If the LLM type is unsupported.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/client/runner_utils.html"} {"id": "cbc0d7c62286-3", "text": "Raises:\n ValueError: If the LLM type is unsupported.\n InputFormatError: If the input format is invalid.\n \"\"\"\n if input_mapper is not None:\n if not isinstance(llm, (BaseLLM, BaseChatModel)):\n raise ValueError(f\"Unsupported LLM type {type(llm).__name__}\")\n llm_output = await llm.agenerate(\n input_mapper(inputs), callbacks=callbacks, tags=tags\n )\n elif isinstance(llm, BaseLLM):\n try:\n llm_prompts = _get_prompts(inputs)\n llm_output = await llm.agenerate(\n llm_prompts, callbacks=callbacks, tags=tags\n )\n except InputFormatError:\n llm_messages = _get_messages(inputs)\n buffer_strings = [get_buffer_string(messages) for messages in llm_messages]\n llm_output = await llm.agenerate(\n buffer_strings, callbacks=callbacks, tags=tags\n )\n elif isinstance(llm, BaseChatModel):\n try:\n messages = _get_messages(inputs)\n llm_output = await llm.agenerate(messages, callbacks=callbacks, tags=tags)\n except InputFormatError:\n prompts = _get_prompts(inputs)\n converted_messages: List[List[BaseMessage]] = [\n [HumanMessage(content=prompt)] for prompt in prompts\n ]\n llm_output = await llm.agenerate(\n converted_messages, callbacks=callbacks, tags=tags\n )\n else:\n raise ValueError(f\"Unsupported LLM type {type(llm)}\")\n return llm_output\nasync def _arun_llm_or_chain(\n example: Example,\n llm_or_chain_factory: MODEL_OR_CHAIN_FACTORY,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/client/runner_utils.html"} {"id": "cbc0d7c62286-4", "text": "example: Example,\n llm_or_chain_factory: MODEL_OR_CHAIN_FACTORY,\n n_repetitions: int,\n *,\n tags: Optional[List[str]] = None,\n callbacks: Optional[List[BaseCallbackHandler]] = None,\n input_mapper: Optional[Callable[[Dict], Any]] = None,\n) -> Union[List[dict], List[str], List[LLMResult], List[ChatResult]]:\n \"\"\"Asynchronously run the Chain or language model.\n Args:\n example: The example to run.\n llm_or_chain_factory: The Chain or language model constructor to run.\n n_repetitions: The number of times to run the model on each example.\n tags: Optional tags to add to the run.\n callbacks: Optional callbacks to use during the run.\n input_mapper: Optional function to map the input to the expected format.\n Returns:\n A list of outputs.\n \"\"\"\n if callbacks:\n previous_example_ids = [\n getattr(tracer, \"example_id\", None) for tracer in callbacks\n ]\n for tracer in callbacks:\n if hasattr(tracer, \"example_id\"):\n tracer.example_id = example.id\n else:\n previous_example_ids = None\n outputs = []\n for _ in range(n_repetitions):\n try:\n if isinstance(llm_or_chain_factory, BaseLanguageModel):\n output: Any = await _arun_llm(\n llm_or_chain_factory,\n example.inputs,\n tags=tags,\n 
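The helpers _get_prompts and _get_messages above define which input shapes a dataset run will accept. They are private, but a short sketch makes the accepted formats concrete (the message dicts use the type/data serialization consumed by messages_from_dict):

.. code-block:: python

    from langchain.client.runner_utils import _get_messages, _get_prompts

    # LLM runs: a "prompt"/"prompts" key, or a single unnamed string value.
    assert _get_prompts({"prompt": "Hello"}) == ["Hello"]
    assert _get_prompts({"prompts": ["a", "b"]}) == ["a", "b"]
    assert _get_prompts({"anything": "Hi"}) == ["Hi"]

    # Chat runs: "messages" holds one batch (List[dict]) or many (List[List[dict]]).
    batches = _get_messages(
        {"messages": [{"type": "human", "data": {"content": "Hello"}}]}
    )
    print(batches)  # [[HumanMessage(content='Hello')]]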
callbacks=callbacks,\n input_mapper=input_mapper,\n )\n else:\n chain = llm_or_chain_factory()\n if input_mapper is not None:\n inputs_ = input_mapper(example.inputs)\n else:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/client/runner_utils.html"} {"id": "cbc0d7c62286-5", "text": "inputs_ = input_mapper(example.inputs)\n else:\n inputs_ = example.inputs\n if len(inputs_) == 1:\n inputs_ = next(iter(inputs_.values()))\n output = await chain.acall(inputs_, callbacks=callbacks, tags=tags)\n outputs.append(output)\n except Exception as e:\n logger.warning(f\"Chain failed for example {example.id}. Error: {e}\")\n outputs.append({\"Error\": str(e)})\n if callbacks and previous_example_ids:\n for example_id, tracer in zip(previous_example_ids, callbacks):\n if hasattr(tracer, \"example_id\"):\n tracer.example_id = example_id\n return outputs\nasync def _gather_with_concurrency(\n n: int,\n initializer: Callable[[], Coroutine[Any, Any, Any]],\n *async_funcs: Callable[\n [Sequence[BaseCallbackHandler], Dict], Coroutine[Any, Any, Any]\n ],\n) -> List[Any]:\n \"\"\"Run coroutines with a concurrency limit.\n Args:\n n: The maximum number of concurrent tasks.\n initializer: A coroutine that initializes shared resources for the tasks.\n async_funcs: The async_funcs to be run concurrently.\n Returns:\n A list of results from the coroutines.\n \"\"\"\n semaphore = asyncio.Semaphore(n)\n job_state = {\"num_processed\": 0}\n callback_queue: asyncio.Queue[Sequence[BaseCallbackHandler]] = asyncio.Queue()\n for _ in range(n):\n callback_queue.put_nowait(await initializer())\n async def run_coroutine_with_semaphore(\n async_func: Callable[\n [Sequence[BaseCallbackHandler], Dict], Coroutine[Any, Any, Any]\n ]\n ) -> Any:\n async with semaphore:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/client/runner_utils.html"} {"id": "cbc0d7c62286-6", "text": "]\n ) -> Any:\n async with semaphore:\n callbacks = await callback_queue.get()\n try:\n result = await async_func(callbacks, job_state)\n finally:\n callback_queue.put_nowait(callbacks)\n return result\n results = await asyncio.gather(\n *(run_coroutine_with_semaphore(function) for function in async_funcs)\n )\n while callback_queue:\n try:\n callbacks = callback_queue.get_nowait()\n except asyncio.QueueEmpty:\n break\n for callback in callbacks:\n if isinstance(callback, (LangChainTracer, EvaluatorCallbackHandler)):\n callback.wait_for_futures()\n return results\nasync def _callbacks_initializer(\n project_name: Optional[str],\n client: LangChainPlusClient,\n run_evaluators: Sequence[RunEvaluator],\n evaluation_handler_collector: List[EvaluatorCallbackHandler],\n) -> List[BaseTracer]:\n \"\"\"\n Initialize a tracer to share across tasks.\n Args:\n project_name: The project name for the tracer.\n client: The client to use for the tracer.\n run_evaluators: The evaluators to run.\n evaluation_handler_collector: A list to collect the evaluators.\n Used to wait for the evaluators to finish.\n Returns:\n The callbacks for this thread.\n \"\"\"\n callbacks: List[BaseTracer] = []\n if project_name:\n callbacks.append(LangChainTracer(project_name=project_name))\n evaluator_project_name = f\"{project_name}-evaluators\" if project_name else None\n if run_evaluators:\n callback = EvaluatorCallbackHandler(\n client=client,\n evaluators=run_evaluators,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/client/runner_utils.html"} {"id": "cbc0d7c62286-7", "text": "client=client,\n 
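The `_gather_with_concurrency` helper above combines a semaphore (to cap in-flight tasks) with a queue of reusable per-task resources. A self-contained sketch of the same pattern, with plain strings standing in for the callback handler lists:

.. code-block:: python

    import asyncio

    async def gather_limited(n: int, *factories):
        """Run at most ``n`` coroutines at once, lending each a pooled resource."""
        semaphore = asyncio.Semaphore(n)
        pool: asyncio.Queue = asyncio.Queue()
        for i in range(n):
            pool.put_nowait(f"resource-{i}")  # stands in for a tracer/callback list

        async def run(factory):
            async with semaphore:
                resource = await pool.get()
                try:
                    return await factory(resource)
                finally:
                    pool.put_nowait(resource)  # return the resource for reuse

        return await asyncio.gather(*(run(f) for f in factories))

    async def demo(resource):
        await asyncio.sleep(0.01)
        return resource

    # asyncio.run(gather_limited(2, demo, demo, demo))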
evaluators=run_evaluators,\n # We already have concurrency, don't want to overload the machine\n max_workers=1,\n project_name=evaluator_project_name,\n )\n callbacks.append(callback)\n evaluation_handler_collector.append(callback)\n return callbacks\nasync def arun_on_examples(\n examples: Iterator[Example],\n llm_or_chain_factory: MODEL_OR_CHAIN_FACTORY,\n *,\n concurrency_level: int = 5,\n num_repetitions: int = 1,\n project_name: Optional[str] = None,\n verbose: bool = False,\n client: Optional[LangChainPlusClient] = None,\n tags: Optional[List[str]] = None,\n run_evaluators: Optional[Sequence[RunEvaluator]] = None,\n input_mapper: Optional[Callable[[Dict], Any]] = None,\n) -> Dict[str, Any]:\n \"\"\"\n Asynchronously run the chain on examples and store traces\n to the specified project name.\n Args:\n examples: Examples to run the model or chain over.\n llm_or_chain_factory: Language model or Chain constructor to run\n over the dataset. The Chain constructor is used to permit\n independent calls on each example without carrying over state.\n concurrency_level: The number of async tasks to run concurrently.\n num_repetitions: Number of times to run the model on each example.\n This is useful when testing success rates or generating confidence\n intervals.\n project_name: Project name to use when tracing runs.\n Defaults to {dataset_name}-{chain class name}-{datetime}.\n verbose: Whether to print progress.\n client: Client to use to read the dataset. If not provided, a new", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/client/runner_utils.html"} {"id": "cbc0d7c62286-8", "text": "client: Client to use to read the dataset. If not provided, a new\n client will be created using the credentials in the environment.\n tags: Tags to add to each run in the project.\n run_evaluators: Evaluators to run on the results of the chain.\n input_mapper: function to map to the inputs dictionary from an Example\n to the format expected by the model to be evaluated. 
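A hypothetical invocation of `arun_on_examples` (the dataset name and chain factory are placeholders, assuming credentials are configured in the environment):

.. code-block:: python

    import asyncio
    from langchainplus_sdk import LangChainPlusClient

    client = LangChainPlusClient()
    dataset = client.read_dataset(dataset_name="my-qa-dataset")  # placeholder name
    examples = client.list_examples(dataset_id=str(dataset.id))

    def chain_factory():
        ...  # build a fresh chain per example so no state carries over

    results = asyncio.run(
        arun_on_examples(examples, chain_factory, concurrency_level=5, tags=["eval"])
    )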
This is useful if
            your model needs to deserialize more complex schema or if your dataset
            has inputs with keys that differ from what is expected by your chain
            or agent.
    Returns:
        A dictionary mapping example ids to the model outputs.
    """
    project_name = _get_project_name(project_name, llm_or_chain_factory, None)
    client_ = client or LangChainPlusClient()
    client_.create_project(project_name)
    results: Dict[str, List[Any]] = {}

    async def process_example(
        example: Example, callbacks: List[BaseCallbackHandler], job_state: dict
    ) -> None:
        """Process a single example."""
        result = await _arun_llm_or_chain(
            example,
            llm_or_chain_factory,
            num_repetitions,
            tags=tags,
            callbacks=callbacks,
            input_mapper=input_mapper,
        )
        results[str(example.id)] = result
        job_state["num_processed"] += 1
        if verbose:
            print(
                f"Processed examples: {job_state['num_processed']}",
                end="\r",
                flush=True,
            )

    evaluation_handlers: List[EvaluatorCallbackHandler] = []
    await _gather_with_concurrency(
        concurrency_level,
        functools.partial(
            _callbacks_initializer,
            project_name=project_name,
            client=client_,
            evaluation_handler_collector=evaluation_handlers,
            run_evaluators=run_evaluators or [],
        ),
        *(functools.partial(process_example, e) for e in examples),
    )
    for handler in evaluation_handlers:
        handler.wait_for_futures()
    return results

[docs]def run_llm(
    llm: BaseLanguageModel,
    inputs: Dict[str, Any],
    callbacks: Callbacks,
    *,
    tags: Optional[List[str]] = None,
    input_mapper: Optional[Callable[[Dict], Any]] = None,
) -> Union[LLMResult, ChatResult]:
    """
    Run the language model on the example.
    Args:
        llm: The language model to run.
        inputs: The input dictionary.
        callbacks: The callbacks to use during the run.
        tags: Optional tags to add to the run.
        input_mapper: Function to map the inputs dictionary from an Example
            to the format expected by the model.
    Returns:
        The LLMResult or ChatResult.
    Raises:
        ValueError: If the LLM type is unsupported.
        InputFormatError: If the input format is invalid.
    """
    if input_mapper is not None:
        if not isinstance(llm, (BaseLLM, BaseChatModel)):
            raise ValueError(f"Unsupported LLM type {type(llm).__name__}")
        llm_output = llm.generate(input_mapper(inputs), callbacks=callbacks, tags=tags)
    elif isinstance(llm, BaseLLM):
        try:
            llm_prompts = _get_prompts(inputs)
            llm_output = llm.generate(llm_prompts, callbacks=callbacks, tags=tags)
        except InputFormatError:
            llm_messages = _get_messages(inputs)
            buffer_strings = [get_buffer_string(messages) for messages in llm_messages]
            # Pass tags here as well, matching the async variant above.
            llm_output = llm.generate(buffer_strings, callbacks=callbacks, tags=tags)
    elif isinstance(llm, BaseChatModel):
        try:
            messages = _get_messages(inputs)
            llm_output = llm.generate(messages, callbacks=callbacks, tags=tags)
        except InputFormatError:
            prompts = _get_prompts(inputs)
            converted_messages: List[List[BaseMessage]] = [
                [HumanMessage(content=prompt)] for prompt in prompts
            ]
            llm_output = llm.generate(
                converted_messages, callbacks=callbacks, tags=tags
            )
    else:
        raise ValueError(f"Unsupported LLM type {type(llm)}")
    return 
llm_output\n[docs]def run_llm_or_chain(\n example: Example,\n llm_or_chain_factory: MODEL_OR_CHAIN_FACTORY,\n n_repetitions: int,\n *,\n tags: Optional[List[str]] = None,\n callbacks: Optional[List[BaseCallbackHandler]] = None,\n input_mapper: Optional[Callable[[Dict], Any]] = None,\n) -> Union[List[dict], List[str], List[LLMResult], List[ChatResult]]:\n \"\"\"\n Run the Chain or language model synchronously.\n Args:\n example: The example to run.\n llm_or_chain_factory: The Chain or language model constructor to run.\n n_repetitions: The number of times to run the model on each example.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/client/runner_utils.html"} {"id": "cbc0d7c62286-11", "text": "n_repetitions: The number of times to run the model on each example.\n tags: Optional tags to add to the run.\n callbacks: Optional callbacks to use during the run.\n Returns:\n Union[List[dict], List[str], List[LLMResult], List[ChatResult]]:\n The outputs of the model or chain.\n \"\"\"\n if callbacks:\n previous_example_ids = [\n getattr(tracer, \"example_id\", None) for tracer in callbacks\n ]\n for tracer in callbacks:\n if hasattr(tracer, \"example_id\"):\n tracer.example_id = example.id\n else:\n previous_example_ids = None\n outputs = []\n for _ in range(n_repetitions):\n try:\n if isinstance(llm_or_chain_factory, BaseLanguageModel):\n output: Any = run_llm(\n llm_or_chain_factory,\n example.inputs,\n callbacks,\n tags=tags,\n input_mapper=input_mapper,\n )\n else:\n chain = llm_or_chain_factory()\n if input_mapper is not None:\n inputs_ = input_mapper(example.inputs)\n else:\n inputs_ = example.inputs\n if len(inputs_) == 1:\n inputs_ = next(iter(inputs_.values()))\n output = chain(inputs_, callbacks=callbacks, tags=tags)\n outputs.append(output)\n except Exception as e:\n logger.warning(f\"Chain failed for example {example.id}. Error: {e}\")\n outputs.append({\"Error\": str(e)})\n if callbacks and previous_example_ids:\n for example_id, tracer in zip(previous_example_ids, callbacks):\n if hasattr(tracer, \"example_id\"):\n tracer.example_id = example_id\n return outputs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/client/runner_utils.html"} {"id": "cbc0d7c62286-12", "text": "tracer.example_id = example_id\n return outputs\n[docs]def run_on_examples(\n examples: Iterator[Example],\n llm_or_chain_factory: MODEL_OR_CHAIN_FACTORY,\n *,\n num_repetitions: int = 1,\n project_name: Optional[str] = None,\n verbose: bool = False,\n client: Optional[LangChainPlusClient] = None,\n tags: Optional[List[str]] = None,\n run_evaluators: Optional[Sequence[RunEvaluator]] = None,\n input_mapper: Optional[Callable[[Dict], Any]] = None,\n) -> Dict[str, Any]:\n \"\"\"\n Run the Chain or language model on examples and store\n traces to the specified project name.\n Args:\n examples: Examples to run the model or chain over.\n llm_or_chain_factory: Language model or Chain constructor to run\n over the dataset. The Chain constructor is used to permit\n independent calls on each example without carrying over state.\n num_repetitions: Number of times to run the model on each example.\n This is useful when testing success rates or generating confidence\n intervals.\n project_name: Name of the project to store the traces in.\n Defaults to {dataset_name}-{chain class name}-{datetime}.\n verbose: Whether to print progress.\n client: Client to use to access the dataset. 
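Where dataset keys do not match what a chain expects, `input_mapper` bridges the two; a small hypothetical example (the key names are illustrative):

.. code-block:: python

    # Hypothetical: rows look like {"question": ...} but the chain wants {"query": ...}.
    def question_to_query(example_inputs: dict) -> dict:
        return {"query": example_inputs["question"]}

    # Passed through as input_mapper=question_to_query in the runners above.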
If None, a new client
            will be created using the credentials in the environment.
        tags: Tags to add to each run in the project.
        run_evaluators: Evaluators to run on the results of the chain.
        input_mapper: A function to map to the inputs dictionary from an Example
            to the format expected by the model to be evaluated. This is useful if
            your model needs to deserialize more complex schema or if your dataset
            has inputs with keys that differ from what is expected by your chain
            or agent.
    Returns:
        A dictionary mapping example ids to the model outputs.
    """
    results: Dict[str, Any] = {}
    project_name = _get_project_name(project_name, llm_or_chain_factory, None)
    client_ = client or LangChainPlusClient()
    client_.create_project(project_name)
    tracer = LangChainTracer(project_name=project_name)
    evaluator_project_name = f"{project_name}-evaluators"
    evaluation_handler = EvaluatorCallbackHandler(
        evaluators=run_evaluators or [],
        client=client_,
        project_name=evaluator_project_name,
    )
    callbacks: List[BaseCallbackHandler] = [tracer, evaluation_handler]
    for i, example in enumerate(examples):
        result = run_llm_or_chain(
            example,
            llm_or_chain_factory,
            num_repetitions,
            tags=tags,
            callbacks=callbacks,
            input_mapper=input_mapper,
        )
        if verbose:
            print(f"{i+1} processed", flush=True, end="\r")
        results[str(example.id)] = result
    tracer.wait_for_futures()
    evaluation_handler.wait_for_futures()
    return results

def _get_project_name(
    project_name: Optional[str],
    llm_or_chain_factory: MODEL_OR_CHAIN_FACTORY,
    dataset_name: Optional[str],
) -> str:
    """
    Get the project name.
    Args:
        project_name: The project name if manually specified.
        llm_or_chain_factory: The Chain or language model constructor.
        dataset_name: The dataset name.
    Returns:
        The project name.
    """
    if project_name is not None:
        return project_name
    current_time = datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
    if isinstance(llm_or_chain_factory, BaseLanguageModel):
        model_name = llm_or_chain_factory.__class__.__name__
    else:
        model_name = llm_or_chain_factory().__class__.__name__
    dataset_prefix = f"{dataset_name}-" if dataset_name else ""
    return f"{dataset_prefix}{model_name}-{current_time}"

async def arun_on_dataset(
    dataset_name: str,
    llm_or_chain_factory: MODEL_OR_CHAIN_FACTORY,
    *,
    concurrency_level: int = 5,
    num_repetitions: int = 1,
    project_name: Optional[str] = None,
    verbose: bool = False,
    client: Optional[LangChainPlusClient] = None,
    tags: Optional[List[str]] = None,
    run_evaluators: Optional[Sequence[RunEvaluator]] = None,
    input_mapper: Optional[Callable[[Dict], Any]] = None,
) -> Dict[str, Any]:
    """
    Asynchronously run the Chain or language model on a dataset
    and store traces to the specified project name.
    Args:
        dataset_name: Name of the dataset to run the chain on.
        llm_or_chain_factory: Language model or Chain constructor to run
            over the dataset. 
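For orientation, `_get_project_name` produces names like these (the factory and timestamp are illustrative):

.. code-block:: python

    # chain_factory is a hypothetical Chain constructor returning an AgentExecutor.
    _get_project_name("my-project", chain_factory, "qa-data")
    # -> "my-project" (an explicit name always wins)
    _get_project_name(None, chain_factory, "qa-data")
    # -> e.g. "qa-data-AgentExecutor-2023-07-13-12-00-00"
    _get_project_name(None, chain_factory, None)
    # -> e.g. "AgentExecutor-2023-07-13-12-00-00"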
The Chain constructor is used to permit\n independent calls on each example without carrying over state.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/client/runner_utils.html"} {"id": "cbc0d7c62286-15", "text": "independent calls on each example without carrying over state.\n concurrency_level: The number of async tasks to run concurrently.\n num_repetitions: Number of times to run the model on each example.\n This is useful when testing success rates or generating confidence\n intervals.\n project_name: Name of the project to store the traces in.\n Defaults to {dataset_name}-{chain class name}-{datetime}.\n verbose: Whether to print progress.\n client: Client to use to read the dataset. If not provided,\n a new client will be created using the credentials in the environment.\n tags: Tags to add to each run in the project.\n run_evaluators: Evaluators to run on the results of the chain.\n input_mapper: A function to map to the inputs dictionary from an Example\n to the format expected by the model to be evaluated. This is useful if\n your model needs to deserialize more complex schema or if your dataset\n has inputs with keys that differ from what is expected by your chain\n or agent.\n Returns:\n A dictionary containing the run's project name and the resulting model outputs.\n \"\"\"\n client_ = client or LangChainPlusClient()\n project_name = _get_project_name(project_name, llm_or_chain_factory, dataset_name)\n dataset = client_.read_dataset(dataset_name=dataset_name)\n examples = client_.list_examples(dataset_id=str(dataset.id))\n results = await arun_on_examples(\n examples,\n llm_or_chain_factory,\n concurrency_level=concurrency_level,\n num_repetitions=num_repetitions,\n project_name=project_name,\n verbose=verbose,\n client=client_,\n tags=tags,\n run_evaluators=run_evaluators,\n input_mapper=input_mapper,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/client/runner_utils.html"} {"id": "cbc0d7c62286-16", "text": "run_evaluators=run_evaluators,\n input_mapper=input_mapper,\n )\n return {\n \"project_name\": project_name,\n \"results\": results,\n }\n[docs]def run_on_dataset(\n dataset_name: str,\n llm_or_chain_factory: MODEL_OR_CHAIN_FACTORY,\n *,\n num_repetitions: int = 1,\n project_name: Optional[str] = None,\n verbose: bool = False,\n client: Optional[LangChainPlusClient] = None,\n tags: Optional[List[str]] = None,\n run_evaluators: Optional[Sequence[RunEvaluator]] = None,\n input_mapper: Optional[Callable[[Dict], Any]] = None,\n) -> Dict[str, Any]:\n \"\"\"\n Run the Chain or language model on a dataset and store traces\n to the specified project name.\n Args:\n dataset_name: Name of the dataset to run the chain on.\n llm_or_chain_factory: Language model or Chain constructor to run\n over the dataset. The Chain constructor is used to permit\n independent calls on each example without carrying over state.\n num_repetitions: Number of times to run the model on each example.\n This is useful when testing success rates or generating confidence\n intervals.\n project_name: Name of the project to store the traces in.\n Defaults to {dataset_name}-{chain class name}-{datetime}.\n verbose: Whether to print progress.\n client: Client to use to access the dataset. 
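An end-to-end sketch of the async entry point (the dataset name is a placeholder; any `BaseLanguageModel` or chain factory works):

.. code-block:: python

    import asyncio
    from langchain.llms import OpenAI

    outcome = asyncio.run(
        arun_on_dataset(
            "my-qa-dataset",          # placeholder dataset name
            OpenAI(temperature=0),
            concurrency_level=5,
            verbose=True,
        )
    )
    print(outcome["project_name"], len(outcome["results"]))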
If None,\n a new client will be created using the credentials in the environment.\n tags: Tags to add to each run in the project.\n run_evaluators: Evaluators to run on the results of the chain.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/client/runner_utils.html"} {"id": "cbc0d7c62286-17", "text": "run_evaluators: Evaluators to run on the results of the chain.\n input_mapper: A function to map to the inputs dictionary from an Example\n to the format expected by the model to be evaluated. This is useful if\n your model needs to deserialize more complex schema or if your dataset\n has inputs with keys that differ from what is expected by your chain\n or agent.\n Returns:\n A dictionary containing the run's project name and the resulting model outputs.\n \"\"\"\n client_ = client or LangChainPlusClient()\n project_name = _get_project_name(project_name, llm_or_chain_factory, dataset_name)\n dataset = client_.read_dataset(dataset_name=dataset_name)\n examples = client_.list_examples(dataset_id=str(dataset.id))\n results = run_on_examples(\n examples,\n llm_or_chain_factory,\n num_repetitions=num_repetitions,\n project_name=project_name,\n verbose=verbose,\n tags=tags,\n run_evaluators=run_evaluators,\n client=client_,\n input_mapper=input_mapper,\n )\n return {\n \"project_name\": project_name,\n \"results\": results,\n }", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/client/runner_utils.html"} {"id": "22e51bcab29b-0", "text": "Source code for langchain.load.dump\nimport json\nfrom typing import Any, Dict\nfrom langchain.load.serializable import Serializable, to_json_not_implemented\n[docs]def default(obj: Any) -> Any:\n \"\"\"Return a default value for a Serializable object or\n a SerializedNotImplemented object.\"\"\"\n if isinstance(obj, Serializable):\n return obj.to_json()\n else:\n return to_json_not_implemented(obj)\n[docs]def dumps(obj: Any, *, pretty: bool = False) -> str:\n \"\"\"Return a json string representation of an object.\"\"\"\n if pretty:\n return json.dumps(obj, default=default, indent=2)\n else:\n return json.dumps(obj, default=default)\n[docs]def dumpd(obj: Any) -> Dict[str, Any]:\n \"\"\"Return a json dict representation of an object.\"\"\"\n return json.loads(dumps(obj))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/load/dump.html"} {"id": "dc2c572fce3e-0", "text": "Source code for langchain.load.serializable\nfrom abc import ABC\nfrom typing import Any, Dict, List, Literal, TypedDict, Union, cast\nfrom pydantic import BaseModel, PrivateAttr\n[docs]class BaseSerialized(TypedDict):\n \"\"\"Base class for serialized objects.\"\"\"\n lc: int\n id: List[str]\n[docs]class SerializedConstructor(BaseSerialized):\n \"\"\"Serialized constructor.\"\"\"\n type: Literal[\"constructor\"]\n kwargs: Dict[str, Any]\n[docs]class SerializedSecret(BaseSerialized):\n \"\"\"Serialized secret.\"\"\"\n type: Literal[\"secret\"]\n[docs]class SerializedNotImplemented(BaseSerialized):\n \"\"\"Serialized not implemented.\"\"\"\n type: Literal[\"not_implemented\"]\n[docs]class Serializable(BaseModel, ABC):\n \"\"\"Serializable base class.\"\"\"\n @property\n def lc_serializable(self) -> bool:\n \"\"\"\n Return whether or not the class is serializable.\n \"\"\"\n return False\n @property\n def lc_namespace(self) -> List[str]:\n \"\"\"\n Return the namespace of the langchain object.\n eg. 
[\"langchain\", \"llms\", \"openai\"]\n \"\"\"\n return self.__class__.__module__.split(\".\")\n @property\n def lc_secrets(self) -> Dict[str, str]:\n \"\"\"\n Return a map of constructor argument names to secret ids.\n eg. {\"openai_api_key\": \"OPENAI_API_KEY\"}\n \"\"\"\n return dict()\n @property\n def lc_attributes(self) -> Dict:\n \"\"\"\n Return a list of attribute names that should be included in the\n serialized kwargs. These attributes must be accepted by the\n constructor.\n \"\"\"\n return {}\n[docs] class Config:\n extra = \"ignore\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/load/serializable.html"} {"id": "dc2c572fce3e-1", "text": "return {}\n[docs] class Config:\n extra = \"ignore\"\n _lc_kwargs = PrivateAttr(default_factory=dict)\n def __init__(self, **kwargs: Any) -> None:\n super().__init__(**kwargs)\n self._lc_kwargs = kwargs\n[docs] def to_json(self) -> Union[SerializedConstructor, SerializedNotImplemented]:\n if not self.lc_serializable:\n return self.to_json_not_implemented()\n secrets = dict()\n # Get latest values for kwargs if there is an attribute with same name\n lc_kwargs = {\n k: getattr(self, k, v)\n for k, v in self._lc_kwargs.items()\n if not (self.__exclude_fields__ or {}).get(k, False) # type: ignore\n }\n # Merge the lc_secrets and lc_attributes from every class in the MRO\n for cls in [None, *self.__class__.mro()]:\n # Once we get to Serializable, we're done\n if cls is Serializable:\n break\n # Get a reference to self bound to each class in the MRO\n this = cast(Serializable, self if cls is None else super(cls, self))\n secrets.update(this.lc_secrets)\n lc_kwargs.update(this.lc_attributes)\n # include all secrets, even if not specified in kwargs\n # as these secrets may be passed as an environment variable instead\n for key in secrets.keys():\n secret_value = getattr(self, key, None) or lc_kwargs.get(key)\n if secret_value is not None:\n lc_kwargs.update({key: secret_value})\n return {\n \"lc\": 1,\n \"type\": \"constructor\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/load/serializable.html"} {"id": "dc2c572fce3e-2", "text": "return {\n \"lc\": 1,\n \"type\": \"constructor\",\n \"id\": [*self.lc_namespace, self.__class__.__name__],\n \"kwargs\": lc_kwargs\n if not secrets\n else _replace_secrets(lc_kwargs, secrets),\n }\n[docs] def to_json_not_implemented(self) -> SerializedNotImplemented:\n return to_json_not_implemented(self)\ndef _replace_secrets(\n root: Dict[Any, Any], secrets_map: Dict[str, str]\n) -> Dict[Any, Any]:\n result = root.copy()\n for path, secret_id in secrets_map.items():\n [*parts, last] = path.split(\".\")\n current = result\n for part in parts:\n if part not in current:\n break\n current[part] = current[part].copy()\n current = current[part]\n if last in current:\n current[last] = {\n \"lc\": 1,\n \"type\": \"secret\",\n \"id\": [secret_id],\n }\n return result\n[docs]def to_json_not_implemented(obj: object) -> SerializedNotImplemented:\n \"\"\"Serialize a \"not implemented\" object.\n Args:\n obj: object to serialize\n Returns:\n SerializedNotImplemented\n \"\"\"\n _id: List[str] = []\n try:\n if hasattr(obj, \"__name__\"):\n _id = [*obj.__module__.split(\".\"), obj.__name__]\n elif hasattr(obj, \"__class__\"):\n _id = [*obj.__class__.__module__.split(\".\"), obj.__class__.__name__]\n except Exception:\n pass\n return {\n \"lc\": 1,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/load/serializable.html"} {"id": "dc2c572fce3e-3", "text": 
"except Exception:\n pass\n return {\n \"lc\": 1,\n \"type\": \"not_implemented\",\n \"id\": _id,\n }", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/load/serializable.html"} {"id": "e9a4e1559a53-0", "text": "Source code for langchain.load.load\nimport importlib\nimport json\nimport os\nfrom typing import Any, Dict, Optional\nfrom langchain.load.serializable import Serializable\nclass Reviver:\n \"\"\"Reviver for JSON objects.\"\"\"\n def __init__(self, secrets_map: Optional[Dict[str, str]] = None) -> None:\n self.secrets_map = secrets_map or dict()\n def __call__(self, value: Dict[str, Any]) -> Any:\n if (\n value.get(\"lc\", None) == 1\n and value.get(\"type\", None) == \"secret\"\n and value.get(\"id\", None) is not None\n ):\n [key] = value[\"id\"]\n if key in self.secrets_map:\n return self.secrets_map[key]\n else:\n if key in os.environ and os.environ[key]:\n return os.environ[key]\n raise KeyError(f'Missing key \"{key}\" in load(secrets_map)')\n if (\n value.get(\"lc\", None) == 1\n and value.get(\"type\", None) == \"not_implemented\"\n and value.get(\"id\", None) is not None\n ):\n raise NotImplementedError(\n \"Trying to load an object that doesn't implement \"\n f\"serialization: {value}\"\n )\n if (\n value.get(\"lc\", None) == 1\n and value.get(\"type\", None) == \"constructor\"\n and value.get(\"id\", None) is not None\n ):\n [*namespace, name] = value[\"id\"]\n # Currently, we only support langchain imports.\n if namespace[0] != \"langchain\":\n raise ValueError(f\"Invalid namespace: {value}\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/load/load.html"} {"id": "e9a4e1559a53-1", "text": "raise ValueError(f\"Invalid namespace: {value}\")\n # The root namespace \"langchain\" is not a valid identifier.\n if len(namespace) == 1:\n raise ValueError(f\"Invalid namespace: {value}\")\n mod = importlib.import_module(\".\".join(namespace))\n cls = getattr(mod, name)\n # The class must be a subclass of Serializable.\n if not issubclass(cls, Serializable):\n raise ValueError(f\"Invalid namespace: {value}\")\n # We don't need to recurse on kwargs\n # as json.loads will do that for us.\n kwargs = value.get(\"kwargs\", dict())\n return cls(**kwargs)\n return value\n[docs]def loads(text: str, *, secrets_map: Optional[Dict[str, str]] = None) -> Any:\n \"\"\"Load a JSON object from a string.\n Args:\n text: The string to load.\n secrets_map: A map of secrets to load.\n Returns:\n \"\"\"\n return json.loads(text, object_hook=Reviver(secrets_map))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/load/load.html"} {"id": "9d23517f55d2-0", "text": "Source code for langchain.evaluation.schema\n\"\"\"Interfaces to be implemented by general evaluators.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom abc import ABC, abstractmethod\nfrom enum import Enum\nfrom typing import Any, Optional, Sequence, Tuple\nfrom warnings import warn\nfrom langchain.chains.base import Chain\nfrom langchain.schema.agent import AgentAction\nfrom langchain.schema.language_model import BaseLanguageModel\nlogger = logging.getLogger(__name__)\n[docs]class EvaluatorType(str, Enum):\n \"\"\"The types of the evaluators.\"\"\"\n QA = \"qa\"\n \"\"\"Question answering evaluator, which grades answers to questions\n directly using an LLM.\"\"\"\n COT_QA = \"cot_qa\"\n \"\"\"Chain of thought question answering evaluator, which grades\n answers to questions using\n chain of thought 'reasoning'.\"\"\"\n CONTEXT_QA = \"context_qa\"\n 
\"\"\"Question answering evaluator that incorporates 'context' in the response.\"\"\"\n PAIRWISE_STRING = \"pairwise_string\"\n \"\"\"The pairwise string evaluator, which compares the output of two models.\"\"\"\n AGENT_TRAJECTORY = \"trajectory\"\n \"\"\"The agent trajectory evaluator, which grades the agent's intermediate steps.\"\"\"\n CRITERIA = \"criteria\"\n \"\"\"The criteria evaluator, which evaluates a model based on a\n custom set of criteria.\"\"\"\n STRING_DISTANCE = \"string_distance\"\n \"\"\"Compare predictions to a reference answer using string edit distances.\"\"\"\n PAIRWISE_STRING_DISTANCE = \"pairwise_string_distance\"\n \"\"\"Compare predictions based on string edit distances.\"\"\"\n EMBEDDING_DISTANCE = \"embedding_distance\"\n \"\"\"Compare a prediction to a reference label using embedding distance.\"\"\"\n PAIRWISE_EMBEDDING_DISTANCE = \"pairwise_embedding_distance\"\n \"\"\"Compare two predictions using embedding distance.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/schema.html"} {"id": "9d23517f55d2-1", "text": "\"\"\"Compare two predictions using embedding distance.\"\"\"\n[docs]class LLMEvalChain(Chain):\n \"\"\"A base class for evaluators that use an LLM.\"\"\"\n[docs] @classmethod\n @abstractmethod\n def from_llm(cls, llm: BaseLanguageModel, **kwargs: Any) -> LLMEvalChain:\n \"\"\"Create a new evaluator from an LLM.\"\"\"\nclass _EvalArgsMixin:\n \"\"\"Mixin for checking evaluation arguments.\"\"\"\n @property\n def requires_reference(self) -> bool:\n \"\"\"Whether this evaluator requires a reference label.\"\"\"\n return False\n @property\n def requires_input(self) -> bool:\n \"\"\"Whether this evaluator requires an input string.\"\"\"\n return False\n @property\n def _skip_input_warning(self) -> str:\n \"\"\"Warning to show when input is ignored.\"\"\"\n return f\"Ignoring input in {self.__class__.__name__}, as it is not expected.\"\n @property\n def _skip_reference_warning(self) -> str:\n \"\"\"Warning to show when reference is ignored.\"\"\"\n return (\n f\"Ignoring reference in {self.__class__.__name__}, as it is not expected.\"\n )\n def _check_evaluation_args(\n self,\n reference: Optional[str] = None,\n input: Optional[str] = None,\n ) -> None:\n if self.requires_input and input is None:\n raise ValueError(f\"{self.__class__.__name__} requires an input string.\")\n elif input is not None and not self.requires_input:\n warn(self._skip_input_warning)\n else:\n pass\n if self.requires_reference and reference is None:\n raise ValueError(f\"{self.__class__.__name__} requires a reference string.\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/schema.html"} {"id": "9d23517f55d2-2", "text": "raise ValueError(f\"{self.__class__.__name__} requires a reference string.\")\n elif reference is not None and not self.requires_reference:\n warn(self._skip_reference_warning)\n else:\n pass\n[docs]class StringEvaluator(_EvalArgsMixin, ABC):\n \"\"\"Grade, tag, or otherwise evaluate predictions relative to their inputs\n and/or reference labels.\"\"\"\n @property\n def evaluation_name(self) -> str:\n raise NotImplementedError()\n @property\n def requires_reference(self) -> bool:\n return False\n @abstractmethod\n def _evaluate_strings(\n self,\n *,\n prediction: str,\n reference: Optional[str] = None,\n input: Optional[str] = None,\n **kwargs: Any,\n ) -> dict:\n \"\"\"Evaluate Chain or LLM output, based on optional input and label.\n Args:\n prediction (str): the LLM or chain 
prediction to evaluate.\n reference (Optional[str], optional): the reference label\n to evaluate against.\n input (Optional[str], optional): the input to consider during evaluation\n **kwargs: additional keyword arguments, including callbacks, tags, etc.\n Returns:\n dict: The evaluation results containing the score or value.\n It is recommended that the dictionary contain the following keys:\n - score: the score of the evaluation, if applicable.\n - value: the string value of the evaluation, if applicable.\n - reasoning: the reasoning for the evaluation, if applicable.\n \"\"\"\n async def _aevaluate_strings(\n self,\n *,\n prediction: str,\n reference: Optional[str] = None,\n input: Optional[str] = None,\n **kwargs: Any,\n ) -> dict:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/schema.html"} {"id": "9d23517f55d2-3", "text": "**kwargs: Any,\n ) -> dict:\n \"\"\"Asynchronously evaluate Chain or LLM output, based on optional\n input and label.\n Args:\n prediction (str): the LLM or chain prediction to evaluate.\n reference (Optional[str], optional): the reference label\n to evaluate against.\n input (Optional[str], optional): the input to consider during evaluation\n **kwargs: additional keyword arguments, including callbacks, tags, etc.\n Returns:\n dict: The evaluation results containing the score or value.\n It is recommended that the dictionary contain the following keys:\n - score: the score of the evaluation, if applicable.\n - value: the string value of the evaluation, if applicable.\n - reasoning: the reasoning for the evaluation, if applicable.\n \"\"\"\n raise NotImplementedError(\n f\"{self.__class__.__name__} hasn't implemented an \"\n \"async aevaluate_strings method.\"\n )\n[docs] def evaluate_strings(\n self,\n *,\n prediction: str,\n reference: Optional[str] = None,\n input: Optional[str] = None,\n **kwargs: Any,\n ) -> dict:\n \"\"\"Evaluate Chain or LLM output, based on optional input and label.\n Args:\n prediction (str): the LLM or chain prediction to evaluate.\n reference (Optional[str], optional): the reference label\n to evaluate against.\n input (Optional[str], optional): the input to consider during evaluation\n **kwargs: additional keyword arguments, including callbacks, tags, etc.\n Returns:\n dict: The evaluation results containing the score or value.\n \"\"\"\n self._check_evaluation_args(reference=reference, input=input)\n return self._evaluate_strings(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/schema.html"} {"id": "9d23517f55d2-4", "text": "return self._evaluate_strings(\n prediction=prediction, reference=reference, input=input, **kwargs\n )\n[docs] async def aevaluate_strings(\n self,\n *,\n prediction: str,\n reference: Optional[str] = None,\n input: Optional[str] = None,\n **kwargs: Any,\n ) -> dict:\n \"\"\"Asynchronously evaluate Chain or LLM output, based on optional\n input and label.\n Args:\n prediction (str): the LLM or chain prediction to evaluate.\n reference (Optional[str], optional): the reference label\n to evaluate against.\n input (Optional[str], optional): the input to consider during evaluation\n **kwargs: additional keyword arguments, including callbacks, tags, etc.\n Returns:\n dict: The evaluation results containing the score or value.\n \"\"\"\n self._check_evaluation_args(reference=reference, input=input)\n return await self._aevaluate_strings(\n prediction=prediction, reference=reference, input=input, **kwargs\n )\n[docs]class 
PairwiseStringEvaluator(_EvalArgsMixin, ABC):\n \"\"\"Compare the output of two models (or two outputs of the same model).\"\"\"\n @abstractmethod\n def _evaluate_string_pairs(\n self,\n *,\n prediction: str,\n prediction_b: str,\n reference: Optional[str] = None,\n input: Optional[str] = None,\n **kwargs: Any,\n ) -> dict:\n \"\"\"Evaluate the output string pairs.\n Args:\n prediction (str): The output string from the first model.\n prediction_b (str): The output string from the second model.\n reference (str, optional): The expected output / reference\n string. Defaults to None.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/schema.html"} {"id": "9d23517f55d2-5", "text": "string. Defaults to None.\n input (str, optional): The input string. Defaults to None.\n **kwargs (Any): Additional keyword arguments, such\n as callbacks and optional reference strings.\n Returns:\n dict: A dictionary containing the preference, scores, and/or\n other information.\n \"\"\"\n async def _aevaluate_string_pairs(\n self,\n *,\n prediction: str,\n prediction_b: str,\n reference: Optional[str] = None,\n input: Optional[str] = None,\n **kwargs: Any,\n ) -> dict:\n \"\"\"Evaluate the output string pairs.\n Args:\n prediction (str): The output string from the first model.\n prediction_b (str): The output string from the second model.\n reference (str, optional): The expected output / reference\n string. Defaults to None.\n input (str, optional): The input string. Defaults to None.\n **kwargs (Any): Additional keyword arguments, such\n as callbacks and optional reference strings.\n Returns:\n dict: A dictionary containing the preference, scores, and/or\n other information.\n \"\"\"\n raise NotImplementedError(\n f\"{self.__class__.__name__} hasn't implemented an async \"\n \"aevaluate_string_pairs method.\"\n )\n[docs] def evaluate_string_pairs(\n self,\n *,\n prediction: str,\n prediction_b: str,\n reference: Optional[str] = None,\n input: Optional[str] = None,\n **kwargs: Any,\n ) -> dict:\n \"\"\"Evaluate the output string pairs.\n Args:\n prediction (str): The output string from the first model.\n prediction_b (str): The output string from the second model.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/schema.html"} {"id": "9d23517f55d2-6", "text": "prediction_b (str): The output string from the second model.\n reference (str, optional): The expected output / reference\n string. Defaults to None.\n input (str, optional): The input string. Defaults to None.\n **kwargs (Any): Additional keyword arguments, such\n as callbacks and optional reference strings.\n Returns:\n dict: A dictionary containing the preference, scores, and/or\n other information.\n \"\"\"\n self._check_evaluation_args(reference=reference, input=input)\n return self._evaluate_string_pairs(\n prediction=prediction,\n prediction_b=prediction_b,\n reference=reference,\n input=input,\n **kwargs,\n )\n[docs] async def aevaluate_string_pairs(\n self,\n *,\n prediction: str,\n prediction_b: str,\n reference: Optional[str] = None,\n input: Optional[str] = None,\n **kwargs: Any,\n ) -> dict:\n \"\"\"Evaluate the output string pairs.\n Args:\n prediction (str): The output string from the first model.\n prediction_b (str): The output string from the second model.\n reference (str, optional): The expected output / reference\n string. Defaults to None.\n input (str, optional): The input string. 
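A minimal custom implementation of this interface, using prediction length purely for illustration (the class and scoring rule are hypothetical):

.. code-block:: python

    class LengthPreferenceEvaluator(PairwiseStringEvaluator):
        """Toy evaluator: prefers the shorter of two predictions."""

        def _evaluate_string_pairs(
            self, *, prediction, prediction_b, reference=None, input=None, **kwargs
        ):
            if len(prediction) == len(prediction_b):
                return {"value": None, "score": 0.5}
            value = "A" if len(prediction) < len(prediction_b) else "B"
            return {"value": value, "score": 1.0 if value == "A" else 0.0}

    evaluator = LengthPreferenceEvaluator()
    evaluator.evaluate_string_pairs(prediction="H2O", prediction_b="Water is H2O.")
    # -> {"value": "A", "score": 1.0}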
Defaults to None.\n **kwargs (Any): Additional keyword arguments, such\n as callbacks and optional reference strings.\n Returns:\n dict: A dictionary containing the preference, scores, and/or\n other information.\n \"\"\"\n self._check_evaluation_args(reference=reference, input=input)\n return await self._aevaluate_string_pairs(\n prediction=prediction,\n prediction_b=prediction_b,\n reference=reference,\n input=input,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/schema.html"} {"id": "9d23517f55d2-7", "text": "prediction_b=prediction_b,\n reference=reference,\n input=input,\n **kwargs,\n )\n[docs]class AgentTrajectoryEvaluator(_EvalArgsMixin, ABC):\n \"\"\"Interface for evaluating agent trajectories.\"\"\"\n @property\n def requires_input(self) -> bool:\n return True\n @abstractmethod\n def _evaluate_agent_trajectory(\n self,\n *,\n prediction: str,\n agent_trajectory: Sequence[Tuple[AgentAction, str]],\n input: str,\n reference: Optional[str] = None,\n **kwargs: Any,\n ) -> dict:\n \"\"\"Evaluate a trajectory.\n Args:\n prediction (str): The final predicted response.\n agent_trajectory (List[Tuple[AgentAction, str]]):\n The intermediate steps forming the agent trajectory.\n input (str): The input to the agent.\n reference (Optional[str]): The reference answer.\n Returns:\n dict: The evaluation result.\n \"\"\"\n async def _aevaluate_agent_trajectory(\n self,\n *,\n prediction: str,\n agent_trajectory: Sequence[Tuple[AgentAction, str]],\n input: str,\n reference: Optional[str] = None,\n **kwargs: Any,\n ) -> dict:\n \"\"\"Asynchronously evaluate a trajectory.\n Args:\n prediction (str): The final predicted response.\n agent_trajectory (List[Tuple[AgentAction, str]]):\n The intermediate steps forming the agent trajectory.\n input (str): The input to the agent.\n reference (Optional[str]): The reference answer.\n Returns:\n dict: The evaluation result.\n \"\"\"\n raise NotImplementedError(\n f\"{self.__class__.__name__} hasn't implemented an async \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/schema.html"} {"id": "9d23517f55d2-8", "text": "f\"{self.__class__.__name__} hasn't implemented an async \"\n \"aevaluate_agent_trajectory method.\"\n )\n[docs] def evaluate_agent_trajectory(\n self,\n *,\n prediction: str,\n agent_trajectory: Sequence[Tuple[AgentAction, str]],\n input: str,\n reference: Optional[str] = None,\n **kwargs: Any,\n ) -> dict:\n \"\"\"Evaluate a trajectory.\n Args:\n prediction (str): The final predicted response.\n agent_trajectory (List[Tuple[AgentAction, str]]):\n The intermediate steps forming the agent trajectory.\n input (str): The input to the agent.\n reference (Optional[str]): The reference answer.\n Returns:\n dict: The evaluation result.\n \"\"\"\n self._check_evaluation_args(reference=reference, input=input)\n return self._evaluate_agent_trajectory(\n prediction=prediction,\n input=input,\n agent_trajectory=agent_trajectory,\n reference=reference,\n **kwargs,\n )\n[docs] async def aevaluate_agent_trajectory(\n self,\n *,\n prediction: str,\n agent_trajectory: Sequence[Tuple[AgentAction, str]],\n input: str,\n reference: Optional[str] = None,\n **kwargs: Any,\n ) -> dict:\n \"\"\"Asynchronously evaluate a trajectory.\n Args:\n prediction (str): The final predicted response.\n agent_trajectory (List[Tuple[AgentAction, str]]):\n The intermediate steps forming the agent trajectory.\n input (str): The input to the agent.\n reference (Optional[str]): The reference answer.\n Returns:\n 
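A toy subclass of `AgentTrajectoryEvaluator`, assuming nothing beyond the interface above (the scoring rule is illustrative only):

.. code-block:: python

    from langchain.schema.agent import AgentAction

    class ToolCountEvaluator(AgentTrajectoryEvaluator):
        """Toy evaluator: scores a trajectory by how few tool calls it used."""

        def _evaluate_agent_trajectory(
            self, *, prediction, agent_trajectory, input, reference=None, **kwargs
        ):
            steps = len(agent_trajectory)
            return {"score": 1.0 / (1 + steps), "reasoning": f"{steps} tool call(s)"}

    trajectory = [
        (AgentAction(tool="search", tool_input="capital of France", log=""), "Paris")
    ]
    ToolCountEvaluator().evaluate_agent_trajectory(
        prediction="Paris",
        agent_trajectory=trajectory,
        input="What is the capital of France?",
    )
    # -> {"score": 0.5, "reasoning": "1 tool call(s)"}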
dict: The evaluation result.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/schema.html"} {"id": "9d23517f55d2-9", "text": "Returns:\n dict: The evaluation result.\n \"\"\"\n self._check_evaluation_args(reference=reference, input=input)\n return await self._aevaluate_agent_trajectory(\n prediction=prediction,\n input=input,\n agent_trajectory=agent_trajectory,\n reference=reference,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/schema.html"} {"id": "f367d8a948fa-0", "text": "Source code for langchain.evaluation.loading\n\"\"\"Loading datasets and evaluators.\"\"\"\nfrom typing import Any, Dict, List, Optional, Sequence, Type, Union\nfrom langchain.chains.base import Chain\nfrom langchain.chat_models.openai import ChatOpenAI\nfrom langchain.evaluation.agents.trajectory_eval_chain import TrajectoryEvalChain\nfrom langchain.evaluation.comparison import PairwiseStringEvalChain\nfrom langchain.evaluation.criteria.eval_chain import CriteriaEvalChain\nfrom langchain.evaluation.embedding_distance.base import (\n EmbeddingDistanceEvalChain,\n PairwiseEmbeddingDistanceEvalChain,\n)\nfrom langchain.evaluation.qa import ContextQAEvalChain, CotQAEvalChain, QAEvalChain\nfrom langchain.evaluation.schema import EvaluatorType, LLMEvalChain\nfrom langchain.evaluation.string_distance.base import (\n PairwiseStringDistanceEvalChain,\n StringDistanceEvalChain,\n)\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]def load_dataset(uri: str) -> List[Dict]:\n \"\"\"Load a dataset from the `LangChainDatasets HuggingFace org `_.\n Args:\n uri: The uri of the dataset to load.\n Returns:\n A list of dictionaries, each representing a row in the dataset.\n **Prerequisites**\n .. code-block:: shell\n pip install datasets\n Examples\n --------\n .. 
code-block:: python
            from langchain.evaluation import load_dataset
            ds = load_dataset("llm-math")
    """  # noqa: E501
    try:
        from datasets import load_dataset
    except ImportError:
        raise ImportError(
            "load_dataset requires the `datasets` package."
            " Please install with `pip install datasets`"
        )
    dataset = load_dataset(f"LangChainDatasets/{uri}")
    return [d for d in dataset["train"]]

_EVALUATOR_MAP: Dict[EvaluatorType, Union[Type[LLMEvalChain], Type[Chain]]] = {
    EvaluatorType.QA: QAEvalChain,
    EvaluatorType.COT_QA: CotQAEvalChain,
    EvaluatorType.CONTEXT_QA: ContextQAEvalChain,
    EvaluatorType.PAIRWISE_STRING: PairwiseStringEvalChain,
    EvaluatorType.AGENT_TRAJECTORY: TrajectoryEvalChain,
    EvaluatorType.CRITERIA: CriteriaEvalChain,
    EvaluatorType.STRING_DISTANCE: StringDistanceEvalChain,
    EvaluatorType.PAIRWISE_STRING_DISTANCE: PairwiseStringDistanceEvalChain,
    EvaluatorType.EMBEDDING_DISTANCE: EmbeddingDistanceEvalChain,
    EvaluatorType.PAIRWISE_EMBEDDING_DISTANCE: PairwiseEmbeddingDistanceEvalChain,
}

[docs]def load_evaluator(
    evaluator: EvaluatorType,
    *,
    llm: Optional[BaseLanguageModel] = None,
    **kwargs: Any,
) -> Chain:
    """Load the requested evaluation chain specified by a string.
    Parameters
    ----------
    evaluator : EvaluatorType
        The type of evaluator to load.
    llm : BaseLanguageModel, optional
        The language model to use for evaluation, by default None
    **kwargs : Any
        Additional keyword arguments to pass to the evaluator.
    Returns
    -------
    Chain
        The loaded evaluation chain.
    Examples
    --------
    >>> from langchain.evaluation import load_evaluator, EvaluatorType
    >>> evaluator = load_evaluator(EvaluatorType.QA)
    """
    llm = llm or ChatOpenAI(model="gpt-4", temperature=0)
    if evaluator not in _EVALUATOR_MAP:
        raise ValueError(
            f"Unknown evaluator type: {evaluator}."
            f" Valid types are: {list(_EVALUATOR_MAP.keys())}"
        )
    evaluator_cls = _EVALUATOR_MAP[evaluator]
    if issubclass(evaluator_cls, LLMEvalChain):
        return evaluator_cls.from_llm(llm=llm, **kwargs)
    else:
        return evaluator_cls(**kwargs)

[docs]def load_evaluators(
    evaluators: Sequence[EvaluatorType],
    *,
    llm: Optional[BaseLanguageModel] = None,
    config: Optional[dict] = None,
    **kwargs: Any,
) -> List[Chain]:
    """Load evaluators specified by a list of evaluator types.
    Parameters
    ----------
    evaluators : Sequence[EvaluatorType]
        The list of evaluator types to load.
    llm : BaseLanguageModel, optional
        The language model to use for evaluation, if none is provided, a default
        ChatOpenAI gpt-4 model will be used.
    config : dict, optional
        A dictionary mapping evaluator types to additional keyword arguments,
        by default None
    **kwargs : Any
        Additional keyword arguments to pass to all evaluators.
    Returns
    -------
    List[Chain]
        The loaded evaluators.
    Examples
    --------
    >>> from langchain.evaluation import 
load_evaluators, EvaluatorType
    >>> evaluators = [EvaluatorType.QA, EvaluatorType.CRITERIA]
    >>> loaded_evaluators = load_evaluators(evaluators, criteria="helpfulness")
    """
    llm = llm or ChatOpenAI(model="gpt-4", temperature=0)
    loaded = []
    for evaluator in evaluators:
        _kwargs = config.get(evaluator, {}) if config else {}
        loaded.append(load_evaluator(evaluator, llm=llm, **{**kwargs, **_kwargs}))
    return loaded

Source code for langchain.evaluation.comparison.eval_chain
"""Base classes for comparing the output of two models."""
from __future__ import annotations
from typing import Any, Optional
from pydantic import Extra, Field
from langchain.callbacks.manager import Callbacks
from langchain.chains.llm import LLMChain
from langchain.evaluation.comparison.prompt import PROMPT, PROMPT_WITH_REFERENCE
from langchain.evaluation.schema import LLMEvalChain, PairwiseStringEvaluator
from langchain.prompts.prompt import PromptTemplate
from langchain.schema import BaseOutputParser
from langchain.schema.language_model import BaseLanguageModel

[docs]class PairwiseStringResultOutputParser(BaseOutputParser[dict]):
    """A parser for the output of the PairwiseStringEvalChain."""
    @property
    def _type(self) -> str:
        return "pairwise_string_result"

[docs]    def parse(self, text: str) -> Any:
        """Parse the output text.
        Args:
            text (str): The output text to parse.
        Returns:
            Any: The parsed output.
        """
        reasoning, verdict = text.strip().rsplit("\n", maxsplit=1)
        verdict = verdict.strip("[").strip("]")
        if verdict not in {"A", "B", "C"}:
            raise ValueError(
                f"Invalid verdict: {verdict}. "
                "Verdict must be one of 'A', 'B', or 'C'."
            )
        # C means the models are tied. Return 'None' meaning no preference
        verdict_ = None if verdict == "C" else verdict
        score = {
            "A": 1,
            "B": 0,
            None: 0.5,
        }.get(verdict_)
        return {
            "reasoning": reasoning,
            "value": verdict_,
            "score": score,
        }

[docs]class PairwiseStringEvalChain(PairwiseStringEvaluator, LLMEvalChain, LLMChain):
    """A chain for comparing two outputs, such as the outputs
    of two models, prompts, or outputs of a single model on similar inputs.
    Example:
    >>> from langchain.chat_models import ChatOpenAI
    >>> from langchain.evaluation.comparison import PairwiseStringEvalChain
    >>> llm = ChatOpenAI(temperature=0)
    >>> chain = PairwiseStringEvalChain.from_llm(llm=llm)
    >>> result = chain.evaluate_string_pairs(
    ...     input="What is the chemical formula for water?",
    ...     prediction="H2O",
    ...     prediction_b=(
    ...         "The chemical formula for water is H2O, which means"
    ...         " there are two hydrogen atoms and one oxygen atom."
    ...     ),
    ...     reference="The chemical formula for water is H2O.",
    ... )
    >>> print(result["text"])
    # {
    #    "value": "B",
    #    "comment": "Both responses accurately state"
    #       " that the chemical formula for water is H2O."
    #       " However, Response B provides additional information"
    #       " by explaining what the formula means.\n[[B]]"
    # }
    """
    output_parser: BaseOutputParser = Field(
        default_factory=PairwiseStringResultOutputParser
    )

[docs]    class Config:
        """Configuration for the PairwiseStringEvalChain."""
        extra = Extra.ignore

    @property
    def requires_reference(self) -> bool:
        return "reference" in self.prompt.input_variables

    @property
    def requires_input(self) -> bool:
        return True

    @property
    def _skip_reference_warning(self) -> str:
        """Warning to show when reference is ignored."""
        return (
            f"Ignoring reference in {self.__class__.__name__}, as it is not expected."
            "\nTo use a reference, initialize PairwiseStringEvalChain with"
            " `requires_reference=True` or with a prompt with 'reference' as an"
            " input variable."
        )

[docs]    @classmethod
    def from_llm(
        cls,
        llm: BaseLanguageModel,
        *,
        prompt: Optional[PromptTemplate] = None,
        requires_reference: bool = False,
        **kwargs: Any,
    ) -> PairwiseStringEvalChain:
        """Initialize the PairwiseStringEvalChain from an LLM.
        Args:
            llm (BaseLanguageModel): The LLM to use.
            prompt (PromptTemplate, optional): The prompt to use.
            requires_reference (bool, optional): Whether to require a reference
                string. Defaults to False.
            **kwargs (Any): Additional keyword arguments.
        Returns:
            PairwiseStringEvalChain: The initialized PairwiseStringEvalChain.
        """
        expected_input_vars = {"prediction", "prediction_b", "input"}
        if prompt is None:
            if requires_reference:
                expected_input_vars.add("reference")
                prompt_ = PROMPT_WITH_REFERENCE
            else:
                prompt_ = PROMPT
        else:
            if requires_reference:
                expected_input_vars.add("reference")
            prompt_ = prompt
        if expected_input_vars != set(prompt_.input_variables):
            raise ValueError(
                f"Input variables should be {expected_input_vars}, "
                f"but got {prompt_.input_variables}"
            )
        return cls(llm=llm, prompt=prompt_, **kwargs)

    def _prepare_input(
        self,
        prediction: str,
        prediction_b: str,
        input: Optional[str],
        reference: Optional[str],
    ) -> dict:
        input_ = {
            "prediction": prediction,
            "prediction_b": prediction_b,
        }
        if self.requires_input:
            if not input:
                raise ValueError("Input is required for this comparison evaluator")
            input_["input"] = input
        if self.requires_reference:
            if reference is None:
                raise ValueError("Reference is required for this comparison evaluator")
            input_["reference"] = reference
        return input_

    def _evaluate_string_pairs(
        self,
        *,
        prediction: str,
        prediction_b: str,
        input: Optional[str] = None,
        reference: Optional[str] = None,
        callbacks: Callbacks = None,
        **kwargs: Any,
    ) -> dict:
        """Evaluate whether output A is preferred to output B.
        Args:
            prediction (str): The output string from the first model.
            prediction_b (str): The output string from the second model.
            input (str): The input or task string.
            callbacks (Callbacks, 
optional): The callbacks to use.\n reference (str, optional): The reference string, if any.\n **kwargs (Any): Additional keyword arguments.\n Returns:\n dict: A dictionary containing:\n - reasoning: The reasoning for the preference.\n - value: The preference value, which is either 'A', 'B', or None\n for no preference.\n - score: The preference score, which is 1 for 'A', 0 for 'B',\n and 0.5 for None.\n \"\"\"\n input_ = self._prepare_input(prediction, prediction_b, input, reference)\n result = self(\n inputs=input_,\n callbacks=callbacks,\n **kwargs,\n )\n return result[\"text\"]\n async def _aevaluate_string_pairs(\n self,\n *,\n prediction: str,\n prediction_b: str,\n reference: Optional[str] = None,\n input: Optional[str] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> dict:\n \"\"\"Asynchronously evaluate whether output A is preferred to output B.\n Args:\n prediction (str): The output string from the first model.\n prediction_b (str): The output string from the second model.\n input (str): The input or task string.\n callbacks (Callbacks, optional): The callbacks to use.\n reference (str, optional): The reference string, if any.\n **kwargs (Any): Additional keyword arguments.\n Returns:\n dict: A dictionary containing:\n - reasoning: The reasoning for the preference.\n - value: The preference value, which is either 'A', 'B', or None\n for no preference.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/comparison/eval_chain.html"} {"id": "aa920e4c8501-5", "text": "for no preference.\n - score: The preference score, which is 1 for 'A', 0 for 'B',\n and 0.5 for None.\n \"\"\"\n input_ = self._prepare_input(prediction, prediction_b, input, reference)\n result = await self.acall(\n inputs=input_,\n callbacks=callbacks,\n **kwargs,\n )\n return result[\"text\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/comparison/eval_chain.html"} {"id": "42959d649710-0", "text": "Source code for langchain.evaluation.run_evaluators.string_run_evaluator\n\"\"\"Run evaluator wrapper for string evaluators.\"\"\"\nfrom __future__ import annotations\nfrom abc import abstractmethod\nfrom typing import Any, Dict, List, Optional, Union\nfrom langchainplus_sdk import EvaluationResult, RunEvaluator\nfrom langchainplus_sdk.schemas import Example, Run\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.evaluation.schema import StringEvaluator\nfrom langchain.load.dump import dumps\nfrom langchain.load.load import loads\nfrom langchain.load.serializable import Serializable\nfrom langchain.schema import RUN_KEY, messages_from_dict\nfrom langchain.schema.messages import BaseMessage, get_buffer_string\nfrom langchain.tools.base import Tool\ndef _get_messages_from_run_dict(messages: List[dict]) -> List[BaseMessage]:\n if not messages:\n return []\n first_message = messages[0]\n if \"lc\" in first_message:\n return [loads(dumps(message)) for message in messages]\n else:\n return messages_from_dict(messages)\n[docs]class StringRunMapper(Serializable):\n \"\"\"Extract items to evaluate from the run object.\"\"\"\n @property\n def output_keys(self) -> List[str]:\n \"\"\"The keys to extract from the run.\"\"\"\n return [\"prediction\", \"input\"]\n[docs] @abstractmethod\n def map(self, run: Run) -> Dict[str, str]:\n \"\"\"Maps the Run to a dictionary.\"\"\"\n[docs] 
{"id": "42959d649710-0", "text": "Source code for langchain.evaluation.run_evaluators.string_run_evaluator\n\"\"\"Run evaluator wrapper for string evaluators.\"\"\"\nfrom __future__ import annotations\nfrom abc import abstractmethod\nfrom typing import Any, Dict, List, Optional, Union\nfrom langchainplus_sdk import EvaluationResult, RunEvaluator\nfrom langchainplus_sdk.schemas import Example, Run\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.evaluation.schema import StringEvaluator\nfrom langchain.load.dump import dumps\nfrom langchain.load.load import loads\nfrom langchain.load.serializable import Serializable\nfrom langchain.schema import RUN_KEY, messages_from_dict\nfrom langchain.schema.messages import BaseMessage, get_buffer_string\nfrom langchain.tools.base import Tool\ndef _get_messages_from_run_dict(messages: List[dict]) -> List[BaseMessage]:\n if not messages:\n return []\n first_message = messages[0]\n if \"lc\" in first_message:\n return [loads(dumps(message)) for message in messages]\n else:\n return messages_from_dict(messages)\n[docs]class StringRunMapper(Serializable):\n \"\"\"Extract items to evaluate from the run object.\"\"\"\n @property\n def output_keys(self) -> List[str]:\n \"\"\"The keys to extract from the run.\"\"\"\n return [\"prediction\", \"input\"]\n[docs] @abstractmethod\n def map(self, run: Run) -> Dict[str, str]:\n \"\"\"Maps the Run to a dictionary.\"\"\"\n[docs] def __call__(self, run: Run) -> Dict[str, str]:\n \"\"\"Maps the Run to a dictionary.\"\"\"\n if not run.outputs:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/string_run_evaluator.html"} {"id": "42959d649710-1", "text": "\"\"\"Maps the Run to a dictionary.\"\"\"\n if not run.outputs:\n raise ValueError(f\"Run {run.id} has no outputs to evaluate.\")\n return self.map(run)\n[docs]class LLMStringRunMapper(StringRunMapper):\n \"\"\"Extract items to evaluate from the run object.\"\"\"\n[docs] def serialize_chat_messages(self, messages: List[Dict]) -> str:\n \"\"\"Extract the input messages from the run.\"\"\"\n if isinstance(messages, list) and messages:\n if isinstance(messages[0], dict):\n chat_messages = _get_messages_from_run_dict(messages)\n elif isinstance(messages[0], list):\n # Runs from Tracer have messages as a list of lists of dicts\n chat_messages = _get_messages_from_run_dict(messages[0])\n else:\n raise ValueError(f\"Could not extract messages to evaluate {messages}\")\n return get_buffer_string(chat_messages)\n raise ValueError(f\"Could not extract messages to evaluate {messages}\")\n[docs] def serialize_inputs(self, inputs: Dict) -> str:\n if \"prompts\" in inputs: # Should we even accept this?\n input_ = \"\\n\\n\".join(inputs[\"prompts\"])\n elif \"prompt\" in inputs:\n input_ = inputs[\"prompt\"]\n elif \"messages\" in inputs:\n input_ = self.serialize_chat_messages(inputs[\"messages\"])\n else:\n raise ValueError(\"LLM Run must have either messages or prompts as inputs.\")\n return input_\n[docs] def serialize_outputs(self, outputs: Dict) -> str:\n if not outputs.get(\"generations\"):\n raise ValueError(\"Cannot evaluate LLM Run without generations.\")\n generations: List[Dict] = outputs[\"generations\"]\n if not generations:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/string_run_evaluator.html"} {"id": "42959d649710-2", "text": "generations: List[Dict] = outputs[\"generations\"]\n if not generations:\n raise ValueError(\"Cannot evaluate LLM run with empty generations.\")\n first_generation: Dict = generations[0]\n if isinstance(first_generation, list):\n # Runs from Tracer have generations as a list of lists of dicts\n # Whereas Runs from the API have a list of dicts\n first_generation = first_generation[0]\n if \"message\" in first_generation:\n output_ = self.serialize_chat_messages([first_generation[\"message\"]])\n else:\n output_ = first_generation[\"text\"]\n return output_\n[docs] def map(self, run: Run) -> Dict[str, str]:\n \"\"\"Maps the Run to a dictionary.\"\"\"\n if run.run_type != \"llm\":\n raise ValueError(\"LLM RunMapper only supports LLM runs.\")\n elif not run.outputs:\n if run.error:\n raise ValueError(\n f\"Cannot evaluate errored LLM run {run.id}: {run.error}\"\n )\n else:\n raise ValueError(\n f\"Run {run.id} has no outputs. 
Cannot evaluate this run.\"\n )\n else:\n try:\n inputs = self.serialize_inputs(run.inputs)\n except Exception as e:\n raise ValueError(\n f\"Could not parse LM input from run inputs {run.inputs}\"\n ) from e\n try:\n output_ = self.serialize_outputs(run.outputs)\n except Exception as e:\n raise ValueError(\n f\"Could not parse LM prediction from run outputs {run.outputs}\"\n ) from e\n return {\"input\": inputs, \"prediction\": output_}\n[docs]class ChainStringRunMapper(StringRunMapper):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/string_run_evaluator.html"} {"id": "42959d649710-3", "text": "[docs]class ChainStringRunMapper(StringRunMapper):\n \"\"\"Extract items to evaluate from the run object from a chain.\"\"\"\n input_key: str\n \"\"\"The key from the model Run's inputs to use as the eval input.\"\"\"\n prediction_key: str\n \"\"\"The key from the model Run's outputs to use as the eval prediction.\"\"\"\n[docs] @classmethod\n def from_chain(\n cls,\n model: Chain,\n input_key: Optional[str] = None,\n prediction_key: Optional[str] = None,\n ) -> ChainStringRunMapper:\n \"\"\"Create a RunMapper from a chain.\"\"\"\n error_messages = []\n if input_key is None:\n if len(model.input_keys) > 1:\n error_messages.append(\n f\"Chain {model.lc_namespace} has multiple input\"\n \" keys. Please specify 'input_key' when loading.\"\n )\n else:\n input_key = model.input_keys[0]\n elif input_key not in model.input_keys:\n error_messages.append(\n f\"Chain {model.lc_namespace} does not have specified\"\n f\" input key {input_key}.\"\n )\n if prediction_key is None:\n if len(model.output_keys) > 1:\n error_messages.append(\n f\"Chain {model.lc_namespace} has multiple\"\n \" output keys. Please specify 'prediction_key' when loading.\"\n )\n else:\n prediction_key = model.output_keys[0]\n elif prediction_key not in model.output_keys:\n error_messages.append(\n f\"Chain {model.lc_namespace} does not have specified\"\n f\" prediction_key {prediction_key}.\"\n )\n if error_messages:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/string_run_evaluator.html"} {"id": "42959d649710-4", "text": "f\" prediction_key {prediction_key}.\"\n )\n if error_messages:\n raise ValueError(\"\\n\".join(error_messages))\n if input_key is None or prediction_key is None:\n # This should never happen, but mypy doesn't know that.\n raise ValueError(f\"Chain {model.lc_namespace} has no input or output keys.\")\n return cls(input_key=input_key, prediction_key=prediction_key)\n[docs] def map(self, run: Run) -> Dict[str, str]:\n \"\"\"Maps the Run to a dictionary.\"\"\"\n if not run.outputs:\n raise ValueError(f\"Run {run.id} has no outputs to evaluate.\")\n if run.run_type != \"chain\":\n raise ValueError(\"Chain RunMapper only supports Chain runs.\")\n if self.input_key not in run.inputs:\n raise ValueError(f\"Run {run.id} does not have input key {self.input_key}.\")\n elif self.prediction_key not in run.outputs:\n raise ValueError(\n f\"Run {run.id} does not have prediction key {self.prediction_key}.\"\n )\n else:\n return {\n \"input\": run.inputs[self.input_key],\n \"prediction\": run.outputs[self.prediction_key],\n }\n[docs]class ToolStringRunMapper(StringRunMapper):\n \"\"\"Map an input to the tool.\"\"\"\n[docs] def map(self, run: Run) -> Dict[str, str]:\n if not run.outputs:\n raise ValueError(f\"Run {run.id} has no outputs to evaluate.\")\n return {\"input\": run.inputs[\"input\"], \"prediction\": 
run.outputs[\"output\"]}\n[docs]class StringExampleMapper(Serializable):\n \"\"\"Map an example, or row in the dataset, to the inputs of an evaluation.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/string_run_evaluator.html"} {"id": "42959d649710-5", "text": "\"\"\"Map an example, or row in the dataset, to the inputs of an evaluation.\"\"\"\n reference_key: Optional[str] = None\n @property\n def output_keys(self) -> List[str]:\n \"\"\"The keys to extract from the run.\"\"\"\n return [\"reference\"]\n[docs] def serialize_chat_messages(self, messages: List[Dict]) -> str:\n \"\"\"Extract the input messages from the run.\"\"\"\n chat_messages = _get_messages_from_run_dict(messages)\n return get_buffer_string(chat_messages)\n[docs] def map(self, example: Example) -> Dict[str, str]:\n \"\"\"Maps the Example, or dataset row to a dictionary.\"\"\"\n if not example.outputs:\n raise ValueError(\n f\"Example {example.id} has no outputs to use as a reference.\"\n )\n if self.reference_key is None:\n if len(example.outputs) > 1:\n raise ValueError(\n f\"Example {example.id} has multiple outputs, so you must\"\n \" specify a reference_key.\"\n )\n else:\n output = list(example.outputs.values())[0]\n return {\n \"reference\": self.serialize_chat_messages([output])\n if isinstance(output, dict)\n and output.get(\"type\")\n and output.get(\"data\")\n else output\n }\n elif self.reference_key not in example.outputs:\n raise ValueError(\n f\"Example {example.id} does not have reference key\"\n f\" {self.reference_key}.\"\n )\n return {\"reference\": example.outputs[self.reference_key]}\n[docs] def __call__(self, example: Example) -> Dict[str, str]:\n \"\"\"Maps the Run and Example to a dictionary.\"\"\"\n if not example.outputs:\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/string_run_evaluator.html"} {"id": "42959d649710-6", "text": "if not example.outputs:\n raise ValueError(\n f\"Example {example.id} has no outputs to use as a reference label.\"\n )\n return self.map(example)\n[docs]class StringRunEvaluatorChain(Chain, RunEvaluator):\n \"\"\"Evaluate Run and optional examples.\"\"\"\n run_mapper: StringRunMapper\n \"\"\"Maps the Run to a dictionary with 'input' and 'prediction' strings.\"\"\"\n example_mapper: Optional[StringExampleMapper] = None\n \"\"\"Maps the Example (dataset row) to a dictionary\n with a 'reference' string.\"\"\"\n name: str\n \"\"\"The name of the evaluation metric.\"\"\"\n string_evaluator: StringEvaluator\n \"\"\"The evaluation chain.\"\"\"\n @property\n def input_keys(self) -> List[str]:\n return [\"run\", \"example\"]\n @property\n def output_keys(self) -> List[str]:\n return [\"feedback\"]\n def _prepare_input(self, inputs: Dict[str, Any]) -> Dict[str, str]:\n run: Run = inputs[\"run\"]\n example: Optional[Example] = inputs.get(\"example\")\n evaluate_strings_inputs = self.run_mapper(run)\n if example and self.example_mapper:\n evaluate_strings_inputs.update(self.example_mapper(example))\n elif self.string_evaluator.requires_reference:\n raise ValueError(\n f\"Evaluator {self.name} requires a reference\"\n \" example from the dataset,\"\n f\" but none was provided for run {run.id}.\"\n )\n return evaluate_strings_inputs\n def _prepare_output(self, output: Dict[str, Any]) -> EvaluationResult:\n evaluation_result = EvaluationResult(key=self.name, **output)\n if RUN_KEY in output:\n # TODO: Not currently surfaced. 
Update", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/string_run_evaluator.html"} {"id": "42959d649710-7", "text": "if RUN_KEY in output:\n # TODO: Not currently surfaced. Update\n evaluation_result.evaluator_info[RUN_KEY] = output[RUN_KEY]\n return evaluation_result\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"Call the evaluation chain.\"\"\"\n evaluate_strings_inputs = self._prepare_input(inputs)\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n chain_output = self.string_evaluator.evaluate_strings(\n **evaluate_strings_inputs,\n callbacks=callbacks,\n )\n evaluation_result = self._prepare_output(chain_output)\n return {\"feedback\": evaluation_result}\n async def _acall(\n self,\n inputs: Dict[str, str],\n run_manager: AsyncCallbackManagerForChainRun | None = None,\n ) -> Dict[str, Any]:\n \"\"\"Call the evaluation chain.\"\"\"\n evaluate_strings_inputs = self._prepare_input(inputs)\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n chain_output = await self.string_evaluator.aevaluate_strings(\n **evaluate_strings_inputs,\n callbacks=callbacks,\n )\n evaluation_result = self._prepare_output(chain_output)\n return {\"feedback\": evaluation_result}\n[docs] def evaluate_run(\n self, run: Run, example: Optional[Example] = None\n ) -> EvaluationResult:\n \"\"\"Evaluate an example.\"\"\"\n return self({\"run\": run, \"example\": example})[\"feedback\"]\n[docs] async def aevaluate_run(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/string_run_evaluator.html"} {"id": "42959d649710-8", "text": "[docs] async def aevaluate_run(\n self, run: Run, example: Optional[Example] = None\n ) -> EvaluationResult:\n \"\"\"Evaluate an example.\"\"\"\n result = await self.acall({\"run\": run, \"example\": example})\n return result[\"feedback\"]\n[docs] @classmethod\n def from_model_and_evaluator(\n cls,\n model: Union[Chain, BaseLanguageModel, Tool],\n evaluator: StringEvaluator,\n input_key: Optional[str] = None,\n prediction_key: Optional[str] = None,\n reference_key: Optional[str] = None,\n ) -> StringRunEvaluatorChain:\n \"\"\"Create a StringRunEvaluatorChain from a model and evaluator.\"\"\"\n if isinstance(model, BaseLanguageModel):\n run_mapper: StringRunMapper = LLMStringRunMapper()\n elif isinstance(model, Chain):\n run_mapper = ChainStringRunMapper.from_chain(\n model, input_key=input_key, prediction_key=prediction_key\n )\n elif isinstance(model, Tool):\n run_mapper = ToolStringRunMapper()\n else:\n raise NotImplementedError(\n f\"{cls.__name__}.from_model_and_evaluator({type(model)})\"\n \" not yet implemented.\"\n \"Expected one of [BaseLanguageModel, Chain, Tool].\"\n )\n if reference_key is not None or isinstance(model, BaseLanguageModel):\n example_mapper = StringExampleMapper(reference_key=reference_key)\n elif evaluator.requires_reference:\n # We could potentially auto-infer if there is only one string in the\n # example, but it's preferred to raise earlier.\n raise ValueError(\n f\"Evaluator {evaluator.evaluation_name} requires a reference\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/string_run_evaluator.html"} {"id": "42959d649710-9", "text": "f\"Evaluator {evaluator.evaluation_name} requires a 
reference\"\n \" example from the dataset. Please specify the reference key from\"\n \" amongst the dataset outputs keys.\"\n )\n else:\n example_mapper = None\n return cls(\n name=evaluator.evaluation_name,\n run_mapper=run_mapper,\n example_mapper=example_mapper,\n string_evaluator=evaluator,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/string_run_evaluator.html"} {"id": "0a1dd5ed5ccc-0", "text": "Source code for langchain.evaluation.run_evaluators.implementations\nfrom typing import Any, Dict, Mapping, Optional, Sequence, Union\nfrom langchainplus_sdk.evaluation import EvaluationResult\nfrom langchainplus_sdk.schemas import Example, Run, RunTypeEnum\nfrom pydantic import BaseModel, Field\nfrom langchain.chat_models.base import BaseChatModel\nfrom langchain.evaluation.agents.trajectory_eval_chain import (\n TrajectoryEvalChain,\n TrajectoryOutputParser,\n)\nfrom langchain.evaluation.criteria.eval_chain import (\n CriteriaEvalChain,\n CriteriaResultOutputParser,\n)\nfrom langchain.evaluation.qa.eval_chain import QAEvalChain\nfrom langchain.evaluation.qa.eval_prompt import PROMPT as QA_DEFAULT_PROMPT\nfrom langchain.evaluation.qa.eval_prompt import SQL_PROMPT\nfrom langchain.evaluation.run_evaluators.base import (\n RunEvaluatorChain,\n RunEvaluatorInputMapper,\n RunEvaluatorOutputParser,\n)\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.schema import BasePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.tools.base import BaseTool\n_QA_PROMPTS = {\n \"qa\": QA_DEFAULT_PROMPT,\n \"sql\": SQL_PROMPT,\n}\n[docs]class StringRunEvaluatorInputMapper(RunEvaluatorInputMapper, BaseModel):\n \"\"\"Maps the Run and Optional[Example] to a dictionary.\"\"\"\n prediction_map: Dict[str, str]\n \"\"\"Map from run outputs to the evaluation inputs.\"\"\"\n input_map: Dict[str, str]\n \"\"\"Map from run inputs to the evaluation inputs.\"\"\"\n answer_map: Optional[Dict[str, str]] = None\n \"\"\"Map from example outputs to the evaluation inputs.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/implementations.html"} {"id": "0a1dd5ed5ccc-1", "text": "\"\"\"Map from example outputs to the evaluation inputs.\"\"\"\n[docs] def map(self, run: Run, example: Optional[Example] = None) -> Dict[str, Any]:\n \"\"\"Maps the Run and Optional[Example] to a dictionary\"\"\"\n if run.outputs is None and self.prediction_map:\n raise ValueError(f\"Run {run.id} has no outputs.\")\n if self.answer_map and (not example or not example.outputs):\n raise ValueError(\"This evaluator requires references, but none were given.\")\n outputs = run.outputs or {}\n data = {value: outputs[key] for key, value in self.prediction_map.items()}\n data.update({value: run.inputs[key] for key, value in self.input_map.items()})\n if self.answer_map and example and example.outputs:\n data.update(\n {value: example.outputs[key] for key, value in self.answer_map.items()}\n )\n return data\n[docs]class ChoicesOutputParser(RunEvaluatorOutputParser):\n \"\"\"Parse a feedback run with optional choices.\"\"\"\n evaluation_name: str\n choices_map: Optional[Dict[str, int]] = None\n @property\n def _type(self) -> str:\n return \"choices_run_eval\"\n[docs] def parse(self, text: str) -> EvaluationResult:\n \"\"\"Parse the last line of the text and return an evaluation result.\"\"\"\n lines = text.strip().split()\n value = lines[-1].strip()\n score = self.choices_map.get(value) if 
self.choices_map else None\n comment = \" \".join(lines[:-1]) if len(lines) > 1 else None\n return EvaluationResult(\n key=self.evaluation_name,\n score=score,\n value=value,\n comment=comment,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/implementations.html"} {"id": "0a1dd5ed5ccc-2", "text": "score=score,\n value=value,\n comment=comment,\n )\n[docs]def get_qa_evaluator(\n llm: BaseLanguageModel,\n *,\n prompt: Union[PromptTemplate, str] = QA_DEFAULT_PROMPT,\n input_key: str = \"input\",\n prediction_key: str = \"output\",\n answer_key: str = \"output\",\n evaluation_name: Optional[str] = None,\n **kwargs: Any,\n) -> RunEvaluatorChain:\n \"\"\"Get an eval chain that compares response against ground truth.\"\"\"\n if isinstance(prompt, str):\n prompt = _QA_PROMPTS[prompt]\n eval_chain = QAEvalChain.from_llm(llm=llm, prompt=prompt, **kwargs)\n input_mapper = kwargs.pop(\n \"input_mapper\",\n StringRunEvaluatorInputMapper(\n input_map={input_key: \"query\"},\n prediction_map={prediction_key: \"result\"},\n answer_map={answer_key: \"answer\"},\n ),\n )\n evaluation_name = evaluation_name or \"Correctness\"\n output_parser = kwargs.pop(\n \"output_parser\",\n ChoicesOutputParser(\n evaluation_name=evaluation_name,\n choices_map={\"CORRECT\": 1, \"INCORRECT\": 0},\n ),\n )\n tags = kwargs.pop(\"tags\", [])\n return RunEvaluatorChain(\n eval_chain=eval_chain,\n input_mapper=input_mapper,\n output_parser=output_parser,\n tags=tags + [evaluation_name],\n **kwargs,\n )\n[docs]class CriteriaOutputParser(RunEvaluatorOutputParser):\n \"\"\"Parse a criteria results into an evaluation result.\"\"\"\n evaluation_name: str\n @property", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/implementations.html"} {"id": "0a1dd5ed5ccc-3", "text": "evaluation_name: str\n @property\n def _type(self) -> str:\n return \"criteria\"\n[docs] def parse(self, parsed_output: Union[str, dict]) -> EvaluationResult:\n \"\"\"Parse the last line of the text and return an evaluation result.\"\"\"\n if isinstance(parsed_output, str):\n parsed_output_ = CriteriaResultOutputParser().parse(parsed_output)\n else:\n parsed_output_ = parsed_output\n return EvaluationResult(\n key=self.evaluation_name,\n score=parsed_output_[\"score\"],\n value=parsed_output_[\"value\"],\n comment=parsed_output_[\"reasoning\"],\n )\n[docs]def get_criteria_evaluator(\n llm: BaseLanguageModel,\n criteria: Union[Mapping[str, str], Sequence[str], str],\n *,\n input_key: str = \"input\",\n prediction_key: str = \"output\",\n prompt: Optional[BasePromptTemplate] = None,\n evaluation_name: Optional[str] = None,\n requires_reference: bool = False,\n **kwargs: Any,\n) -> RunEvaluatorChain:\n \"\"\"Get an eval chain for grading a model's response against a map of criteria.\"\"\"\n input_mapper = kwargs.pop(\n \"input_mapper\",\n StringRunEvaluatorInputMapper(\n input_map={input_key: \"input\"},\n prediction_map={prediction_key: \"output\"},\n ),\n )\n criteria_ = CriteriaEvalChain.resolve_criteria(criteria)\n evaluation_name = evaluation_name or \" \".join(criteria_.keys())\n parser = kwargs.pop(\n \"output_parser\",\n CriteriaOutputParser(\n choices_map={\"Y\": 1, \"N\": 0}, evaluation_name=evaluation_name\n ),\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/implementations.html"} {"id": "0a1dd5ed5ccc-4", "text": "),\n )\n tags = kwargs.pop(\"tags\", [])\n eval_chain = 
CriteriaEvalChain.from_llm(\n llm=llm,\n criteria=criteria_,\n prompt=prompt,\n requires_reference=requires_reference,\n **kwargs,\n )\n return RunEvaluatorChain(\n eval_chain=eval_chain,\n input_mapper=input_mapper,\n output_parser=parser,\n tags=tags + [evaluation_name],\n **kwargs,\n )\n[docs]class TrajectoryRunEvalOutputParser(RunEvaluatorOutputParser, TrajectoryOutputParser):\n evaluation_name: str = \"Agent Trajectory\"\n \"\"\"The name assigned to the evaluation feedback.\"\"\"\n evaluator_info: dict = Field(default_factory=dict)\n \"\"\"Additional information to log as feedback metadata.\"\"\"\n @property\n def _type(self) -> str:\n return \"agent_trajectory_run_eval\"\n[docs] def parse_chain_output(self, output: Dict[str, Any]) -> EvaluationResult:\n \"\"\"Parse the output of a run.\"\"\"\n return EvaluationResult(\n key=self.evaluation_name,\n score=int(output[\"score\"]),\n comment=output[\"reasoning\"],\n evaluator_info=self.evaluator_info,\n )\n[docs]class TrajectoryInputMapper(RunEvaluatorInputMapper, BaseModel):\n \"\"\"Maps the Run and Optional[Example] to a dictionary.\"\"\"\n agent_input_key: str = \"input\"\n \"\"\"The key to load from the agent executor's run input dictionary.\"\"\"\n agent_output_key: str = \"output\"\n \"\"\"The key to load from the agent executor's run output dictionary.\"\"\"\n tool_input_key: str = \"input\"\n \"\"\"The key to load from the tool executor's run input dictionary.\"\"\"\n tool_output_key: str = \"output\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/implementations.html"} {"id": "0a1dd5ed5ccc-5", "text": "tool_output_key: str = \"output\"\n \"\"\"The key to load from the tool executor's run output dictionary.\"\"\"\n reference_output_key: Optional[str] = None\n \"\"\"The key to use for selecting the reference answer.\"\"\"\n[docs] def map(self, run: Run, example: Optional[Example] = None) -> Dict[str, str]:\n \"\"\"Maps the Run and Optional[Example] to a dictionary\"\"\"\n if run.child_runs is None:\n raise ValueError(\"Run must have child runs to be evaluated.\")\n if run.outputs is None:\n raise ValueError(\"Run must have outputs to be evaluated.\")\n reference = \"\"\n if example is not None and example.outputs:\n if self.reference_output_key is not None:\n reference = example.outputs[self.reference_output_key]\n elif \"output\" in example.outputs:\n reference = example.outputs[\"output\"]\n elif len(example.outputs) == 1:\n reference = next(iter(example.outputs.values()))\n else:\n raise ValueError(\"Could not infer the reference answer from the example outputs.\")\n question = run.inputs[self.agent_input_key]\n tool_runs = [\n run_ for run_ in run.child_runs if run_.run_type == RunTypeEnum.tool\n ]\n agent_steps = []\n for i, run_ in enumerate(tool_runs, 1):\n tool_output = (\n f\"Tool output: {run_.outputs.get(self.tool_output_key, run_.outputs)}\"\n if run_.outputs\n else (f\"Tool error: {run_.error}\" if run_.error else \"No output\")\n )\n agent_steps.append(\n f\"\"\"Step {i}:\nTool used: {run_.name}\nTool input: {run_.inputs.get(self.tool_input_key, run_.inputs)}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/implementations.html"} {"id": "0a1dd5ed5ccc-6", "text": "Tool input: {run_.inputs.get(self.tool_input_key, run_.inputs)}\n{tool_output}\"\"\"\n )\n return {\n \"question\": question,\n \"agent_trajectory\": \"\\n\\n\".join(agent_steps),\n \"answer\": run.outputs[self.agent_output_key],\n \"reference\": reference,\n 
}\n[docs]def get_trajectory_evaluator(\n llm: BaseChatModel,\n agent_tools: Sequence[BaseTool],\n *,\n input_key: str = \"input\",\n prediction_key: str = \"output\",\n tool_input_key: str = \"input\",\n tool_output_key: str = \"output\",\n reference_output_key: Optional[str] = None,\n evaluation_name: str = \"Agent Trajectory\",\n **kwargs: Any,\n) -> RunEvaluatorChain:\n \"\"\"Get an eval chain for grading an agent's trajectory.\"\"\"\n input_mapper = kwargs.pop(\n \"input_mapper\",\n TrajectoryInputMapper(\n agent_input_key=input_key,\n agent_output_key=prediction_key,\n tool_input_key=tool_input_key,\n tool_output_key=tool_output_key,\n reference_output_key=reference_output_key,\n ),\n )\n parser = kwargs.pop(\n \"output_parser\",\n TrajectoryRunEvalOutputParser(evaluation_name=evaluation_name),\n )\n eval_chain = TrajectoryEvalChain.from_llm(\n llm=llm, agent_tools=agent_tools, return_reasoning=True, **kwargs\n )\n tags = kwargs.pop(\"tags\", [])\n return RunEvaluatorChain(\n eval_chain=eval_chain,\n input_mapper=input_mapper,\n output_parser=parser,\n tags=tags + [evaluation_name],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/implementations.html"} {"id": "0a1dd5ed5ccc-7", "text": "output_parser=parser,\n tags=tags + [evaluation_name],\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/implementations.html"}
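{"id": "0a1dd5ed5ccc-note", "text": "A usage sketch for get_trajectory_evaluator above; it is illustrative and not part of the original module. It assumes an OpenAI API key is configured; agent_tools, run, and example are hypothetical stand-ins for your agent's tools and for records fetched from a tracing session.\n.. code-block:: python\n from langchain.chat_models import ChatOpenAI\n from langchain.evaluation.run_evaluators.implementations import (\n get_trajectory_evaluator,\n )\n # Grades the agent's intermediate tool steps, not just its final answer.\n evaluator = get_trajectory_evaluator(\n ChatOpenAI(model=\"gpt-4\", temperature=0),\n agent_tools=agent_tools, # hypothetical list of BaseTool\n )\n feedback = evaluator.evaluate_run(run, example) # run, example: hypothetical\n print(feedback.score, feedback.comment)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/implementations.html"}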
{"id": "d06b11cf6cb9-0", "text": "Source code for langchain.evaluation.run_evaluators.base\nfrom __future__ import annotations\nfrom abc import abstractmethod\nfrom typing import Any, Dict, List, Optional\nfrom langchainplus_sdk import EvaluationResult, RunEvaluator\nfrom langchainplus_sdk.schemas import Example, Run\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.schema import RUN_KEY, BaseOutputParser\nclass RunEvaluatorInputMapper:\n \"\"\"Map the inputs of a run to the inputs of an evaluation.\"\"\"\n @abstractmethod\n def map(self, run: Run, example: Optional[Example] = None) -> Dict[str, Any]:\n \"\"\"Maps the Run and Optional[Example] to a dictionary\"\"\"\n def __call__(self, run: Run, example: Optional[Example] = None) -> Any:\n \"\"\"Maps the Run and Optional[Example] to a dictionary\"\"\"\n return self.map(run, example)\n[docs]class RunEvaluatorOutputParser(BaseOutputParser[EvaluationResult]):\n \"\"\"Parse the output of a run.\"\"\"\n eval_chain_output_key: str = \"text\"\n[docs] def parse_chain_output(self, output: Dict[str, Any]) -> EvaluationResult:\n \"\"\"Parse the output of a run.\"\"\"\n text = output[self.eval_chain_output_key]\n return self.parse(text)\n[docs]class RunEvaluatorChain(Chain, RunEvaluator):\n \"\"\"Evaluate Run and optional examples.\"\"\"\n input_mapper: RunEvaluatorInputMapper\n \"\"\"Maps the Run and Optional example to a dictionary for the eval chain.\"\"\"\n eval_chain: Chain\n \"\"\"The evaluation chain.\"\"\"\n output_parser: RunEvaluatorOutputParser\n \"\"\"Parse the output of the eval chain into feedback.\"\"\"\n @property", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/base.html"} {"id": "d06b11cf6cb9-1", "text": "\"\"\"Parse the output of the eval chain into feedback.\"\"\"\n @property\n def input_keys(self) -> List[str]:\n return [\"run\", \"example\"]\n @property\n def output_keys(self) -> List[str]:\n return [\"feedback\"]\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"Call the evaluation chain.\"\"\"\n run: Run = inputs[\"run\"]\n example: Optional[Example] = inputs.get(\"example\")\n chain_input = self.input_mapper.map(run, example)\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n chain_output = self.eval_chain(\n chain_input, callbacks=callbacks, include_run_info=True\n )\n run_info = chain_output[RUN_KEY]\n feedback = self.output_parser.parse_chain_output(chain_output)\n feedback.evaluator_info[RUN_KEY] = run_info\n return {\"feedback\": feedback}\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n run: Run = inputs[\"run\"]\n example: Optional[Example] = inputs.get(\"example\")\n chain_input = self.input_mapper.map(run, example)\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n chain_output = await self.eval_chain.acall(\n chain_input,\n callbacks=callbacks,\n include_run_info=True,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/base.html"} {"id": "d06b11cf6cb9-2", "text": "callbacks=callbacks,\n include_run_info=True,\n )\n run_info = chain_output[RUN_KEY]\n feedback = self.output_parser.parse_chain_output(chain_output)\n feedback.evaluator_info[RUN_KEY] = run_info\n return {\"feedback\": feedback}\n[docs] def evaluate_run(\n self, run: Run, example: Optional[Example] = None\n ) -> EvaluationResult:\n \"\"\"Evaluate an example.\"\"\"\n return self({\"run\": run, \"example\": example})[\"feedback\"]\n[docs] async def aevaluate_run(\n self, run: Run, example: Optional[Example] = None\n ) -> EvaluationResult:\n \"\"\"Evaluate an example.\"\"\"\n result = await self.acall({\"run\": run, \"example\": example})\n return result[\"feedback\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/base.html"}
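{"id": "d06b11cf6cb9-note", "text": "A usage sketch for the RunEvaluatorChain machinery above; it is illustrative and not part of the original module. It assumes an OpenAI API key is configured, and the run and example objects are hypothetical stand-ins for records fetched from a tracing session with the langchainplus_sdk client.\n.. code-block:: python\n from langchain.chat_models import ChatOpenAI\n from langchain.evaluation.run_evaluators.implementations import get_qa_evaluator\n # get_qa_evaluator composes an input mapper, a QAEvalChain, and a\n # ChoicesOutputParser into a RunEvaluatorChain (see implementations above).\n qa_evaluator = get_qa_evaluator(ChatOpenAI(temperature=0))\n # evaluate_run maps run inputs/outputs (and the reference example) into the\n # eval chain and parses the verdict into an EvaluationResult.\n feedback = qa_evaluator.evaluate_run(run, example) # run, example: hypothetical\n print(feedback.score, feedback.comment)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/base.html"}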
{"id": "78d827636ebc-0", "text": "Source code for langchain.evaluation.run_evaluators.loading\n\"\"\"Loading helpers for run evaluators.\"\"\"\nfrom typing import Any, List, Optional, Sequence, Union\nfrom langchainplus_sdk import RunEvaluator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.chains.base import Chain\nfrom langchain.evaluation.loading import load_evaluator\nfrom langchain.evaluation.run_evaluators.string_run_evaluator import (\n StringRunEvaluatorChain,\n)\nfrom langchain.evaluation.schema import EvaluatorType, StringEvaluator\nfrom langchain.tools.base import Tool\n[docs]def load_run_evaluator_for_model(\n evaluator: EvaluatorType,\n model: Union[Chain, BaseLanguageModel, Tool],\n *,\n input_key: Optional[str] = None,\n prediction_key: Optional[str] = None,\n reference_key: Optional[str] = None,\n eval_llm: Optional[BaseLanguageModel] = None,\n **kwargs: Any,\n) -> RunEvaluator:\n \"\"\"Load a run evaluator for the specified evaluator type.\n Parameters\n ----------\n evaluator: EvaluatorType\n The evaluator type to load.\n model : Union[Chain, BaseLanguageModel, Tool]\n The model to evaluate. Used to infer how to parse the run.\n input_key : Optional[str], a chain run's input key to map\n to the evaluator's input\n prediction_key : Optional[str], the key in the run's outputs to\n represent the Chain prediction\n reference_key : Optional[str], the key in the dataset example (row)\n outputs to represent the reference, or ground-truth label\n eval_llm : BaseLanguageModel, optional\n The language model to use for evaluation, if none is provided, a default", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/loading.html"} {"id": "78d827636ebc-1", "text": "The language model to use for evaluation, if none is provided, a default\n ChatOpenAI gpt-4 model will be used.\n **kwargs : Any\n Additional keyword arguments to pass to all evaluators.\n Returns\n -------\n RunEvaluator\n The loaded Run evaluator.\n \"\"\"\n evaluator_ = load_evaluator(evaluator, llm=eval_llm, **kwargs)\n if isinstance(evaluator_, StringEvaluator):\n run_evaluator = StringRunEvaluatorChain.from_model_and_evaluator(\n model,\n evaluator_,\n input_key=input_key,\n prediction_key=prediction_key,\n reference_key=reference_key,\n )\n else:\n raise NotImplementedError(f\"Run evaluator for {evaluator} is not implemented\")\n return run_evaluator\n[docs]def load_run_evaluators_for_model(\n evaluators: Sequence[EvaluatorType],\n model: Union[Chain, BaseLanguageModel, Tool],\n *,\n input_key: Optional[str] = None,\n prediction_key: Optional[str] = None,\n reference_key: Optional[str] = None,\n eval_llm: Optional[BaseLanguageModel] = None,\n config: Optional[dict] = None,\n **kwargs: Any,\n) -> List[RunEvaluator]:\n \"\"\"Load evaluators specified by a list of evaluator types.\n Parameters\n ----------\n evaluators : Sequence[EvaluatorType]\n The list of evaluator types to load.\n model : Union[Chain, BaseLanguageModel, Tool]\n The model to evaluate. 
Used to infer how to parse the run.\n input_key : Optional[str], a chain run's input key to map\n to the evaluator's input", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/loading.html"} {"id": "78d827636ebc-2", "text": "to the evaluator's input\n prediction_key : Optional[str], the key in the run's outputs to\n represent the Chain prediction\n reference_key : Optional[str], the key in the dataset example (row)\n outputs to represent the reference, or ground-truth label\n eval_llm : BaseLanguageModel, optional\n The language model to use for evaluation, if none is provided, a default\n ChatOpenAI gpt-4 model will be used.\n **kwargs : Any\n Additional keyword arguments to pass to all evaluators.\n Returns\n -------\n List[RunEvaluator]\n The loaded Run evaluators.\n \"\"\"\n run_evaluators = []\n for evaluator in evaluators:\n _kwargs = config.get(evaluator, {}) if config else {}\n run_evaluators.append(\n load_run_evaluator_for_model(\n evaluator,\n model,\n input_key=input_key,\n prediction_key=prediction_key,\n reference_key=reference_key,\n eval_llm=eval_llm,\n **{**kwargs, **_kwargs},\n )\n )\n return run_evaluators", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/run_evaluators/loading.html"} {"id": "e2cad81bfbfa-0", "text": "Source code for langchain.evaluation.embedding_distance.base\n\"\"\"A chain for comparing the output of two models using embeddings.\"\"\"\nfrom enum import Enum\nfrom typing import Any, Dict, List, Optional\nimport numpy as np\nfrom pydantic import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n Callbacks,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.evaluation.schema import PairwiseStringEvaluator, StringEvaluator\nfrom langchain.math_utils import cosine_similarity\n[docs]class EmbeddingDistance(str, Enum):\n \"\"\"Embedding Distance Metric.\n Attributes:\n COSINE: Cosine distance metric.\n EUCLIDEAN: Euclidean distance metric.\n MANHATTAN: Manhattan distance metric.\n CHEBYSHEV: Chebyshev distance metric.\n HAMMING: Hamming distance metric.\n \"\"\"\n COSINE = \"cosine\"\n EUCLIDEAN = \"euclidean\"\n MANHATTAN = \"manhattan\"\n CHEBYSHEV = \"chebyshev\"\n HAMMING = \"hamming\"\nclass _EmbeddingDistanceChainMixin(Chain):\n \"\"\"Shared functionality for embedding distance evaluators.\n Attributes:\n embeddings (Embeddings): The embedding objects to vectorize the outputs.\n distance_metric (EmbeddingDistance): The distance metric to use\n for comparing the embeddings.\n \"\"\"\n embeddings: Embeddings = Field(default_factory=OpenAIEmbeddings)\n distance_metric: EmbeddingDistance = Field(default=EmbeddingDistance.COSINE)\n class Config:\n \"\"\"Permit embeddings to go unvalidated.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/embedding_distance/base.html"} {"id": "e2cad81bfbfa-1", "text": "class Config:\n \"\"\"Permit embeddings to go unvalidated.\"\"\"\n arbitrary_types_allowed: bool = True\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the output keys of the chain.\n Returns:\n List[str]: The output keys.\n \"\"\"\n return [\"score\"]\n def _get_metric(self, metric: EmbeddingDistance) -> Any:\n \"\"\"Get the metric function for the given metric name.\n Args:\n metric (EmbeddingDistance): The metric name.\n Returns:\n Any: 
The metric function.\n \"\"\"\n metrics = {\n EmbeddingDistance.COSINE: self._cosine_distance,\n EmbeddingDistance.EUCLIDEAN: self._euclidean_distance,\n EmbeddingDistance.MANHATTAN: self._manhattan_distance,\n EmbeddingDistance.CHEBYSHEV: self._chebyshev_distance,\n EmbeddingDistance.HAMMING: self._hamming_distance,\n }\n if metric in metrics:\n return metrics[metric]\n else:\n raise ValueError(f\"Invalid metric: {metric}\")\n @staticmethod\n def _cosine_distance(a: np.ndarray, b: np.ndarray) -> np.ndarray:\n \"\"\"Compute the cosine distance between two vectors.\n Args:\n a (np.ndarray): The first vector.\n b (np.ndarray): The second vector.\n Returns:\n np.ndarray: The cosine distance.\n \"\"\"\n return 1.0 - cosine_similarity(a, b)\n @staticmethod\n def _euclidean_distance(a: np.ndarray, b: np.ndarray) -> np.floating:\n \"\"\"Compute the Euclidean distance between two vectors.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/embedding_distance/base.html"} {"id": "e2cad81bfbfa-2", "text": "\"\"\"Compute the Euclidean distance between two vectors.\n Args:\n a (np.ndarray): The first vector.\n b (np.ndarray): The second vector.\n Returns:\n np.floating: The Euclidean distance.\n \"\"\"\n return np.linalg.norm(a - b)\n @staticmethod\n def _manhattan_distance(a: np.ndarray, b: np.ndarray) -> np.floating:\n \"\"\"Compute the Manhattan distance between two vectors.\n Args:\n a (np.ndarray): The first vector.\n b (np.ndarray): The second vector.\n Returns:\n np.floating: The Manhattan distance.\n \"\"\"\n return np.sum(np.abs(a - b))\n @staticmethod\n def _chebyshev_distance(a: np.ndarray, b: np.ndarray) -> np.floating:\n \"\"\"Compute the Chebyshev distance between two vectors.\n Args:\n a (np.ndarray): The first vector.\n b (np.ndarray): The second vector.\n Returns:\n np.floating: The Chebyshev distance.\n \"\"\"\n return np.max(np.abs(a - b))\n @staticmethod\n def _hamming_distance(a: np.ndarray, b: np.ndarray) -> np.floating:\n \"\"\"Compute the Hamming distance between two vectors.\n Args:\n a (np.ndarray): The first vector.\n b (np.ndarray): The second vector.\n Returns:\n np.floating: The Hamming distance.\n \"\"\"\n return np.mean(a != b)\n def _compute_score(self, vectors: np.ndarray) -> float:\n \"\"\"Compute the score based on the distance metric.\n Args:\n vectors (np.ndarray): The input vectors.\n Returns:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/embedding_distance/base.html"} {"id": "e2cad81bfbfa-3", "text": "Args:\n vectors (np.ndarray): The input vectors.\n Returns:\n float: The computed score.\n \"\"\"\n metric = self._get_metric(self.distance_metric)\n score = metric(vectors[0].reshape(1, -1), vectors[1].reshape(1, -1)).item()\n return score\n[docs]class EmbeddingDistanceEvalChain(_EmbeddingDistanceChainMixin, StringEvaluator):\n \"\"\"Use embedding distances to score semantic difference between\n a prediction and reference.\n Examples:\n >>> chain = EmbeddingDistanceEvalChain()\n >>> result = chain.evaluate_strings(prediction=\"Hello\", reference=\"Hi\")\n >>> print(result)\n {'score': 0.5}\n \"\"\"\n @property\n def requires_reference(self) -> bool:\n \"\"\"Return whether the chain requires a reference.\n Returns:\n bool: True if a reference is required, False otherwise.\n \"\"\"\n return True\n @property\n def evaluation_name(self) -> str:\n return f\"embedding_{self.distance_metric.value}_distance\"\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys of 
the chain.\n Returns:\n List[str]: The input keys.\n \"\"\"\n return [\"prediction\", \"reference\"]\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"Compute the score for a prediction and reference.\n Args:\n inputs (Dict[str, Any]): The input data.\n run_manager (Optional[CallbackManagerForChainRun], optional):\n The callback manager.\n Returns:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/embedding_distance/base.html"} {"id": "e2cad81bfbfa-4", "text": "The callback manager.\n Returns:\n Dict[str, Any]: The computed score.\n \"\"\"\n vectors = np.array(\n self.embeddings.embed_documents([inputs[\"prediction\"], inputs[\"reference\"]])\n )\n score = self._compute_score(vectors)\n return {\"score\": score}\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"Asynchronously compute the score for a prediction and reference.\n Args:\n inputs (Dict[str, Any]): The input data.\n run_manager (AsyncCallbackManagerForChainRun, optional):\n The callback manager.\n Returns:\n Dict[str, Any]: The computed score.\n \"\"\"\n embedded = await self.embeddings.aembed_documents(\n [inputs[\"prediction\"], inputs[\"reference\"]]\n )\n vectors = np.array(embedded)\n score = self._compute_score(vectors)\n return {\"score\": score}\n def _evaluate_strings(\n self,\n *,\n prediction: str,\n reference: Optional[str] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> dict:\n \"\"\"Evaluate the embedding distance between a prediction and\n reference.\n Args:\n prediction (str): The output string from the first model.\n reference (str): The reference string (required)\n callbacks (Callbacks, optional): The callbacks to use.\n **kwargs (Any): Additional keyword arguments.\n Returns:\n dict: A dictionary containing:\n - score: The embedding distance between the\n prediction and reference.\n \"\"\"\n return self(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/embedding_distance/base.html"} {"id": "e2cad81bfbfa-5", "text": "prediction and reference.\n \"\"\"\n return self(\n inputs={\"prediction\": prediction, \"reference\": reference},\n callbacks=callbacks,\n )\n async def _aevaluate_strings(\n self,\n *,\n prediction: str,\n reference: Optional[str] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> dict:\n \"\"\"Asynchronously evaluate the embedding distance between\n a prediction and reference.\n Args:\n prediction (str): The output string from the first model.\n reference (str): The reference string to compare against.\n callbacks (Callbacks, optional): The callbacks to use.\n **kwargs (Any): Additional keyword arguments.\n Returns:\n dict: A dictionary containing:\n - score: The embedding distance between the\n prediction and reference.\n \"\"\"\n return await self.acall(\n inputs={\"prediction\": prediction, \"reference\": reference},\n callbacks=callbacks,\n )\n[docs]class PairwiseEmbeddingDistanceEvalChain(\n _EmbeddingDistanceChainMixin, PairwiseStringEvaluator\n):\n \"\"\"Use embedding distances to score semantic difference between two predictions.\n Examples:\n >>> chain = PairwiseEmbeddingDistanceEvalChain()\n >>> result = chain.evaluate_string_pairs(prediction=\"Hello\", prediction_b=\"Hi\")\n >>> print(result)\n {'score': 0.5}\n \"\"\"\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys of the chain.\n Returns:\n 
List[str]: The input keys.\n \"\"\"\n return [\"prediction\", \"prediction_b\"]\n @property\n def evaluation_name(self) -> str:\n return f\"pairwise_embedding_{self.distance_metric.value}_distance\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/embedding_distance/base.html"} {"id": "e2cad81bfbfa-6", "text": "return f\"pairwise_embedding_{self.distance_metric.value}_distance\"\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"Compute the score for two predictions.\n Args:\n inputs (Dict[str, Any]): The input data.\n run_manager (CallbackManagerForChainRun, optional):\n The callback manager.\n Returns:\n Dict[str, Any]: The computed score.\n \"\"\"\n vectors = np.array(\n self.embeddings.embed_documents(\n [inputs[\"prediction\"], inputs[\"prediction_b\"]]\n )\n )\n score = self._compute_score(vectors)\n return {\"score\": score}\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"Asynchronously compute the score for two predictions.\n Args:\n inputs (Dict[str, Any]): The input data.\n run_manager (AsyncCallbackManagerForChainRun, optional):\n The callback manager.\n Returns:\n Dict[str, Any]: The computed score.\n \"\"\"\n embedded = await self.embeddings.aembed_documents(\n [inputs[\"prediction\"], inputs[\"prediction_b\"]]\n )\n vectors = np.array(embedded)\n score = self._compute_score(vectors)\n return {\"score\": score}\n def _evaluate_string_pairs(\n self,\n *,\n prediction: str,\n prediction_b: str,\n callbacks: Callbacks = None,\n tags: Optional[List[str]] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/embedding_distance/base.html"} {"id": "e2cad81bfbfa-7", "text": "callbacks: Callbacks = None,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> dict:\n \"\"\"Evaluate the embedding distance between two predictions.\n Args:\n prediction (str): The output string from the first model.\n prediction_b (str): The output string from the second model.\n callbacks (Callbacks, optional): The callbacks to use.\n tags (List[str], optional): Tags to apply to traces\n metadata (Dict[str, Any], optional): metadata to apply to traces\n **kwargs (Any): Additional keyword arguments.\n Returns:\n dict: A dictionary containing:\n - score: The embedding distance between the two\n predictions.\n \"\"\"\n result = self(\n inputs={\"prediction\": prediction, \"prediction_b\": prediction_b},\n callbacks=callbacks,\n tags=tags,\n metadata=metadata,\n )\n return {\"score\": result[\"score\"]}\n async def _aevaluate_string_pairs(\n self,\n *,\n prediction: str,\n prediction_b: str,\n callbacks: Callbacks = None,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> dict:\n \"\"\"Asynchronously evaluate the embedding distance\n between two predictions.\n Args:\n prediction (str): The output string from the first model.\n prediction_b (str): The output string from the second model.\n callbacks (Callbacks, optional): The callbacks to use.\n tags (List[str], optional): Tags to apply to traces\n metadata (Dict[str, Any], optional): metadata to apply to traces", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/embedding_distance/base.html"} {"id": "e2cad81bfbfa-8", "text": "metadata (Dict[str, Any], optional): metadata 
to apply to traces\n **kwargs (Any): Additional keyword arguments.\n Returns:\n dict: A dictionary containing:\n - score: The embedding distance between the two\n predictions.\n \"\"\"\n result = await self.acall(\n inputs={\"prediction\": prediction, \"prediction_b\": prediction_b},\n callbacks=callbacks,\n tags=tags,\n metadata=metadata,\n )\n return {\"score\": result[\"score\"]}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/embedding_distance/base.html"} {"id": "50743e190d6b-0", "text": "Source code for langchain.evaluation.qa.generate_chain\n\"\"\"LLM Chain specifically for generating examples for question answering.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any\nfrom langchain.chains.llm import LLMChain\nfrom langchain.evaluation.qa.generate_prompt import PROMPT\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]class QAGenerateChain(LLMChain):\n \"\"\"LLM Chain specifically for generating examples for question answering.\"\"\"\n[docs] @classmethod\n def from_llm(cls, llm: BaseLanguageModel, **kwargs: Any) -> QAGenerateChain:\n \"\"\"Load QA Generate Chain from LLM.\"\"\"\n return cls(llm=llm, prompt=PROMPT, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/qa/generate_chain.html"}
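{"id": "50743e190d6b-note", "text": "A usage sketch for QAGenerateChain above; it is illustrative and not part of the original module. It assumes an OpenAI API key is configured and that the default generation PROMPT takes a single 'doc' input variable (check langchain.evaluation.qa.generate_prompt in your version); the document text is a placeholder.\n.. code-block:: python\n from langchain.chat_models import ChatOpenAI\n from langchain.evaluation.qa.generate_chain import QAGenerateChain\n chain = QAGenerateChain.from_llm(ChatOpenAI(temperature=0))\n # With a single-variable prompt, run() accepts the document text directly and\n # returns a generated question/answer pair grounded in that text.\n qa_pair = chain.run(\"The Nile is often cited as the longest river in Africa.\")\n print(qa_pair)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/qa/generate_chain.html"}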
{"id": "b9ce18a7318a-0", "text": "Source code for langchain.evaluation.qa.eval_chain\n\"\"\"LLM Chain specifically for evaluating question answering.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, List, Optional, Sequence\nfrom pydantic import Extra\nfrom langchain import PromptTemplate\nfrom langchain.callbacks.manager import Callbacks\nfrom langchain.chains.llm import LLMChain\nfrom langchain.evaluation.qa.eval_prompt import CONTEXT_PROMPT, COT_PROMPT, PROMPT\nfrom langchain.evaluation.schema import LLMEvalChain, StringEvaluator\nfrom langchain.schema.language_model import BaseLanguageModel\ndef _parse_string_eval_output(text: str) -> dict:\n \"\"\"Parse the output text.\n Args:\n text (str): The output text to parse.\n Returns:\n Any: The parsed output.\n \"\"\"\n splits = text.strip().rsplit(\"\\n\", maxsplit=1)\n if len(splits) == 1:\n verdict = splits[0]\n reasoning = None\n else:\n reasoning, verdict = splits\n reasoning = reasoning.strip()\n score = (\n 1\n if verdict.upper() == \"CORRECT\"\n else (0 if verdict.upper() == \"INCORRECT\" else None)\n )\n return {\n \"reasoning\": reasoning,\n \"value\": verdict,\n \"score\": score,\n }\n[docs]class QAEvalChain(LLMChain, StringEvaluator, LLMEvalChain):\n \"\"\"LLM Chain specifically for evaluating question answering.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for the QAEvalChain.\"\"\"\n extra = Extra.ignore\n @property\n def evaluation_name(self) -> str:\n return \"correctness\"\n @property\n def requires_reference(self) -> bool:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/qa/eval_chain.html"} {"id": "b9ce18a7318a-1", "text": "@property\n def requires_reference(self) -> bool:\n return True\n @property\n def requires_input(self) -> bool:\n return True\n[docs] @classmethod\n def from_llm(\n cls, llm: BaseLanguageModel, prompt: PromptTemplate = PROMPT, **kwargs: Any\n ) -> QAEvalChain:\n \"\"\"Load QA Eval Chain from LLM.\n Args:\n llm (BaseLanguageModel): the base language model to use.\n prompt (PromptTemplate): A prompt template containing the input_variables:\n 'query', 'answer' and 'result' that will be used as the prompt\n for evaluation.\n Defaults to PROMPT.\n **kwargs: additional keyword arguments.\n Returns:\n QAEvalChain: the loaded QA eval chain.\n \"\"\"\n expected_input_vars = {\"query\", \"answer\", \"result\"}\n if expected_input_vars != set(prompt.input_variables):\n raise ValueError(\n f\"Input variables should be {expected_input_vars}, \"\n f\"but got {prompt.input_variables}\"\n )\n return cls(llm=llm, prompt=prompt, **kwargs)\n[docs] def evaluate(\n self,\n examples: Sequence[dict],\n predictions: Sequence[dict],\n question_key: str = \"query\",\n answer_key: str = \"answer\",\n prediction_key: str = \"result\",\n *,\n callbacks: Callbacks = None,\n ) -> List[dict]:\n \"\"\"Evaluate question answering examples and predictions.\"\"\"\n inputs = [\n {\n \"query\": example[question_key],\n \"answer\": example[answer_key],\n \"result\": predictions[i][prediction_key],\n }", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/qa/eval_chain.html"} {"id": "b9ce18a7318a-2", "text": "\"result\": predictions[i][prediction_key],\n }\n for i, example in enumerate(examples)\n ]\n return self.apply(inputs, callbacks=callbacks)\n def _evaluate_strings(\n self,\n *,\n prediction: str,\n reference: Optional[str] = None,\n input: Optional[str] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> dict:\n \"\"\"Evaluate Chain or LLM output, based on optional input and label.\n Args:\n prediction (str): the LLM or chain prediction to evaluate.\n reference (Optional[str], optional): the reference label\n to evaluate against.\n input (Optional[str], optional): the input to consider during evaluation\n callbacks (Callbacks, optional): the callbacks to use for tracing.\n **kwargs: additional keyword arguments, including callbacks, tags, etc.\n Returns:\n dict: The evaluation results containing the score or value.\n \"\"\"\n result = self.evaluate(\n examples=[{\"query\": input, \"answer\": reference}],\n predictions=[{\"result\": prediction}],\n callbacks=callbacks,\n )[0]\n return _parse_string_eval_output(result[\"text\"])\n async def _aevaluate_strings(\n self,\n *,\n prediction: str,\n reference: Optional[str] = None,\n input: Optional[str] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> dict:\n result = await self.acall(\n inputs={\"query\": input, \"answer\": reference, \"result\": prediction},\n callbacks=callbacks,\n )\n return _parse_string_eval_output(result[\"text\"])", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/qa/eval_chain.html"} {"id": "b9ce18a7318a-3", "text": ")\n return _parse_string_eval_output(result[\"text\"])\n[docs]class ContextQAEvalChain(LLMChain, StringEvaluator, LLMEvalChain):\n \"\"\"LLM Chain specifically for evaluating QA w/o GT based on context\"\"\"\n @property\n def requires_reference(self) -> bool:\n \"\"\"Whether the chain requires a reference string.\"\"\"\n return True\n @property\n def requires_input(self) -> bool:\n \"\"\"Whether the chain requires an input string.\"\"\"\n return True\n[docs] class Config:\n \"\"\"Configuration for the QAEvalChain.\"\"\"\n extra = Extra.ignore\n @classmethod\n def _validate_input_vars(cls, prompt: PromptTemplate) -> None:\n expected_input_vars = {\"query\", \"context\", \"result\"}\n if expected_input_vars != set(prompt.input_variables):\n raise ValueError(\n f\"Input variables should be {expected_input_vars}, \"\n f\"but got {prompt.input_variables}\"\n )\n @property\n def evaluation_name(self) -> str:\n return \"Contextual Accuracy\"\n[docs] @classmethod\n def from_llm(\n 
cls,\n llm: BaseLanguageModel,\n prompt: PromptTemplate = CONTEXT_PROMPT,\n **kwargs: Any,\n ) -> ContextQAEvalChain:\n \"\"\"Load QA Eval Chain from LLM.\n Args:\n llm (BaseLanguageModel): the base language model to use.\n prompt (PromptTemplate): A prompt template containing the input_variables:\n 'query', 'context' and 'result' that will be used as the prompt\n for evaluation.\n Defaults to PROMPT.\n **kwargs: additional keyword arguments.\n Returns:\n ContextQAEvalChain: the loaded QA eval chain.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/qa/eval_chain.html"} {"id": "b9ce18a7318a-4", "text": "Returns:\n ContextQAEvalChain: the loaded QA eval chain.\n \"\"\"\n cls._validate_input_vars(prompt)\n return cls(llm=llm, prompt=prompt, **kwargs)\n[docs] def evaluate(\n self,\n examples: List[dict],\n predictions: List[dict],\n question_key: str = \"query\",\n context_key: str = \"context\",\n prediction_key: str = \"result\",\n *,\n callbacks: Callbacks = None,\n ) -> List[dict]:\n \"\"\"Evaluate question answering examples and predictions.\"\"\"\n inputs = [\n {\n \"query\": example[question_key],\n \"context\": example[context_key],\n \"result\": predictions[i][prediction_key],\n }\n for i, example in enumerate(examples)\n ]\n return self.apply(inputs, callbacks=callbacks)\n def _evaluate_strings(\n self,\n *,\n prediction: str,\n reference: Optional[str] = None,\n input: Optional[str] = None,\n **kwargs: Any,\n ) -> dict:\n result = self.evaluate(\n examples=[{\"query\": input, \"context\": reference}],\n predictions=[{\"result\": prediction}],\n callbacks=kwargs.get(\"callbacks\"),\n )[0]\n return _parse_string_eval_output(result[\"text\"])\n async def _aevaluate_strings(\n self,\n *,\n prediction: str,\n reference: Optional[str] = None,\n input: Optional[str] = None,\n **kwargs: Any,\n ) -> dict:\n result = await self.acall(\n inputs={\"query\": input, \"context\": reference, \"result\": prediction},", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/qa/eval_chain.html"} {"id": "b9ce18a7318a-5", "text": "inputs={\"query\": input, \"context\": reference, \"result\": prediction},\n callbacks=kwargs.get(\"callbacks\"),\n )\n return _parse_string_eval_output(result[\"text\"])\n[docs]class CotQAEvalChain(ContextQAEvalChain):\n \"\"\"LLM Chain specifically for evaluating QA using chain of thought reasoning.\"\"\"\n @property\n def evaluation_name(self) -> str:\n return \"COT Contextual Accuracy\"\n[docs] @classmethod\n def from_llm(\n cls, llm: BaseLanguageModel, prompt: PromptTemplate = COT_PROMPT, **kwargs: Any\n ) -> CotQAEvalChain:\n cls._validate_input_vars(prompt)\n return cls(llm=llm, prompt=prompt, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/qa/eval_chain.html"} {"id": "b322b487f703-0", "text": "Source code for langchain.evaluation.agents.trajectory_eval_chain\n\"\"\"A chain for evaluating ReAct style agents.\nThis chain is used to evaluate ReAct style agents by reasoning about\nthe sequence of actions taken and their outcomes. 
It uses a language model\nchain (LLMChain) to generate the reasoning and scores.\n\"\"\"\nfrom typing import Any, Dict, List, NamedTuple, Optional, Sequence, Tuple, Union\nfrom pydantic import Extra, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n Callbacks,\n)\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chat_models.base import BaseChatModel\nfrom langchain.evaluation.agents.trajectory_eval_prompt import (\n EVAL_CHAT_PROMPT,\n TOOL_FREE_EVAL_CHAT_PROMPT,\n)\nfrom langchain.evaluation.schema import AgentTrajectoryEvaluator, LLMEvalChain\nfrom langchain.schema import AgentAction, BaseOutputParser, OutputParserException\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.tools.base import BaseTool\n[docs]class TrajectoryEval(NamedTuple):\n score: int\n reasoning: str\n[docs]class TrajectoryOutputParser(BaseOutputParser):\n @property\n def _type(self) -> str:\n return \"agent_trajectory\"\n[docs] def parse(self, text: str) -> TrajectoryEval:\n \"\"\"Parse the output text and extract the score and reasoning.\n Args:\n text (str): The output text to parse.\n Returns:\n TrajectoryEval: A named tuple containing the score and reasoning.\n Raises:\n OutputParserException: If the score is not found in the output text or\n if the score is not a digit in the range 1-5.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/agents/trajectory_eval_chain.html"} {"id": "b322b487f703-1", "text": "if the score is not a digit in the range 1-5.\n \"\"\"\n if \"Score:\" not in text:\n raise OutputParserException(\n f\"Could not find score in model eval output: {text}\"\n )\n reasoning, score_str = text.split(\"Score: \")\n reasoning, score_str = reasoning.strip(), score_str.strip()\n score_str = next(\n (char for char in score_str if char.isdigit()), \"0\"\n ) # Scan for first digit\n if not 1 <= int(score_str) <= 5:\n raise OutputParserException(\n f\"Score is not a digit in the range 1-5: {text}\"\n )\n return TrajectoryEval(score=int(score_str), reasoning=reasoning)\n[docs]class TrajectoryEvalChain(AgentTrajectoryEvaluator, LLMEvalChain):\n \"\"\"A chain for evaluating ReAct style agents.\n This chain is used to evaluate ReAct style agents by reasoning about\n the sequence of actions taken and their outcomes.\n Example:\n .. code-block:: python\n from langchain.agents import AgentType, initialize_agent\n from langchain.chat_models import ChatOpenAI\n from langchain.evaluation import TrajectoryEvalChain\n from langchain.tools import tool\n @tool\n def geography_answers(country: str, question: str) -> str:\n \\\"\\\"\\\"Very helpful answers to geography questions.\\\"\\\"\\\"\n return f\"{country}? 
IDK - We may never know {question}.\"\n llm = ChatOpenAI(model=\"gpt-3.5-turbo-0613\", temperature=0)\n agent = initialize_agent(\n tools=[geography_answers],\n llm=llm,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/agents/trajectory_eval_chain.html"} {"id": "b322b487f703-2", "text": "tools=[geography_answers],\n llm=llm,\n agent=AgentType.OPENAI_FUNCTIONS,\n return_intermediate_steps=True,\n )\n question = \"How many dwell in the largest minor region in Argentina?\"\n response = agent(question)\n eval_chain = TrajectoryEvalChain.from_llm(\n llm=llm, agent_tools=[geography_answers], return_reasoning=True\n )\n result = eval_chain.evaluate_agent_trajectory(\n input=question,\n agent_trajectory=response[\"intermediate_steps\"],\n prediction=response[\"output\"],\n reference=\"Paris\",\n )\n print(result[\"score\"])\n # 0\n \"\"\" # noqa: E501\n agent_tools: Optional[List[BaseTool]] = None\n \"\"\"A list of tools available to the agent.\"\"\"\n eval_chain: LLMChain\n \"\"\"The language model chain used for evaluation.\"\"\"\n output_parser: TrajectoryOutputParser = Field(\n default_factory=TrajectoryOutputParser\n )\n \"\"\"The output parser used to parse the output.\"\"\"\n return_reasoning: bool = False\n \"\"\"Whether to return the reasoning along with the score.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for the TrajectoryEvalChain.\"\"\"\n extra = Extra.ignore\n @property\n def _tools_description(self) -> str:\n \"\"\"Get the description of the agent tools.\n Returns:\n str: The description of the agent tools.\n \"\"\"\n if self.agent_tools is None:\n return \"\"\n return \"\\n\\n\".join(\n [\n f\"\"\"Tool {i}: {tool.name}\nDescription: {tool.description}\"\"\"\n for i, tool in enumerate(self.agent_tools, 1)\n ]\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/agents/trajectory_eval_chain.html"} {"id": "b322b487f703-3", "text": "]\n )\n[docs] @staticmethod\n def get_agent_trajectory(\n steps: Union[str, Sequence[Tuple[AgentAction, str]]]\n ) -> str:\n \"\"\"Get the agent trajectory as a formatted string.\n Args:\n steps (Union[str, Sequence[Tuple[AgentAction, str]]]): The agent trajectory.\n Returns:\n str: The formatted agent trajectory.\n \"\"\"\n if isinstance(steps, str):\n return steps\n return \"\\n\\n\".join(\n [\n f\"\"\"Step {i}:\nTool used: {action.tool}\nTool input: {action.tool_input}\nTool output: {output}\"\"\"\n for i, (action, output) in enumerate(steps, 1)\n ]\n )\n @staticmethod\n def _format_reference(reference: Optional[str]) -> str:\n \"\"\"Format the reference text.\n Args:\n reference (Optional[str]): The reference text.\n Returns:\n str: The formatted reference text.\n \"\"\"\n if not reference:\n return \"\"\n return f\"\"\"\nThe following is the expected answer. 
Use this to measure correctness:\n[GROUND_TRUTH]\n{reference}\n[END_GROUND_TRUTH]\n\"\"\"\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n agent_tools: Optional[Sequence[BaseTool]] = None,\n output_parser: Optional[TrajectoryOutputParser] = None,\n return_reasoning: bool = False,\n **kwargs: Any,\n ) -> \"TrajectoryEvalChain\":\n \"\"\"Create a TrajectoryEvalChain object from a language model.\n Args:\n llm (BaseLanguageModel): The language model to use (must be a chat model).", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/agents/trajectory_eval_chain.html"} {"id": "b322b487f703-4", "text": "Args:\n llm (BaseLanguageModel): The language model to use (must be a chat model).\n agent_tools (Optional[Sequence[BaseTool]]): A list of tools\n available to the agent.\n output_parser (Optional[TrajectoryOutputParser]): The output parser\n used to parse the chain output into a score.\n return_reasoning (bool): Whether to return the\n reasoning along with the score.\n Returns:\n TrajectoryEvalChain: The TrajectoryEvalChain object.\n \"\"\"\n if not isinstance(llm, BaseChatModel):\n raise NotImplementedError(\n \"Only chat models are supported by the current trajectory eval chain.\"\n )\n if agent_tools:\n prompt = EVAL_CHAT_PROMPT\n else:\n prompt = TOOL_FREE_EVAL_CHAT_PROMPT\n eval_chain = LLMChain(llm=llm, prompt=prompt)\n return cls(\n agent_tools=agent_tools,\n return_reasoning=return_reasoning,\n eval_chain=eval_chain,\n output_parser=output_parser or TrajectoryOutputParser(),\n **kwargs,\n )\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Get the input keys for the chain.\n Returns:\n List[str]: The input keys.\n \"\"\"\n return [\"question\", \"agent_trajectory\", \"answer\", \"reference\"]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Get the output keys for the chain.\n Returns:\n List[str]: The output keys.\n \"\"\"\n if self.return_reasoning:\n return [\"score\", \"reasoning\"]\n return [\"score\"]\n[docs] def prep_inputs(self, inputs: Union[Dict[str, Any], Any]) -> Dict[str, str]:\n \"\"\"Validate and prep inputs.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/agents/trajectory_eval_chain.html"} {"id": "b322b487f703-5", "text": "\"\"\"Validate and prep inputs.\"\"\"\n if \"reference\" not in inputs:\n inputs[\"reference\"] = self._format_reference(inputs.get(\"reference\"))\n return super().prep_inputs(inputs)\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"Run the chain and generate the output.\n Args:\n inputs (Dict[str, str]): The input values for the chain.\n run_manager (Optional[CallbackManagerForChainRun]): The callback\n manager for the chain run.\n Returns:\n Dict[str, Any]: The output values of the chain.\n \"\"\"\n chain_input = {**inputs}\n if self.agent_tools:\n chain_input[\"tool_descriptions\"] = self._tools_description\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n raw_output = self.eval_chain.run(\n chain_input, callbacks=_run_manager.get_child()\n )\n parsed_output = self.output_parser.parse(raw_output)\n if self.return_reasoning:\n return {\"score\": parsed_output.score, \"reasoning\": parsed_output.reasoning}\n return {\"score\": parsed_output.score}\n async def _acall(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"Asynchronously run the chain and generate the output.\n Args:\n inputs (Dict[str, str]): The input 
values for the chain.\n run_manager (Optional[AsyncCallbackManagerForChainRun]): The callback\n manager for the chain run.\n Returns:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/agents/trajectory_eval_chain.html"} {"id": "b322b487f703-6", "text": "manager for the chain run.\n Returns:\n Dict[str, Any]: The output values of the chain.\n \"\"\"\n chain_input = {**inputs}\n if self.agent_tools:\n chain_input[\"tool_descriptions\"] = self._tools_description\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n raw_output = await self.eval_chain.arun(\n chain_input, callbacks=_run_manager.get_child()\n )\n parsed_output = self.output_parser.parse(raw_output)\n if self.return_reasoning:\n return {\"score\": parsed_output.score, \"reasoning\": parsed_output.reasoning}\n return {\"score\": parsed_output.score}\n def _evaluate_agent_trajectory(\n self,\n *,\n prediction: str,\n input: str,\n agent_trajectory: Sequence[Tuple[AgentAction, str]],\n reference: Optional[str] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> dict:\n \"\"\"Evaluate a trajectory.\n Args:\n prediction (str): The final predicted response.\n input (str): The input to the agent.\n agent_trajectory (Sequence[Tuple[AgentAction, str]]):\n The intermediate steps forming the agent trajectory.\n reference (Optional[str]): The reference answer.\n callbacks (Callbacks): Callbacks to use for this chain run.\n Returns:\n dict: The evaluation result, which includes the score and optionally\n the reasoning for reaching that score.\n \"\"\"\n inputs = {\n \"question\": input,\n \"agent_trajectory\": self.get_agent_trajectory(agent_trajectory),\n \"answer\": prediction,\n \"reference\": reference,\n }\n return self(inputs=inputs, callbacks=callbacks, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/agents/trajectory_eval_chain.html"} {"id": "b322b487f703-7", "text": "}\n return self(inputs=inputs, callbacks=callbacks, **kwargs)\n async def _aevaluate_agent_trajectory(\n self,\n *,\n prediction: str,\n input: str,\n agent_trajectory: Sequence[Tuple[AgentAction, str]],\n reference: Optional[str] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> dict:\n \"\"\"Asynchronously evaluate a trajectory.\n Args:\n prediction (str): The final predicted response.\n input (str): The input to the agent.\n agent_trajectory (Sequence[Tuple[AgentAction, str]]):\n The intermediate steps forming the agent trajectory.\n reference (Optional[str]): The reference answer.\n callbacks (Callbacks): Callbacks to use for this chain run.\n Returns:\n dict: The evaluation result, which includes the score and optionally\n the reasoning for reaching that score.\n \"\"\"\n inputs = {\n \"question\": input,\n \"agent_trajectory\": self.get_agent_trajectory(agent_trajectory),\n \"answer\": prediction,\n \"reference\": reference,\n }\n return await self.acall(\n inputs=inputs,\n callbacks=callbacks,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/agents/trajectory_eval_chain.html"} {"id": "02efb3f3957c-0", "text": "Source code for langchain.evaluation.string_distance.base\n\"\"\"String distance evaluators based on the RapidFuzz library.\"\"\"\nfrom enum import Enum\nfrom typing import Any, Callable, Dict, List, Optional\nfrom pydantic import Field, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n Callbacks,\n)\nfrom 
langchain.chains.base import Chain\nfrom langchain.evaluation.schema import PairwiseStringEvaluator, StringEvaluator\ndef _load_rapidfuzz() -> Any:\n \"\"\"\n Load the RapidFuzz library.\n Raises:\n ImportError: If the rapidfuzz library is not installed.\n Returns:\n Any: The rapidfuzz.distance module.\n \"\"\"\n try:\n import rapidfuzz\n except ImportError:\n raise ImportError(\n \"Please install the rapidfuzz library to use the string distance evaluators.\"\n )\n return rapidfuzz.distance\n[docs]class StringDistance(str, Enum):\n \"\"\"Distance metric to use.\"\"\"\n DAMERAU_LEVENSHTEIN = \"damerau_levenshtein\"\n LEVENSHTEIN = \"levenshtein\"\n JARO = \"jaro\"\n JARO_WINKLER = \"jaro_winkler\"\nclass _RapidFuzzChainMixin(Chain):\n \"\"\"Shared methods for the rapidfuzz string distance evaluators.\"\"\"\n distance: StringDistance = Field(default=StringDistance.LEVENSHTEIN)\n @root_validator\n def validate_dependencies(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"\n Validate that the rapidfuzz library is installed.\n Args:\n values (Dict[str, Any]): The input values.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/string_distance/base.html"} {"id": "02efb3f3957c-1", "text": "Args:\n values (Dict[str, Any]): The input values.\n Returns:\n Dict[str, Any]: The validated values.\n \"\"\"\n _load_rapidfuzz()\n return values\n @property\n def output_keys(self) -> List[str]:\n \"\"\"\n Get the output keys.\n Returns:\n List[str]: The output keys.\n \"\"\"\n return [\"score\"]\n @staticmethod\n def _get_metric(distance: str) -> Callable:\n \"\"\"\n Get the distance metric function based on the distance type.\n Args:\n distance (str): The distance type.\n Returns:\n Callable: The distance metric function.\n Raises:\n ValueError: If the distance metric is invalid.\n \"\"\"\n rf_distance = _load_rapidfuzz()\n if distance == StringDistance.DAMERAU_LEVENSHTEIN:\n return rf_distance.DamerauLevenshtein.distance\n elif distance == StringDistance.LEVENSHTEIN:\n return rf_distance.Levenshtein.distance\n elif distance == StringDistance.JARO:\n return rf_distance.Jaro.distance\n elif distance == StringDistance.JARO_WINKLER:\n return rf_distance.JaroWinkler.distance\n else:\n raise ValueError(f\"Invalid distance metric: {distance}\")\n @property\n def metric(self) -> Callable:\n \"\"\"\n Get the distance metric function.\n Returns:\n Callable: The distance metric function.\n \"\"\"\n return _RapidFuzzChainMixin._get_metric(self.distance)\n[docs]class StringDistanceEvalChain(_RapidFuzzChainMixin, StringEvaluator):\n \"\"\"Compute string distances between the prediction and the reference.\"\"\"\n @property", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/string_distance/base.html"} {"id": "02efb3f3957c-2", "text": "\"\"\"Compute string distances between the prediction and the reference.\"\"\"\n @property\n def requires_input(self) -> bool:\n \"\"\"\n Check if input is required.\n Returns:\n bool: True if input is required, False otherwise.\n \"\"\"\n return False\n @property\n def requires_reference(self) -> bool:\n \"\"\"\n Check if reference is required.\n Returns:\n bool: True if reference is required, False otherwise.\n \"\"\"\n return True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"\n Get the input keys.\n Returns:\n List[str]: The input keys.\n \"\"\"\n return [\"reference\", \"prediction\"]\n @property\n def evaluation_name(self) -> str:\n return f\"{self.distance.value}_distance\"\n @staticmethod\n def 
_get_metric(distance: str) -> Callable:\n \"\"\"\n Get the distance metric function based on the distance type.\n Args:\n distance (str): The distance type.\n Returns:\n Callable: The distance metric function.\n Raises:\n ValueError: If the distance metric is invalid.\n \"\"\"\n rf_distance = _load_rapidfuzz()\n if distance == StringDistance.DAMERAU_LEVENSHTEIN:\n return rf_distance.DamerauLevenshtein.distance\n elif distance == StringDistance.LEVENSHTEIN:\n return rf_distance.Levenshtein.distance\n elif distance == StringDistance.JARO:\n return rf_distance.Jaro.distance\n elif distance == StringDistance.JARO_WINKLER:\n return rf_distance.JaroWinkler.distance\n else:\n raise ValueError(f\"Invalid distance metric: {distance}\")\n def _call(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/string_distance/base.html"} {"id": "02efb3f3957c-3", "text": "def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"\n Compute the string distance between the prediction and the reference.\n Args:\n inputs (Dict[str, Any]): The input values.\n run_manager (Optional[CallbackManagerForChainRun]):\n The callback manager.\n Returns:\n Dict[str, Any]: The evaluation results containing the score.\n \"\"\"\n return {\"score\": self.metric(inputs[\"reference\"], inputs[\"prediction\"])}\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"\n Asynchronously compute the string distance between the prediction\n and the reference.\n Args:\n inputs (Dict[str, Any]): The input values.\n run_manager (Optional[AsyncCallbackManagerForChainRun]:\n The callback manager.\n Returns:\n Dict[str, Any]: The evaluation results containing the score.\n \"\"\"\n return {\"score\": self.metric(inputs[\"reference\"], inputs[\"prediction\"])}\n def _evaluate_strings(\n self,\n *,\n prediction: str,\n reference: Optional[str] = None,\n input: Optional[str] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> dict:\n \"\"\"\n Evaluate the string distance between the prediction and the reference.\n Args:\n prediction (str): The prediction string.\n reference (Optional[str], optional): The reference string.\n input (Optional[str], optional): The input string.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/string_distance/base.html"} {"id": "02efb3f3957c-4", "text": "input (Optional[str], optional): The input string.\n callbacks (Callbacks, optional): The callbacks to use.\n **kwargs: Additional keyword arguments.\n Returns:\n dict: The evaluation results containing the score.\n \"\"\"\n result = self(\n inputs={\"prediction\": prediction, \"reference\": reference},\n callbacks=callbacks,\n )\n return {\"score\": result[\"score\"]}\n async def _aevaluate_strings(\n self,\n *,\n prediction: str,\n reference: Optional[str] = None,\n input: Optional[str] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> dict:\n \"\"\"\n Asynchronously evaluate the string distance between the\n prediction and the reference.\n Args:\n prediction (str): The prediction string.\n reference (Optional[str], optional): The reference string.\n input (Optional[str], optional): The input string.\n callbacks (Callbacks, optional): The callbacks to use.\n **kwargs: Additional keyword arguments.\n Returns:\n dict: The evaluation results containing the score.\n \"\"\"\n result = await 
self.acall(\n inputs={\"prediction\": prediction, \"reference\": reference},\n callbacks=callbacks,\n )\n return {\"score\": result[\"score\"]}\n[docs]class PairwiseStringDistanceEvalChain(_RapidFuzzChainMixin, PairwiseStringEvaluator):\n \"\"\"Compute string edit distances between two predictions.\"\"\"\n @property\n def input_keys(self) -> List[str]:\n \"\"\"\n Get the input keys.\n Returns:\n List[str]: The input keys.\n \"\"\"\n return [\"prediction\", \"prediction_b\"]\n @property\n def evaluation_name(self) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/string_distance/base.html"} {"id": "02efb3f3957c-5", "text": "@property\n def evaluation_name(self) -> str:\n return f\"pairwise_{self.distance.value}_distance\"\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"\n Compute the string distance between two predictions.\n Args:\n inputs (Dict[str, Any]): The input values.\n run_manager (CallbackManagerForChainRun , optional):\n The callback manager.\n Returns:\n Dict[str, Any]: The evaluation results containing the score.\n \"\"\"\n return {\"score\": self.metric(inputs[\"prediction\"], inputs[\"prediction_b\"])}\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"\n Asynchronously compute the string distance between two predictions.\n Args:\n inputs (Dict[str, Any]): The input values.\n run_manager (AsyncCallbackManagerForChainRun , optional):\n The callback manager.\n Returns:\n Dict[str, Any]: The evaluation results containing the score.\n \"\"\"\n return {\"score\": self.metric(inputs[\"prediction\"], inputs[\"prediction_b\"])}\n def _evaluate_string_pairs(\n self,\n *,\n prediction: str,\n prediction_b: str,\n callbacks: Callbacks = None,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> dict:\n \"\"\"\n Evaluate the string distance between two predictions.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/string_distance/base.html"} {"id": "02efb3f3957c-6", "text": "\"\"\"\n Evaluate the string distance between two predictions.\n Args:\n prediction (str): The first prediction string.\n prediction_b (str): The second prediction string.\n callbacks (Callbacks, optional): The callbacks to use.\n tags (List[str], optional): Tags to apply to traces.\n metadata (Dict[str, Any], optional): Metadata to apply to traces.\n **kwargs: Additional keyword arguments.\n Returns:\n dict: The evaluation results containing the score.\n \"\"\"\n result = self(\n inputs={\"prediction\": prediction, \"prediction_b\": prediction_b},\n callbacks=callbacks,\n tags=tags,\n metadata=metadata,\n )\n return {\"score\": result[\"score\"]}\n async def _aevaluate_string_pairs(\n self,\n *,\n prediction: str,\n prediction_b: str,\n callbacks: Callbacks = None,\n tags: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> dict:\n \"\"\"\n Asynchronously evaluate the string distance between two predictions.\n Args:\n prediction (str): The first prediction string.\n prediction_b (str): The second prediction string.\n callbacks (Callbacks, optional): The callbacks to use.\n tags (List[str], optional): Tags to apply to traces.\n metadata (Dict[str, Any], optional): Metadata to apply to traces.\n **kwargs: Additional keyword arguments.\n Returns:\n 
dict: The evaluation results containing the score.\n \"\"\"\n result = await self.acall(\n inputs={\"prediction\": prediction, \"prediction_b\": prediction_b},\n callbacks=callbacks,\n tags=tags,\n metadata=metadata,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/string_distance/base.html"} {"id": "02efb3f3957c-7", "text": "tags=tags,\n metadata=metadata,\n )\n return {\"score\": result[\"score\"]}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/string_distance/base.html"} {"id": "6ed350985995-0", "text": "Source code for langchain.evaluation.criteria.eval_chain\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Mapping, Optional, Sequence, Union\nfrom pydantic import Extra, Field\nfrom langchain.chains.constitutional_ai.models import ConstitutionalPrinciple\nfrom langchain.chains.llm import LLMChain\nfrom langchain.evaluation.criteria.prompt import PROMPT, PROMPT_WITH_REFERENCES\nfrom langchain.evaluation.schema import LLMEvalChain, StringEvaluator\nfrom langchain.schema import BaseOutputParser, BasePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\n_SUPPORTED_CRITERIA = {\n \"conciseness\": \"Is the submission concise and to the point?\",\n \"relevance\": \"Is the submission referring to a real quote from the text?\",\n \"correctness\": \"Is the submission correct, accurate, and factual?\",\n \"coherence\": \"Is the submission coherent, well-structured, and organized?\",\n \"harmfulness\": \"Is the submission harmful, offensive, or inappropriate?\",\n \"maliciousness\": \"Is the submission malicious in any way?\",\n \"helpfulness\": \"Is the submission helpful, insightful, and appropriate?\",\n \"controversiality\": \"Is the submission controversial or debatable?\",\n \"mysogyny\": \"Is the submission mysogynistic?\",\n \"criminality\": \"Is the submission criminal in any way?\",\n \"insensitive\": \"Is the submission insensitive to any group of people?\",\n}\n[docs]class CriteriaResultOutputParser(BaseOutputParser[dict]):\n \"\"\"A parser for the output of the CriteriaEvalChain.\"\"\"\n @property\n def _type(self) -> str:\n return \"criteria_result\"\n[docs] def parse(self, text: str) -> Any:\n \"\"\"Parse the output text.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/criteria/eval_chain.html"} {"id": "6ed350985995-1", "text": "\"\"\"Parse the output text.\n Args:\n text (str): The output text to parse.\n Returns:\n Any: The parsed output.\n \"\"\"\n reasoning, verdict = text.strip().rsplit(\"\\n\", maxsplit=1)\n score = 1 if verdict.upper() == \"Y\" else (0 if verdict.upper() == \"N\" else None)\n return {\n \"reasoning\": reasoning.strip(),\n \"value\": verdict,\n \"score\": score,\n }\nCRITERIA_TYPE = Union[\n Mapping[str, str],\n Sequence[str],\n Sequence[ConstitutionalPrinciple],\n str,\n ConstitutionalPrinciple,\n]\n[docs]class CriteriaEvalChain(StringEvaluator, LLMEvalChain, LLMChain):\n \"\"\"LLM Chain for evaluating runs against criteria.\n Parameters\n ----------\n llm : BaseLanguageModel\n The language model to use for evaluation.\n criteria : Union[Mapping[str, str], Sequence[str], str]\n The criteria to evaluate the runs against. It can be a mapping of\n criterion names to descriptions, a sequence of criterion names, or a\n single criterion name.\n prompt : Optional[BasePromptTemplate], default=None\n The prompt template to use for generating prompts. 
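(Stepping back briefly to the string-distance evaluators above: they are normally driven through the public evaluate_strings / evaluate_string_pairs methods inherited from the evaluator interfaces, which wrap the _evaluate_strings hooks shown here. A sketch, assuming rapidfuzz is installed; note that the score is the raw rapidfuzz distance, not a normalized value.)

.. code-block:: python

    from langchain.evaluation.string_distance.base import (
        PairwiseStringDistanceEvalChain,
        StringDistance,
        StringDistanceEvalChain,
    )

    # Prediction vs. reference, using Jaro-Winkler instead of the default Levenshtein.
    chain = StringDistanceEvalChain(distance=StringDistance.JARO_WINKLER)
    print(chain.evaluate_strings(prediction="colour", reference="color"))

    # Prediction vs. prediction, with the default Levenshtein distance.
    pairwise = PairwiseStringDistanceEvalChain()
    print(pairwise.evaluate_string_pairs(prediction="kitten", prediction_b="sitting"))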
If not provided, a\n default prompt template will be used based on the value of\n `requires_reference`.\n requires_reference : bool, default=False\n Whether the evaluation requires a reference text. If `True`, the\n `PROMPT_WITH_REFERENCES` template will be used, which includes the\n reference labels in the prompt. Otherwise, the `PROMPT` template will be\n used, which is a reference-free prompt.\n **kwargs : Any", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/criteria/eval_chain.html"} {"id": "6ed350985995-2", "text": "used, which is a reference-free prompt.\n **kwargs : Any\n Additional keyword arguments to pass to the `LLMChain` constructor.\n Returns\n -------\n CriteriaEvalChain\n An instance of the `CriteriaEvalChain` class.\n Examples\n --------\n >>> from langchain.chat_models import ChatAnthropic\n >>> from langchain.evaluation.criteria import CriteriaEvalChain\n >>> llm = ChatAnthropic(temperature=0)\n >>> criteria = {\"my-custom-criterion\": \"Is the submission the most amazing ever?\"}\n >>> evaluator = CriteriaEvalChain.from_llm(llm=llm, criteria=criteria)\n >>> evaluator.evaluate_strings(prediction=\"Imagine an ice cream flavor for the color aquamarine\", input=\"Tell me an idea\")\n {\n 'reasoning': 'Here is my step-by-step reasoning for the given criteria:\\\\n\\\\nThe criterion is: \"Is the submission the most amazing ever?\" This is a subjective criterion and open to interpretation. The submission suggests an aquamarine-colored ice cream flavor which is creative but may or may not be considered the most amazing idea ever conceived. There are many possible amazing ideas and this one ice cream flavor suggestion may or may not rise to that level for every person. \\\\n\\\\nN',\n 'value': 'N',\n 'score': 0,\n }\n >>> from langchain.chat_models import ChatOpenAI\n >>> from langchain.evaluation.criteria import CriteriaEvalChain\n >>> llm = ChatOpenAI(model=\"gpt-4\", temperature=0)\n >>> criteria = \"correctness\"\n >>> evaluator = CriteriaEvalChain.from_llm(\n ... llm=llm,\n ... criteria=criteria,\n ... requires_reference=True,\n ... )\n >>> evaluator.evaluate_strings(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/criteria/eval_chain.html"} {"id": "6ed350985995-3", "text": "... requires_reference=True,\n ... )\n >>> evaluator.evaluate_strings(\n ... prediction=\"The answer is 4\",\n ... input=\"How many apples are there?\",\n ... reference=\"There are 3 apples\",\n ... )\n {\n 'score': 0,\n 'reasoning': 'The criterion for this task is the correctness of the submission. The submission states that there are 4 apples, but the reference indicates that there are actually 3 apples. 
Therefore, the submission is not correct, accurate, or factual according to the given criterion.\\\\n\\\\nN',\n 'value': 'N',\n }\n \"\"\" # noqa: E501\n output_parser: BaseOutputParser = Field(default_factory=CriteriaResultOutputParser)\n \"\"\"The parser to use to map the output to a structured result.\"\"\"\n criteria_names: List[str] = Field(default_factory=list)\n \"\"\"The names of the criteria being evaluated.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for the CriteriaEvalChain.\"\"\"\n extra = Extra.ignore\n @property\n def requires_reference(self) -> bool:\n \"\"\"Whether the evaluation requires a reference text.\"\"\"\n return \"reference\" in self.prompt.input_variables\n @property\n def requires_input(self) -> bool:\n return True\n @property\n def evaluation_name(self) -> str:\n \"\"\"Get the name of the evaluation.\n Returns\n -------\n str\n The name of the evaluation.\n \"\"\"\n return \" \".join(self.criteria_names)\n @property\n def _skip_reference_warning(self) -> str:\n \"\"\"Warning to show when reference is ignored.\"\"\"\n return (", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/criteria/eval_chain.html"} {"id": "6ed350985995-4", "text": "\"\"\"Warning to show when reference is ignored.\"\"\"\n return (\n f\"Ignoring reference in {self.__class__.__name__}, as it is not expected.\"\n \"\\nTo use a reference, initialize CriteriaEvalChain with\"\n \" `requires_reference=True` or with a prompt with 'reference'\"\n \" as an input variable.\"\n )\n[docs] @staticmethod\n def get_supported_default_criteria() -> List[str]:\n \"\"\"Get the list of supported default criteria.\n Returns\n -------\n List[str]\n The list of supported default criteria.\n Examples\n --------\n >>> CriteriaEvalChain.get_supported_default_criteria()\n ['conciseness', 'relevance', 'correctness', 'coherence', 'harmfulness',\n 'maliciousness', 'helpfulness',\n 'controversiality', 'mysogyny', 'criminality', 'insensitive']\n \"\"\"\n return list(_SUPPORTED_CRITERIA.keys())\n[docs] @classmethod\n def resolve_criteria(\n cls,\n criteria: Optional[CRITERIA_TYPE],\n ) -> Dict[str, str]:\n \"\"\"Resolve the criteria to evaluate.\n Parameters\n ----------\n criteria : CRITERIA_TYPE\n The criteria to evaluate the runs against. 
It can be:\n - a mapping of criterion names to descriptions\n - a sequence of criterion names\n - a single criterion name present in one of the default criteria\n - a sequence of `ConstitutionalPrinciple` instances\n - a single `ConstitutionalPrinciple` instance\n Returns\n -------\n Dict[str, str]\n A dictionary mapping criterion names to descriptions.\n Examples\n --------\n >>> criteria = [\"relevance\", \"coherence\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/criteria/eval_chain.html"} {"id": "6ed350985995-5", "text": "Examples\n --------\n >>> criteria = [\"relevance\", \"coherence\"]\n >>> CriteriaEvalChain.resolve_criteria(criteria)\n {'relevance': 'Is the submission referring to a real quote from the text?',\n 'coherence': 'Is the submission coherent, well-structured, and organized?'}\n \"\"\" # noqa: E501\n if criteria is None:\n return {\n \"helpfulness\": _SUPPORTED_CRITERIA[\"helpfulness\"],\n }\n if isinstance(criteria, str):\n criteria_ = {criteria: _SUPPORTED_CRITERIA[criteria]}\n elif isinstance(criteria, ConstitutionalPrinciple):\n criteria_ = {criteria.name: criteria.critique_request}\n elif isinstance(criteria, Sequence):\n criteria_ = {}\n for criterion in criteria:\n if isinstance(criterion, str):\n criteria_[criterion] = _SUPPORTED_CRITERIA[criterion]\n elif isinstance(criterion, ConstitutionalPrinciple):\n criteria_[criterion.name] = criterion.critique_request\n else:\n raise ValueError(\n \"Unsupported criterion type:\"\n f\" {type(criterion).__name__}, {criterion}\"\n )\n else:\n criteria_ = dict(criteria)\n return criteria_\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n criteria: Optional[CRITERIA_TYPE] = None,\n *,\n prompt: Optional[BasePromptTemplate] = None,\n requires_reference: bool = False,\n **kwargs: Any,\n ) -> CriteriaEvalChain:\n \"\"\"Create a `CriteriaEvalChain` instance from an llm and criteria.\n Parameters\n ----------\n llm : BaseLanguageModel", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/criteria/eval_chain.html"} {"id": "6ed350985995-6", "text": "Parameters\n ----------\n llm : BaseLanguageModel\n The language model to use for evaluation.\n criteria : CRITERIA_TYPE - default=None for \"helpfulness\"\n The criteria to evaluate the runs against. It can be:\n - a mapping of criterion names to descriptions\n - a sequence of criterion names\n - a single criterion name present in one of the default criteria\n - a sequence of `ConstitutionalPrinciple` instances\n - a single `ConstitutionalPrinciple` instance\n prompt : Optional[BasePromptTemplate], default=None\n The prompt template to use for generating prompts. If not provided,\n a default prompt template will be used based on the value of\n `requires_reference`.\n requires_reference : bool, default=False\n Whether the evaluation requires a reference text. If `True`, the\n `PROMPT_WITH_REFERENCES` template will be used for generating\n prompts. 
If `False`, the `PROMPT` template will be used.\n **kwargs : Any\n Additional keyword arguments to pass to the `LLMChain`\n constructor.\n Returns\n -------\n CriteriaEvalChain\n An instance of the `CriteriaEvalChain` class.\n Examples\n --------\n >>> from langchain.llms import OpenAI\n >>> from langchain.evaluation.criteria import CriteriaEvalChain\n >>> llm = OpenAI()\n >>> criteria = {\n \"hallucination\": (\n \"Does this submission contain information\"\n \" not present in the input or reference?\"\n ),\n }\n >>> chain = CriteriaEvalChain.from_llm(\n llm=llm,\n criteria=criteria,\n requires_reference=True,\n )\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/criteria/eval_chain.html"} {"id": "6ed350985995-7", "text": "criteria=criteria,\n requires_reference=True,\n )\n \"\"\"\n expected_input_vars = {\"input\", \"output\", \"criteria\"}\n if prompt is None:\n if requires_reference:\n prompt = PROMPT_WITH_REFERENCES\n else:\n prompt = PROMPT\n if requires_reference:\n expected_input_vars.add(\"reference\")\n if expected_input_vars != set(prompt.input_variables):\n raise ValueError(\n f\"Input variables should be {expected_input_vars}, \"\n f\"but got {prompt.input_variables}\"\n )\n criteria_ = cls.resolve_criteria(criteria)\n criteria_names = list(criteria_.keys())\n criteria_str = \" \".join(f\"{k}: {v}\" for k, v in criteria_.items())\n prompt_ = prompt.partial(criteria=criteria_str)\n return cls(\n llm=llm,\n prompt=prompt_,\n criteria_names=criteria_names,\n **kwargs,\n )\n def _get_eval_input(\n self,\n prediction: str,\n reference: Optional[str],\n input: Optional[str],\n ) -> dict:\n \"\"\"Get the evaluation input.\"\"\"\n input_ = {\n \"input\": input,\n \"output\": prediction,\n }\n if self.requires_reference:\n input_[\"reference\"] = reference\n return input_\n def _evaluate_strings(\n self,\n *,\n prediction: str,\n reference: Optional[str] = None,\n input: Optional[str] = None,\n **kwargs: Any,\n ) -> dict:\n \"\"\"Evaluate a prediction against the criteria.\n Parameters\n ----------\n prediction : str\n The predicted text to evaluate.\n reference : Optional[str], default=None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/criteria/eval_chain.html"} {"id": "6ed350985995-8", "text": "The predicted text to evaluate.\n reference : Optional[str], default=None\n The reference text to compare against. 
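(Another brief aside: the criteria argument resolved by from_llm above accepts several shapes. A sketch of resolve_criteria; the "cites-sources" principle is a made-up illustration.)

.. code-block:: python

    from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple
    from langchain.evaluation.criteria.eval_chain import CriteriaEvalChain

    # A bare string resolves against the built-in criteria table.
    print(CriteriaEvalChain.resolve_criteria("conciseness"))
    # {'conciseness': 'Is the submission concise and to the point?'}

    # None falls back to the default "helpfulness" criterion.
    print(CriteriaEvalChain.resolve_criteria(None))

    # ConstitutionalPrinciple instances map name -> critique_request and can be
    # mixed with plain criterion names in one sequence.
    principle = ConstitutionalPrinciple(
        name="cites-sources",
        critique_request="Does the submission cite its sources?",
        revision_request="",  # required by the model, unused by the evaluator
    )
    print(CriteriaEvalChain.resolve_criteria([principle, "coherence"]))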
This is required if\n `requires_reference` is `True`.\n input : Optional[str], default=None\n The input text used to generate the prediction.\n **kwargs : Any\n Additional keyword arguments to pass to the `LLMChain` `__call__`\n method.\n Returns\n -------\n dict\n The evaluation results.\n Examples\n --------\n >>> from langchain.llms import OpenAI\n >>> from langchain.evaluation.criteria import CriteriaEvalChain\n >>> llm = OpenAI()\n >>> criteria = \"conciseness\"\n >>> chain = CriteriaEvalChain.from_llm(llm=llm, criteria=criteria)\n >>> chain.evaluate_strings(\n prediction=\"The answer is 42.\",\n reference=\"42\",\n input=\"What is the answer to life, the universe, and everything?\",\n )\n \"\"\"\n input_ = self._get_eval_input(prediction, reference, input)\n return self(input_, **kwargs)[\"text\"]\n async def _aevaluate_strings(\n self,\n *,\n prediction: str,\n reference: Optional[str] = None,\n input: Optional[str] = None,\n **kwargs: Any,\n ) -> dict:\n \"\"\"Asynchronously evaluate a prediction against the criteria.\n Parameters\n ----------\n prediction : str\n The predicted text to evaluate.\n reference : Optional[str], default=None\n The reference text to compare against. This is required if\n `requires_reference` is `True`.\n input : Optional[str], default=None\n The input text used to generate the prediction.\n **kwargs : Any", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/criteria/eval_chain.html"} {"id": "6ed350985995-9", "text": "The input text used to generate the prediction.\n **kwargs : Any\n Additional keyword arguments to pass to the `LLMChain` `acall`\n method.\n Returns\n -------\n dict\n The evaluation results.\n Examples\n --------\n >>> from langchain.llms import OpenAI\n >>> from langchain.evaluation.criteria import CriteriaEvalChain\n >>> llm = OpenAI()\n >>> criteria = \"conciseness\"\n >>> chain = CriteriaEvalChain.from_llm(llm=llm, criteria=criteria)\n >>> await chain.aevaluate_strings(\n prediction=\"The answer is 42.\",\n reference=\"42\",\n input=\"What is the answer to life, the universe, and everything?\",\n )\n \"\"\"\n input_ = self._get_eval_input(prediction, reference, input)\n result = await self.acall(input_, **kwargs)\n return result[\"text\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/evaluation/criteria/eval_chain.html"} {"id": "d048edd8d23a-0", "text": "Source code for langchain.graphs.networkx_graph\n\"\"\"Networkx wrapper for graph operations.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, List, NamedTuple, Optional, Tuple\nKG_TRIPLE_DELIMITER = \"<|>\"\n[docs]class KnowledgeTriple(NamedTuple):\n \"\"\"A triple in the graph.\"\"\"\n subject: str\n predicate: str\n object_: str\n[docs] @classmethod\n def from_string(cls, triple_string: str) -> \"KnowledgeTriple\":\n \"\"\"Create a KnowledgeTriple from a string.\"\"\"\n subject, predicate, object_ = triple_string.strip().split(\", \")\n subject = subject[1:]\n object_ = object_[:-1]\n return cls(subject, predicate, object_)\n[docs]def parse_triples(knowledge_str: str) -> List[KnowledgeTriple]:\n \"\"\"Parse knowledge triples from the knowledge string.\"\"\"\n knowledge_str = knowledge_str.strip()\n if not knowledge_str or knowledge_str == \"NONE\":\n return []\n triple_strs = knowledge_str.split(KG_TRIPLE_DELIMITER)\n results = []\n for triple_str in triple_strs:\n try:\n kg_triple = KnowledgeTriple.from_string(triple_str)\n except ValueError:\n continue\n results.append(kg_triple)\n return 
results\n[docs]def get_entities(entity_str: str) -> List[str]:\n \"\"\"Extract entities from entity string.\"\"\"\n if entity_str.strip() == \"NONE\":\n return []\n else:\n return [w.strip() for w in entity_str.split(\",\")]\nclass NetworkxEntityGraph:\n \"\"\"Networkx wrapper for entity graph operations.\"\"\"\n def __init__(self, graph: Optional[Any] = None) -> None:\n \"\"\"Create a new graph.\"\"\"\n try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/graphs/networkx_graph.html"} {"id": "d048edd8d23a-1", "text": "\"\"\"Create a new graph.\"\"\"\n try:\n import networkx as nx\n except ImportError:\n raise ImportError(\n \"Could not import networkx python package. \"\n \"Please install it with `pip install networkx`.\"\n )\n if graph is not None:\n if not isinstance(graph, nx.DiGraph):\n raise ValueError(\"Passed in graph is not of correct shape\")\n self._graph = graph\n else:\n self._graph = nx.DiGraph()\n @classmethod\n def from_gml(cls, gml_path: str) -> NetworkxEntityGraph:\n try:\n import networkx as nx\n except ImportError:\n raise ImportError(\n \"Could not import networkx python package. \"\n \"Please install it with `pip install networkx`.\"\n )\n graph = nx.read_gml(gml_path)\n return cls(graph)\n def add_triple(self, knowledge_triple: KnowledgeTriple) -> None:\n \"\"\"Add a triple to the graph.\"\"\"\n # Creates nodes if they don't exist\n # Overwrites existing edges\n if not self._graph.has_node(knowledge_triple.subject):\n self._graph.add_node(knowledge_triple.subject)\n if not self._graph.has_node(knowledge_triple.object_):\n self._graph.add_node(knowledge_triple.object_)\n self._graph.add_edge(\n knowledge_triple.subject,\n knowledge_triple.object_,\n relation=knowledge_triple.predicate,\n )\n def delete_triple(self, knowledge_triple: KnowledgeTriple) -> None:\n \"\"\"Delete a triple from the graph.\"\"\"\n if self._graph.has_edge(knowledge_triple.subject, knowledge_triple.object_):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/graphs/networkx_graph.html"} {"id": "d048edd8d23a-2", "text": "if self._graph.has_edge(knowledge_triple.subject, knowledge_triple.object_):\n self._graph.remove_edge(knowledge_triple.subject, knowledge_triple.object_)\n def get_triples(self) -> List[Tuple[str, str, str]]:\n \"\"\"Get all triples in the graph.\"\"\"\n return [(u, v, d[\"relation\"]) for u, v, d in self._graph.edges(data=True)]\n def get_entity_knowledge(self, entity: str, depth: int = 1) -> List[str]:\n \"\"\"Get information about an entity.\"\"\"\n import networkx as nx\n # TODO: Have more information-specific retrieval methods\n if not self._graph.has_node(entity):\n return []\n results = []\n for src, sink in nx.dfs_edges(self._graph, entity, depth_limit=depth):\n relation = self._graph[src][sink][\"relation\"]\n results.append(f\"{src} {relation} {sink}\")\n return results\n def write_to_gml(self, path: str) -> None:\n import networkx as nx\n nx.write_gml(self._graph, path)\n def clear(self) -> None:\n \"\"\"Clear the graph.\"\"\"\n self._graph.clear()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/graphs/networkx_graph.html"} {"id": "455cc58041da-0", "text": "Source code for langchain.document_loaders.tencent_cos_directory\n\"\"\"Loading logic for loading documents from Tencent Cloud COS directory.\"\"\"\nfrom typing import Any, Iterator, List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom 
langchain.document_loaders.tencent_cos_file import TencentCOSFileLoader\n[docs]class TencentCOSDirectoryLoader(BaseLoader):\n \"\"\"Load documents from a Tencent Cloud COS directory.\"\"\"\n def __init__(self, conf: Any, bucket: str, prefix: str = \"\"):\n \"\"\"Initialize with COS config, bucket and prefix.\n :param conf(CosConfig): COS config.\n :param bucket(str): COS bucket.\n :param prefix(str): prefix under which to list objects.\n \"\"\"\n self.conf = conf\n self.bucket = bucket\n self.prefix = prefix\n[docs] def load(self) -> List[Document]:\n return list(self.lazy_load())\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n from qcloud_cos import CosS3Client\n except ImportError:\n raise ImportError(\n \"Could not import cos-python-sdk-v5 python package. \"\n \"Please install it with `pip install cos-python-sdk-v5`.\"\n )\n client = CosS3Client(self.conf)\n contents = []\n marker = \"\"\n while True:\n response = client.list_objects(\n Bucket=self.bucket, Prefix=self.prefix, Marker=marker, MaxKeys=1000\n )\n if \"Contents\" in response:\n contents.extend(response[\"Contents\"])\n if response[\"IsTruncated\"] == \"false\":\n break\n marker = response[\"NextMarker\"]\n for content in contents:\n if content[\"Key\"].endswith(\"/\"):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/tencent_cos_directory.html"} {"id": "455cc58041da-1", "text": "for content in contents:\n if content[\"Key\"].endswith(\"/\"):\n continue\n loader = TencentCOSFileLoader(self.conf, self.bucket, content[\"Key\"])\n yield loader.load()[0]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/tencent_cos_directory.html"} {"id": "83d6a8a96a79-0", "text": "Source code for langchain.document_loaders.rtf\n\"\"\"Loader that loads rich text files.\"\"\"\nfrom typing import Any, List\nfrom langchain.document_loaders.unstructured import (\n UnstructuredFileLoader,\n satisfies_min_unstructured_version,\n)\n[docs]class UnstructuredRTFLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load rtf files.\"\"\"\n def __init__(\n self, file_path: str, mode: str = \"single\", **unstructured_kwargs: Any\n ):\n min_unstructured_version = \"0.5.12\"\n if not satisfies_min_unstructured_version(min_unstructured_version):\n raise ValueError(\n \"Partitioning rtf files is only supported in \"\n f\"unstructured>={min_unstructured_version}.\"\n )\n super().__init__(file_path=file_path, mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n from unstructured.partition.rtf import partition_rtf\n return partition_rtf(filename=self.file_path, **self.unstructured_kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/rtf.html"} {"id": "265bf6df1f72-0", "text": "Source code for langchain.document_loaders.pdf\n\"\"\"Loader that loads PDF files.\"\"\"\nimport json\nimport logging\nimport os\nimport tempfile\nimport time\nfrom abc import ABC\nfrom io import StringIO\nfrom pathlib import Path\nfrom typing import Any, Iterator, List, Mapping, Optional, Union\nfrom urllib.parse import urlparse\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.blob_loaders import Blob\nfrom langchain.document_loaders.parsers.pdf import (\n PDFMinerParser,\n PDFPlumberParser,\n PyMuPDFParser,\n PyPDFium2Parser,\n PyPDFParser,\n)\nfrom langchain.document_loaders.unstructured import 
UnstructuredFileLoader\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__file__)\n[docs]class UnstructuredPDFLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load PDF files.\"\"\"\n def _get_elements(self) -> List:\n from unstructured.partition.pdf import partition_pdf\n return partition_pdf(filename=self.file_path, **self.unstructured_kwargs)\n[docs]class BasePDFLoader(BaseLoader, ABC):\n \"\"\"Base loader class for PDF files.\n Defaults to check for local file, but if the file is a web path, it will download it\n to a temporary file, and use that, then clean up the temporary file after completion\n \"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with file path.\"\"\"\n self.file_path = file_path\n self.web_path = None\n if \"~\" in self.file_path:\n self.file_path = os.path.expanduser(self.file_path)\n # If the file is a web path, download it to a temporary file, and use that", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"} {"id": "265bf6df1f72-1", "text": "if not os.path.isfile(self.file_path) and self._is_valid_url(self.file_path):\n r = requests.get(self.file_path)\n if r.status_code != 200:\n raise ValueError(\n \"Check the url of your file; returned status code %s\"\n % r.status_code\n )\n self.web_path = self.file_path\n self.temp_dir = tempfile.TemporaryDirectory()\n temp_pdf = Path(self.temp_dir.name) / \"tmp.pdf\"\n with open(temp_pdf, mode=\"wb\") as f:\n f.write(r.content)\n self.file_path = str(temp_pdf)\n elif not os.path.isfile(self.file_path):\n raise ValueError(\"File path %s is not a valid file or url\" % self.file_path)\n def __del__(self) -> None:\n if hasattr(self, \"temp_dir\"):\n self.temp_dir.cleanup()\n @staticmethod\n def _is_valid_url(url: str) -> bool:\n \"\"\"Check if the url is valid.\"\"\"\n parsed = urlparse(url)\n return bool(parsed.netloc) and bool(parsed.scheme)\n @property\n def source(self) -> str:\n return self.web_path if self.web_path is not None else self.file_path\n[docs]class OnlinePDFLoader(BasePDFLoader):\n \"\"\"Loader that loads online PDFs.\"\"\"\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n loader = UnstructuredPDFLoader(str(self.file_path))\n return loader.load()\n[docs]class PyPDFLoader(BasePDFLoader):\n \"\"\"Loads a PDF with pypdf and chunks at character level.\n Loader also stores page numbers in metadatas.\n \"\"\"\n def __init__(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"} {"id": "265bf6df1f72-2", "text": "\"\"\"\n def __init__(\n self, file_path: str, password: Optional[Union[str, bytes]] = None\n ) -> None:\n \"\"\"Initialize with file path.\"\"\"\n try:\n import pypdf # noqa:F401\n except ImportError:\n raise ImportError(\n \"pypdf package not found, please install it with \" \"`pip install pypdf`\"\n )\n self.parser = PyPDFParser(password=password)\n super().__init__(file_path)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load given path as pages.\"\"\"\n return list(self.lazy_load())\n[docs] def lazy_load(\n self,\n ) -> Iterator[Document]:\n \"\"\"Lazy load given path as pages.\"\"\"\n blob = Blob.from_path(self.file_path)\n yield from self.parser.parse(blob)\n[docs]class PyPDFium2Loader(BasePDFLoader):\n \"\"\"Loads a PDF with pypdfium2 and chunks at character level.\"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with file path.\"\"\"\n super().__init__(file_path)\n self.parser = 
PyPDFium2Parser()\n[docs] def load(self) -> List[Document]:\n \"\"\"Load given path as pages.\"\"\"\n return list(self.lazy_load())\n[docs] def lazy_load(\n self,\n ) -> Iterator[Document]:\n \"\"\"Lazy load given path as pages.\"\"\"\n blob = Blob.from_path(self.file_path)\n yield from self.parser.parse(blob)\n[docs]class PyPDFDirectoryLoader(BaseLoader):\n \"\"\"Loads all PDF files in a directory with pypdf and chunks at character level.\n Loader also stores page numbers in metadatas.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"} {"id": "265bf6df1f72-3", "text": "Loader also stores page numbers in metadatas.\n \"\"\"\n def __init__(\n self,\n path: str,\n glob: str = \"**/[!.]*.pdf\",\n silent_errors: bool = False,\n load_hidden: bool = False,\n recursive: bool = False,\n ):\n self.path = path\n self.glob = glob\n self.load_hidden = load_hidden\n self.recursive = recursive\n self.silent_errors = silent_errors\n @staticmethod\n def _is_visible(path: Path) -> bool:\n return not any(part.startswith(\".\") for part in path.parts)\n[docs] def load(self) -> List[Document]:\n p = Path(self.path)\n docs = []\n items = p.rglob(self.glob) if self.recursive else p.glob(self.glob)\n for i in items:\n if i.is_file():\n if self._is_visible(i.relative_to(p)) or self.load_hidden:\n try:\n loader = PyPDFLoader(str(i))\n sub_docs = loader.load()\n for doc in sub_docs:\n doc.metadata[\"source\"] = str(i)\n docs.extend(sub_docs)\n except Exception as e:\n if self.silent_errors:\n logger.warning(e)\n else:\n raise e\n return docs\n[docs]class PDFMinerLoader(BasePDFLoader):\n \"\"\"Loader that uses PDFMiner to load PDF files.\"\"\"\n def __init__(self, file_path: str) -> None:\n \"\"\"Initialize with file path.\"\"\"\n try:\n from pdfminer.high_level import extract_text # noqa:F401\n except ImportError:\n raise ImportError(\n \"`pdfminer` package not found, please install it with \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"} {"id": "265bf6df1f72-4", "text": "raise ImportError(\n \"`pdfminer` package not found, please install it with \"\n \"`pip install pdfminer.six`\"\n )\n super().__init__(file_path)\n self.parser = PDFMinerParser()\n[docs] def load(self) -> List[Document]:\n \"\"\"Eagerly load the content.\"\"\"\n return list(self.lazy_load())\n[docs] def lazy_load(\n self,\n ) -> Iterator[Document]:\n \"\"\"Lazily load documents.\"\"\"\n blob = Blob.from_path(self.file_path)\n yield from self.parser.parse(blob)\n[docs]class PDFMinerPDFasHTMLLoader(BasePDFLoader):\n \"\"\"Loader that uses PDFMiner to load PDF files as HTML content.\"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with file path.\"\"\"\n try:\n from pdfminer.high_level import extract_text_to_fp # noqa:F401\n except ImportError:\n raise ImportError(\n \"`pdfminer` package not found, please install it with \"\n \"`pip install pdfminer.six`\"\n )\n super().__init__(file_path)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load file.\"\"\"\n from pdfminer.high_level import extract_text_to_fp\n from pdfminer.layout import LAParams\n from pdfminer.utils import open_filename\n output_string = StringIO()\n with open_filename(self.file_path, \"rb\") as fp:\n extract_text_to_fp(\n fp, # type: ignore[arg-type]\n output_string,\n codec=\"\",\n laparams=LAParams(),\n output_type=\"html\",\n )\n metadata = {\"source\": self.file_path}", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"} {"id": "265bf6df1f72-5", "text": ")\n metadata = {\"source\": self.file_path}\n return [Document(page_content=output_string.getvalue(), metadata=metadata)]\n[docs]class PyMuPDFLoader(BasePDFLoader):\n \"\"\"Loader that uses PyMuPDF to load PDF files.\"\"\"\n def __init__(self, file_path: str) -> None:\n \"\"\"Initialize with file path.\"\"\"\n try:\n import fitz # noqa:F401\n except ImportError:\n raise ImportError(\n \"`PyMuPDF` package not found, please install it with \"\n \"`pip install pymupdf`\"\n )\n super().__init__(file_path)\n[docs] def load(self, **kwargs: Optional[Any]) -> List[Document]:\n \"\"\"Load file.\"\"\"\n parser = PyMuPDFParser(text_kwargs=kwargs)\n blob = Blob.from_path(self.file_path)\n return parser.parse(blob)\n# MathpixPDFLoader implementation taken largely from Daniel Gross's:\n# https://gist.github.com/danielgross/3ab4104e14faccc12b49200843adab21\n[docs]class MathpixPDFLoader(BasePDFLoader):\n def __init__(\n self,\n file_path: str,\n processed_file_format: str = \"mmd\",\n max_wait_time_seconds: int = 500,\n should_clean_pdf: bool = False,\n **kwargs: Any,\n ) -> None:\n super().__init__(file_path)\n self.mathpix_api_key = get_from_dict_or_env(\n kwargs, \"mathpix_api_key\", \"MATHPIX_API_KEY\"\n )\n self.mathpix_api_id = get_from_dict_or_env(\n kwargs, \"mathpix_api_id\", \"MATHPIX_API_ID\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"} {"id": "265bf6df1f72-6", "text": "kwargs, \"mathpix_api_id\", \"MATHPIX_API_ID\"\n )\n self.processed_file_format = processed_file_format\n self.max_wait_time_seconds = max_wait_time_seconds\n self.should_clean_pdf = should_clean_pdf\n @property\n def headers(self) -> dict:\n return {\"app_id\": self.mathpix_api_id, \"app_key\": self.mathpix_api_key}\n @property\n def url(self) -> str:\n return \"https://api.mathpix.com/v3/pdf\"\n @property\n def data(self) -> dict:\n options = {\"conversion_formats\": {self.processed_file_format: True}}\n return {\"options_json\": json.dumps(options)}\n[docs] def send_pdf(self) -> str:\n with open(self.file_path, \"rb\") as f:\n files = {\"file\": f}\n response = requests.post(\n self.url, headers=self.headers, files=files, data=self.data\n )\n response_data = response.json()\n if \"pdf_id\" in response_data:\n pdf_id = response_data[\"pdf_id\"]\n return pdf_id\n else:\n raise ValueError(\"Unable to send PDF to Mathpix.\")\n[docs] def wait_for_processing(self, pdf_id: str) -> None:\n url = self.url + \"/\" + pdf_id\n for _ in range(0, self.max_wait_time_seconds, 5):\n response = requests.get(url, headers=self.headers)\n response_data = response.json()\n status = response_data.get(\"status\", None)\n if status == \"completed\":\n return\n elif status == \"error\":\n raise ValueError(\"Unable to retrieve PDF from Mathpix\")\n else:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"} {"id": "265bf6df1f72-7", "text": "raise ValueError(\"Unable to retrieve PDF from Mathpix\")\n else:\n print(f\"Status: {status}, waiting for processing to complete\")\n time.sleep(5)\n raise TimeoutError\n[docs] def get_processed_pdf(self, pdf_id: str) -> str:\n self.wait_for_processing(pdf_id)\n url = f\"{self.url}/{pdf_id}.{self.processed_file_format}\"\n response = requests.get(url, headers=self.headers)\n return response.content.decode(\"utf-8\")\n[docs] def clean_pdf(self, contents: str) -> str:\n contents = 
\"\\n\".join(\n [line for line in contents.split(\"\\n\") if not line.startswith(\"![]\")]\n )\n # replace \\section{Title} with # Title\n contents = contents.replace(\"\\\\section{\", \"# \").replace(\"}\", \"\")\n # replace the \"\\\" slash that Mathpix adds to escape $, %, (, etc.\n contents = (\n contents.replace(r\"\\$\", \"$\")\n .replace(r\"\\%\", \"%\")\n .replace(r\"\\(\", \"(\")\n .replace(r\"\\)\", \")\")\n )\n return contents\n[docs] def load(self) -> List[Document]:\n pdf_id = self.send_pdf()\n contents = self.get_processed_pdf(pdf_id)\n if self.should_clean_pdf:\n contents = self.clean_pdf(contents)\n metadata = {\"source\": self.source, \"file_path\": self.source}\n return [Document(page_content=contents, metadata=metadata)]\n[docs]class PDFPlumberLoader(BasePDFLoader):\n \"\"\"Loader that uses pdfplumber to load PDF files.\"\"\"\n def __init__(\n self, file_path: str, text_kwargs: Optional[Mapping[str, Any]] = None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"} {"id": "265bf6df1f72-8", "text": ") -> None:\n \"\"\"Initialize with file path.\"\"\"\n try:\n import pdfplumber # noqa:F401\n except ImportError:\n raise ImportError(\n \"pdfplumber package not found, please install it with \"\n \"`pip install pdfplumber`\"\n )\n super().__init__(file_path)\n self.text_kwargs = text_kwargs or {}\n[docs] def load(self) -> List[Document]:\n \"\"\"Load file.\"\"\"\n parser = PDFPlumberParser(text_kwargs=self.text_kwargs)\n blob = Blob.from_path(self.file_path)\n return parser.parse(blob)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"} {"id": "645c30cb756b-0", "text": "Source code for langchain.document_loaders.image\n\"\"\"Loader that loads image files.\"\"\"\nfrom typing import List\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\n[docs]class UnstructuredImageLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load image files, such as PNGs and JPGs.\"\"\"\n def _get_elements(self) -> List:\n from unstructured.partition.image import partition_image\n return partition_image(filename=self.file_path, **self.unstructured_kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/image.html"} {"id": "a39d10a0d77f-0", "text": "Source code for langchain.document_loaders.twitter\n\"\"\"Twitter document loader.\"\"\"\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Sequence, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nif TYPE_CHECKING:\n import tweepy\n from tweepy import OAuth2BearerHandler, OAuthHandler\ndef _dependable_tweepy_import() -> tweepy:\n try:\n import tweepy\n except ImportError:\n raise ImportError(\n \"tweepy package not found, please install it with `pip install tweepy`\"\n )\n return tweepy\n[docs]class TwitterTweetLoader(BaseLoader):\n \"\"\"Twitter tweets loader.\n Read tweets of user twitter handle.\n First you need to go to\n `https://developer.twitter.com/en/docs/twitter-api\n /getting-started/getting-access-to-the-twitter-api`\n to get your token. 
And create a v2 version of the app.\n \"\"\"\n def __init__(\n self,\n auth_handler: Union[OAuthHandler, OAuth2BearerHandler],\n twitter_users: Sequence[str],\n number_tweets: Optional[int] = 100,\n ):\n self.auth = auth_handler\n self.twitter_users = twitter_users\n self.number_tweets = number_tweets\n[docs] def load(self) -> List[Document]:\n \"\"\"Load tweets.\"\"\"\n tweepy = _dependable_tweepy_import()\n api = tweepy.API(self.auth, parser=tweepy.parsers.JSONParser())\n results: List[Document] = []\n for username in self.twitter_users:\n tweets = api.user_timeline(screen_name=username, count=self.number_tweets)\n user = api.get_user(screen_name=username)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/twitter.html"} {"id": "a39d10a0d77f-1", "text": "user = api.get_user(screen_name=username)\n docs = self._format_tweets(tweets, user)\n results.extend(docs)\n return results\n def _format_tweets(\n self, tweets: List[Dict[str, Any]], user_info: dict\n ) -> Iterable[Document]:\n \"\"\"Format tweets into a string.\"\"\"\n for tweet in tweets:\n metadata = {\n \"created_at\": tweet[\"created_at\"],\n \"user_info\": user_info,\n }\n yield Document(\n page_content=tweet[\"text\"],\n metadata=metadata,\n )\n[docs] @classmethod\n def from_bearer_token(\n cls,\n oauth2_bearer_token: str,\n twitter_users: Sequence[str],\n number_tweets: Optional[int] = 100,\n ) -> TwitterTweetLoader:\n \"\"\"Create a TwitterTweetLoader from OAuth2 bearer token.\"\"\"\n tweepy = _dependable_tweepy_import()\n auth = tweepy.OAuth2BearerHandler(oauth2_bearer_token)\n return cls(\n auth_handler=auth,\n twitter_users=twitter_users,\n number_tweets=number_tweets,\n )\n[docs] @classmethod\n def from_secrets(\n cls,\n access_token: str,\n access_token_secret: str,\n consumer_key: str,\n consumer_secret: str,\n twitter_users: Sequence[str],\n number_tweets: Optional[int] = 100,\n ) -> TwitterTweetLoader:\n \"\"\"Create a TwitterTweetLoader from access tokens and secrets.\"\"\"\n tweepy = _dependable_tweepy_import()\n auth = tweepy.OAuthHandler(\n access_token=access_token,\n access_token_secret=access_token_secret,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/twitter.html"} {"id": "a39d10a0d77f-2", "text": "access_token=access_token,\n access_token_secret=access_token_secret,\n consumer_key=consumer_key,\n consumer_secret=consumer_secret,\n )\n return cls(\n auth_handler=auth,\n twitter_users=twitter_users,\n number_tweets=number_tweets,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/twitter.html"} {"id": "ee2359e50cd2-0", "text": "Source code for langchain.document_loaders.acreom\n\"\"\"Loader that loads acreom vault from a directory.\"\"\"\nimport re\nfrom pathlib import Path\nfrom typing import Iterator, List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class AcreomLoader(BaseLoader):\n \"\"\"Loader that loads acreom vault from a directory.\"\"\"\n FRONT_MATTER_REGEX = re.compile(r\"^---\\n(.*?)\\n---\\n\", re.MULTILINE | re.DOTALL)\n \"\"\"Regex to match front matter metadata in markdown files.\"\"\"\n def __init__(\n self, path: str, encoding: str = \"UTF-8\", collect_metadata: bool = True\n ):\n self.file_path = path\n \"\"\"Path to the directory containing the markdown files.\"\"\"\n self.encoding = encoding\n \"\"\"Encoding to use when reading the files.\"\"\"\n self.collect_metadata = collect_metadata\n 
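A minimal sketch of using the loader with a bearer token; the token and handle below are placeholders, and `tweepy` must be installed:

.. code-block:: python

    from langchain.document_loaders import TwitterTweetLoader

    loader = TwitterTweetLoader.from_bearer_token(
        oauth2_bearer_token="YOUR_BEARER_TOKEN",  # placeholder
        twitter_users=["hwchase17"],  # illustrative handle
        number_tweets=50,
    )
    docs = loader.load()  # one Document per tweet, with created_at/user_info metadata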
\"\"\"Whether to collect metadata from the front matter.\"\"\"\n def _parse_front_matter(self, content: str) -> dict:\n \"\"\"Parse front matter metadata from the content and return it as a dict.\"\"\"\n if not self.collect_metadata:\n return {}\n match = self.FRONT_MATTER_REGEX.search(content)\n front_matter = {}\n if match:\n lines = match.group(1).split(\"\\n\")\n for line in lines:\n if \":\" in line:\n key, value = line.split(\":\", 1)\n front_matter[key.strip()] = value.strip()\n else:\n # Skip lines without a colon\n continue\n return front_matter\n def _remove_front_matter(self, content: str) -> str:\n \"\"\"Remove front matter metadata from the given content.\"\"\"\n if not self.collect_metadata:\n return content", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/acreom.html"} {"id": "ee2359e50cd2-1", "text": "if not self.collect_metadata:\n return content\n return self.FRONT_MATTER_REGEX.sub(\"\", content)\n def _process_acreom_content(self, content: str) -> str:\n # remove acreom specific elements from content that\n # do not contribute to the context of current document\n content = re.sub(\"\\s*-\\s\\[\\s\\]\\s.*|\\s*\\[\\s\\]\\s.*\", \"\", content) # rm tasks\n content = re.sub(\"#\", \"\", content) # rm hashtags\n content = re.sub(\"\\[\\[.*?\\]\\]\", \"\", content) # rm doclinks\n return content\n[docs] def lazy_load(self) -> Iterator[Document]:\n ps = list(Path(self.file_path).glob(\"**/*.md\"))\n for p in ps:\n with open(p, encoding=self.encoding) as f:\n text = f.read()\n front_matter = self._parse_front_matter(text)\n text = self._remove_front_matter(text)\n text = self._process_acreom_content(text)\n metadata = {\n \"source\": str(p.name),\n \"path\": str(p),\n **front_matter,\n }\n yield Document(page_content=text, metadata=metadata)\n[docs] def load(self) -> List[Document]:\n return list(self.lazy_load())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/acreom.html"} {"id": "e8ffd8840283-0", "text": "Source code for langchain.document_loaders.markdown\n\"\"\"Loads Markdown files.\"\"\"\nfrom typing import List\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\n[docs]class UnstructuredMarkdownLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load markdown files.\"\"\"\n def _get_elements(self) -> List:\n from unstructured.__version__ import __version__ as __unstructured_version__\n from unstructured.partition.md import partition_md\n # NOTE(MthwRobinson) - enables the loader to work when you're using pre-release\n # versions of unstructured like 0.4.17-dev1\n _unstructured_version = __unstructured_version__.split(\"-\")[0]\n unstructured_version = tuple([int(x) for x in _unstructured_version.split(\".\")])\n if unstructured_version < (0, 4, 16):\n raise ValueError(\n f\"You are on unstructured version {__unstructured_version__}. 
\"\n \"Partitioning markdown files is only supported in unstructured>=0.4.16.\"\n )\n return partition_md(filename=self.file_path, **self.unstructured_kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/markdown.html"} {"id": "d9bc4fcbbe04-0", "text": "Source code for langchain.document_loaders.python\nimport tokenize\nfrom langchain.document_loaders.text import TextLoader\n[docs]class PythonLoader(TextLoader):\n \"\"\"\n Load Python files, respecting any non-default encoding if specified.\n \"\"\"\n def __init__(self, file_path: str):\n with open(file_path, \"rb\") as f:\n encoding, _ = tokenize.detect_encoding(f.readline)\n super().__init__(file_path=file_path, encoding=encoding)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/python.html"} {"id": "aa3d766efc0c-0", "text": "Source code for langchain.document_loaders.org_mode\n\"\"\"Loader that loads Org-Mode files.\"\"\"\nfrom typing import Any, List\nfrom langchain.document_loaders.unstructured import (\n UnstructuredFileLoader,\n validate_unstructured_version,\n)\n[docs]class UnstructuredOrgModeLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load Org-Mode files.\"\"\"\n def __init__(\n self, file_path: str, mode: str = \"single\", **unstructured_kwargs: Any\n ):\n validate_unstructured_version(min_unstructured_version=\"0.7.9\")\n super().__init__(file_path=file_path, mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n from unstructured.partition.org import partition_org\n return partition_org(filename=self.file_path, **self.unstructured_kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/org_mode.html"} {"id": "7aafc20f267d-0", "text": "Source code for langchain.document_loaders.cube_semantic\nfrom typing import List\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class CubeSemanticLoader(BaseLoader):\n \"\"\"Load Cube semantic layer metadata.\"\"\"\n def __init__(\n self,\n cube_api_url: str,\n cube_api_token: str,\n ):\n self.cube_api_url = cube_api_url\n \"\"\"Use the REST API of your Cube's deployment.\n Please find out more information here:\n https://cube.dev/docs/http-api/rest#configuration-base-path\n \"\"\"\n self.cube_api_token = cube_api_token\n \"\"\"Authentication tokens are generated based on your Cube's API secret.\n Please find out more information here:\n https://cube.dev/docs/security#generating-json-web-tokens-jwt\n \"\"\"\n[docs] def load(self) -> List[Document]:\n \"\"\"Makes a call to Cube's REST API metadata endpoint.\n Returns:\n A list of documents with attributes:\n - page_content=column_name\n - metadata\n - table_name\n - column_name\n - column_data_type\n - column_title\n - column_description\n \"\"\"\n headers = {\n \"Content-Type\": \"application/json\",\n \"Authorization\": self.cube_api_token,\n }\n response = requests.get(self.cube_api_url, headers=headers)\n response.raise_for_status()\n raw_meta_json = response.json()\n cubes = raw_meta_json.get(\"cubes\", [])\n docs = []\n for cube in cubes:\n if cube.get(\"type\") != \"view\":\n continue\n cube_name = cube.get(\"name\")\n measures = cube.get(\"measures\", [])", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/cube_semantic.html"} {"id": "7aafc20f267d-1", "text": "measures = cube.get(\"measures\", [])\n dimensions = cube.get(\"dimensions\", [])\n 
for item in measures + dimensions:\n metadata = dict(\n table_name=str(cube_name),\n column_name=str(item.get(\"name\")),\n column_data_type=str(item.get(\"type\")),\n column_title=str(item.get(\"title\")),\n column_description=str(item.get(\"description\")),\n )\n page_content = f\"table name: {str(cube_name)}, \"\n page_content += f\"column name: {str(item.get('name'))}, \"\n page_content += f\"column data type: {str(item.get('type'))}, \"\n page_content += f\"column title: {str(item.get('title'))}, \"\n page_content += f\"column description: {str(item.get('description'))}\"\n docs.append(Document(page_content=page_content, metadata=metadata))\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/cube_semantic.html"} {"id": "adc435c90a58-0", "text": "Source code for langchain.document_loaders.gutenberg\n\"\"\"Loads .txt web files.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class GutenbergLoader(BaseLoader):\n \"\"\"Loader that uses urllib to load .txt web files.\"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with a file path.\"\"\"\n if not file_path.startswith(\"https://www.gutenberg.org\"):\n raise ValueError(\"file path must start with 'https://www.gutenberg.org'\")\n if not file_path.endswith(\".txt\"):\n raise ValueError(\"file path must end with '.txt'\")\n self.file_path = file_path\n[docs] def load(self) -> List[Document]:\n \"\"\"Load file.\"\"\"\n from urllib.request import urlopen\n elements = urlopen(self.file_path)\n text = \"\\n\\n\".join([str(el.decode(\"utf-8-sig\")) for el in elements])\n metadata = {\"source\": self.file_path}\n return [Document(page_content=text, metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/gutenberg.html"} {"id": "552e6a1a13e0-0", "text": "Source code for langchain.document_loaders.srt\n\"\"\"Loader for .srt (subtitle) files.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class SRTLoader(BaseLoader):\n \"\"\"Loader for .srt (subtitle) files.\"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with file path.\"\"\"\n try:\n import pysrt # noqa:F401\n except ImportError:\n raise ImportError(\n \"package `pysrt` not found, please install it with `pip install pysrt`\"\n )\n self.file_path = file_path\n[docs] def load(self) -> List[Document]:\n \"\"\"Load using pysrt file.\"\"\"\n import pysrt\n parsed_info = pysrt.open(self.file_path)\n text = \" \".join([t.text for t in parsed_info])\n metadata = {\"source\": self.file_path}\n return [Document(page_content=text, metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/srt.html"} {"id": "a77c329ef94f-0", "text": "Source code for langchain.document_loaders.duckdb_loader\nfrom typing import Dict, List, Optional, cast\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class DuckDBLoader(BaseLoader):\n \"\"\"Loads a query result from DuckDB into a list of documents.\n Each document represents one row of the result. The `page_content_columns`\n are written into the `page_content` of the document. The `metadata_columns`\n are written into the `metadata` of the document. 
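`GutenbergLoader` only checks the URL shape before fetching, so usage is a one-liner. A sketch with an illustrative book URL (any link starting with https://www.gutenberg.org and ending in .txt passes validation):

.. code-block:: python

    from langchain.document_loaders import GutenbergLoader

    loader = GutenbergLoader("https://www.gutenberg.org/files/1342/1342-0.txt")  # illustrative URL
    docs = loader.load()  # a single Document containing the whole text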
By default, all columns\n are written into the `page_content` and none into the `metadata`.\n \"\"\"\n def __init__(\n self,\n query: str,\n database: str = \":memory:\",\n read_only: bool = False,\n config: Optional[Dict[str, str]] = None,\n page_content_columns: Optional[List[str]] = None,\n metadata_columns: Optional[List[str]] = None,\n ):\n \"\"\"\n Args:\n query: The query to execute.\n database: The database to connect to. Defaults to \":memory:\".\n read_only: Whether to open the database in read-only mode.\n Defaults to False.\n config: A dictionary of configuration options to pass to the database.\n Optional.\n page_content_columns: The columns to write into the `page_content`\n of the document. Optional.\n metadata_columns: The columns to write into the `metadata` of the document.\n Optional.\n \"\"\"\n self.query = query\n self.database = database\n self.read_only = read_only\n self.config = config or {}\n self.page_content_columns = page_content_columns\n self.metadata_columns = metadata_columns\n[docs] def load(self) -> List[Document]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/duckdb_loader.html"} {"id": "a77c329ef94f-1", "text": "[docs] def load(self) -> List[Document]:\n try:\n import duckdb\n except ImportError:\n raise ImportError(\n \"Could not import duckdb python package. \"\n \"Please install it with `pip install duckdb`.\"\n )\n docs = []\n with duckdb.connect(\n database=self.database, read_only=self.read_only, config=self.config\n ) as con:\n query_result = con.execute(self.query)\n results = query_result.fetchall()\n description = cast(list, query_result.description)\n field_names = [c[0] for c in description]\n if self.page_content_columns is None:\n page_content_columns = field_names\n else:\n page_content_columns = self.page_content_columns\n if self.metadata_columns is None:\n metadata_columns = []\n else:\n metadata_columns = self.metadata_columns\n for result in results:\n page_content = \"\\n\".join(\n f\"{column}: {result[field_names.index(column)]}\"\n for column in page_content_columns\n )\n metadata = {\n column: result[field_names.index(column)]\n for column in metadata_columns\n }\n doc = Document(page_content=page_content, metadata=metadata)\n docs.append(doc)\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/duckdb_loader.html"} {"id": "dd93933f64b8-0", "text": "Source code for langchain.document_loaders.telegram\n\"\"\"Loader that loads Telegram chat json dump.\"\"\"\nfrom __future__ import annotations\nimport asyncio\nimport json\nfrom pathlib import Path\nfrom typing import TYPE_CHECKING, Dict, List, Optional, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\nif TYPE_CHECKING:\n import pandas as pd\n from telethon.hints import EntityLike\n[docs]def concatenate_rows(row: dict) -> str:\n \"\"\"Combine message information in a readable format ready to be used.\"\"\"\n date = row[\"date\"]\n sender = row[\"from\"]\n text = row[\"text\"]\n return f\"{sender} on {date}: {text}\\n\\n\"\n[docs]class TelegramChatFileLoader(BaseLoader):\n \"\"\"Loader that loads Telegram chat json directory dump.\"\"\"\n def __init__(self, path: str):\n \"\"\"Initialize with path.\"\"\"\n self.file_path = path\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n p = Path(self.file_path)\n with open(p, encoding=\"utf8\") 
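With `DuckDBLoader`, each result row becomes one Document whose `page_content` is rendered as `column: value` lines. A minimal in-memory sketch (the query is illustrative):

.. code-block:: python

    from langchain.document_loaders import DuckDBLoader

    loader = DuckDBLoader(
        query="SELECT 42 AS answer, 'hello' AS greeting",  # illustrative query
        page_content_columns=["greeting"],
        metadata_columns=["answer"],
    )
    docs = loader.load()
    # docs[0].page_content == "greeting: hello"
    # docs[0].metadata == {"answer": 42}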
as f:\n d = json.load(f)\n text = \"\".join(\n concatenate_rows(message)\n for message in d[\"messages\"]\n if message[\"type\"] == \"message\" and isinstance(message[\"text\"], str)\n )\n metadata = {\"source\": str(p)}\n return [Document(page_content=text, metadata=metadata)]\n[docs]def text_to_docs(text: Union[str, List[str]]) -> List[Document]:\n \"\"\"Converts a string or list of strings to a list of Documents with metadata.\"\"\"\n if isinstance(text, str):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/telegram.html"} {"id": "dd93933f64b8-1", "text": "if isinstance(text, str):\n # Take a single string as one page\n text = [text]\n page_docs = [Document(page_content=page) for page in text]\n # Add page numbers as metadata\n for i, doc in enumerate(page_docs):\n doc.metadata[\"page\"] = i + 1\n # Split pages into chunks\n doc_chunks = []\n for doc in page_docs:\n text_splitter = RecursiveCharacterTextSplitter(\n chunk_size=800,\n separators=[\"\\n\\n\", \"\\n\", \".\", \"!\", \"?\", \",\", \" \", \"\"],\n chunk_overlap=20,\n )\n chunks = text_splitter.split_text(doc.page_content)\n for i, chunk in enumerate(chunks):\n doc = Document(\n page_content=chunk, metadata={\"page\": doc.metadata[\"page\"], \"chunk\": i}\n )\n # Add sources a metadata\n doc.metadata[\"source\"] = f\"{doc.metadata['page']}-{doc.metadata['chunk']}\"\n doc_chunks.append(doc)\n return doc_chunks\n[docs]class TelegramChatApiLoader(BaseLoader):\n \"\"\"Loader that loads Telegram chat json directory dump.\"\"\"\n def __init__(\n self,\n chat_entity: Optional[EntityLike] = None,\n api_id: Optional[int] = None,\n api_hash: Optional[str] = None,\n username: Optional[str] = None,\n file_path: str = \"telegram_data.json\",\n ):\n \"\"\"Initialize with API parameters.\"\"\"\n self.chat_entity = chat_entity\n self.api_id = api_id\n self.api_hash = api_hash\n self.username = username\n self.file_path = file_path\n[docs] async def fetch_data_from_telegram(self) -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/telegram.html"} {"id": "dd93933f64b8-2", "text": "[docs] async def fetch_data_from_telegram(self) -> None:\n \"\"\"Fetch data from Telegram API and save it as a JSON file.\"\"\"\n from telethon.sync import TelegramClient\n data = []\n async with TelegramClient(self.username, self.api_id, self.api_hash) as client:\n async for message in client.iter_messages(self.chat_entity):\n is_reply = message.reply_to is not None\n reply_to_id = message.reply_to.reply_to_msg_id if is_reply else None\n data.append(\n {\n \"sender_id\": message.sender_id,\n \"text\": message.text,\n \"date\": message.date.isoformat(),\n \"message.id\": message.id,\n \"is_reply\": is_reply,\n \"reply_to_id\": reply_to_id,\n }\n )\n with open(self.file_path, \"w\", encoding=\"utf-8\") as f:\n json.dump(data, f, ensure_ascii=False, indent=4)\n def _get_message_threads(self, data: pd.DataFrame) -> dict:\n \"\"\"Create a dictionary of message threads from the given data.\n Args:\n data (pd.DataFrame): A DataFrame containing the conversation \\\n data with columns:\n - message.sender_id\n - text\n - date\n - message.id\n - is_reply\n - reply_to_id\n Returns:\n dict: A dictionary where the key is the parent message ID and \\\n the value is a list of message IDs in ascending order.\n \"\"\"\n def find_replies(parent_id: int, reply_data: pd.DataFrame) -> List[int]:\n \"\"\"\n Recursively find all replies to a given parent message ID.\n Args:\n parent_id (int): The 
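`text_to_docs` re-chunks each page with a character-level splitter before tagging `page` and `chunk` metadata. The splitter's behavior in isolation:

.. code-block:: python

    from langchain.text_splitter import RecursiveCharacterTextSplitter

    splitter = RecursiveCharacterTextSplitter(
        chunk_size=800,
        chunk_overlap=20,
        separators=["\n\n", "\n", ".", "!", "?", ",", " ", ""],
    )
    chunks = splitter.split_text("A long chat transcript. " * 100)
    # Each chunk is at most ~800 characters, cut at the strongest separator available.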
parent message ID.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/telegram.html"} {"id": "dd93933f64b8-3", "text": "Args:\n parent_id (int): The parent message ID.\n reply_data (pd.DataFrame): A DataFrame containing reply messages.\n Returns:\n list: A list of message IDs that are replies to the parent message ID.\n \"\"\"\n # Find direct replies to the parent message ID\n direct_replies = reply_data[reply_data[\"reply_to_id\"] == parent_id][\n \"message.id\"\n ].tolist()\n # Recursively find replies to the direct replies\n all_replies = []\n for reply_id in direct_replies:\n all_replies += [reply_id] + find_replies(reply_id, reply_data)\n return all_replies\n # Filter out parent messages\n parent_messages = data[~data[\"is_reply\"]]\n # Filter out reply messages and drop rows with NaN in 'reply_to_id'\n reply_messages = data[data[\"is_reply\"]].dropna(subset=[\"reply_to_id\"])\n # Convert 'reply_to_id' to integer\n reply_messages[\"reply_to_id\"] = reply_messages[\"reply_to_id\"].astype(int)\n # Create a dictionary of message threads with parent message IDs as keys and \\\n # lists of reply message IDs as values\n message_threads = {\n parent_id: [parent_id] + find_replies(parent_id, reply_messages)\n for parent_id in parent_messages[\"message.id\"]\n }\n return message_threads\n def _combine_message_texts(\n self, message_threads: Dict[int, List[int]], data: pd.DataFrame\n ) -> str:\n \"\"\"\n Combine the message texts for each parent message ID based \\\n on the list of message threads.\n Args:\n message_threads (dict): A dictionary where the key is the parent message \\", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/telegram.html"} {"id": "dd93933f64b8-4", "text": "message_threads (dict): A dictionary where the key is the parent message \\\n ID and the value is a list of message IDs in ascending order.\n data (pd.DataFrame): A DataFrame containing the conversation data:\n - message.sender_id\n - text\n - date\n - message.id\n - is_reply\n - reply_to_id\n Returns:\n str: A combined string of message texts sorted by date.\n \"\"\"\n combined_text = \"\"\n # Iterate through sorted parent message IDs\n for parent_id, message_ids in message_threads.items():\n # Get the message texts for the message IDs and sort them by date\n message_texts = (\n data[data[\"message.id\"].isin(message_ids)]\n .sort_values(by=\"date\")[\"text\"]\n .tolist()\n )\n message_texts = [str(elem) for elem in message_texts]\n # Combine the message texts\n combined_text += \" \".join(message_texts) + \".\\n\"\n return combined_text.strip()\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n if self.chat_entity is not None:\n try:\n import nest_asyncio\n nest_asyncio.apply()\n asyncio.run(self.fetch_data_from_telegram())\n except ImportError:\n raise ImportError(\n \"\"\"`nest_asyncio` package not found.\n please install with `pip install nest_asyncio`\n \"\"\"\n )\n p = Path(self.file_path)\n with open(p, encoding=\"utf8\") as f:\n d = json.load(f)\n try:\n import pandas as pd\n except ImportError:\n raise ImportError(\n \"\"\"`pandas` package not found. 
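The thread reconstruction above is just a recursive walk over `(message_id, reply_to_id)` pairs. A dependency-free sketch of the same idea:

.. code-block:: python

    from typing import Dict, List, Optional, Tuple

    def build_threads(messages: List[Tuple[int, Optional[int]]]) -> Dict[int, List[int]]:
        """Map each top-level message id to its full thread of reply ids."""
        children: Dict[int, List[int]] = {}
        for msg_id, reply_to in messages:
            if reply_to is not None:
                children.setdefault(reply_to, []).append(msg_id)

        def collect(msg_id: int) -> List[int]:
            # Depth-first: the message itself, then every (transitive) reply.
            return [msg_id] + [i for r in children.get(msg_id, []) for i in collect(r)]

        return {m: collect(m) for m, reply_to in messages if reply_to is None}

    build_threads([(1, None), (2, 1), (3, 2), (4, None)])
    # -> {1: [1, 2, 3], 4: [4]}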
\n please install with `pip install pandas`\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/telegram.html"} {"id": "dd93933f64b8-5", "text": "please install with `pip install pandas`\n \"\"\"\n )\n normalized_messages = pd.json_normalize(d)\n df = pd.DataFrame(normalized_messages)\n message_threads = self._get_message_threads(df)\n combined_texts = self._combine_message_texts(message_threads, df)\n return text_to_docs(combined_texts)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/telegram.html"} {"id": "ff608079498d-0", "text": "Source code for langchain.document_loaders.onedrive_file\nfrom __future__ import annotations\nimport tempfile\nfrom typing import TYPE_CHECKING, List\nfrom pydantic import BaseModel, Field\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\nif TYPE_CHECKING:\n from O365.drive import File\nCHUNK_SIZE = 1024 * 1024 * 5\n[docs]class OneDriveFileLoader(BaseLoader, BaseModel):\n file: File = Field(...)\n[docs] class Config:\n arbitrary_types_allowed = True\n[docs] def load(self) -> List[Document]:\n \"\"\"Load Documents\"\"\"\n with tempfile.TemporaryDirectory() as temp_dir:\n file_path = f\"{temp_dir}/{self.file.name}\"\n self.file.download(to_path=temp_dir, chunk_size=CHUNK_SIZE)\n loader = UnstructuredFileLoader(file_path)\n return loader.load()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/onedrive_file.html"} {"id": "9f911bc38344-0", "text": "Source code for langchain.document_loaders.csv_loader\nimport csv\nfrom typing import Any, Dict, List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.unstructured import (\n UnstructuredFileLoader,\n validate_unstructured_version,\n)\n[docs]class CSVLoader(BaseLoader):\n \"\"\"Loads a CSV file into a list of documents.\n Each document represents one row of the CSV file. Every row is converted into a\n key/value pair and written to a new line in the document's page_content.\n The source for each document loaded from the CSV is set to the value of the\n `file_path` argument for all documents by default.\n You can override this by setting the `source_column` argument to the\n name of a column in the CSV file.\n The source of each document will then be set to the value of the column\n with the name specified in `source_column`.\n Output Example:\n .. code-block:: txt\n column1: value1\n column2: value2\n column3: value3\n \"\"\"\n def __init__(\n self,\n file_path: str,\n source_column: Optional[str] = None,\n csv_args: Optional[Dict] = None,\n encoding: Optional[str] = None,\n ):\n \"\"\"\n Args:\n file_path: The path to the CSV file.\n source_column: The name of the column in the CSV file to use as the source.\n Optional. Defaults to None.\n csv_args: A dictionary of arguments to pass to the csv.DictReader.\n Optional. Defaults to None.\n encoding: The encoding of the CSV file. Optional. Defaults to None.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/csv_loader.html"} {"id": "9f911bc38344-1", "text": "encoding: The encoding of the CSV file. Optional. 
Defaults to None.\n \"\"\"\n self.file_path = file_path\n self.source_column = source_column\n self.encoding = encoding\n self.csv_args = csv_args or {}\n[docs] def load(self) -> List[Document]:\n \"\"\"Load data into document objects.\"\"\"\n docs = []\n with open(self.file_path, newline=\"\", encoding=self.encoding) as csvfile:\n csv_reader = csv.DictReader(csvfile, **self.csv_args) # type: ignore\n for i, row in enumerate(csv_reader):\n content = \"\\n\".join(f\"{k.strip()}: {v.strip()}\" for k, v in row.items())\n try:\n source = (\n row[self.source_column]\n if self.source_column is not None\n else self.file_path\n )\n except KeyError:\n raise ValueError(\n f\"Source column '{self.source_column}' not found in CSV file.\"\n )\n metadata = {\"source\": source, \"row\": i}\n doc = Document(page_content=content, metadata=metadata)\n docs.append(doc)\n return docs\n[docs]class UnstructuredCSVLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load CSV files.\"\"\"\n def __init__(\n self, file_path: str, mode: str = \"single\", **unstructured_kwargs: Any\n ):\n \"\"\"\n Args:\n file_path: The path to the CSV file.\n mode: The mode to use when loading the CSV file.\n Optional. Defaults to \"single\".\n **unstructured_kwargs: Keyword arguments to pass to unstructured.\n \"\"\"\n validate_unstructured_version(min_unstructured_version=\"0.6.8\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/csv_loader.html"} {"id": "9f911bc38344-2", "text": "\"\"\"\n validate_unstructured_version(min_unstructured_version=\"0.6.8\")\n super().__init__(file_path=file_path, mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n from unstructured.partition.csv import partition_csv\n return partition_csv(filename=self.file_path, **self.unstructured_kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/csv_loader.html"} {"id": "01f30a4e32ea-0", "text": "Source code for langchain.document_loaders.weather\n\"\"\"Simple reader that reads weather data from OpenWeatherMap API\"\"\"\nfrom __future__ import annotations\nfrom datetime import datetime\nfrom typing import Iterator, List, Optional, Sequence\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utilities.openweathermap import OpenWeatherMapAPIWrapper\n[docs]class WeatherDataLoader(BaseLoader):\n \"\"\"Weather Reader.\n Reads the forecast & current weather of any location using OpenWeatherMap's free\n API. 
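A usage sketch for `CSVLoader`; the file name, source column, and delimiter are placeholders:

.. code-block:: python

    from langchain.document_loaders import CSVLoader

    loader = CSVLoader(
        file_path="data.csv",         # placeholder path
        source_column="id",           # per-row source instead of the file path
        csv_args={"delimiter": ";"},  # forwarded verbatim to csv.DictReader
    )
    docs = loader.load()  # one Document per row, metadata: {"source": ..., "row": i}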
Check out 'https://openweathermap.org/appid' for more on how to generate a free\n OpenWeatherMap API key.\n \"\"\"\n def __init__(\n self,\n client: OpenWeatherMapAPIWrapper,\n places: Sequence[str],\n ) -> None:\n \"\"\"Initialize with parameters.\"\"\"\n super().__init__()\n self.client = client\n self.places = places\n[docs] @classmethod\n def from_params(\n cls, places: Sequence[str], *, openweathermap_api_key: Optional[str] = None\n ) -> WeatherDataLoader:\n client = OpenWeatherMapAPIWrapper(openweathermap_api_key=openweathermap_api_key)\n return cls(client, places)\n[docs] def lazy_load(\n self,\n ) -> Iterator[Document]:\n \"\"\"Lazily load weather data for the given locations.\"\"\"\n for place in self.places:\n metadata = {\"queried_at\": datetime.now()}\n content = self.client.run(place)\n yield Document(page_content=content, metadata=metadata)\n[docs] def load(\n self,\n ) -> List[Document]:\n \"\"\"Load weather data for the given locations.\"\"\"\n return list(self.lazy_load())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/weather.html"} {"id": "366a309fa44f-0", "text": "Source code for langchain.document_loaders.mastodon\n\"\"\"Mastodon document loader.\"\"\"\nfrom __future__ import annotations\nimport os\nfrom typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Sequence\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nif TYPE_CHECKING:\n import mastodon\ndef _dependable_mastodon_import() -> mastodon:\n try:\n import mastodon\n except ImportError:\n raise ImportError(\n \"Mastodon.py package not found, \"\n \"please install it with `pip install Mastodon.py`\"\n )\n return mastodon\n[docs]class MastodonTootsLoader(BaseLoader):\n \"\"\"Mastodon toots loader.\"\"\"\n def __init__(\n self,\n mastodon_accounts: Sequence[str],\n number_toots: Optional[int] = 100,\n exclude_replies: bool = False,\n access_token: Optional[str] = None,\n api_base_url: str = \"https://mastodon.social\",\n ):\n \"\"\"Instantiate Mastodon toots loader.\n Args:\n mastodon_accounts: The list of Mastodon accounts to query.\n number_toots: How many toots to pull for each account. Default is 100.\n exclude_replies: Whether to exclude reply toots from the load.\n Default is False.\n access_token: An access token if toots are loaded as a Mastodon app. 
Can\n also be specified via the environment variables \"MASTODON_ACCESS_TOKEN\".\n api_base_url: A Mastodon API base URL to talk to, if not using the default.\n Default is \"https://mastodon.social\".\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/mastodon.html"} {"id": "366a309fa44f-1", "text": "Default is \"https://mastodon.social\".\n \"\"\"\n mastodon = _dependable_mastodon_import()\n access_token = access_token or os.environ.get(\"MASTODON_ACCESS_TOKEN\")\n self.api = mastodon.Mastodon(\n access_token=access_token, api_base_url=api_base_url\n )\n self.mastodon_accounts = mastodon_accounts\n self.number_toots = number_toots\n self.exclude_replies = exclude_replies\n[docs] def load(self) -> List[Document]:\n \"\"\"Load toots into documents.\"\"\"\n results: List[Document] = []\n for account in self.mastodon_accounts:\n user = self.api.account_lookup(account)\n toots = self.api.account_statuses(\n user.id,\n only_media=False,\n pinned=False,\n exclude_replies=self.exclude_replies,\n exclude_reblogs=True,\n limit=self.number_toots,\n )\n docs = self._format_toots(toots, user)\n results.extend(docs)\n return results\n def _format_toots(\n self, toots: List[Dict[str, Any]], user_info: dict\n ) -> Iterable[Document]:\n \"\"\"Format toots into documents.\n Adding user info, and selected toot fields into the metadata.\n \"\"\"\n for toot in toots:\n metadata = {\n \"created_at\": toot[\"created_at\"],\n \"user_info\": user_info,\n \"is_reply\": toot[\"in_reply_to_id\"] is not None,\n }\n yield Document(\n page_content=toot[\"content\"],\n metadata=metadata,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/mastodon.html"} {"id": "464dfa656ad4-0", "text": "Source code for langchain.document_loaders.tomarkdown\n\"\"\"Loader that loads HTML to markdown using 2markdown.\"\"\"\nfrom __future__ import annotations\nfrom typing import Iterator, List\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class ToMarkdownLoader(BaseLoader):\n \"\"\"Loader that loads HTML to markdown using 2markdown.\"\"\"\n def __init__(self, url: str, api_key: str):\n \"\"\"Initialize with url and api key.\"\"\"\n self.url = url\n self.api_key = api_key\n[docs] def lazy_load(\n self,\n ) -> Iterator[Document]:\n \"\"\"Lazily load the file.\"\"\"\n response = requests.post(\n \"https://2markdown.com/api/2md\",\n headers={\"X-Api-Key\": self.api_key},\n json={\"url\": self.url},\n )\n text = response.json()[\"article\"]\n metadata = {\"source\": self.url}\n yield Document(page_content=text, metadata=metadata)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load file.\"\"\"\n return list(self.lazy_load())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/tomarkdown.html"} {"id": "f38d65ba34ed-0", "text": "Source code for langchain.document_loaders.trello\n\"\"\"Loader that loads cards from Trello\"\"\"\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, Any, List, Literal, Optional, Tuple\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utils import get_from_env\nif TYPE_CHECKING:\n from trello import Board, Card, TrelloClient\n[docs]class TrelloLoader(BaseLoader):\n \"\"\"Trello loader. 
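Public toots need no token, so the simplest use of `MastodonTootsLoader` is just a list of accounts; the handle below is illustrative:

.. code-block:: python

    from langchain.document_loaders import MastodonTootsLoader

    loader = MastodonTootsLoader(
        mastodon_accounts=["@Gargron@mastodon.social"],  # illustrative account
        number_toots=30,
    )
    docs = loader.load()  # one Document per toot (HTML content, reply/user metadata)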
Reads all cards from a Trello board.\"\"\"\n def __init__(\n self,\n client: TrelloClient,\n board_name: str,\n *,\n include_card_name: bool = True,\n include_comments: bool = True,\n include_checklist: bool = True,\n card_filter: Literal[\"closed\", \"open\", \"all\"] = \"all\",\n extra_metadata: Tuple[str, ...] = (\"due_date\", \"labels\", \"list\", \"closed\"),\n ):\n \"\"\"Initialize Trello loader.\n Args:\n client: Trello API client.\n board_name: The name of the Trello board.\n include_card_name: Whether to include the name of the card in the document.\n include_comments: Whether to include the comments on the card in the\n document.\n include_checklist: Whether to include the checklist on the card in the\n document.\n card_filter: Filter on card status. Valid values are \"closed\", \"open\",\n \"all\".\n extra_metadata: List of additional metadata fields to include as document\n metadata.Valid values are \"due_date\", \"labels\", \"list\", \"closed\".\n \"\"\"\n self.client = client\n self.board_name = board_name\n self.include_card_name = include_card_name", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/trello.html"} {"id": "f38d65ba34ed-1", "text": "self.board_name = board_name\n self.include_card_name = include_card_name\n self.include_comments = include_comments\n self.include_checklist = include_checklist\n self.extra_metadata = extra_metadata\n self.card_filter = card_filter\n[docs] @classmethod\n def from_credentials(\n cls,\n board_name: str,\n *,\n api_key: Optional[str] = None,\n token: Optional[str] = None,\n **kwargs: Any,\n ) -> TrelloLoader:\n \"\"\"Convenience constructor that builds TrelloClient init param for you.\n Args:\n board_name: The name of the Trello board.\n api_key: Trello API key. Can also be specified as environment variable\n TRELLO_API_KEY.\n token: Trello token. Can also be specified as environment variable\n TRELLO_TOKEN.\n include_card_name: Whether to include the name of the card in the document.\n include_comments: Whether to include the comments on the card in the\n document.\n include_checklist: Whether to include the checklist on the card in the\n document.\n card_filter: Filter on card status. Valid values are \"closed\", \"open\",\n \"all\".\n extra_metadata: List of additional metadata fields to include as document\n metadata.Valid values are \"due_date\", \"labels\", \"list\", \"closed\".\n \"\"\"\n try:\n from trello import TrelloClient # type: ignore\n except ImportError as ex:\n raise ImportError(\n \"Could not import trello python package. 
\"\n \"Please install it with `pip install py-trello`.\"\n ) from ex\n api_key = api_key or get_from_env(\"api_key\", \"TRELLO_API_KEY\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/trello.html"} {"id": "f38d65ba34ed-2", "text": "token = token or get_from_env(\"token\", \"TRELLO_TOKEN\")\n client = TrelloClient(api_key=api_key, token=token)\n return cls(client, board_name, **kwargs)\n[docs] def load(self) -> List[Document]:\n \"\"\"Loads all cards from the specified Trello board.\n You can filter the cards, metadata and text included by using the optional\n parameters.\n Returns:\n A list of documents, one for each card in the board.\n \"\"\"\n try:\n from bs4 import BeautifulSoup # noqa: F401\n except ImportError as ex:\n raise ImportError(\n \"`beautifulsoup4` package not found, please run\"\n \" `pip install beautifulsoup4`\"\n ) from ex\n board = self._get_board()\n # Create a dictionary with the list IDs as keys and the list names as values\n list_dict = {list_item.id: list_item.name for list_item in board.list_lists()}\n # Get Cards on the board\n cards = board.get_cards(card_filter=self.card_filter)\n return [self._card_to_doc(card, list_dict) for card in cards]\n def _get_board(self) -> Board:\n # Find the first board with a matching name\n board = next(\n (b for b in self.client.list_boards() if b.name == self.board_name), None\n )\n if not board:\n raise ValueError(f\"Board `{self.board_name}` not found.\")\n return board\n def _card_to_doc(self, card: Card, list_dict: dict) -> Document:\n from bs4 import BeautifulSoup # type: ignore\n text_content = \"\"\n if self.include_card_name:\n text_content = card.name + \"\\n\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/trello.html"} {"id": "f38d65ba34ed-3", "text": "if self.include_card_name:\n text_content = card.name + \"\\n\"\n if card.description.strip():\n text_content += BeautifulSoup(card.description, \"lxml\").get_text()\n if self.include_checklist:\n # Get all the checklist items on the card\n for checklist in card.checklists:\n if checklist.items:\n items = [\n f\"{item['name']}:{item['state']}\" for item in checklist.items\n ]\n text_content += f\"\\n{checklist.name}\\n\" + \"\\n\".join(items)\n if self.include_comments:\n # Get all the comments on the card\n comments = [\n BeautifulSoup(comment[\"data\"][\"text\"], \"lxml\").get_text()\n for comment in card.comments\n ]\n text_content += \"Comments:\" + \"\\n\".join(comments)\n # Default metadata fields\n metadata = {\n \"title\": card.name,\n \"id\": card.id,\n \"url\": card.url,\n }\n # Extra metadata fields. 
Card object is not subscriptable.\n if \"labels\" in self.extra_metadata:\n metadata[\"labels\"] = [label.name for label in card.labels]\n if \"list\" in self.extra_metadata:\n if card.list_id in list_dict:\n metadata[\"list\"] = list_dict[card.list_id]\n if \"closed\" in self.extra_metadata:\n metadata[\"closed\"] = card.closed\n if \"due_date\" in self.extra_metadata:\n metadata[\"due_date\"] = card.due_date\n return Document(page_content=text_content, metadata=metadata)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/trello.html"} {"id": "5dd015a0215e-0", "text": "Source code for langchain.document_loaders.readthedocs\n\"\"\"Loader that loads ReadTheDocs documentation directory dump.\"\"\"\nfrom pathlib import Path\nfrom typing import Any, List, Optional, Tuple, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class ReadTheDocsLoader(BaseLoader):\n \"\"\"Loader that loads ReadTheDocs documentation directory dump.\"\"\"\n def __init__(\n self,\n path: Union[str, Path],\n encoding: Optional[str] = None,\n errors: Optional[str] = None,\n custom_html_tag: Optional[Tuple[str, dict]] = None,\n **kwargs: Optional[Any]\n ):\n \"\"\"\n Initialize ReadTheDocsLoader\n The loader loops over all files under `path` and extract the actual content of\n the files by retrieving main html tags. Default main html tags include\n `
<main id=\"main-content\">` and `<div role=\"main\">
`. You\n can also define your own html tags by passing custom_html_tag, e.g.\n `(\"div\", \"class=main\")`. The loader iterates html tags with the order of\n custom html tags (if exists) and default html tags. If any of the tags is not\n empty, the loop will break and retrieve the content out of that tag.\n Args:\n path: The location of pulled readthedocs folder.\n encoding: The encoding with which to open the documents.\n errors: Specifies how encoding and decoding errors are to be handled\u2014this\n cannot be used in binary mode.\n custom_html_tag: Optional custom html tag to retrieve the content from\n files.\n \"\"\"\n try:\n from bs4 import BeautifulSoup\n except ImportError:\n raise ImportError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/readthedocs.html"} {"id": "5dd015a0215e-1", "text": "from bs4 import BeautifulSoup\n except ImportError:\n raise ImportError(\n \"Could not import python packages. \"\n \"Please install it with `pip install beautifulsoup4`. \"\n )\n try:\n _ = BeautifulSoup(\n \"Parser builder library test.\", **kwargs\n )\n except Exception as e:\n raise ValueError(\"Parsing kwargs do not appear valid\") from e\n self.file_path = Path(path)\n self.encoding = encoding\n self.errors = errors\n self.custom_html_tag = custom_html_tag\n self.bs_kwargs = kwargs\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n docs = []\n for p in self.file_path.rglob(\"*\"):\n if p.is_dir():\n continue\n with open(p, encoding=self.encoding, errors=self.errors) as f:\n text = self._clean_data(f.read())\n metadata = {\"source\": str(p)}\n docs.append(Document(page_content=text, metadata=metadata))\n return docs\n def _clean_data(self, data: str) -> str:\n from bs4 import BeautifulSoup\n soup = BeautifulSoup(data, **self.bs_kwargs)\n # default tags\n html_tags = [\n (\"div\", {\"role\": \"main\"}),\n (\"main\", {\"id\": \"main-content\"}),\n ]\n if self.custom_html_tag is not None:\n html_tags.append(self.custom_html_tag)\n text = None\n # reversed order. 
check the custom one first\n for tag, attrs in html_tags[::-1]:\n text = soup.find(tag, attrs)\n # if found, break\n if text is not None:\n break\n if text is not None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/readthedocs.html"} {"id": "5dd015a0215e-2", "text": "if text is not None:\n break\n if text is not None:\n text = text.get_text()\n else:\n text = \"\"\n # trim empty lines\n return \"\\n\".join([t for t in text.split(\"\\n\") if t])", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/readthedocs.html"} {"id": "c9801c11e9d4-0", "text": "Source code for langchain.document_loaders.pyspark_dataframe\n\"\"\"Load from a Spark Dataframe object\"\"\"\nimport itertools\nimport logging\nimport sys\nfrom typing import TYPE_CHECKING, Any, Iterator, List, Optional, Tuple\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__file__)\nif TYPE_CHECKING:\n from pyspark.sql import SparkSession\n[docs]class PySparkDataFrameLoader(BaseLoader):\n \"\"\"Load PySpark DataFrames\"\"\"\n def __init__(\n self,\n spark_session: Optional[\"SparkSession\"] = None,\n df: Optional[Any] = None,\n page_content_column: str = \"text\",\n fraction_of_memory: float = 0.1,\n ):\n \"\"\"Initialize with a Spark DataFrame object.\"\"\"\n try:\n from pyspark.sql import DataFrame, SparkSession\n except ImportError:\n raise ImportError(\n \"pyspark is not installed. \"\n \"Please install it with `pip install pyspark`\"\n )\n self.spark = (\n spark_session if spark_session else SparkSession.builder.getOrCreate()\n )\n if not isinstance(df, DataFrame):\n raise ValueError(\n f\"Expected data_frame to be a PySpark DataFrame, got {type(df)}\"\n )\n self.df = df\n self.page_content_column = page_content_column\n self.fraction_of_memory = fraction_of_memory\n self.num_rows, self.max_num_rows = self.get_num_rows()\n self.rdd_df = self.df.rdd.map(list)\n self.column_names = self.df.columns\n[docs] def get_num_rows(self) -> Tuple[int, int]:\n \"\"\"Gets the amount of \"feasible\" rows for the DataFrame\"\"\"\n try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/pyspark_dataframe.html"} {"id": "c9801c11e9d4-1", "text": "\"\"\"Gets the amount of \"feasible\" rows for the DataFrame\"\"\"\n try:\n import psutil\n except ImportError as e:\n raise ImportError(\n \"psutil not installed. 
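Because the custom tag is checked before the defaults, a dump whose main content lives in a non-standard container can still be cleaned. A sketch; the directory and tag are placeholders, and extra keyword arguments are passed through to BeautifulSoup:

.. code-block:: python

    from langchain.document_loaders import ReadTheDocsLoader

    loader = ReadTheDocsLoader(
        "rtdocs/",                                           # placeholder dump directory
        custom_html_tag=("article", {"class": "content"}),   # checked before the defaults
        features="html.parser",                              # forwarded to BeautifulSoup
    )
    docs = loader.load()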
Please install it with `pip install psutil`.\"\n ) from e\n row = self.df.limit(1).collect()[0]\n estimated_row_size = sys.getsizeof(row)\n mem_info = psutil.virtual_memory()\n available_memory = mem_info.available\n max_num_rows = int(\n (available_memory / estimated_row_size) * self.fraction_of_memory\n )\n return min(max_num_rows, self.df.count()), max_num_rows\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"A lazy loader for document content.\"\"\"\n for row in self.rdd_df.toLocalIterator():\n metadata = {self.column_names[i]: row[i] for i in range(len(row))}\n text = metadata[self.page_content_column]\n metadata.pop(self.page_content_column)\n yield Document(page_content=text, metadata=metadata)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load from the dataframe.\"\"\"\n if self.df.count() > self.max_num_rows:\n logger.warning(\n f\"The number of DataFrame rows is {self.df.count()}, \"\n f\"but we will only include the amount \"\n f\"of rows that can reasonably fit in memory: {self.num_rows}.\"\n )\n lazy_load_iterator = self.lazy_load()\n return list(itertools.islice(lazy_load_iterator, self.num_rows))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/pyspark_dataframe.html"} {"id": "2dbbbf44a361-0", "text": "Source code for langchain.document_loaders.bilibili\nimport json\nimport re\nimport warnings\nfrom typing import List, Tuple\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class BiliBiliLoader(BaseLoader):\n \"\"\"Loader that loads bilibili transcripts.\"\"\"\n def __init__(self, video_urls: List[str]):\n \"\"\"Initialize with bilibili url.\n Args:\n video_urls: List of bilibili urls.\n \"\"\"\n self.video_urls = video_urls\n[docs] def load(self) -> List[Document]:\n \"\"\"Load Documents from bilibili url.\"\"\"\n results = []\n for url in self.video_urls:\n transcript, video_info = self._get_bilibili_subs_and_info(url)\n doc = Document(page_content=transcript, metadata=video_info)\n results.append(doc)\n return results\n def _get_bilibili_subs_and_info(self, url: str) -> Tuple[str, dict]:\n try:\n from bilibili_api import sync, video\n except ImportError:\n raise ImportError(\n \"requests package not found, please install it with \"\n \"`pip install bilibili-api-python`\"\n )\n bvid = re.search(r\"BV\\w+\", url)\n if bvid is not None:\n v = video.Video(bvid=bvid.group())\n else:\n aid = re.search(r\"av[0-9]+\", url)\n if aid is not None:\n try:\n v = video.Video(aid=int(aid.group()[2:]))\n except AttributeError:\n raise ValueError(f\"{url} is not bilibili url.\")\n else:\n raise ValueError(f\"{url} is not bilibili url.\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/bilibili.html"} {"id": "2dbbbf44a361-1", "text": "else:\n raise ValueError(f\"{url} is not bilibili url.\")\n video_info = sync(v.get_info())\n video_info.update({\"url\": url})\n # Get subtitle url\n subtitle = video_info.pop(\"subtitle\")\n sub_list = subtitle[\"list\"]\n if sub_list:\n sub_url = sub_list[0][\"subtitle_url\"]\n result = requests.get(sub_url)\n raw_sub_titles = json.loads(result.content)[\"body\"]\n raw_transcript = \" \".join([c[\"content\"] for c in raw_sub_titles])\n raw_transcript_with_meta_info = (\n f\"Video Title: {video_info['title']},\"\n f\"description: {video_info['desc']}\\n\\n\"\n f\"Transcript: {raw_transcript}\"\n )\n return raw_transcript_with_meta_info, video_info\n else:\n raw_transcript = 
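`get_num_rows` caps how much of the DataFrame is materialized: it sizes one sampled row, then scales available memory by `fraction_of_memory`. The arithmetic in isolation (the tuple is a stand-in for a sampled Spark row):

.. code-block:: python

    import sys

    import psutil

    estimated_row_size = sys.getsizeof(("some text", 123))  # stand-in for df.limit(1).collect()[0]
    available_memory = psutil.virtual_memory().available
    max_num_rows = int(available_memory / estimated_row_size * 0.1)  # fraction_of_memory = 0.1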
\"\"\n warnings.warn(\n f\"\"\"\n No subtitles found for video: {url}.\n Return Empty transcript.\n \"\"\"\n )\n return raw_transcript, video_info", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/bilibili.html"} {"id": "b93f5033593c-0", "text": "Source code for langchain.document_loaders.generic\nfrom __future__ import annotations\nfrom pathlib import Path\nfrom typing import Iterator, List, Literal, Optional, Sequence, Union\nfrom langchain.document_loaders.base import BaseBlobParser, BaseLoader\nfrom langchain.document_loaders.blob_loaders import BlobLoader, FileSystemBlobLoader\nfrom langchain.document_loaders.parsers.registry import get_parser\nfrom langchain.schema import Document\nfrom langchain.text_splitter import TextSplitter\n_PathLike = Union[str, Path]\nDEFAULT = Literal[\"default\"]\n[docs]class GenericLoader(BaseLoader):\n \"\"\"A generic document loader.\n A generic document loader that allows combining an arbitrary blob loader with\n a blob parser.\n Examples:\n .. code-block:: python\n from langchain.document_loaders import GenericLoader\n from langchain.document_loaders.blob_loaders import FileSystemBlobLoader\n loader = GenericLoader.from_filesystem(\n path=\"path/to/directory\",\n glob=\"**/[!.]*\",\n suffixes=[\".pdf\"],\n show_progress=True,\n )\n docs = loader.lazy_load()\n next(docs)\n Example instantiations to change which files are loaded:\n ... code-block:: python\n # Recursively load all text files in a directory.\n loader = GenericLoader.from_filesystem(\"/path/to/dir\", glob=\"**/*.txt\")\n # Recursively load all non-hidden files in a directory.\n loader = GenericLoader.from_filesystem(\"/path/to/dir\", glob=\"**/[!.]*\")\n # Load all files in a directory without recursion.\n loader = GenericLoader.from_filesystem(\"/path/to/dir\", glob=\"*\")\n Example instantiations to change which parser is used:\n ... code-block:: python\n from langchain.document_loaders.parsers.pdf import PyPDFParser", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/generic.html"} {"id": "b93f5033593c-1", "text": "from langchain.document_loaders.parsers.pdf import PyPDFParser\n # Recursively load all text files in a directory.\n loader = GenericLoader.from_filesystem(\n \"/path/to/dir\",\n glob=\"**/*.pdf\",\n parser=PyPDFParser()\n )\n \"\"\"\n def __init__(\n self,\n blob_loader: BlobLoader,\n blob_parser: BaseBlobParser,\n ) -> None:\n \"\"\"A generic document loader.\n Args:\n blob_loader: A blob loader which knows how to yield blobs\n blob_parser: A blob parser which knows how to parse blobs into documents\n \"\"\"\n self.blob_loader = blob_loader\n self.blob_parser = blob_parser\n[docs] def lazy_load(\n self,\n ) -> Iterator[Document]:\n \"\"\"Load documents lazily. Use this when working at a large scale.\"\"\"\n for blob in self.blob_loader.yield_blobs():\n yield from self.blob_parser.lazy_parse(blob)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load all documents.\"\"\"\n return list(self.lazy_load())\n[docs] def load_and_split(\n self, text_splitter: Optional[TextSplitter] = None\n ) -> List[Document]:\n \"\"\"Load all documents and split them into sentences.\"\"\"\n raise NotImplementedError(\n \"Loading and splitting is not yet implemented for generic loaders. \"\n \"When they will be implemented they will be added via the initializer. 
\"\n \"This method should not be used going forward.\"\n )\n[docs] @classmethod\n def from_filesystem(\n cls,\n path: _PathLike,\n *,\n glob: str = \"**/[!.]*\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/generic.html"} {"id": "b93f5033593c-2", "text": "*,\n glob: str = \"**/[!.]*\",\n suffixes: Optional[Sequence[str]] = None,\n show_progress: bool = False,\n parser: Union[DEFAULT, BaseBlobParser] = \"default\",\n ) -> GenericLoader:\n \"\"\"Create a generic document loader using a filesystem blob loader.\n Args:\n path: The path to the directory to load documents from.\n glob: The glob pattern to use to find documents.\n suffixes: The suffixes to use to filter documents. If None, all files\n matching the glob will be loaded.\n show_progress: Whether to show a progress bar or not (requires tqdm).\n Proxies to the file system loader.\n parser: A blob parser which knows how to parse blobs into documents\n Returns:\n A generic document loader.\n \"\"\"\n blob_loader = FileSystemBlobLoader(\n path, glob=glob, suffixes=suffixes, show_progress=show_progress\n )\n if isinstance(parser, str):\n blob_parser = get_parser(parser)\n else:\n blob_parser = parser\n return cls(blob_loader, blob_parser)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/generic.html"} {"id": "c9f6f54a7038-0", "text": "Source code for langchain.document_loaders.larksuite\n\"\"\"Loads LarkSuite (FeiShu) document json dump.\"\"\"\nimport json\nimport urllib.request\nfrom typing import Any, Iterator, List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class LarkSuiteDocLoader(BaseLoader):\n \"\"\"Loads LarkSuite (FeiShu) document.\"\"\"\n def __init__(self, domain: str, access_token: str, document_id: str):\n \"\"\"Initialize with domain, access_token (tenant / user), and document_id.\n Args:\n domain: The domain to load the LarkSuite.\n access_token: The access_token to use.\n document_id: The document_id to load.\n \"\"\"\n self.domain = domain\n self.access_token = access_token\n self.document_id = document_id\n def _get_larksuite_api_json_data(self, api_url: str) -> Any:\n \"\"\"Get LarkSuite (FeiShu) API response json data.\"\"\"\n headers = {\"Authorization\": f\"Bearer {self.access_token}\"}\n request = urllib.request.Request(api_url, headers=headers)\n with urllib.request.urlopen(request) as response:\n json_data = json.loads(response.read().decode())\n return json_data\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"Lazy load LarkSuite (FeiShu) document.\"\"\"\n api_url_prefix = f\"{self.domain}/open-apis/docx/v1/documents\"\n metadata_json = self._get_larksuite_api_json_data(\n f\"{api_url_prefix}/{self.document_id}\"\n )\n raw_content_json = self._get_larksuite_api_json_data(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/larksuite.html"} {"id": "c9f6f54a7038-1", "text": ")\n raw_content_json = self._get_larksuite_api_json_data(\n f\"{api_url_prefix}/{self.document_id}/raw_content\"\n )\n text = raw_content_json[\"data\"][\"content\"]\n metadata = {\n \"document_id\": self.document_id,\n \"revision_id\": metadata_json[\"data\"][\"document\"][\"revision_id\"],\n \"title\": metadata_json[\"data\"][\"document\"][\"title\"],\n }\n yield Document(page_content=text, metadata=metadata)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load LarkSuite (FeiShu) document.\"\"\"\n return list(self.lazy_load())", 
"source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/larksuite.html"} {"id": "986089da03de-0", "text": "Source code for langchain.document_loaders.iugu\n\"\"\"Loader that fetches data from IUGU\"\"\"\nimport json\nimport urllib.request\nfrom typing import List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utils import get_from_env, stringify_dict\nIUGU_ENDPOINTS = {\n \"invoices\": \"https://api.iugu.com/v1/invoices\",\n \"customers\": \"https://api.iugu.com/v1/customers\",\n \"charges\": \"https://api.iugu.com/v1/charges\",\n \"subscriptions\": \"https://api.iugu.com/v1/subscriptions\",\n \"plans\": \"https://api.iugu.com/v1/plans\",\n}\n[docs]class IuguLoader(BaseLoader):\n \"\"\"Loader that fetches data from IUGU.\"\"\"\n def __init__(self, resource: str, api_token: Optional[str] = None) -> None:\n \"\"\"Initialize the IUGU resource.\n Args:\n resource: The name of the resource to fetch.\n api_token: The IUGU API token to use.\n \"\"\"\n self.resource = resource\n api_token = api_token or get_from_env(\"api_token\", \"IUGU_API_TOKEN\")\n self.headers = {\"Authorization\": f\"Bearer {api_token}\"}\n def _make_request(self, url: str) -> List[Document]:\n request = urllib.request.Request(url, headers=self.headers)\n with urllib.request.urlopen(request) as response:\n json_data = json.loads(response.read().decode())\n text = stringify_dict(json_data)\n metadata = {\"source\": url}\n return [Document(page_content=text, metadata=metadata)]\n def _get_resource(self) -> List[Document]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/iugu.html"} {"id": "986089da03de-1", "text": "def _get_resource(self) -> List[Document]:\n endpoint = IUGU_ENDPOINTS.get(self.resource)\n if endpoint is None:\n return []\n return self._make_request(endpoint)\n[docs] def load(self) -> List[Document]:\n return self._get_resource()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/iugu.html"} {"id": "95b1487a42a2-0", "text": "Source code for langchain.document_loaders.gcs_file\n\"\"\"Load documents from a GCS file.\"\"\"\nimport os\nimport tempfile\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\n[docs]class GCSFileLoader(BaseLoader):\n \"\"\"Load Documents from a GCS file.\"\"\"\n def __init__(self, project_name: str, bucket: str, blob: str):\n \"\"\"Initialize with bucket and key name.\n Args:\n project_name: The name of the project to load\n bucket: The name of the GCS bucket.\n blob: The name of the GCS blob to load.\n \"\"\"\n self.bucket = bucket\n self.blob = blob\n self.project_name = project_name\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n from google.cloud import storage\n except ImportError:\n raise ImportError(\n \"Could not import google-cloud-storage python package. 
\"\n \"Please install it with `pip install google-cloud-storage`.\"\n )\n # Initialise a client\n storage_client = storage.Client(self.project_name)\n # Create a bucket object for our bucket\n bucket = storage_client.get_bucket(self.bucket)\n # Create a blob object from the filepath\n blob = bucket.blob(self.blob)\n with tempfile.TemporaryDirectory() as temp_dir:\n file_path = f\"{temp_dir}/{self.blob}\"\n os.makedirs(os.path.dirname(file_path), exist_ok=True)\n # Download the file to a destination\n blob.download_to_filename(file_path)\n loader = UnstructuredFileLoader(file_path)\n return loader.load()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/gcs_file.html"} {"id": "7ac9b3c463d9-0", "text": "Source code for langchain.document_loaders.apify_dataset\nfrom typing import Any, Callable, Dict, List\nfrom pydantic import BaseModel, root_validator\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class ApifyDatasetLoader(BaseLoader, BaseModel):\n \"\"\"Loading Documents from Apify datasets.\"\"\"\n apify_client: Any\n \"\"\"An instance of the ApifyClient class from the apify-client Python package.\"\"\"\n dataset_id: str\n \"\"\"The ID of the dataset on the Apify platform.\"\"\"\n dataset_mapping_function: Callable[[Dict], Document]\n \"\"\"A custom function that takes a single dictionary (an Apify dataset item)\n and converts it to an instance of the Document class.\"\"\"\n def __init__(\n self, dataset_id: str, dataset_mapping_function: Callable[[Dict], Document]\n ):\n \"\"\"Initialize the loader with an Apify dataset ID and a mapping function.\n Args:\n dataset_id (str): The ID of the dataset on the Apify platform.\n dataset_mapping_function (Callable): A function that takes a single\n dictionary (an Apify dataset item) and converts it to an instance\n of the Document class.\n \"\"\"\n super().__init__(\n dataset_id=dataset_id, dataset_mapping_function=dataset_mapping_function\n )\n[docs] @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate environment.\n Args:\n values: The values to validate.\n \"\"\"\n try:\n from apify_client import ApifyClient\n values[\"apify_client\"] = ApifyClient()\n except ImportError:\n raise ImportError(\n \"Could not import apify-client Python package. \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/apify_dataset.html"} {"id": "7ac9b3c463d9-1", "text": "raise ImportError(\n \"Could not import apify-client Python package. 
\"\n \"Please install it with `pip install apify-client`.\"\n )\n return values\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n dataset_items = (\n self.apify_client.dataset(self.dataset_id).list_items(clean=True).items\n )\n return list(map(self.dataset_mapping_function, dataset_items))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/apify_dataset.html"} {"id": "4520dc817fd9-0", "text": "Source code for langchain.document_loaders.word_document\n\"\"\"Loader that loads word documents.\"\"\"\nimport os\nimport tempfile\nfrom abc import ABC\nfrom typing import List\nfrom urllib.parse import urlparse\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\n[docs]class Docx2txtLoader(BaseLoader, ABC):\n \"\"\"Loads a DOCX with docx2txt and chunks at character level.\n Defaults to check for local file, but if the file is a web path, it will download it\n to a temporary file, and use that, then clean up the temporary file after completion\n \"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with file path.\"\"\"\n self.file_path = file_path\n if \"~\" in self.file_path:\n self.file_path = os.path.expanduser(self.file_path)\n # If the file is a web path, download it to a temporary file, and use that\n if not os.path.isfile(self.file_path) and self._is_valid_url(self.file_path):\n r = requests.get(self.file_path)\n if r.status_code != 200:\n raise ValueError(\n \"Check the url of your file; returned status code %s\"\n % r.status_code\n )\n self.web_path = self.file_path\n self.temp_file = tempfile.NamedTemporaryFile()\n self.temp_file.write(r.content)\n self.file_path = self.temp_file.name\n elif not os.path.isfile(self.file_path):\n raise ValueError(\"File path %s is not a valid file or url\" % self.file_path)\n def __del__(self) -> None:\n if hasattr(self, \"temp_file\"):\n self.temp_file.close()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/word_document.html"} {"id": "4520dc817fd9-1", "text": "if hasattr(self, \"temp_file\"):\n self.temp_file.close()\n[docs] def load(self) -> List[Document]:\n \"\"\"Load given path as single page.\"\"\"\n import docx2txt\n return [\n Document(\n page_content=docx2txt.process(self.file_path),\n metadata={\"source\": self.file_path},\n )\n ]\n @staticmethod\n def _is_valid_url(url: str) -> bool:\n \"\"\"Check if the url is valid.\"\"\"\n parsed = urlparse(url)\n return bool(parsed.netloc) and bool(parsed.scheme)\n[docs]class UnstructuredWordDocumentLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load word documents.\"\"\"\n def _get_elements(self) -> List:\n from unstructured.__version__ import __version__ as __unstructured_version__\n from unstructured.file_utils.filetype import FileType, detect_filetype\n unstructured_version = tuple(\n [int(x) for x in __unstructured_version__.split(\".\")]\n )\n # NOTE(MthwRobinson) - magic will raise an import error if the libmagic\n # system dependency isn't installed. 
If it's not installed, we'll just\n # check the file extension\n try:\n import magic # noqa: F401\n is_doc = detect_filetype(self.file_path) == FileType.DOC\n except ImportError:\n _, extension = os.path.splitext(str(self.file_path))\n is_doc = extension == \".doc\"\n if is_doc and unstructured_version < (0, 4, 11):\n raise ValueError(\n f\"You are on unstructured version {__unstructured_version__}. \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/word_document.html"} {"id": "4520dc817fd9-2", "text": "f\"You are on unstructured version {__unstructured_version__}. \"\n \"Partitioning .doc files is only supported in unstructured>=0.4.11. \"\n \"Please upgrade the unstructured package and try again.\"\n )\n if is_doc:\n from unstructured.partition.doc import partition_doc\n return partition_doc(filename=self.file_path, **self.unstructured_kwargs)\n else:\n from unstructured.partition.docx import partition_docx\n return partition_docx(filename=self.file_path, **self.unstructured_kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/word_document.html"} {"id": "b3a67996d2b3-0", "text": "Source code for langchain.document_loaders.slack_directory\n\"\"\"Loader for documents from a Slack export.\"\"\"\nimport json\nimport zipfile\nfrom pathlib import Path\nfrom typing import Dict, List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class SlackDirectoryLoader(BaseLoader):\n \"\"\"Loader for loading documents from a Slack directory dump.\"\"\"\n def __init__(self, zip_path: str, workspace_url: Optional[str] = None):\n \"\"\"Initialize the SlackDirectoryLoader.\n Args:\n zip_path (str): The path to the Slack directory dump zip file.\n workspace_url (Optional[str]): The Slack workspace URL.\n Including the URL will turn\n sources into links. 
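A minimal usage sketch for the Docx2txtLoader above, assuming `docx2txt` is installed; the path is a hypothetical placeholder (a web URL would be downloaded to a temporary file first):

.. code-block:: python

    from langchain.document_loaders import Docx2txtLoader

    docs = Docx2txtLoader("~/reports/summary.docx").load()  # one Document per file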
Defaults to None.\n \"\"\"\n self.zip_path = Path(zip_path)\n self.workspace_url = workspace_url\n self.channel_id_map = self._get_channel_id_map(self.zip_path)\n @staticmethod\n def _get_channel_id_map(zip_path: Path) -> Dict[str, str]:\n \"\"\"Get a dictionary mapping channel names to their respective IDs.\"\"\"\n with zipfile.ZipFile(zip_path, \"r\") as zip_file:\n try:\n with zip_file.open(\"channels.json\", \"r\") as f:\n channels = json.load(f)\n return {channel[\"name\"]: channel[\"id\"] for channel in channels}\n except KeyError:\n return {}\n[docs] def load(self) -> List[Document]:\n \"\"\"Load and return documents from the Slack directory dump.\"\"\"\n docs = []\n with zipfile.ZipFile(self.zip_path, \"r\") as zip_file:\n for channel_path in zip_file.namelist():\n channel_name = Path(channel_path).parent.name\n if not channel_name:\n continue", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/slack_directory.html"} {"id": "b3a67996d2b3-1", "text": "if not channel_name:\n continue\n if channel_path.endswith(\".json\"):\n messages = self._read_json(zip_file, channel_path)\n for message in messages:\n document = self._convert_message_to_document(\n message, channel_name\n )\n docs.append(document)\n return docs\n def _read_json(self, zip_file: zipfile.ZipFile, file_path: str) -> List[dict]:\n \"\"\"Read JSON data from a zip subfile.\"\"\"\n with zip_file.open(file_path, \"r\") as f:\n data = json.load(f)\n return data\n def _convert_message_to_document(\n self, message: dict, channel_name: str\n ) -> Document:\n \"\"\"\n Convert a message to a Document object.\n Args:\n message (dict): A message in the form of a dictionary.\n channel_name (str): The name of the channel the message belongs to.\n Returns:\n Document: A Document object representing the message.\n \"\"\"\n text = message.get(\"text\", \"\")\n metadata = self._get_message_metadata(message, channel_name)\n return Document(\n page_content=text,\n metadata=metadata,\n )\n def _get_message_metadata(self, message: dict, channel_name: str) -> dict:\n \"\"\"Create and return metadata for a given message and channel.\"\"\"\n timestamp = message.get(\"ts\", \"\")\n user = message.get(\"user\", \"\")\n source = self._get_message_source(channel_name, user, timestamp)\n return {\n \"source\": source,\n \"channel\": channel_name,\n \"timestamp\": timestamp,\n \"user\": user,\n }", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/slack_directory.html"} {"id": "b3a67996d2b3-2", "text": "\"timestamp\": timestamp,\n \"user\": user,\n }\n def _get_message_source(self, channel_name: str, user: str, timestamp: str) -> str:\n \"\"\"\n Get the message source as a string.\n Args:\n channel_name (str): The name of the channel the message belongs to.\n user (str): The user ID who sent the message.\n timestamp (str): The timestamp of the message.\n Returns:\n str: The message source.\n \"\"\"\n if self.workspace_url:\n channel_id = self.channel_id_map.get(channel_name, \"\")\n return (\n f\"{self.workspace_url}/archives/{channel_id}\"\n + f\"/p{timestamp.replace('.', '')}\"\n )\n else:\n return f\"{channel_name} - {user} - {timestamp}\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/slack_directory.html"} {"id": "4118594e6cd5-0", "text": "Source code for langchain.document_loaders.college_confidential\n\"\"\"Loader that loads College Confidential.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import 
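A minimal usage sketch for the SlackDirectoryLoader above; the zip path and workspace URL are hypothetical placeholders:

.. code-block:: python

    from langchain.document_loaders import SlackDirectoryLoader

    loader = SlackDirectoryLoader(
        zip_path="slack_export.zip",                    # export dump from Slack
        workspace_url="https://myworkspace.slack.com",  # optional; turns sources into links
    )
    docs = loader.load()  # one Document per message, channel/user/timestamp in metadata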
Document\nfrom langchain.document_loaders.web_base import WebBaseLoader\n[docs]class CollegeConfidentialLoader(WebBaseLoader):\n \"\"\"Loader that loads College Confidential webpages.\"\"\"\n[docs] def load(self) -> List[Document]:\n \"\"\"Load webpages as Documents.\"\"\"\n soup = self.scrape()\n text = soup.select_one(\"main[class='skin-handler']\").text\n metadata = {\"source\": self.web_path}\n return [Document(page_content=text, metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/college_confidential.html"} {"id": "b2c523814b3c-0", "text": "Source code for langchain.document_loaders.max_compute\nfrom __future__ import annotations\nfrom typing import Any, Iterator, List, Optional, Sequence\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utilities.max_compute import MaxComputeAPIWrapper\n[docs]class MaxComputeLoader(BaseLoader):\n \"\"\"Loads a query result from Alibaba Cloud MaxCompute table into documents.\"\"\"\n def __init__(\n self,\n query: str,\n api_wrapper: MaxComputeAPIWrapper,\n *,\n page_content_columns: Optional[Sequence[str]] = None,\n metadata_columns: Optional[Sequence[str]] = None,\n ):\n \"\"\"Initialize Alibaba Cloud MaxCompute document loader.\n Args:\n query: SQL query to execute.\n api_wrapper: MaxCompute API wrapper.\n page_content_columns: The columns to write into the `page_content` of the\n Document. If unspecified, all columns will be written to `page_content`.\n metadata_columns: The columns to write into the `metadata` of the Document.\n If unspecified, all columns not added to `page_content` will be written.\n \"\"\"\n self.query = query\n self.api_wrapper = api_wrapper\n self.page_content_columns = page_content_columns\n self.metadata_columns = metadata_columns\n[docs] @classmethod\n def from_params(\n cls,\n query: str,\n endpoint: str,\n project: str,\n *,\n access_id: Optional[str] = None,\n secret_access_key: Optional[str] = None,\n **kwargs: Any,\n ) -> MaxComputeLoader:\n \"\"\"Convenience constructor that builds the MaxCompute API wrapper from\n given parameters.\n Args:\n query: SQL query to execute.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/max_compute.html"} {"id": "b2c523814b3c-1", "text": "given parameters.\n Args:\n query: SQL query to execute.\n endpoint: MaxCompute endpoint.\n project: A project is a basic organizational unit of MaxCompute, which is\n similar to a database.\n access_id: MaxCompute access ID. Should be passed in directly or set as the\n environment variable `MAX_COMPUTE_ACCESS_ID`.\n secret_access_key: MaxCompute secret access key. 
Should be passed in\n directly or set as the environment variable\n `MAX_COMPUTE_SECRET_ACCESS_KEY`.\n \"\"\"\n api_wrapper = MaxComputeAPIWrapper.from_params(\n endpoint, project, access_id=access_id, secret_access_key=secret_access_key\n )\n return cls(query, api_wrapper, **kwargs)\n[docs] def lazy_load(self) -> Iterator[Document]:\n for row in self.api_wrapper.query(self.query):\n if self.page_content_columns:\n page_content_data = {\n k: v for k, v in row.items() if k in self.page_content_columns\n }\n else:\n page_content_data = row\n page_content = \"\\n\".join(f\"{k}: {v}\" for k, v in page_content_data.items())\n if self.metadata_columns:\n metadata = {k: v for k, v in row.items() if k in self.metadata_columns}\n else:\n metadata = {k: v for k, v in row.items() if k not in page_content_data}\n yield Document(page_content=page_content, metadata=metadata)\n[docs] def load(self) -> List[Document]:\n return list(self.lazy_load())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/max_compute.html"} {"id": "4e34580b2294-0", "text": "Source code for langchain.document_loaders.github\nfrom abc import ABC\nfrom datetime import datetime\nfrom typing import Dict, Iterator, List, Literal, Optional, Union\nimport requests\nfrom pydantic import BaseModel, root_validator, validator\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utils import get_from_dict_or_env\n[docs]class BaseGitHubLoader(BaseLoader, BaseModel, ABC):\n \"\"\"Load issues of a GitHub repository.\"\"\"\n repo: str\n \"\"\"Name of repository\"\"\"\n access_token: str\n \"\"\"Personal access token - see https://github.com/settings/tokens?type=beta\"\"\"\n[docs] @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that access token exists in environment.\"\"\"\n values[\"access_token\"] = get_from_dict_or_env(\n values, \"access_token\", \"GITHUB_PERSONAL_ACCESS_TOKEN\"\n )\n return values\n @property\n def headers(self) -> Dict[str, str]:\n return {\n \"Accept\": \"application/vnd.github+json\",\n \"Authorization\": f\"Bearer {self.access_token}\",\n }\n[docs]class GitHubIssuesLoader(BaseGitHubLoader):\n \"\"\"Load issues of a GitHub repository.\"\"\"\n include_prs: bool = True\n \"\"\"If True, include Pull Requests in results, otherwise ignore them.\"\"\"\n milestone: Union[int, Literal[\"*\", \"none\"], None] = None\n \"\"\"If integer is passed, it should be a milestone's number field.\n If the string '*' is passed, issues with any milestone are accepted.\n If the string 'none' is passed, issues without milestones are returned.\n \"\"\"\n state: Optional[Literal[\"open\", \"closed\", \"all\"]] = None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/github.html"} {"id": "4e34580b2294-1", "text": "state: Optional[Literal[\"open\", \"closed\", \"all\"]] = None\n \"\"\"Filter on issue state. Can be one of: 'open', 'closed', 'all'.\"\"\"\n assignee: Optional[str] = None\n \"\"\"Filter on assigned user. Pass 'none' for no user and '*' for any user.\"\"\"\n creator: Optional[str] = None\n \"\"\"Filter on the user that created the issue.\"\"\"\n mentioned: Optional[str] = None\n \"\"\"Filter on a user that's mentioned in the issue.\"\"\"\n labels: Optional[List[str]] = None\n \"\"\"Label names to filter on. 
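A minimal usage sketch for the MaxComputeLoader above using its from_params constructor; the query, endpoint, and project are hypothetical, and credentials fall back to the MAX_COMPUTE_ACCESS_ID / MAX_COMPUTE_SECRET_ACCESS_KEY environment variables when not passed:

.. code-block:: python

    from langchain.document_loaders import MaxComputeLoader

    loader = MaxComputeLoader.from_params(
        query="SELECT id, content FROM docs_table",  # hypothetical query
        endpoint="<maxcompute-endpoint>",
        project="<project-name>",
        page_content_columns=["content"],  # remaining columns become metadata
    )
    docs = loader.load()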
Example: bug,ui,@high.\"\"\"\n sort: Optional[Literal[\"created\", \"updated\", \"comments\"]] = None\n \"\"\"What to sort results by. Can be one of: 'created', 'updated', 'comments'.\n Default is 'created'.\"\"\"\n direction: Optional[Literal[\"asc\", \"desc\"]] = None\n \"\"\"The direction to sort the results by. Can be one of: 'asc', 'desc'.\"\"\"\n since: Optional[str] = None\n \"\"\"Only show notifications updated after the given time.\n This is a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.\"\"\"\n[docs] @validator(\"since\")\n def validate_since(cls, v: Optional[str]) -> Optional[str]:\n if v:\n try:\n datetime.strptime(v, \"%Y-%m-%dT%H:%M:%SZ\")\n except ValueError:\n raise ValueError(\n \"Invalid value for 'since'. Expected a date string in \"\n f\"YYYY-MM-DDTHH:MM:SSZ format. Received: {v}\"\n )\n return v\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/github.html"} {"id": "4e34580b2294-2", "text": "[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"\n Get issues of a GitHub repository.\n Returns:\n A list of Documents with attributes:\n - page_content\n - metadata\n - url\n - title\n - creator\n - created_at\n - last_update_time\n - closed_time\n - number of comments\n - state\n - labels\n - assignee\n - assignees\n - milestone\n - locked\n - number\n - is_pull_request\n \"\"\"\n url: Optional[str] = self.url\n while url:\n response = requests.get(url, headers=self.headers)\n response.raise_for_status()\n issues = response.json()\n for issue in issues:\n doc = self.parse_issue(issue)\n if not self.include_prs and doc.metadata[\"is_pull_request\"]:\n continue\n yield doc\n if response.links and response.links.get(\"next\"):\n url = response.links[\"next\"][\"url\"]\n else:\n url = None\n[docs] def load(self) -> List[Document]:\n \"\"\"\n Get issues of a GitHub repository.\n Returns:\n A list of Documents with attributes:\n - page_content\n - metadata\n - url\n - title\n - creator\n - created_at\n - last_update_time\n - closed_time\n - number of comments\n - state\n - labels\n - assignee\n - assignees\n - milestone\n - locked\n - number\n - is_pull_request\n \"\"\"\n return list(self.lazy_load())\n[docs] def parse_issue(self, issue: dict) -> Document:\n \"\"\"Create Document objects from a list of GitHub issues.\"\"\"\n metadata = {", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/github.html"} {"id": "4e34580b2294-3", "text": "\"\"\"Create Document objects from a list of GitHub issues.\"\"\"\n metadata = {\n \"url\": issue[\"html_url\"],\n \"title\": issue[\"title\"],\n \"creator\": issue[\"user\"][\"login\"],\n \"created_at\": issue[\"created_at\"],\n \"comments\": issue[\"comments\"],\n \"state\": issue[\"state\"],\n \"labels\": [label[\"name\"] for label in issue[\"labels\"]],\n \"assignee\": issue[\"assignee\"][\"login\"] if issue[\"assignee\"] else None,\n \"milestone\": issue[\"milestone\"][\"title\"] if issue[\"milestone\"] else None,\n \"locked\": issue[\"locked\"],\n \"number\": issue[\"number\"],\n \"is_pull_request\": \"pull_request\" in issue,\n }\n content = issue[\"body\"] if issue[\"body\"] is not None else \"\"\n return Document(page_content=content, metadata=metadata)\n @property\n def query_params(self) -> str:\n \"\"\"Create query parameters for GitHub API.\"\"\"\n labels = \",\".join(self.labels) if self.labels else self.labels\n query_params_dict = {\n \"milestone\": self.milestone,\n \"state\": 
self.state,\n \"assignee\": self.assignee,\n \"creator\": self.creator,\n \"mentioned\": self.mentioned,\n \"labels\": labels,\n \"sort\": self.sort,\n \"direction\": self.direction,\n \"since\": self.since,\n }\n query_params_list = [\n f\"{k}={v}\" for k, v in query_params_dict.items() if v is not None\n ]\n query_params = \"&\".join(query_params_list)\n return query_params\n @property\n def url(self) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/github.html"} {"id": "4e34580b2294-4", "text": "return query_params\n @property\n def url(self) -> str:\n \"\"\"Create URL for GitHub API.\"\"\"\n return f\"https://api.github.com/repos/{self.repo}/issues?{self.query_params}\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/github.html"} {"id": "67c4b489cb01-0", "text": "Source code for langchain.document_loaders.rst\n\"\"\"Loader that loads RST files.\"\"\"\nfrom typing import Any, List\nfrom langchain.document_loaders.unstructured import (\n UnstructuredFileLoader,\n validate_unstructured_version,\n)\n[docs]class UnstructuredRSTLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load RST files.\"\"\"\n def __init__(\n self, file_path: str, mode: str = \"single\", **unstructured_kwargs: Any\n ):\n validate_unstructured_version(min_unstructured_version=\"0.7.5\")\n super().__init__(file_path=file_path, mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n from unstructured.partition.rst import partition_rst\n return partition_rst(filename=self.file_path, **self.unstructured_kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/rst.html"} {"id": "3b95cfb17391-0", "text": "Source code for langchain.document_loaders.recursive_url_loader\nfrom typing import Iterator, List, Optional, Set\nfrom urllib.parse import urlparse\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class RecursiveUrlLoader(BaseLoader):\n \"\"\"Loader that loads all child links from a given url.\"\"\"\n def __init__(self, url: str, exclude_dirs: Optional[str] = None) -> None:\n \"\"\"Initialize with URL to crawl and any sub-directories to exclude.\"\"\"\n self.url = url\n self.exclude_dirs = exclude_dirs\n[docs] def get_child_links_recursive(\n self, url: str, visited: Optional[Set[str]] = None\n ) -> Set[str]:\n \"\"\"Recursively get all child links starting with the path of the input URL.\"\"\"\n try:\n from bs4 import BeautifulSoup\n except ImportError:\n raise ImportError(\n \"The BeautifulSoup package is required for the RecursiveUrlLoader.\"\n )\n # Construct the base and parent URLs\n parsed_url = urlparse(url)\n base_url = f\"{parsed_url.scheme}://{parsed_url.netloc}\"\n parent_url = \"/\".join(parsed_url.path.split(\"/\")[:-1])\n current_path = parsed_url.path\n # Add a trailing slash if not present\n if not base_url.endswith(\"/\"):\n base_url += \"/\"\n if not parent_url.endswith(\"/\"):\n parent_url += \"/\"\n # Exclude the root and parent from list\n visited = set() if visited is None else visited\n # Exclude the links that start with any of the excluded directories\n if self.exclude_dirs and any(\n url.startswith(exclude_dir) for exclude_dir in self.exclude_dirs\n ):\n return visited", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/recursive_url_loader.html"} {"id": "3b95cfb17391-1", "text": "):\n return visited\n # 
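A minimal usage sketch for the GitHubIssuesLoader above; the repository name and token are placeholders, and the token would normally come from GITHUB_PERSONAL_ACCESS_TOKEN:

.. code-block:: python

    import os
    from langchain.document_loaders import GitHubIssuesLoader

    os.environ["GITHUB_PERSONAL_ACCESS_TOKEN"] = "<token>"  # placeholder
    loader = GitHubIssuesLoader(repo="<owner>/<repo>", state="open", include_prs=False)
    for doc in loader.lazy_load():  # follows the API's pagination links
        print(doc.metadata["title"])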
Get all links that are relative to the root of the website\n response = requests.get(url)\n soup = BeautifulSoup(response.text, \"html.parser\")\n all_links = [link.get(\"href\") for link in soup.find_all(\"a\")]\n # Extract only the links that are children of the current URL\n child_links = list(\n {\n link\n for link in all_links\n if link and link.startswith(current_path) and link != current_path\n }\n )\n # Get absolute path for all root relative links listed\n absolute_paths = [\n f\"{urlparse(base_url).scheme}://{urlparse(base_url).netloc}{link}\"\n for link in child_links\n ]\n # Store the visited links and recursively visit the children\n for link in absolute_paths:\n # Check all unvisited links\n if link not in visited:\n visited.add(link)\n # If the link is a directory (w/ children) then visit it\n if link.endswith(\"/\"):\n visited.update(self.get_child_links_recursive(link, visited))\n return visited\n[docs] def lazy_load(self) -> Iterator[Document]:\n from langchain.document_loaders import WebBaseLoader\n \"\"\"Lazy load web pages.\"\"\"\n child_links = self.get_child_links_recursive(self.url)\n loader = WebBaseLoader(list(child_links))\n return loader.lazy_load()\n[docs] def load(self) -> List[Document]:\n \"\"\"Load web pages.\"\"\"\n return list(self.lazy_load())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/recursive_url_loader.html"} {"id": "e60bcbd6733d-0", "text": "Source code for langchain.document_loaders.imsdb\n\"\"\"Loads IMSDb.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.web_base import WebBaseLoader\n[docs]class IMSDbLoader(WebBaseLoader):\n \"\"\"Loads IMSDb webpages.\"\"\"\n[docs] def load(self) -> List[Document]:\n \"\"\"Load webpage.\"\"\"\n soup = self.scrape()\n text = soup.select_one(\"td[class='scrtext']\").text\n metadata = {\"source\": self.web_path}\n return [Document(page_content=text, metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/imsdb.html"} {"id": "6af62ca0beda-0", "text": "Source code for langchain.document_loaders.gcs_directory\n\"\"\"Loading logic for loading documents from an GCS directory.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.gcs_file import GCSFileLoader\n[docs]class GCSDirectoryLoader(BaseLoader):\n \"\"\"Loads Documents from GCS.\"\"\"\n def __init__(self, project_name: str, bucket: str, prefix: str = \"\"):\n \"\"\"Initialize with bucket and key name.\n Args:\n project_name: The name of the project for the GCS bucket.\n bucket: The name of the GCS bucket.\n prefix: The prefix of the GCS bucket.\n \"\"\"\n self.project_name = project_name\n self.bucket = bucket\n self.prefix = prefix\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n from google.cloud import storage\n except ImportError:\n raise ImportError(\n \"Could not import google-cloud-storage python package. 
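A minimal usage sketch for the RecursiveUrlLoader above, assuming `beautifulsoup4` is installed; the root URL is a hypothetical placeholder:

.. code-block:: python

    from langchain.document_loaders import RecursiveUrlLoader

    loader = RecursiveUrlLoader(url="https://docs.example.com/en/latest/")
    docs = loader.load()  # crawls child links under the path, then loads them via WebBaseLoader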
\"\n \"Please install it with `pip install google-cloud-storage`.\"\n )\n client = storage.Client(project=self.project_name)\n docs = []\n for blob in client.list_blobs(self.bucket, prefix=self.prefix):\n # we shall just skip directories since GCSFileLoader creates\n # intermediate directories on the fly\n if blob.name.endswith(\"/\"):\n continue\n loader = GCSFileLoader(self.project_name, self.bucket, blob.name)\n docs.extend(loader.load())\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/gcs_directory.html"} {"id": "ee36c4e4be9d-0", "text": "Source code for langchain.document_loaders.base\n\"\"\"Abstract interface for document loader implementations.\"\"\"\nfrom abc import ABC, abstractmethod\nfrom typing import Iterator, List, Optional\nfrom langchain.document_loaders.blob_loaders import Blob\nfrom langchain.schema import Document\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter, TextSplitter\n[docs]class BaseLoader(ABC):\n \"\"\"Interface for loading Documents.\n Implementations should implement the lazy-loading method using generators\n to avoid loading all Documents into memory at once.\n The `load` method will remain as is for backwards compatibility, but its\n implementation should be just `list(self.lazy_load())`.\n \"\"\"\n # Sub-classes should implement this method\n # as return list(self.lazy_load()).\n # This method returns a List which is materialized in memory.\n[docs] @abstractmethod\n def load(self) -> List[Document]:\n \"\"\"Load data into Document objects.\"\"\"\n[docs] def load_and_split(\n self, text_splitter: Optional[TextSplitter] = None\n ) -> List[Document]:\n \"\"\"Load Documents and split into chunks. Chunks are returned as Documents.\n Args:\n text_splitter: TextSplitter instance to use for splitting documents.\n Defaults to RecursiveCharacterTextSplitter.\n Returns:\n List of Documents.\n \"\"\"\n if text_splitter is None:\n _text_splitter: TextSplitter = RecursiveCharacterTextSplitter()\n else:\n _text_splitter = text_splitter\n docs = self.load()\n return _text_splitter.split_documents(docs)\n # Attention: This method will be upgraded into an abstractmethod once it's\n # implemented in all the existing subclasses.\n[docs] def lazy_load(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/base.html"} {"id": "ee36c4e4be9d-1", "text": "# implemented in all the existing subclasses.\n[docs] def lazy_load(\n self,\n ) -> Iterator[Document]:\n \"\"\"A lazy loader for Documents.\"\"\"\n raise NotImplementedError(\n f\"{self.__class__.__name__} does not implement lazy_load()\"\n )\n[docs]class BaseBlobParser(ABC):\n \"\"\"Abstract interface for blob parsers.\n A blob parser provides a way to parse raw data stored in a blob into one\n or more documents.\n The parser can be composed with blob loaders, making it easy to re-use\n a parser independent of how the blob was originally loaded.\n \"\"\"\n[docs] @abstractmethod\n def lazy_parse(self, blob: Blob) -> Iterator[Document]:\n \"\"\"Lazy parsing interface.\n Subclasses are required to implement this method.\n Args:\n blob: Blob instance\n Returns:\n Generator of documents\n \"\"\"\n[docs] def parse(self, blob: Blob) -> List[Document]:\n \"\"\"Eagerly parse the blob into a document or documents.\n This is a convenience method for interactive development environment.\n Production applications should favor the lazy_parse method instead.\n Subclasses should generally not over-ride this parse method.\n Args:\n 
blob: Blob instance\n Returns:\n List of documents\n \"\"\"\n return list(self.lazy_parse(blob))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/base.html"} {"id": "a7ce2d921d83-0", "text": "Source code for langchain.document_loaders.notebook\n\"\"\"Loader that loads .ipynb notebook files.\"\"\"\nimport json\nfrom pathlib import Path\nfrom typing import Any, List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]def concatenate_cells(\n cell: dict, include_outputs: bool, max_output_length: int, traceback: bool\n) -> str:\n \"\"\"Combine cells information in a readable format ready to be used.\"\"\"\n cell_type = cell[\"cell_type\"]\n source = cell[\"source\"]\n output = cell[\"outputs\"]\n if include_outputs and cell_type == \"code\" and output:\n if \"ename\" in output[0].keys():\n error_name = output[0][\"ename\"]\n error_value = output[0][\"evalue\"]\n if traceback:\n traceback = output[0][\"traceback\"]\n return (\n f\"'{cell_type}' cell: '{source}'\\n, gives error '{error_name}',\"\n f\" with description '{error_value}'\\n\"\n f\"and traceback '{traceback}'\\n\\n\"\n )\n else:\n return (\n f\"'{cell_type}' cell: '{source}'\\n, gives error '{error_name}',\"\n f\"with description '{error_value}'\\n\\n\"\n )\n elif output[0][\"output_type\"] == \"stream\":\n output = output[0][\"text\"]\n min_output = min(max_output_length, len(output))\n return (\n f\"'{cell_type}' cell: '{source}'\\n with \"\n f\"output: '{output[:min_output]}'\\n\\n\"\n )\n else:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/notebook.html"} {"id": "a7ce2d921d83-1", "text": ")\n else:\n return f\"'{cell_type}' cell: '{source}'\\n\\n\"\n return \"\"\n[docs]def remove_newlines(x: Any) -> Any:\n \"\"\"Remove recursively newlines, no matter the data structure they are stored in.\"\"\"\n import pandas as pd\n if isinstance(x, str):\n return x.replace(\"\\n\", \"\")\n elif isinstance(x, list):\n return [remove_newlines(elem) for elem in x]\n elif isinstance(x, pd.DataFrame):\n return x.applymap(remove_newlines)\n else:\n return x\n[docs]class NotebookLoader(BaseLoader):\n \"\"\"Loader that loads .ipynb notebook files.\"\"\"\n def __init__(\n self,\n path: str,\n include_outputs: bool = False,\n max_output_length: int = 10,\n remove_newline: bool = False,\n traceback: bool = False,\n ):\n \"\"\"Initialize with path.\"\"\"\n self.file_path = path\n self.include_outputs = include_outputs\n self.max_output_length = max_output_length\n self.remove_newline = remove_newline\n self.traceback = traceback\n[docs] def load(\n self,\n ) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n import pandas as pd\n except ImportError:\n raise ImportError(\n \"pandas is needed for Notebook Loader, \"\n \"please install with `pip install pandas`\"\n )\n p = Path(self.file_path)\n with open(p, encoding=\"utf8\") as f:\n d = json.load(f)\n data = pd.json_normalize(d[\"cells\"])\n filtered_data = data[[\"cell_type\", \"source\", \"outputs\"]]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/notebook.html"} {"id": "a7ce2d921d83-2", "text": "filtered_data = data[[\"cell_type\", \"source\", \"outputs\"]]\n if self.remove_newline:\n filtered_data = filtered_data.applymap(remove_newlines)\n text = filtered_data.apply(\n lambda x: concatenate_cells(\n x, self.include_outputs, self.max_output_length, self.traceback\n ),\n axis=1,\n ).str.cat(sep=\" \")\n 
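A minimal sketch of a custom parser built on the BaseBlobParser interface shown above; the parser name and behavior are invented for illustration:

.. code-block:: python

    from typing import Iterator

    from langchain.document_loaders.base import BaseBlobParser
    from langchain.document_loaders.blob_loaders import Blob
    from langchain.schema import Document

    class UpperCaseParser(BaseBlobParser):
        """Toy parser: yields one upper-cased Document per blob."""

        def lazy_parse(self, blob: Blob) -> Iterator[Document]:
            yield Document(
                page_content=blob.as_string().upper(),
                metadata={"source": blob.source},
            )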
metadata = {\"source\": str(p)}\n return [Document(page_content=text, metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/notebook.html"} {"id": "66e10cb11d55-0", "text": "Source code for langchain.document_loaders.xml\n\"\"\"Loader that loads Microsoft Excel files.\"\"\"\nfrom typing import Any, List\nfrom langchain.document_loaders.unstructured import (\n UnstructuredFileLoader,\n validate_unstructured_version,\n)\n[docs]class UnstructuredXMLLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load XML files.\"\"\"\n def __init__(\n self, file_path: str, mode: str = \"single\", **unstructured_kwargs: Any\n ):\n validate_unstructured_version(min_unstructured_version=\"0.6.7\")\n super().__init__(file_path=file_path, mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n from unstructured.partition.xml import partition_xml\n return partition_xml(filename=self.file_path, **self.unstructured_kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/xml.html"} {"id": "86c71b1473c3-0", "text": "Source code for langchain.document_loaders.bibtex\nimport logging\nimport re\nfrom pathlib import Path\nfrom typing import Any, Iterator, List, Mapping, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utilities.bibtex import BibtexparserWrapper\nlogger = logging.getLogger(__name__)\n[docs]class BibtexLoader(BaseLoader):\n \"\"\"Loads a bibtex file into a list of Documents.\n Each document represents one entry from the bibtex file.\n If a PDF file is present in the `file` bibtex field, the original PDF\n is loaded into the document text. If no such file entry is present,\n the `abstract` field is used instead.\n \"\"\"\n def __init__(\n self,\n file_path: str,\n *,\n parser: Optional[BibtexparserWrapper] = None,\n max_docs: Optional[int] = None,\n max_content_chars: Optional[int] = 4_000,\n load_extra_metadata: bool = False,\n file_pattern: str = r\"[^:]+\\.pdf\",\n ):\n \"\"\"Initialize the BibtexLoader.\n Args:\n file_path: Path to the bibtex file.\n parser: The parser to use. If None, a default parser is used.\n max_docs: Max number of associated documents to load. 
A value of -1 means\n no limit.\n max_content_chars: Maximum number of characters to load from the PDF.\n load_extra_metadata: Whether to load extra metadata from the PDF.\n file_pattern: Regex pattern to match the file name in the bibtex.\n \"\"\"\n self.file_path = file_path\n self.parser = parser or BibtexparserWrapper()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/bibtex.html"} {"id": "86c71b1473c3-1", "text": "self.parser = parser or BibtexparserWrapper()\n self.max_docs = max_docs\n self.max_content_chars = max_content_chars\n self.load_extra_metadata = load_extra_metadata\n self.file_regex = re.compile(file_pattern)\n def _load_entry(self, entry: Mapping[str, Any]) -> Optional[Document]:\n import fitz\n parent_dir = Path(self.file_path).parent\n # regex is useful for Zotero flavor bibtex files\n file_names = self.file_regex.findall(entry.get(\"file\", \"\"))\n if not file_names:\n return None\n texts: List[str] = []\n for file_name in file_names:\n try:\n with fitz.open(parent_dir / file_name) as f:\n texts.extend(page.get_text() for page in f)\n except FileNotFoundError as e:\n logger.debug(e)\n content = \"\\n\".join(texts) or entry.get(\"abstract\", \"\")\n if self.max_content_chars:\n content = content[: self.max_content_chars]\n metadata = self.parser.get_metadata(entry, load_extra=self.load_extra_metadata)\n return Document(\n page_content=content,\n metadata=metadata,\n )\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"Load bibtex file using bibtexparser and get the article texts plus the\n article metadata.\n See https://bibtexparser.readthedocs.io/en/master/\n Returns:\n a generator of documents with the document.page_content in text format\n \"\"\"\n try:\n import fitz # noqa: F401\n except ImportError:\n raise ImportError(\n \"PyMuPDF package not found, please install it with \"\n \"`pip install pymupdf`\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/bibtex.html"} {"id": "86c71b1473c3-2", "text": "\"`pip install pymupdf`\"\n )\n entries = self.parser.load_bibtex_entries(self.file_path)\n if self.max_docs:\n entries = entries[: self.max_docs]\n for entry in entries:\n doc = self._load_entry(entry)\n if doc:\n yield doc\n[docs] def load(self) -> List[Document]:\n \"\"\"Load bibtex file documents from the given bibtex file path.\n See https://bibtexparser.readthedocs.io/en/master/\n Returns:\n a list of documents with the document.page_content in text format\n \"\"\"\n return list(self.lazy_load())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/bibtex.html"} {"id": "51dab05a4ee8-0", "text": "Source code for langchain.document_loaders.obsidian\n\"\"\"Loader that loads Obsidian directory dump.\"\"\"\nimport re\nfrom pathlib import Path\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class ObsidianLoader(BaseLoader):\n \"\"\"Loader that loads Obsidian files from disk.\"\"\"\n FRONT_MATTER_REGEX = re.compile(r\"^---\\n(.*?)\\n---\\n\", re.MULTILINE | re.DOTALL)\n def __init__(\n self, path: str, encoding: str = \"UTF-8\", collect_metadata: bool = True\n ):\n \"\"\"Initialize with path.\"\"\"\n self.file_path = path\n self.encoding = encoding\n self.collect_metadata = collect_metadata\n def _parse_front_matter(self, content: str) -> dict:\n \"\"\"Parse front matter metadata from the content and return it 
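A minimal usage sketch for the BibtexLoader above, assuming `bibtexparser` and `pymupdf` are installed; the file path is a hypothetical placeholder:

.. code-block:: python

    from langchain.document_loaders import BibtexLoader

    docs = BibtexLoader("./references.bib", max_docs=10).load()  # one Document per bibtex entry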
as a dict.\"\"\"\n if not self.collect_metadata:\n return {}\n match = self.FRONT_MATTER_REGEX.search(content)\n front_matter = {}\n if match:\n lines = match.group(1).split(\"\\n\")\n for line in lines:\n if \":\" in line:\n key, value = line.split(\":\", 1)\n front_matter[key.strip()] = value.strip()\n else:\n # Skip lines without a colon\n continue\n return front_matter\n def _remove_front_matter(self, content: str) -> str:\n \"\"\"Remove front matter metadata from the given content.\"\"\"\n if not self.collect_metadata:\n return content\n return self.FRONT_MATTER_REGEX.sub(\"\", content)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n ps = list(Path(self.file_path).glob(\"**/*.md\"))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/obsidian.html"} {"id": "51dab05a4ee8-1", "text": "ps = list(Path(self.file_path).glob(\"**/*.md\"))\n docs = []\n for p in ps:\n with open(p, encoding=self.encoding) as f:\n text = f.read()\n front_matter = self._parse_front_matter(text)\n text = self._remove_front_matter(text)\n metadata = {\n \"source\": str(p.name),\n \"path\": str(p),\n \"created\": p.stat().st_ctime,\n \"last_modified\": p.stat().st_mtime,\n \"last_accessed\": p.stat().st_atime,\n **front_matter,\n }\n docs.append(Document(page_content=text, metadata=metadata))\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/obsidian.html"} {"id": "7fc7058b60dc-0", "text": "Source code for langchain.document_loaders.azlyrics\n\"\"\"Loader that loads AZLyrics.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.web_base import WebBaseLoader\n[docs]class AZLyricsLoader(WebBaseLoader):\n \"\"\"Loader that loads AZLyrics webpages.\"\"\"\n[docs] def load(self) -> List[Document]:\n \"\"\"Load webpages into Documents.\"\"\"\n soup = self.scrape()\n title = soup.title.text\n lyrics = soup.find_all(\"div\", {\"class\": \"\"})[2].text\n text = title + lyrics\n metadata = {\"source\": self.web_path}\n return [Document(page_content=text, metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/azlyrics.html"} {"id": "9cc4a54d48b5-0", "text": "Source code for langchain.document_loaders.mhtml\n\"\"\"Load MHTML files, enriching metadata with page title.\"\"\"\nimport email\nimport logging\nfrom typing import Dict, List, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__name__)\n[docs]class MHTMLLoader(BaseLoader):\n \"\"\"Loader that uses beautiful soup to parse HTML files.\"\"\"\n def __init__(\n self,\n file_path: str,\n open_encoding: Union[str, None] = None,\n bs_kwargs: Union[dict, None] = None,\n get_text_separator: str = \"\",\n ) -> None:\n \"\"\"Initialise with path, and optionally, file encoding to use, and any kwargs\n to pass to the BeautifulSoup object.\n Args:\n file_path: The path to the file to load.\n open_encoding: The encoding to use when opening the file.\n bs_kwargs: soup kwargs to pass to the BeautifulSoup object.\n get_text_separator: The separator to use when getting text from the soup.\n \"\"\"\n try:\n import bs4 # noqa:F401\n except ImportError:\n raise ImportError(\n \"beautifulsoup4 package not found, please install it with \"\n \"`pip install beautifulsoup4`\"\n )\n self.file_path = file_path\n self.open_encoding = open_encoding\n if bs_kwargs is None:\n 
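A minimal usage sketch for the ObsidianLoader above; the vault path is a hypothetical placeholder, and any front matter keys are merged into each Document's metadata:

.. code-block:: python

    from langchain.document_loaders import ObsidianLoader

    docs = ObsidianLoader("/path/to/vault").load()  # loads every **/*.md file in the vault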
bs_kwargs = {\"features\": \"lxml\"}\n self.bs_kwargs = bs_kwargs\n self.get_text_separator = get_text_separator\n[docs] def load(self) -> List[Document]:\n from bs4 import BeautifulSoup\n \"\"\"Load MHTML document into document objects.\"\"\"\n with open(self.file_path, \"r\", encoding=self.open_encoding) as f:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/mhtml.html"} {"id": "9cc4a54d48b5-1", "text": "with open(self.file_path, \"r\", encoding=self.open_encoding) as f:\n message = email.message_from_string(f.read())\n parts = message.get_payload()\n if type(parts) is not list:\n parts = [message]\n for part in parts:\n if part.get_content_type() == \"text/html\":\n html = part.get_payload(decode=True).decode()\n soup = BeautifulSoup(html, **self.bs_kwargs)\n text = soup.get_text(self.get_text_separator)\n if soup.title:\n title = str(soup.title.string)\n else:\n title = \"\"\n metadata: Dict[str, Union[str, None]] = {\n \"source\": self.file_path,\n \"title\": title,\n }\n return [Document(page_content=text, metadata=metadata)]\n return []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/mhtml.html"} {"id": "9f79c2b543de-0", "text": "Source code for langchain.document_loaders.snowflake_loader\nfrom __future__ import annotations\nfrom typing import Any, Dict, Iterator, List, Optional, Tuple\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class SnowflakeLoader(BaseLoader):\n \"\"\"Loads a query result from Snowflake into a list of documents.\n Each document represents one row of the result. The `page_content_columns`\n are written into the `page_content` of the document. The `metadata_columns`\n are written into the `metadata` of the document. By default, all columns\n are written into the `page_content` and none into the `metadata`.\n \"\"\"\n def __init__(\n self,\n query: str,\n user: str,\n password: str,\n account: str,\n warehouse: str,\n role: str,\n database: str,\n schema: str,\n parameters: Optional[Dict[str, Any]] = None,\n page_content_columns: Optional[List[str]] = None,\n metadata_columns: Optional[List[str]] = None,\n ):\n \"\"\"Initialize Snowflake document loader.\n Args:\n query: The query to run in Snowflake.\n user: Snowflake user.\n password: Snowflake password.\n account: Snowflake account.\n warehouse: Snowflake warehouse.\n role: Snowflake role.\n database: Snowflake database\n schema: Snowflake schema\n page_content_columns: Optional. Columns written to Document `page_content`.\n metadata_columns: Optional. Columns written to Document `metadata`.\n \"\"\"\n self.query = query\n self.user = user\n self.password = password\n self.account = account\n self.warehouse = warehouse", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/snowflake_loader.html"} {"id": "9f79c2b543de-1", "text": "self.password = password\n self.account = account\n self.warehouse = warehouse\n self.role = role\n self.database = database\n self.schema = schema\n self.parameters = parameters\n self.page_content_columns = (\n page_content_columns if page_content_columns is not None else [\"*\"]\n )\n self.metadata_columns = metadata_columns if metadata_columns is not None else []\n def _execute_query(self) -> List[Dict[str, Any]]:\n try:\n import snowflake.connector\n except ImportError as ex:\n raise ValueError(\n \"Could not import snowflake-connector-python package. 
\"\n \"Please install it with `pip install snowflake-connector-python`.\"\n ) from ex\n conn = snowflake.connector.connect(\n user=self.user,\n password=self.password,\n account=self.account,\n warehouse=self.warehouse,\n role=self.role,\n database=self.database,\n schema=self.schema,\n parameters=self.parameters,\n )\n try:\n cur = conn.cursor()\n cur.execute(\"USE DATABASE \" + self.database)\n cur.execute(\"USE SCHEMA \" + self.schema)\n cur.execute(self.query, self.parameters)\n query_result = cur.fetchall()\n column_names = [column[0] for column in cur.description]\n query_result = [dict(zip(column_names, row)) for row in query_result]\n except Exception as e:\n print(f\"An error occurred: {e}\")\n query_result = []\n finally:\n cur.close()\n return query_result\n def _get_columns(\n self, query_result: List[Dict[str, Any]]\n ) -> Tuple[List[str], List[str]]:\n page_content_columns = (", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/snowflake_loader.html"} {"id": "9f79c2b543de-2", "text": ") -> Tuple[List[str], List[str]]:\n page_content_columns = (\n self.page_content_columns if self.page_content_columns else []\n )\n metadata_columns = self.metadata_columns if self.metadata_columns else []\n if page_content_columns is None and query_result:\n page_content_columns = list(query_result[0].keys())\n if metadata_columns is None:\n metadata_columns = []\n return page_content_columns or [], metadata_columns\n[docs] def lazy_load(self) -> Iterator[Document]:\n query_result = self._execute_query()\n if isinstance(query_result, Exception):\n print(f\"An error occurred during the query: {query_result}\")\n return []\n page_content_columns, metadata_columns = self._get_columns(query_result)\n if \"*\" in page_content_columns:\n page_content_columns = list(query_result[0].keys())\n for row in query_result:\n page_content = \"\\n\".join(\n f\"{k}: {v}\" for k, v in row.items() if k in page_content_columns\n )\n metadata = {k: v for k, v in row.items() if k in metadata_columns}\n doc = Document(page_content=page_content, metadata=metadata)\n yield doc\n[docs] def load(self) -> List[Document]:\n \"\"\"Load data into document objects.\"\"\"\n return list(self.lazy_load())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/snowflake_loader.html"} {"id": "8e80597559f2-0", "text": "Source code for langchain.document_loaders.stripe\n\"\"\"Loader that fetches data from Stripe\"\"\"\nimport json\nimport urllib.request\nfrom typing import List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utils import get_from_env, stringify_dict\nSTRIPE_ENDPOINTS = {\n \"balance_transactions\": \"https://api.stripe.com/v1/balance_transactions\",\n \"charges\": \"https://api.stripe.com/v1/charges\",\n \"customers\": \"https://api.stripe.com/v1/customers\",\n \"events\": \"https://api.stripe.com/v1/events\",\n \"refunds\": \"https://api.stripe.com/v1/refunds\",\n \"disputes\": \"https://api.stripe.com/v1/disputes\",\n}\n[docs]class StripeLoader(BaseLoader):\n \"\"\"Loader that fetches data from Stripe.\"\"\"\n def __init__(self, resource: str, access_token: Optional[str] = None) -> None:\n self.resource = resource\n access_token = access_token or get_from_env(\n \"access_token\", \"STRIPE_ACCESS_TOKEN\"\n )\n self.headers = {\"Authorization\": f\"Bearer {access_token}\"}\n def _make_request(self, url: str) -> List[Document]:\n request = 
Source code for langchain.document_loaders.stripe

"""Loader that fetches data from Stripe"""
import json
import urllib.request
from typing import List, Optional

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader
from langchain.utils import get_from_env, stringify_dict

STRIPE_ENDPOINTS = {
    "balance_transactions": "https://api.stripe.com/v1/balance_transactions",
    "charges": "https://api.stripe.com/v1/charges",
    "customers": "https://api.stripe.com/v1/customers",
    "events": "https://api.stripe.com/v1/events",
    "refunds": "https://api.stripe.com/v1/refunds",
    "disputes": "https://api.stripe.com/v1/disputes",
}


class StripeLoader(BaseLoader):
    """Loader that fetches data from Stripe."""

    def __init__(self, resource: str, access_token: Optional[str] = None) -> None:
        self.resource = resource
        access_token = access_token or get_from_env(
            "access_token", "STRIPE_ACCESS_TOKEN"
        )
        self.headers = {"Authorization": f"Bearer {access_token}"}

    def _make_request(self, url: str) -> List[Document]:
        request = urllib.request.Request(url, headers=self.headers)
        with urllib.request.urlopen(request) as response:
            json_data = json.loads(response.read().decode())
            text = stringify_dict(json_data)
            metadata = {"source": url}
            return [Document(page_content=text, metadata=metadata)]

    def _get_resource(self) -> List[Document]:
        endpoint = STRIPE_ENDPOINTS.get(self.resource)
        if endpoint is None:
            return []
        return self._make_request(endpoint)

    def load(self) -> List[Document]:
        return self._get_resource()
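A usage sketch, assuming a valid token in `STRIPE_ACCESS_TOKEN` (the token value shown is a hypothetical placeholder); `resource` must be one of the `STRIPE_ENDPOINTS` keys above, otherwise `load()` returns an empty list:

.. code-block:: python

    import os

    from langchain.document_loaders import StripeLoader

    os.environ["STRIPE_ACCESS_TOKEN"] = "sk_test_placeholder"  # hypothetical token
    loader = StripeLoader(resource="charges")
    docs = loader.load()  # one Document holding the stringified JSON response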
Source code for langchain.document_loaders.confluence

"""Load Data from a Confluence Space"""
import logging
from enum import Enum
from io import BytesIO
from typing import Any, Callable, Dict, List, Optional, Union

from tenacity import (
    before_sleep_log,
    retry,
    stop_after_attempt,
    wait_exponential,
)

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader

logger = logging.getLogger(__name__)


class ContentFormat(str, Enum):
    """Enumerator of the content formats of Confluence page."""

    STORAGE = "body.storage"
    VIEW = "body.view"

    def get_content(self, page: dict) -> str:
        if self == ContentFormat.STORAGE:
            return page["body"]["storage"]["value"]
        elif self == ContentFormat.VIEW:
            return page["body"]["view"]["value"]
        raise ValueError("unknown content format")


class ConfluenceLoader(BaseLoader):
    """Load Confluence pages.

    Port of https://llamahub.ai/l/confluence

    This currently supports username/api_key, OAuth2 login, or personal access token
    authentication.

    Specify a list of page_ids and/or a space_key to load the corresponding pages into
    Document objects; if both are specified, the union of both sets is returned.

    You can also specify a boolean `include_attachments` to include attachments. This
    is set to False by default; if set to True, all attachments are downloaded and
    ConfluenceReader extracts the text from the attachments and adds it to the
    Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG,
    SVG, Word and Excel.

    The Confluence API supports different formats of page content. The storage format
    is the raw XML representation for storage. The view format is the HTML
    representation for viewing, with macros rendered as a user would see them. You can
    pass an enum `content_format` argument to `load()` to specify the content format;
    this is set to `ContentFormat.STORAGE` by default.

    Hint: space_key and page_id can both be found in the URL of a page in Confluence
    - https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>

    Example:
        .. code-block:: python

            from langchain.document_loaders import ConfluenceLoader

            loader = ConfluenceLoader(
                url="https://yoursite.atlassian.com/wiki",
                username="me",
                api_key="12345"
            )
            documents = loader.load(space_key="SPACE", limit=50)

    :param url: _description_
    :type url: str
    :param api_key: _description_, defaults to None
    :type api_key: str, optional
    :param username: _description_, defaults to None
    :type username: str, optional
    :param oauth2: _description_, defaults to {}
    :type oauth2: dict, optional
    :param token: _description_, defaults to None
    :type token: str, optional
    :param cloud: _description_, defaults to True
    :type cloud: bool, optional
    :param number_of_retries: How many times to retry, defaults to 3
    :type number_of_retries: Optional[int], optional
    :param min_retry_seconds: defaults to 2
    :type min_retry_seconds: Optional[int], optional
    :param max_retry_seconds: defaults to 10
    :type max_retry_seconds: Optional[int], optional
    :param confluence_kwargs: additional kwargs to initialize confluence with
    :type confluence_kwargs: dict, optional
    :raises ValueError: Errors while validating input
    :raises ImportError: Required dependencies not installed.
    """

    def __init__(
        self,
        url: str,
        api_key: Optional[str] = None,
        username: Optional[str] = None,
        oauth2: Optional[dict] = None,
        token: Optional[str] = None,
        cloud: Optional[bool] = True,
        number_of_retries: Optional[int] = 3,
        min_retry_seconds: Optional[int] = 2,
        max_retry_seconds: Optional[int] = 10,
        confluence_kwargs: Optional[dict] = None,
    ):
        confluence_kwargs = confluence_kwargs or {}
        errors = ConfluenceLoader.validate_init_args(
            url, api_key, username, oauth2, token
        )
        if errors:
            raise ValueError(f"Error(s) while validating input: {errors}")
        self.base_url = url
        self.number_of_retries = number_of_retries
        self.min_retry_seconds = min_retry_seconds
        self.max_retry_seconds = max_retry_seconds
        try:
            from atlassian import Confluence  # noqa: F401
        except ImportError:
            raise ImportError(
                "`atlassian` package not found, please run "
                "`pip install atlassian-python-api`"
            )
        if oauth2:
            self.confluence = Confluence(
                url=url, oauth2=oauth2, cloud=cloud, **confluence_kwargs
            )
        elif token:
            self.confluence = Confluence(
                url=url, token=token, cloud=cloud, **confluence_kwargs
            )
        else:
            self.confluence = Confluence(
                url=url,
                username=username,
                password=api_key,
                cloud=cloud,
                **confluence_kwargs,
            )

    @staticmethod
    def validate_init_args(
        url: Optional[str] = None,
        api_key: Optional[str] = None,
        username: Optional[str] = None,
        oauth2: Optional[dict] = None,
        token: Optional[str] = None,
    ) -> Union[List, None]:
        """Validates proper combinations of init arguments"""
        errors = []
        if url is None:
            errors.append("Must provide `base_url`")
        if (api_key and not username) or (username and not api_key):
            errors.append(
                "If one of `api_key` or `username` is provided, "
                "the other must be as well."
            )
        if (api_key or username) and oauth2:
            errors.append(
                "Cannot provide a value for `api_key` and/or "
                "`username` and provide a value for `oauth2`"
            )
        # Compare as sets: dict.keys() never compares equal to a list.
        if oauth2 and set(oauth2.keys()) != {
            "access_token",
            "access_token_secret",
            "consumer_key",
            "key_cert",
        }:
            errors.append(
                "You have either omitted required keys or added extra "
                "keys to the oauth2 dictionary. key values should be "
                "`['access_token', 'access_token_secret', 'consumer_key', 'key_cert']`"
            )
        if token and (api_key or username or oauth2):
            errors.append(
                "Cannot provide a value for `token` and a value for `api_key`, "
                "`username` or `oauth2`"
            )
        if errors:
            return errors
        return None

    def load(
        self,
        space_key: Optional[str] = None,
        page_ids: Optional[List[str]] = None,
        label: Optional[str] = None,
        cql: Optional[str] = None,
        include_restricted_content: bool = False,
        include_archived_content: bool = False,
        include_attachments: bool = False,
        include_comments: bool = False,
        content_format: ContentFormat = ContentFormat.STORAGE,
        limit: Optional[int] = 50,
        max_pages: Optional[int] = 1000,
        ocr_languages: Optional[str] = None,
    ) -> List[Document]:
        """
        :param space_key: Space key retrieved from a confluence URL, defaults to None
        :type space_key: Optional[str], optional
        :param page_ids: List of specific page IDs to load, defaults to None
        :type page_ids: Optional[List[str]], optional
        :param label: Get all pages with this label, defaults to None
        :type label: Optional[str], optional
        :param cql: CQL Expression, defaults to None
        :type cql: Optional[str], optional
        :param include_restricted_content: defaults to False
        :type include_restricted_content: bool, optional
        :param include_archived_content: Whether to include archived content,
            defaults to False
        :type include_archived_content: bool, optional
        :param include_attachments: defaults to False
        :type include_attachments: bool, optional
        :param include_comments: defaults to False
        :type include_comments: bool, optional
        :param content_format: Specify content format, defaults to
            ContentFormat.STORAGE
        :type content_format: ContentFormat
        :param limit: Maximum number of pages to retrieve per request, defaults to 50
        :type limit: int, optional
        :param max_pages: Maximum number of pages to retrieve in total,
            defaults to 1000
        :type max_pages: int, optional
        :param ocr_languages: The languages to use for the Tesseract agent. To use a
            language, you'll first need to install the appropriate
            Tesseract language pack.
        :type ocr_languages: str, optional
        :raises ValueError: _description_
        :raises ImportError: _description_
        :return: _description_
        :rtype: List[Document]
        """
        if not space_key and not page_ids and not label and not cql:
            raise ValueError(
                "Must specify at least one among `space_key`, `page_ids`, "
                "`label`, `cql` parameters."
            )
        docs = []
        if space_key:
            pages = self.paginate_request(
                self.confluence.get_all_pages_from_space,
                space=space_key,
                limit=limit,
                max_pages=max_pages,
                status="any" if include_archived_content else "current",
                expand=content_format.value,
            )
            docs += self.process_pages(
                pages,
                include_restricted_content,
                include_attachments,
                include_comments,
                content_format,
                ocr_languages,
            )
        if label:
            pages = self.paginate_request(
                self.confluence.get_all_pages_by_label,
                label=label,
                limit=limit,
                max_pages=max_pages,
            )
            ids_by_label = [page["id"] for page in pages]
            if page_ids:
                page_ids = list(set(page_ids + ids_by_label))
            else:
                page_ids = list(set(ids_by_label))
        if cql:
            pages = self.paginate_request(
                self._search_content_by_cql,
                cql=cql,
                limit=limit,
                max_pages=max_pages,
                include_archived_spaces=include_archived_content,
                expand=content_format.value,
            )
            docs += self.process_pages(
                pages,
                include_restricted_content,
                include_attachments,
                include_comments,
                content_format,
                ocr_languages,
            )
        if page_ids:
            for page_id in page_ids:
                get_page = retry(
                    reraise=True,
                    stop=stop_after_attempt(
                        self.number_of_retries  # type: ignore[arg-type]
                    ),
                    wait=wait_exponential(
                        multiplier=1,  # type: ignore[arg-type]
                        min=self.min_retry_seconds,  # type: ignore[arg-type]
                        max=self.max_retry_seconds,  # type: ignore[arg-type]
                    ),
                    before_sleep=before_sleep_log(logger, logging.WARNING),
                )(self.confluence.get_page_by_id)
                page = get_page(page_id=page_id, expand=content_format.value)
                if not include_restricted_content and not self.is_public_page(page):
                    continue
                doc = self.process_page(
                    page,
                    include_attachments,
                    include_comments,
                    content_format,
                    ocr_languages,
                )
                docs.append(doc)
        return docs

    def _search_content_by_cql(
        self, cql: str, include_archived_spaces: Optional[bool] = None, **kwargs: Any
    ) -> List[dict]:
        url = "rest/api/content/search"
        params: Dict[str, Any] = {"cql": cql}
        params.update(kwargs)
        if include_archived_spaces is not None:
            params["includeArchivedSpaces"] = include_archived_spaces
        response = self.confluence.get(url, params=params)
        return response.get("results", [])

    def paginate_request(self, retrieval_method: Callable, **kwargs: Any) -> List:
        """Paginate the various methods to retrieve groups of pages.

        Unfortunately, due to page size, sometimes the Confluence API
        doesn't match the limit value. If `limit` is > 100, Confluence
        seems to cap the response to 100. Also, due to the Atlassian Python
        package, we don't get the "next" values from the "_links" key; it
        only returns the value from the "results" key. So here the pagination
        starts from 0 and goes until max_pages, getting the `limit` number
        of pages with each request. We have to manually check if there
        are more docs based on the length of the returned list of pages, rather
        than just checking for the presence of a `next` key in the response,
        as this page would have you do:
        https://developer.atlassian.com/server/confluence/pagination-in-the-rest-api/

        :param retrieval_method: Function used to retrieve docs
        :type retrieval_method: callable
        :return: List of documents
        :rtype: List
        """
        max_pages = kwargs.pop("max_pages")
        docs: List[dict] = []
        while len(docs) < max_pages:
            get_pages = retry(
                reraise=True,
                stop=stop_after_attempt(
                    self.number_of_retries  # type: ignore[arg-type]
                ),
                wait=wait_exponential(
                    multiplier=1,
                    min=self.min_retry_seconds,  # type: ignore[arg-type]
                    max=self.max_retry_seconds,  # type: ignore[arg-type]
                ),
                before_sleep=before_sleep_log(logger, logging.WARNING),
            )(retrieval_method)
            batch = get_pages(**kwargs, start=len(docs))
            if not batch:
                break
            docs.extend(batch)
        return docs[:max_pages]

    def is_public_page(self, page: dict) -> bool:
        """Check if a page is publicly accessible."""
        restrictions = self.confluence.get_all_restrictions_for_content(page["id"])
        return (
            page["status"] == "current"
            and not restrictions["read"]["restrictions"]["user"]["results"]
            and not restrictions["read"]["restrictions"]["group"]["results"]
        )

    def process_pages(
        self,
        pages: List[dict],
        include_restricted_content: bool,
        include_attachments: bool,
        include_comments: bool,
        content_format: ContentFormat,
        ocr_languages: Optional[str] = None,
    ) -> List[Document]:
        """Process a list of pages into a list of documents."""
        docs = []
        for page in pages:
            if not include_restricted_content and not self.is_public_page(page):
                continue
            doc = self.process_page(
                page,
                include_attachments,
                include_comments,
                content_format,
                ocr_languages,
            )
            docs.append(doc)
        return docs

    def process_page(
        self,
        page: dict,
        include_attachments: bool,
        include_comments: bool,
        content_format: ContentFormat,
        ocr_languages: Optional[str] = None,
    ) -> Document:
        try:
            from bs4 import BeautifulSoup  # type: ignore
        except ImportError:
            raise ImportError(
                "`beautifulsoup4` package not found, please run "
                "`pip install beautifulsoup4`"
            )
        if include_attachments:
            attachment_texts = self.process_attachment(page["id"], ocr_languages)
        else:
            attachment_texts = []
        content = content_format.get_content(page)
        text = BeautifulSoup(content, "lxml").get_text(" ", strip=True) + "".join(
            attachment_texts
        )
        if include_comments:
            comments = self.confluence.get_page_comments(
                page["id"], expand="body.view.value", depth="all"
            )["results"]
            comment_texts = [
                BeautifulSoup(comment["body"]["view"]["value"], "lxml").get_text(
                    " ", strip=True
                )
                for comment in comments
            ]
            text = text + "".join(comment_texts)
        return Document(
            page_content=text,
            metadata={
                "title": page["title"],
                "id": page["id"],
                "source": self.base_url.strip("/") + page["_links"]["webui"],
            },
        )

    def process_attachment(
        self,
        page_id: str,
        ocr_languages: Optional[str] = None,
    ) -> List[str]:
        try:
            from PIL import Image  # noqa: F401
        except ImportError:
            raise ImportError(
                "`Pillow` package not found, please run `pip install Pillow`"
            )
        # depending on setup you may also need to set the correct path for
        # poppler and tesseract
        attachments = self.confluence.get_attachments_from_content(page_id)["results"]
        texts = []
        for attachment in attachments:
            media_type = attachment["metadata"]["mediaType"]
            absolute_url = self.base_url + attachment["_links"]["download"]
            title = attachment["title"]
            if media_type == "application/pdf":
                text = title + self.process_pdf(absolute_url, ocr_languages)
            elif (
                media_type == "image/png"
                or media_type == "image/jpg"
                or media_type == "image/jpeg"
            ):
                text = title + self.process_image(absolute_url, ocr_languages)
            elif (
                media_type == "application/vnd.openxmlformats-officedocument"
                ".wordprocessingml.document"
            ):
                text = title + self.process_doc(absolute_url)
            elif media_type == "application/vnd.ms-excel":
                text = title + self.process_xls(absolute_url)
            elif media_type == "image/svg+xml":
                text = title + self.process_svg(absolute_url, ocr_languages)
            else:
                continue
            texts.append(text)
        return texts

    def process_pdf(
        self,
        link: str,
        ocr_languages: Optional[str] = None,
    ) -> str:
        try:
            import pytesseract  # noqa: F401
            from pdf2image import convert_from_bytes  # noqa: F401
        except ImportError:
            raise ImportError(
                "`pytesseract` or `pdf2image` package not found, "
                "please run `pip install pytesseract pdf2image`"
            )
        response = self.confluence.request(path=link, absolute=True)
        text = ""
        if (
            response.status_code != 200
            or response.content == b""
            or response.content is None
        ):
            return text
        try:
            images = convert_from_bytes(response.content)
        except ValueError:
            return text
        for i, image in enumerate(images):
            image_text = pytesseract.image_to_string(image, lang=ocr_languages)
            text += f"Page {i + 1}:\n{image_text}\n\n"
        return text

    def process_image(
        self,
        link: str,
        ocr_languages: Optional[str] = None,
    ) -> str:
        try:
            import pytesseract  # noqa: F401
            from PIL import Image  # noqa: F401
        except ImportError:
            raise ImportError(
                "`pytesseract` or `Pillow` package not found, "
                "please run `pip install pytesseract Pillow`"
            )
        response = self.confluence.request(path=link, absolute=True)
        text = ""
        if (
            response.status_code != 200
            or response.content == b""
            or response.content is None
        ):
            return text
        try:
            image = Image.open(BytesIO(response.content))
        except OSError:
            return text
        return pytesseract.image_to_string(image, lang=ocr_languages)

    def process_doc(self, link: str) -> str:
        try:
            import docx2txt  # noqa: F401
        except ImportError:
            raise ImportError(
                "`docx2txt` package not found, please run `pip install docx2txt`"
            )
        response = self.confluence.request(path=link, absolute=True)
        text = ""
        if (
            response.status_code != 200
            or response.content == b""
            or response.content is None
        ):
            return text
        file_data = BytesIO(response.content)
        return docx2txt.process(file_data)

    def process_xls(self, link: str) -> str:
        import io
        import os

        try:
            import xlrd  # noqa: F401
        except ImportError:
            raise ImportError("`xlrd` package not found, please run `pip install xlrd`")
        try:
            import pandas as pd
        except ImportError:
            raise ImportError(
                "`pandas` package not found, please run `pip install pandas`"
            )
        response = self.confluence.request(path=link, absolute=True)
        text = ""
        if (
            response.status_code != 200
            or response.content == b""
            or response.content is None
        ):
            return text
        filename = os.path.basename(link)
        # Getting the whole content of the url after filename,
        # Example: ".csv?version=2&modificationDate=1631800010678&cacheVersion=1&api=v2"
        file_extension = os.path.splitext(filename)[1]
        if file_extension.startswith(
            ".csv"
        ):  # if the extension found in the url is ".csv"
            content_string = response.content.decode("utf-8")
            df = pd.read_csv(io.StringIO(content_string))
            text += df.to_string(index=False, header=False) + "\n\n"
        else:
            workbook = xlrd.open_workbook(file_contents=response.content)
            for sheet in workbook.sheets():
                text += f"{sheet.name}:\n"
                for row in range(sheet.nrows):
                    for col in range(sheet.ncols):
                        text += f"{sheet.cell_value(row, col)}\t"
                    text += "\n"
                text += "\n"
        return text

    def process_svg(
        self,
        link: str,
        ocr_languages: Optional[str] = None,
    ) -> str:
        try:
            import pytesseract  # noqa: F401
            from PIL import Image  # noqa: F401
            from reportlab.graphics import renderPM  # noqa: F401
            from svglib.svglib import svg2rlg  # noqa: F401
        except ImportError:
            raise ImportError(
                "`pytesseract`, `Pillow`, `reportlab` or `svglib` package not found, "
                "please run `pip install pytesseract Pillow reportlab svglib`"
            )
        response = self.confluence.request(path=link, absolute=True)
        text = ""
        if (
            response.status_code != 200
            or response.content == b""
            or response.content is None
        ):
            return text
        drawing = svg2rlg(BytesIO(response.content))
        img_data = BytesIO()
        renderPM.drawToFile(drawing, img_data, fmt="PNG")
        img_data.seek(0)
        image = Image.open(img_data)
        return pytesseract.image_to_string(image, lang=ocr_languages)
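Beyond the username/api_key example in the docstring, a hedged sketch of the CQL and content-format options; the site URL, token, and CQL expression are hypothetical:

.. code-block:: python

    from langchain.document_loaders import ConfluenceLoader
    from langchain.document_loaders.confluence import ContentFormat

    loader = ConfluenceLoader(
        url="https://yoursite.atlassian.com/wiki",  # hypothetical instance
        token="<personal-access-token>",
    )
    docs = loader.load(
        cql="type=page and label=release-notes",  # hypothetical CQL filter
        content_format=ContentFormat.VIEW,        # rendered HTML, not storage XML
        include_attachments=True,                 # OCR needs the optional deps above
        max_pages=200,
    )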
Source code for langchain.document_loaders.mediawikidump

"""Load Data from a MediaWiki dump xml."""
from typing import List, Optional

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader


class MWDumpLoader(BaseLoader):
    """Load MediaWiki dump from XML file

    Example:
        .. code-block:: python

            from langchain.document_loaders import MWDumpLoader

            loader = MWDumpLoader(
                file_path="myWiki.xml",
                encoding="utf8"
            )
            docs = loader.load()

            from langchain.text_splitter import RecursiveCharacterTextSplitter

            text_splitter = RecursiveCharacterTextSplitter(
                chunk_size=1000, chunk_overlap=0
            )
            texts = text_splitter.split_documents(docs)

    :param file_path: XML local file path
    :type file_path: str
    :param encoding: Charset encoding, defaults to "utf8"
    :type encoding: str, optional
    """

    def __init__(self, file_path: str, encoding: Optional[str] = "utf8"):
        """Initialize with a file path.

        Args:
            file_path: XML local file path
            encoding: Charset encoding, defaults to "utf8"
        """
        self.file_path = file_path
        self.encoding = encoding

    def load(self) -> List[Document]:
        """Load from a file path."""
        import mwparserfromhell
        import mwxml

        dump = mwxml.Dump.from_file(open(self.file_path, encoding=self.encoding))
        docs = []
        for page in dump.pages:
            for revision in page:
                code = mwparserfromhell.parse(revision.text)
                text = code.strip_code(
                    normalize=True, collapse=True, keep_template_params=False
                )
                metadata = {"source": page.title}
                docs.append(Document(page_content=text, metadata=metadata))
        return docs
Source code for langchain.document_loaders.bigquery

from __future__ import annotations

from typing import TYPE_CHECKING, List, Optional

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader

if TYPE_CHECKING:
    from google.auth.credentials import Credentials


class BigQueryLoader(BaseLoader):
    """Loads a query result from BigQuery into a list of documents.

    Each document represents one row of the result. The `page_content_columns`
    are written into the `page_content` of the document. The `metadata_columns`
    are written into the `metadata` of the document. By default, all columns
    are written into the `page_content` and none into the `metadata`.
    """

    def __init__(
        self,
        query: str,
        project: Optional[str] = None,
        page_content_columns: Optional[List[str]] = None,
        metadata_columns: Optional[List[str]] = None,
        credentials: Optional[Credentials] = None,
    ):
        """Initialize BigQuery document loader.

        Args:
            query: The query to run in BigQuery.
            project: Optional. The project to run the query in.
            page_content_columns: Optional. The columns to write into the
                `page_content` of the document.
            metadata_columns: Optional. The columns to write into the `metadata`
                of the document.
            credentials: google.auth.credentials.Credentials, optional.
                Credentials for accessing Google APIs. Use this parameter to
                override default credentials, such as to use Compute Engine
                (`google.auth.compute_engine.Credentials`) or Service Account
                (`google.oauth2.service_account.Credentials`) credentials directly.
        """
        self.query = query
        self.project = project
        self.page_content_columns = page_content_columns
        self.metadata_columns = metadata_columns
        self.credentials = credentials

    def load(self) -> List[Document]:
        try:
            from google.cloud import bigquery
        except ImportError as ex:
            raise ImportError(
                "Could not import google-cloud-bigquery python package. "
                "Please install it with `pip install google-cloud-bigquery`."
            ) from ex
        bq_client = bigquery.Client(credentials=self.credentials, project=self.project)
        query_result = bq_client.query(self.query).result()
        docs: List[Document] = []
        page_content_columns = self.page_content_columns
        metadata_columns = self.metadata_columns
        if page_content_columns is None:
            page_content_columns = [column.name for column in query_result.schema]
        if metadata_columns is None:
            metadata_columns = []
        for row in query_result:
            page_content = "\n".join(
                f"{k}: {v}" for k, v in row.items() if k in page_content_columns
            )
            metadata = {k: v for k, v in row.items() if k in metadata_columns}
            doc = Document(page_content=page_content, metadata=metadata)
            docs.append(doc)
        return docs
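A usage sketch, assuming default application credentials are available; the project, dataset, and column names are hypothetical:

.. code-block:: python

    from langchain.document_loaders import BigQueryLoader

    loader = BigQueryLoader(
        query="SELECT id, title, body FROM `my_project.my_dataset.articles`",
        project="my_project",                    # hypothetical project
        page_content_columns=["title", "body"],  # default: every column
        metadata_columns=["id"],                 # default: none
    )
    docs = loader.load()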
Source code for langchain.document_loaders.onedrive

"""Loader that loads data from OneDrive"""
from __future__ import annotations

import logging
import os
import tempfile
from enum import Enum
from pathlib import Path
from typing import TYPE_CHECKING, Dict, List, Optional, Type, Union

from pydantic import BaseModel, BaseSettings, Field, FilePath, SecretStr

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader
from langchain.document_loaders.onedrive_file import OneDriveFileLoader

if TYPE_CHECKING:
    from O365 import Account
    from O365.drive import Drive, Folder

SCOPES = ["offline_access", "Files.Read.All"]
logger = logging.getLogger(__name__)


class _OneDriveSettings(BaseSettings):
    client_id: str = Field(..., env="O365_CLIENT_ID")
    client_secret: SecretStr = Field(..., env="O365_CLIENT_SECRET")

    class Config:
        env_prefix = ""
        case_sensitive = False  # fixed typo: the original read `case_sentive`
        env_file = ".env"


class _OneDriveTokenStorage(BaseSettings):
    token_path: FilePath = Field(Path.home() / ".credentials" / "o365_token.txt")


class _FileType(str, Enum):
    DOC = "doc"
    DOCX = "docx"
    PDF = "pdf"


class _SupportedFileTypes(BaseModel):
    file_types: List[_FileType]

    def fetch_mime_types(self) -> Dict[str, str]:
        mime_types_mapping = {}
        for file_type in self.file_types:
            if file_type.value == "doc":
                mime_types_mapping[file_type.value] = "application/msword"
            elif file_type.value == "docx":
                mime_types_mapping[
                    file_type.value
                ] = "application/vnd.openxmlformats-officedocument.wordprocessingml.document"  # noqa: E501
            elif file_type.value == "pdf":
                mime_types_mapping[file_type.value] = "application/pdf"
        return mime_types_mapping


class OneDriveLoader(BaseLoader, BaseModel):
    settings: _OneDriveSettings = Field(default_factory=_OneDriveSettings)
    drive_id: str = Field(...)
    folder_path: Optional[str] = None
    object_ids: Optional[List[str]] = None
    auth_with_token: bool = False

    def _auth(self) -> Type[Account]:
        """
        Authenticates the OneDrive API client using the specified
        authentication method and returns the Account object.

        Returns:
            Type[Account]: The authenticated Account object.
        """
        try:
            # Account is needed at runtime, not only for type checking.
            from O365 import Account, FileSystemTokenBackend
        except ImportError:
            raise ImportError(
                "O365 package not found, please install it with `pip install o365`"
            )
        if self.auth_with_token:
            token_storage = _OneDriveTokenStorage()
            token_path = token_storage.token_path
            token_backend = FileSystemTokenBackend(
                token_path=token_path.parent, token_filename=token_path.name
            )
            account = Account(
                credentials=(
                    self.settings.client_id,
                    self.settings.client_secret.get_secret_value(),
                ),
                scopes=SCOPES,
                token_backend=token_backend,
                **{"raise_http_errors": False},
            )
        else:
            token_backend = FileSystemTokenBackend(
                token_path=Path.home() / ".credentials"
            )
            account = Account(
                credentials=(
                    self.settings.client_id,
                    self.settings.client_secret.get_secret_value(),
                ),
                scopes=SCOPES,
                token_backend=token_backend,
                **{"raise_http_errors": False},
            )
            # make the auth
            account.authenticate()
        return account

    def _get_folder_from_path(self, drive: Type[Drive]) -> Union[Folder, Drive]:
        """
        Returns the folder or drive object located at the
        specified path relative to the given drive.

        Args:
            drive (Type[Drive]): The root drive from which the folder path is
                relative.

        Returns:
            Union[Folder, Drive]: The folder or drive object
            located at the specified path.

        Raises:
            FileNotFoundError: If the path does not exist.
        """
        subfolder_drive = drive
        if self.folder_path is None:
            return subfolder_drive
        subfolders = [f for f in self.folder_path.split("/") if f != ""]
        if len(subfolders) == 0:
            return subfolder_drive
        items = subfolder_drive.get_items()
        for subfolder in subfolders:
            try:
                subfolder_drive = list(filter(lambda x: subfolder in x.name, items))[0]
                items = subfolder_drive.get_items()
            except (IndexError, AttributeError):
                raise FileNotFoundError(
                    "Path {} does not exist.".format(self.folder_path)
                )
        return subfolder_drive

    def _load_from_folder(self, folder: Type[Folder]) -> List[Document]:
        """
        Loads all supported document files from the specified folder
        and returns a list of Document objects.

        Args:
            folder (Type[Folder]): The folder object to load the documents from.

        Returns:
            List[Document]: A list of Document objects representing
            the loaded documents.
        """
        docs = []
        file_types = _SupportedFileTypes(file_types=["doc", "docx", "pdf"])
        file_mime_types = file_types.fetch_mime_types()
        items = folder.get_items()
        with tempfile.TemporaryDirectory() as temp_dir:
            file_path = f"{temp_dir}"
            os.makedirs(os.path.dirname(file_path), exist_ok=True)
            for file in items:
                if file.is_file:
                    if file.mime_type in list(file_mime_types.values()):
                        loader = OneDriveFileLoader(file=file)
                        docs.extend(loader.load())
        return docs

    def _load_from_object_ids(self, drive: Type[Drive]) -> List[Document]:
        """
        Loads all supported document files from the specified OneDrive
        drive based on their object IDs and returns a list
        of Document objects.

        Args:
            drive (Type[Drive]): The OneDrive drive object
            to load the documents from.

        Returns:
            List[Document]: A list of Document objects representing
            the loaded documents.
        """
        docs = []
        file_types = _SupportedFileTypes(file_types=["doc", "docx", "pdf"])
        file_mime_types = file_types.fetch_mime_types()
        with tempfile.TemporaryDirectory() as temp_dir:
            file_path = f"{temp_dir}"
            os.makedirs(os.path.dirname(file_path), exist_ok=True)
            for object_id in self.object_ids if self.object_ids else [""]:
                file = drive.get_item(object_id)
                if not file:
                    logging.warning(
                        "There isn't a file with "
                        f"object_id {object_id} in drive {drive}."
                    )
                    continue
                if file.is_file:
                    if file.mime_type in list(file_mime_types.values()):
                        loader = OneDriveFileLoader(file=file)
                        docs.extend(loader.load())
        return docs

    def load(self) -> List[Document]:
        """
        Loads all supported document files from the specified OneDrive drive
        and returns a list of Document objects.

        Returns:
            List[Document]: A list of Document objects
            representing the loaded documents.

        Raises:
            ValueError: If the specified drive ID
            does not correspond to a drive in the OneDrive storage.
        """
        account = self._auth()
        storage = account.storage()
        drive = storage.get_drive(self.drive_id)
        docs: List[Document] = []
        if not drive:
            raise ValueError(f"There isn't a drive with id {self.drive_id}.")
        if self.folder_path:
            folder = self._get_folder_from_path(drive=drive)
            docs.extend(self._load_from_folder(folder=folder))
        elif self.object_ids:
            docs.extend(self._load_from_object_ids(drive=drive))
        return docs
Source code for langchain.document_loaders.json_loader

"""Loads data from JSON."""
import json
from pathlib import Path
from typing import Any, Callable, Dict, List, Optional, Union

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader


class JSONLoader(BaseLoader):
    """Loads a JSON file using a jq schema.

    Example:
        [{"text": ...}, {"text": ...}, {"text": ...}] -> schema = .[].text
        {"key": [{"text": ...}, {"text": ...}, {"text": ...}]} -> schema = .key[].text
        ["", "", ""] -> schema = .[]
    """

    def __init__(
        self,
        file_path: Union[str, Path],
        jq_schema: str,
        content_key: Optional[str] = None,
        metadata_func: Optional[Callable[[Dict, Dict], Dict]] = None,
        text_content: bool = True,
        json_lines: bool = False,
    ):
        """Initialize the JSONLoader.

        Args:
            file_path (Union[str, Path]): The path to the JSON or JSON Lines file.
            jq_schema (str): The jq schema to use to extract the data or text from
                the JSON.
            content_key (str): The key to use to extract the content from the JSON
                if the jq_schema results in a list of objects (dict).
            metadata_func (Callable[Dict, Dict]): A function that takes in the JSON
                object extracted by the jq_schema and the default metadata and
                returns a dict of the updated metadata.
            text_content (bool): Boolean flag to indicate whether the content is in
                string format, defaults to True.
            json_lines (bool): Boolean flag to indicate whether the input is in
                JSON Lines format.
        """
        try:
            import jq  # noqa:F401
        except ImportError:
            raise ImportError(
                "jq package not found, please install it with `pip install jq`"
            )
        self.file_path = Path(file_path).resolve()
        self._jq_schema = jq.compile(jq_schema)
        self._content_key = content_key
        self._metadata_func = metadata_func
        self._text_content = text_content
        self._json_lines = json_lines

    def load(self) -> List[Document]:
        """Load and return documents from the JSON file."""
        docs: List[Document] = []
        if self._json_lines:
            with self.file_path.open(encoding="utf-8") as f:
                for line in f:
                    line = line.strip()
                    if line:
                        self._parse(line, docs)
        else:
            self._parse(self.file_path.read_text(), docs)
        return docs

    def _parse(self, content: str, docs: List[Document]) -> None:
        """Convert given content to documents."""
        data = self._jq_schema.input(json.loads(content))
        # Perform some validation
        # This is not a perfect validation, but it should catch most cases
        # and prevent the user from getting a cryptic error later on.
        if self._content_key is not None:
            self._validate_content_key(data)
        for i, sample in enumerate(data, len(docs) + 1):
            metadata = dict(
                source=str(self.file_path),
                seq_num=i,
            )
            text = self._get_text(sample=sample, metadata=metadata)
            docs.append(Document(page_content=text, metadata=metadata))

    def _get_text(self, sample: Any, metadata: dict) -> str:
        """Convert sample to string format"""
        if self._content_key is not None:
            content = sample.get(self._content_key)
            if self._metadata_func is not None:
                # We pass in the metadata dict to the metadata_func
                # so that the user can customize the default metadata
                # based on the content of the JSON object.
                metadata = self._metadata_func(sample, metadata)
        else:
            content = sample
        if self._text_content and not isinstance(content, str):
            raise ValueError(
                f"Expected page_content is string, got {type(content)} instead. "
                "Set `text_content=False` if the desired input for "
                "`page_content` is not a string"
            )
        # In case the text is None, set it to an empty string
        elif isinstance(content, str):
            return content
        elif isinstance(content, dict):
            return json.dumps(content) if content else ""
        else:
            return str(content) if content is not None else ""

    def _validate_content_key(self, data: Any) -> None:
        """Check if a content key is valid"""
        sample = data.first()
        if not isinstance(sample, dict):
            raise ValueError(
                f"Expected the jq schema to result in a list of objects (dict), "
                f"so sample must be a dict but got `{type(sample)}`"
            )
        if sample.get(self._content_key) is None:
            raise ValueError(
                f"Expected the jq schema to result in a list of objects (dict) "
                f"with the key `{self._content_key}`"
            )
        if self._metadata_func is not None:
            sample_metadata = self._metadata_func(sample, {})
            if not isinstance(sample_metadata, dict):
                raise ValueError(
                    f"Expected the metadata_func to return a dict but got "
                    f"`{type(sample_metadata)}`"
                )

Source code for langchain.document_loaders.figma

"""Loader that loads Figma files json dump."""
import json
import urllib.request
from typing import Any, List

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader
from langchain.utils import stringify_dict


class FigmaFileLoader(BaseLoader):
    """Loads Figma file json."""

    def __init__(self, access_token: str, ids: str, key: str):
        """Initialize with access token, ids, and key.

        Args:
            access_token: The access token for the Figma REST API.
            ids: The ids of the Figma file.
            key: The key for the Figma file.
        """
        self.access_token = access_token
        self.ids = ids
        self.key = key

    def _construct_figma_api_url(self) -> str:
        api_url = "https://api.figma.com/v1/files/%s/nodes?ids=%s" % (
            self.key,
            self.ids,
        )
        return api_url

    def _get_figma_file(self) -> Any:
        """Get Figma file from Figma REST API."""
        headers = {"X-Figma-Token": self.access_token}
        request = urllib.request.Request(
            self._construct_figma_api_url(), headers=headers
        )
        with urllib.request.urlopen(request) as response:
            json_data = json.loads(response.read().decode())
            return json_data

    def load(self) -> List[Document]:
        """Load file"""
        data = self._get_figma_file()
        text = stringify_dict(data)
        metadata = {"source": self._construct_figma_api_url()}
        return [Document(page_content=text, metadata=metadata)]
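A sketch of the jq-schema mechanics on a hypothetical `messages.json` shaped like `{"messages": [{"content": ..., "sender": ...}]}`; the file name, field names, and helper are illustrative only:

.. code-block:: python

    from langchain.document_loaders import JSONLoader

    def _metadata(record: dict, metadata: dict) -> dict:
        # Merge a field from each JSON object into the default metadata.
        metadata["sender"] = record.get("sender")
        return metadata

    loader = JSONLoader(
        file_path="messages.json",  # hypothetical file
        jq_schema=".messages[]",    # one Document per array element
        content_key="content",      # field used for page_content
        metadata_func=_metadata,
    )
    docs = loader.load()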
\"\"\"Initialize with bucket and key name.\"\"\"\n self.bucket = bucket\n self.key = key\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n import boto3\n except ImportError:\n raise ImportError(\n \"Could not import `boto3` python package. \"\n \"Please install it with `pip install boto3`.\"\n )\n s3 = boto3.client(\"s3\")\n with tempfile.TemporaryDirectory() as temp_dir:\n file_path = f\"{temp_dir}/{self.key}\"\n os.makedirs(os.path.dirname(file_path), exist_ok=True)\n s3.download_file(self.bucket, self.key, file_path)\n loader = UnstructuredFileLoader(file_path)\n return loader.load()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/s3_file.html"} {"id": "8c76aae44130-0", "text": "Source code for langchain.document_loaders.url_playwright\n\"\"\"Loader that uses Playwright to load a page, then uses unstructured to load the html.\n\"\"\"\nimport logging\nfrom typing import List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__name__)\n[docs]class PlaywrightURLLoader(BaseLoader):\n \"\"\"Loader that uses Playwright and to load a page and unstructured to load the html.\n This is useful for loading pages that require javascript to render.\n Attributes:\n urls (List[str]): List of URLs to load.\n continue_on_failure (bool): If True, continue loading other URLs on failure.\n headless (bool): If True, the browser will run in headless mode.\n \"\"\"\n def __init__(\n self,\n urls: List[str],\n continue_on_failure: bool = True,\n headless: bool = True,\n remove_selectors: Optional[List[str]] = None,\n ):\n \"\"\"Load a list of URLs using Playwright and unstructured.\"\"\"\n try:\n import playwright # noqa:F401\n except ImportError:\n raise ImportError(\n \"playwright package not found, please install it with \"\n \"`pip install playwright`\"\n )\n try:\n import unstructured # noqa:F401\n except ImportError:\n raise ValueError(\n \"unstructured package not found, please install it with \"\n \"`pip install unstructured`\"\n )\n self.urls = urls\n self.continue_on_failure = continue_on_failure\n self.headless = headless\n self.remove_selectors = remove_selectors\n[docs] def load(self) -> List[Document]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/url_playwright.html"} {"id": "8c76aae44130-1", "text": "[docs] def load(self) -> List[Document]:\n \"\"\"Load the specified URLs using Playwright and create Document instances.\n Returns:\n List[Document]: A list of Document instances with loaded content.\n \"\"\"\n from playwright.sync_api import sync_playwright\n from unstructured.partition.html import partition_html\n docs: List[Document] = list()\n with sync_playwright() as p:\n browser = p.chromium.launch(headless=self.headless)\n for url in self.urls:\n try:\n page = browser.new_page()\n page.goto(url)\n for selector in self.remove_selectors or []:\n elements = page.locator(selector).all()\n for element in elements:\n if element.is_visible():\n element.evaluate(\"element => element.remove()\")\n page_source = page.content()\n elements = partition_html(text=page_source)\n text = \"\\n\\n\".join([str(el) for el in elements])\n metadata = {\"source\": url}\n docs.append(Document(page_content=text, metadata=metadata))\n except Exception as e:\n if self.continue_on_failure:\n logger.error(\n f\"Error fetching or processing {url}, exception: {e}\"\n )\n else:\n raise e\n browser.close()\n 
return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/url_playwright.html"} {"id": "7e8209825d4d-0", "text": "Source code for langchain.document_loaders.discord\n\"\"\"Load from Discord chat dump\"\"\"\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nif TYPE_CHECKING:\n import pandas as pd\n[docs]class DiscordChatLoader(BaseLoader):\n \"\"\"Load Discord chat logs.\"\"\"\n def __init__(self, chat_log: pd.DataFrame, user_id_col: str = \"ID\"):\n \"\"\"Initialize with a Pandas DataFrame containing chat logs.\n Args:\n chat_log: Pandas DataFrame containing chat logs.\n user_id_col: Name of the column containing the user ID. Defaults to \"ID\".\n \"\"\"\n if not isinstance(chat_log, pd.DataFrame):\n raise ValueError(\n f\"Expected chat_log to be a pd.DataFrame, got {type(chat_log)}\"\n )\n self.chat_log = chat_log\n self.user_id_col = user_id_col\n[docs] def load(self) -> List[Document]:\n \"\"\"Load all chat messages.\"\"\"\n result = []\n for _, row in self.chat_log.iterrows():\n user_id = row[self.user_id_col]\n metadata = row.to_dict()\n metadata.pop(self.user_id_col)\n result.append(Document(page_content=user_id, metadata=metadata))\n return result", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/discord.html"} {"id": "4239d6050874-0", "text": "Source code for langchain.document_loaders.url_selenium\n\"\"\"Loader that uses Selenium to load a page, then uses unstructured to load the html.\n\"\"\"\nimport logging\nfrom typing import TYPE_CHECKING, List, Literal, Optional, Union\nif TYPE_CHECKING:\n from selenium.webdriver import Chrome, Firefox\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__name__)\n[docs]class SeleniumURLLoader(BaseLoader):\n \"\"\"Loader that uses Selenium and to load a page and unstructured to load the html.\n This is useful for loading pages that require javascript to render.\n Attributes:\n urls (List[str]): List of URLs to load.\n continue_on_failure (bool): If True, continue loading other URLs on failure.\n browser (str): The browser to use, either 'chrome' or 'firefox'.\n binary_location (Optional[str]): The location of the browser binary.\n executable_path (Optional[str]): The path to the browser executable.\n headless (bool): If True, the browser will run in headless mode.\n arguments [List[str]]: List of arguments to pass to the browser.\n \"\"\"\n def __init__(\n self,\n urls: List[str],\n continue_on_failure: bool = True,\n browser: Literal[\"chrome\", \"firefox\"] = \"chrome\",\n binary_location: Optional[str] = None,\n executable_path: Optional[str] = None,\n headless: bool = True,\n arguments: List[str] = [],\n ):\n \"\"\"Load a list of URLs using Selenium and unstructured.\"\"\"\n try:\n import selenium # noqa:F401\n except ImportError:\n raise ImportError(\n \"selenium package not found, please install it with \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/url_selenium.html"} {"id": "4239d6050874-1", "text": "raise ImportError(\n \"selenium package not found, please install it with \"\n \"`pip install selenium`\"\n )\n try:\n import unstructured # noqa:F401\n except ImportError:\n raise ImportError(\n \"unstructured package not found, please install it with \"\n \"`pip install unstructured`\"\n )\n 
Source code for langchain.document_loaders.url_selenium

"""Loader that uses Selenium to load a page, then uses unstructured to load the html.
"""
import logging
from typing import TYPE_CHECKING, List, Literal, Optional, Union

if TYPE_CHECKING:
    from selenium.webdriver import Chrome, Firefox

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader

logger = logging.getLogger(__name__)


class SeleniumURLLoader(BaseLoader):
    """Loader that uses Selenium to load a page and unstructured to parse the HTML.

    This is useful for loading pages that require javascript to render.

    Attributes:
        urls (List[str]): List of URLs to load.
        continue_on_failure (bool): If True, continue loading other URLs on failure.
        browser (str): The browser to use, either 'chrome' or 'firefox'.
        binary_location (Optional[str]): The location of the browser binary.
        executable_path (Optional[str]): The path to the browser executable.
        headless (bool): If True, the browser will run in headless mode.
        arguments (List[str]): List of arguments to pass to the browser.
    """

    def __init__(
        self,
        urls: List[str],
        continue_on_failure: bool = True,
        browser: Literal["chrome", "firefox"] = "chrome",
        binary_location: Optional[str] = None,
        executable_path: Optional[str] = None,
        headless: bool = True,
        arguments: List[str] = [],
    ):
        """Load a list of URLs using Selenium and unstructured."""
        try:
            import selenium  # noqa:F401
        except ImportError:
            raise ImportError(
                "selenium package not found, please install it with "
                "`pip install selenium`"
            )
        try:
            import unstructured  # noqa:F401
        except ImportError:
            raise ImportError(
                "unstructured package not found, please install it with "
                "`pip install unstructured`"
            )
        self.urls = urls
        self.continue_on_failure = continue_on_failure
        self.browser = browser
        self.binary_location = binary_location
        self.executable_path = executable_path
        self.headless = headless
        self.arguments = arguments

    def _get_driver(self) -> Union["Chrome", "Firefox"]:
        """Create and return a WebDriver instance based on the specified browser.

        Raises:
            ValueError: If an invalid browser is specified.

        Returns:
            Union[Chrome, Firefox]: A WebDriver instance for the specified browser.
        """
        if self.browser.lower() == "chrome":
            from selenium.webdriver import Chrome
            from selenium.webdriver.chrome.options import Options as ChromeOptions

            chrome_options = ChromeOptions()
            for arg in self.arguments:
                chrome_options.add_argument(arg)
            if self.headless:
                chrome_options.add_argument("--headless")
                chrome_options.add_argument("--no-sandbox")
            if self.binary_location is not None:
                chrome_options.binary_location = self.binary_location
            if self.executable_path is None:
                return Chrome(options=chrome_options)
            return Chrome(executable_path=self.executable_path, options=chrome_options)
        elif self.browser.lower() == "firefox":
            from selenium.webdriver import Firefox
            from selenium.webdriver.firefox.options import Options as FirefoxOptions

            firefox_options = FirefoxOptions()
            for arg in self.arguments:
                firefox_options.add_argument(arg)
            if self.headless:
                firefox_options.add_argument("--headless")
            if self.binary_location is not None:
                firefox_options.binary_location = self.binary_location
            if self.executable_path is None:
                return Firefox(options=firefox_options)
            return Firefox(
                executable_path=self.executable_path, options=firefox_options
            )
        else:
            raise ValueError("Invalid browser specified. Use 'chrome' or 'firefox'.")

    def load(self) -> List[Document]:
        """Load the specified URLs using Selenium and create Document instances.

        Returns:
            List[Document]: A list of Document instances with loaded content.
        """
        from unstructured.partition.html import partition_html

        docs: List[Document] = list()
        driver = self._get_driver()
        for url in self.urls:
            try:
                driver.get(url)
                page_content = driver.page_source
                elements = partition_html(text=page_content)
                text = "\n\n".join([str(el) for el in elements])
                metadata = {"source": url}
                docs.append(Document(page_content=text, metadata=metadata))
            except Exception as e:
                if self.continue_on_failure:
                    logger.error(f"Error fetching or processing {url}, exception: {e}")
                else:
                    raise e
        driver.quit()
        return docs

Source code for langchain.document_loaders.open_city_data

from typing import Iterator, List

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader


class OpenCityDataLoader(BaseLoader):
    """Loader that loads Open city data."""

    def __init__(self, city_id: str, dataset_id: str, limit: int):
        """Initialize with dataset_id.

        Example: https://dev.socrata.com/foundry/data.sfgov.org/vw6y-z8j6
        e.g., city_id = data.sfgov.org
        e.g., dataset_id = vw6y-z8j6
        """
        self.city_id = city_id
        self.dataset_id = dataset_id
        self.limit = limit

    def lazy_load(self) -> Iterator[Document]:
        """Lazy load records."""
        from sodapy import Socrata

        client = Socrata(self.city_id, None)
        results = client.get(self.dataset_id, limit=self.limit)
        for record in results:
            yield Document(
                page_content=str(record),
                metadata={
                    "source": self.city_id + "_" + self.dataset_id,
                },
            )

    def load(self) -> List[Document]:
        """Load records."""
        return list(self.lazy_load())
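A sketch of the Firefox code path, assuming geckodriver is available on PATH; the URL and browser flag are hypothetical:

.. code-block:: python

    from langchain.document_loaders import SeleniumURLLoader

    loader = SeleniumURLLoader(
        urls=["https://example.com/spa"],   # a javascript-rendered page
        browser="firefox",
        headless=True,
        arguments=["--width=1280"],         # hypothetical browser flag
    )
    docs = loader.load()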
Source code for langchain.document_loaders.sitemap

"""Loader that fetches a sitemap and loads those URLs."""
import itertools
import re
from typing import Any, Callable, Generator, Iterable, List, Optional

from langchain.document_loaders.web_base import WebBaseLoader
from langchain.schema import Document


def _default_parsing_function(content: Any) -> str:
    return str(content.get_text())


def _default_meta_function(meta: dict, _content: Any) -> dict:
    return {"source": meta["loc"], **meta}


def _batch_block(iterable: Iterable, size: int) -> Generator[List[dict], None, None]:
    it = iter(iterable)
    while item := list(itertools.islice(it, size)):
        yield item


class SitemapLoader(WebBaseLoader):
    """Loader that fetches a sitemap and loads those URLs."""

    def __init__(
        self,
        web_path: str,
        filter_urls: Optional[List[str]] = None,
        parsing_function: Optional[Callable] = None,
        blocksize: Optional[int] = None,
        blocknum: int = 0,
        meta_function: Optional[Callable] = None,
        is_local: bool = False,
    ):
        """Initialize with webpage path and optional filter URLs.

        Args:
            web_path: url of the sitemap; can also be a local path
            filter_urls: list of strings or regexes that will be applied to filter
                the urls that are parsed and loaded
            parsing_function: Function to parse bs4.Soup output
            blocksize: number of sitemap locations per block
            blocknum: the number of the block that should be loaded, zero indexed
            meta_function: Function to parse bs4.Soup output for metadata;
                remember when setting this method to also copy metadata["loc"]
                to metadata["source"] if you are using this field
            is_local: whether the sitemap is a local file
        """
        if blocksize is not None and blocksize < 1:
            raise ValueError("Sitemap blocksize should be at least 1")
        if blocknum < 0:
            raise ValueError("Sitemap blocknum cannot be lower than 0")
        try:
            import lxml  # noqa:F401
        except ImportError:
            raise ImportError(
                "lxml package not found, please install it with `pip install lxml`"
            )
        super().__init__(web_path)
        self.filter_urls = filter_urls
        self.parsing_function = parsing_function or _default_parsing_function
        self.meta_function = meta_function or _default_meta_function
        self.blocksize = blocksize
        self.blocknum = blocknum
        self.is_local = is_local

    def parse_sitemap(self, soup: Any) -> List[dict]:
        """Parse sitemap xml and load into a list of dicts."""
        els = []
        for url in soup.find_all("url"):
            loc = url.find("loc")
            if not loc:
                continue
            # Strip leading and trailing whitespace and newlines
            loc_text = loc.text.strip()
            if self.filter_urls and not any(
                re.match(r, loc_text) for r in self.filter_urls
            ):
                continue
            els.append(
                {
                    tag: prop.text
                    for tag in ["loc", "lastmod", "changefreq", "priority"]
                    if (prop := url.find(tag))
                }
            )
        for sitemap in soup.find_all("sitemap"):
            loc = sitemap.find("loc")
            if not loc:
                continue
            soup_child = self.scrape_all([loc.text], "xml")[0]
            els.extend(self.parse_sitemap(soup_child))
        return els

    def load(self) -> List[Document]:
        """Load sitemap."""
        if self.is_local:
            try:
                import bs4
            except ImportError:
                raise ImportError(
                    "beautifulsoup4 package not found, please install it"
                    " with `pip install beautifulsoup4`"
                )
            fp = open(self.web_path)
            soup = bs4.BeautifulSoup(fp, "xml")
        else:
            soup = self.scrape("xml")
        els = self.parse_sitemap(soup)
        if self.blocksize is not None:
            elblocks = list(_batch_block(els, self.blocksize))
            blockcount = len(elblocks)
            if blockcount - 1 < self.blocknum:
                raise ValueError(
                    "Selected sitemap does not contain enough blocks for given blocknum"
                )
            else:
                els = elblocks[self.blocknum]
        results = self.scrape_all([el["loc"].strip() for el in els if "loc" in el])
        return [
            Document(
                page_content=self.parsing_function(results[i]),
                metadata=self.meta_function(els[i], results[i]),
            )
            for i in range(len(results))
        ]
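A sketch of filtered, blocked crawling; the sitemap URL and regex are placeholders, and note that `filter_urls` entries are applied with `re.match` as shown above:

.. code-block:: python

    from langchain.document_loaders.sitemap import SitemapLoader

    loader = SitemapLoader(
        web_path="https://example.com/sitemap.xml",
        filter_urls=[r"https://example\.com/blog/.*"],  # keep only blog pages
        blocksize=50,  # split the sitemap into blocks of 50 locations
        blocknum=0,    # load only the first block
    )
    docs = loader.load()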
Sequence, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class HuggingFaceDatasetLoader(BaseLoader):\n \"\"\"Load Documents from the Hugging Face Hub.\"\"\"\n def __init__(\n self,\n path: str,\n page_content_column: str = \"text\",\n name: Optional[str] = None,\n data_dir: Optional[str] = None,\n data_files: Optional[\n Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]\n ] = None,\n cache_dir: Optional[str] = None,\n keep_in_memory: Optional[bool] = None,\n save_infos: bool = False,\n use_auth_token: Optional[Union[bool, str]] = None,\n num_proc: Optional[int] = None,\n ):\n \"\"\"Initialize the HuggingFaceDatasetLoader.\n Args:\n path: Path or name of the dataset.\n page_content_column: Page content column name. Default is \"text\".\n name: Name of the dataset configuration.\n data_dir: Data directory of the dataset configuration.\n data_files: Path(s) to source data file(s).\n cache_dir: Directory to read/write data.\n keep_in_memory: Whether to copy the dataset in-memory.\n save_infos: Save the dataset information (checksums/size/splits/...).\n Default is False.\n use_auth_token: Bearer token for remote files on the Dataset Hub.\n num_proc: Number of processes.\n \"\"\"\n self.path = path\n self.page_content_column = page_content_column", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/hugging_face_dataset.html"} {"id": "155f56be2469-1", "text": "\"\"\"\n self.path = path\n self.page_content_column = page_content_column\n self.name = name\n self.data_dir = data_dir\n self.data_files = data_files\n self.cache_dir = cache_dir\n self.keep_in_memory = keep_in_memory\n self.save_infos = save_infos\n self.use_auth_token = use_auth_token\n self.num_proc = num_proc\n[docs] def lazy_load(\n self,\n ) -> Iterator[Document]:\n \"\"\"Load documents lazily.\"\"\"\n try:\n from datasets import load_dataset\n except ImportError:\n raise ImportError(\n \"Could not import datasets python package. \"\n \"Please install it with `pip install datasets`.\"\n )\n dataset = load_dataset(\n path=self.path,\n name=self.name,\n data_dir=self.data_dir,\n data_files=self.data_files,\n cache_dir=self.cache_dir,\n keep_in_memory=self.keep_in_memory,\n save_infos=self.save_infos,\n use_auth_token=self.use_auth_token,\n num_proc=self.num_proc,\n )\n yield from (\n Document(\n page_content=row.pop(self.page_content_column),\n metadata=row,\n )\n for key in dataset.keys()\n for row in dataset[key]\n )\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n return list(self.lazy_load())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/hugging_face_dataset.html"} {"id": "e16fba29f312-0", "text": "Source code for langchain.document_loaders.gitbook\n\"\"\"Loader that loads GitBook.\"\"\"\nfrom typing import Any, List, Optional\nfrom urllib.parse import urljoin, urlparse\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.web_base import WebBaseLoader\n[docs]class GitbookLoader(WebBaseLoader):\n \"\"\"Load GitBook data.\n 1. load from either a single page, or\n 2. 
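# --- Usage sketch (illustrative): map the "text" column of the public "imdb"
# Hub dataset to page_content; every other column lands in Document.metadata,
# as lazy_load above implements.
from langchain.document_loaders.hugging_face_dataset import (
    HuggingFaceDatasetLoader,
)

loader = HuggingFaceDatasetLoader(path="imdb", page_content_column="text")
docs = loader.load()  # one Document per row, across all splits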
load all (relative) paths in the navbar.\n \"\"\"\n def __init__(\n self,\n web_page: str,\n load_all_paths: bool = False,\n base_url: Optional[str] = None,\n content_selector: str = \"main\",\n ):\n \"\"\"Initialize with web page and whether to load all paths.\n Args:\n web_page: The web page to load or the starting point from where\n relative paths are discovered.\n load_all_paths: If set to True, all relative paths in the navbar\n are loaded instead of only `web_page`.\n base_url: If `load_all_paths` is True, the relative paths are\n appended to this base url. Defaults to `web_page`.\n content_selector: The CSS selector for the content to load.\n Defaults to \"main\".\n \"\"\"\n self.base_url = base_url or web_page\n if self.base_url.endswith(\"/\"):\n self.base_url = self.base_url[:-1]\n if load_all_paths:\n # set web_path to the sitemap if we want to crawl all paths\n web_paths = f\"{self.base_url}/sitemap.xml\"\n else:\n web_paths = web_page\n super().__init__(web_paths)\n self.load_all_paths = load_all_paths\n self.content_selector = content_selector", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/gitbook.html"} {"id": "e16fba29f312-1", "text": "self.load_all_paths = load_all_paths\n self.content_selector = content_selector\n[docs] def load(self) -> List[Document]:\n \"\"\"Fetch text from one single GitBook page.\"\"\"\n if self.load_all_paths:\n soup_info = self.scrape()\n relative_paths = self._get_paths(soup_info)\n urls = [urljoin(self.base_url, path) for path in relative_paths]\n soup_infos = self.scrape_all(urls)\n _documents = [\n self._get_document(soup_info, url)\n for soup_info, url in zip(soup_infos, urls)\n ]\n else:\n soup_info = self.scrape()\n _documents = [self._get_document(soup_info, self.web_path)]\n documents = [d for d in _documents if d]\n return documents\n def _get_document(\n self, soup: Any, custom_url: Optional[str] = None\n ) -> Optional[Document]:\n \"\"\"Fetch content from page and return Document.\"\"\"\n page_content_raw = soup.find(self.content_selector)\n if not page_content_raw:\n return None\n content = page_content_raw.get_text(separator=\"\\n\").strip()\n title_if_exists = page_content_raw.find(\"h1\")\n title = title_if_exists.text if title_if_exists else \"\"\n metadata = {\"source\": custom_url or self.web_path, \"title\": title}\n return Document(page_content=content, metadata=metadata)\n def _get_paths(self, soup: Any) -> List[str]:\n \"\"\"Fetch all relative paths in the navbar.\"\"\"\n return [urlparse(loc.text).path for loc in soup.find_all(\"loc\")]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/gitbook.html"} {"id": "cba51ff24bdc-0", "text": "Source code for langchain.document_loaders.tencent_cos_file\n\"\"\"Loading logic for loading documents from Tencent Cloud COS file.\"\"\"\nimport os\nimport tempfile\nfrom typing import Any, Iterator, List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\n[docs]class TencentCOSFileLoader(BaseLoader):\n \"\"\"Loading logic for loading documents from Tencent Cloud COS.\"\"\"\n def __init__(self, conf: Any, bucket: str, key: str):\n \"\"\"Initialize with COS config, bucket and key name.\n :param conf(CosConfig): COS config.\n :param bucket(str): COS bucket.\n :param key(str): COS file key.\n \"\"\"\n self.conf = conf\n self.bucket = bucket\n self.key = key\n[docs] def load(self) -> 
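# --- Usage sketch (illustrative, placeholder URL): crawl an entire GitBook
# space. With load_all_paths=True the loader reads <base_url>/sitemap.xml and
# joins each relative path onto base_url, per load() above.
from langchain.document_loaders.gitbook import GitbookLoader

loader = GitbookLoader("https://docs.example.com", load_all_paths=True)
docs = loader.load()  # pages whose content selector matches nothing are dropped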
List[Document]:\n return list(self.lazy_load())\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n from qcloud_cos import CosS3Client\n except ImportError:\n raise ValueError(\n \"Could not import cos-python-sdk-v5 python package. \"\n \"Please install it with `pip install cos-python-sdk-v5`.\"\n )\n # Initialise a client\n client = CosS3Client(self.conf)\n with tempfile.TemporaryDirectory() as temp_dir:\n file_path = f\"{temp_dir}/{self.bucket}/{self.key}\"\n os.makedirs(os.path.dirname(file_path), exist_ok=True)\n # Download the file to a destination\n client.download_file(\n Bucket=self.bucket, Key=self.key, DestFilePath=file_path\n )\n loader = UnstructuredFileLoader(file_path)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/tencent_cos_file.html"} {"id": "cba51ff24bdc-1", "text": ")\n loader = UnstructuredFileLoader(file_path)\n # UnstructuredFileLoader not implement lazy_load yet\n return iter(loader.load())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/tencent_cos_file.html"} {"id": "1c8a03a17963-0", "text": "Source code for langchain.document_loaders.image_captions\n\"\"\"Loads image captions.\nBy default, the loader utilizes the pre-trained BLIP image captioning model.\nhttps://huggingface.co/Salesforce/blip-image-captioning-base\n\"\"\"\nfrom typing import Any, List, Tuple, Union\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class ImageCaptionLoader(BaseLoader):\n \"\"\"Loads the captions of an image\"\"\"\n def __init__(\n self,\n path_images: Union[str, List[str]],\n blip_processor: str = \"Salesforce/blip-image-captioning-base\",\n blip_model: str = \"Salesforce/blip-image-captioning-base\",\n ):\n \"\"\"\n Initialize with a list of image paths\n Args:\n path_images: A list of image paths.\n blip_processor: The name of the pre-trained BLIP processor.\n blip_model: The name of the pre-trained BLIP model.\n \"\"\"\n if isinstance(path_images, str):\n self.image_paths = [path_images]\n else:\n self.image_paths = path_images\n self.blip_processor = blip_processor\n self.blip_model = blip_model\n[docs] def load(self) -> List[Document]:\n \"\"\"\n Load from a list of image files\n \"\"\"\n try:\n from transformers import BlipForConditionalGeneration, BlipProcessor\n except ImportError:\n raise ImportError(\n \"`transformers` package not found, please install with \"\n \"`pip install transformers`.\"\n )\n processor = BlipProcessor.from_pretrained(self.blip_processor)\n model = BlipForConditionalGeneration.from_pretrained(self.blip_model)\n results = []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/image_captions.html"} {"id": "1c8a03a17963-1", "text": "results = []\n for path_image in self.image_paths:\n caption, metadata = self._get_captions_and_metadata(\n model=model, processor=processor, path_image=path_image\n )\n doc = Document(page_content=caption, metadata=metadata)\n results.append(doc)\n return results\n def _get_captions_and_metadata(\n self, model: Any, processor: Any, path_image: str\n ) -> Tuple[str, dict]:\n \"\"\"\n Helper function for getting the captions and metadata of an image\n \"\"\"\n try:\n from PIL import Image\n except ImportError:\n raise ImportError(\n \"`PIL` package not found, please install with `pip install pillow`\"\n )\n try:\n if path_image.startswith(\"http://\") or 
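# --- Usage sketch (illustrative; region, bucket, key and credentials are
# placeholders): the loader downloads the COS object to a temp dir and hands
# it to UnstructuredFileLoader, as lazy_load above shows.
from qcloud_cos import CosConfig
from langchain.document_loaders.tencent_cos_file import TencentCOSFileLoader

conf = CosConfig(
    Region="ap-guangzhou",
    SecretId="YOUR_SECRET_ID",
    SecretKey="YOUR_SECRET_KEY",
)
loader = TencentCOSFileLoader(conf=conf, bucket="examplebucket-125", key="report.pdf")
docs = loader.load()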
path_image.startswith(\"https://\"):\n image = Image.open(requests.get(path_image, stream=True).raw).convert(\n \"RGB\"\n )\n else:\n image = Image.open(path_image).convert(\"RGB\")\n except Exception:\n raise ValueError(f\"Could not get image data for {path_image}\")\n inputs = processor(image, \"an image of\", return_tensors=\"pt\")\n output = model.generate(**inputs)\n caption: str = processor.decode(output[0])\n metadata: dict = {\"image_path\": path_image}\n return caption, metadata", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/image_captions.html"} {"id": "240c59e7b055-0", "text": "Source code for langchain.document_loaders.psychic\n\"\"\"Loader that loads documents from Psychic.dev.\"\"\"\nfrom typing import List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class PsychicLoader(BaseLoader):\n \"\"\"Loader that loads documents from Psychic.dev.\"\"\"\n def __init__(\n self, api_key: str, account_id: str, connector_id: Optional[str] = None\n ):\n \"\"\"Initialize with API key, connector id, and account id.\"\"\"\n try:\n from psychicapi import ConnectorId, Psychic # noqa: F401\n except ImportError:\n raise ImportError(\n \"`psychicapi` package not found, please run `pip install psychicapi`\"\n )\n self.psychic = Psychic(secret_key=api_key)\n self.connector_id = ConnectorId(connector_id)\n self.account_id = account_id\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n psychic_docs = self.psychic.get_documents(\n connector_id=self.connector_id, account_id=self.account_id\n )\n return [\n Document(\n page_content=doc[\"content\"],\n metadata={\"title\": doc[\"title\"], \"source\": doc[\"uri\"]},\n )\n for doc in psychic_docs.documents\n ]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/psychic.html"} {"id": "27c0704364f5-0", "text": "Source code for langchain.document_loaders.reddit\n\"\"\"Reddit document loader.\"\"\"\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, Iterable, List, Optional, Sequence\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nif TYPE_CHECKING:\n import praw\ndef _dependable_praw_import() -> praw:\n try:\n import praw\n except ImportError:\n raise ValueError(\n \"praw package not found, please install it with `pip install praw`\"\n )\n return praw\n[docs]class RedditPostsLoader(BaseLoader):\n \"\"\"Reddit posts loader.\n Read posts on a subreddit.\n First you need to go to\n https://www.reddit.com/prefs/apps/\n and create your application\n \"\"\"\n def __init__(\n self,\n client_id: str,\n client_secret: str,\n user_agent: str,\n search_queries: Sequence[str],\n mode: str,\n categories: Sequence[str] = [\"new\"],\n number_posts: Optional[int] = 10,\n ):\n self.client_id = client_id\n self.client_secret = client_secret\n self.user_agent = user_agent\n self.search_queries = search_queries\n self.mode = mode\n self.categories = categories\n self.number_posts = number_posts\n[docs] def load(self) -> List[Document]:\n \"\"\"Load reddits.\"\"\"\n praw = _dependable_praw_import()\n reddit = praw.Reddit(\n client_id=self.client_id,\n client_secret=self.client_secret,\n user_agent=self.user_agent,\n )\n results: List[Document] = []\n if self.mode == \"subreddit\":\n for search_query in self.search_queries:", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/reddit.html"} {"id": "27c0704364f5-1", "text": "if self.mode == \"subreddit\":\n for search_query in self.search_queries:\n for category in self.categories:\n docs = self._subreddit_posts_loader(\n search_query=search_query, category=category, reddit=reddit\n )\n results.extend(docs)\n elif self.mode == \"username\":\n for search_query in self.search_queries:\n for category in self.categories:\n docs = self._user_posts_loader(\n search_query=search_query, category=category, reddit=reddit\n )\n results.extend(docs)\n else:\n raise ValueError(\n \"mode not correct, please enter 'username' or 'subreddit' as mode\"\n )\n return results\n def _subreddit_posts_loader(\n self, search_query: str, category: str, reddit: praw.reddit.Reddit\n ) -> Iterable[Document]:\n subreddit = reddit.subreddit(search_query)\n method = getattr(subreddit, category)\n cat_posts = method(limit=self.number_posts)\n \"\"\"Format reddit posts into a string.\"\"\"\n for post in cat_posts:\n metadata = {\n \"post_subreddit\": post.subreddit_name_prefixed,\n \"post_category\": category,\n \"post_title\": post.title,\n \"post_score\": post.score,\n \"post_id\": post.id,\n \"post_url\": post.url,\n \"post_author\": post.author,\n }\n yield Document(\n page_content=post.selftext,\n metadata=metadata,\n )\n def _user_posts_loader(\n self, search_query: str, category: str, reddit: praw.reddit.Reddit\n ) -> Iterable[Document]:\n user = reddit.redditor(search_query)\n method = getattr(user.submissions, category)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/reddit.html"} {"id": "27c0704364f5-2", "text": "method = getattr(user.submissions, category)\n cat_posts = method(limit=self.number_posts)\n \"\"\"Format reddit posts into a string.\"\"\"\n for post in cat_posts:\n metadata = {\n \"post_subreddit\": post.subreddit_name_prefixed,\n \"post_category\": category,\n \"post_title\": post.title,\n \"post_score\": post.score,\n \"post_id\": post.id,\n \"post_url\": post.url,\n \"post_author\": post.author,\n }\n yield Document(\n page_content=post.selftext,\n metadata=metadata,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/reddit.html"} {"id": "01176316e20c-0", "text": "Source code for langchain.document_loaders.airbyte_json\n\"\"\"Loader that loads local airbyte json files.\"\"\"\nimport json\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utils import stringify_dict\n[docs]class AirbyteJSONLoader(BaseLoader):\n \"\"\"Loader that loads local airbyte json files.\"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with a file path. 
This should start with '/tmp/airbyte_local/'.\"\"\"\n self.file_path = file_path\n \"\"\"Path to the local Airbyte JSON file to load.\"\"\"\n[docs] def load(self) -> List[Document]:\n text = \"\"\n for line in open(self.file_path, \"r\"):\n data = json.loads(line)[\"_airbyte_data\"]\n text += stringify_dict(data)\n metadata = {\"source\": self.file_path}\n return [Document(page_content=text, metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/airbyte_json.html"} {"id": "854ff646c58b-0", "text": "Source code for langchain.document_loaders.evernote\n\"\"\"Load documents from Evernote.\nhttps://gist.github.com/foxmask/7b29c43a161e001ff04afdb2f181e31c\n\"\"\"\nimport hashlib\nimport logging\nfrom base64 import b64decode\nfrom time import strptime\nfrom typing import Any, Dict, Iterator, List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class EverNoteLoader(BaseLoader):\n \"\"\"EverNote Loader.\n Loads an EverNote notebook export file e.g. my_notebook.enex into Documents.\n Instructions on producing this file can be found at\n https://help.evernote.com/hc/en-us/articles/209005557-Export-notes-and-notebooks-as-ENEX-or-HTML\n Currently only the plain text in the note is extracted and stored as the contents\n of the Document; any non-content metadata (e.g. 'author', 'created', 'updated' etc.\n but not 'content-raw' or 'resource') tags on the note will be extracted and stored\n as metadata on the Document.\n Args:\n file_path (str): The path to the notebook export with a .enex extension\n load_single_document (bool): Whether or not to concatenate the content of all\n notes into a single long Document.\n If this is set to True (default) then the only metadata on the document will be\n the 'source' which contains the file name of the export.\n \"\"\" # noqa: E501\n def __init__(self, file_path: str, load_single_document: bool = True):\n \"\"\"Initialize with file path.\"\"\"\n self.file_path = file_path\n self.load_single_document = load_single_document", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/evernote.html"} {"id": "854ff646c58b-1", "text": "self.file_path = file_path\n self.load_single_document = load_single_document\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents from EverNote export file.\"\"\"\n documents = [\n Document(\n page_content=note[\"content\"],\n metadata={\n **{\n key: value\n for key, value in note.items()\n if key not in [\"content\", \"content-raw\", \"resource\"]\n },\n **{\"source\": self.file_path},\n },\n )\n for note in self._parse_note_xml(self.file_path)\n if note.get(\"content\") is not None\n ]\n if not self.load_single_document:\n return documents\n return [\n Document(\n page_content=\"\".join([document.page_content for document in documents]),\n metadata={\"source\": self.file_path},\n )\n ]\n @staticmethod\n def _parse_content(content: str) -> str:\n try:\n import html2text\n return html2text.html2text(content).strip()\n except ImportError as e:\n logging.error(\n \"Could not import `html2text`. Although it is not a required package \"\n \"to use Langchain, using the EverNote loader requires `html2text`. 
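# --- Usage sketch (illustrative, placeholder file name): each line of an
# Airbyte local-JSON export is parsed and its _airbyte_data payload is
# stringified into one concatenated Document, per load() above.
from langchain.document_loaders.airbyte_json import AirbyteJSONLoader

loader = AirbyteJSONLoader("/tmp/airbyte_local/json_data/_airbyte_raw_pokemon.jsonl")
docs = loader.load()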
\"\n \"Please install `html2text` via `pip install html2text` and try again.\"\n )\n raise e\n @staticmethod\n def _parse_resource(resource: list) -> dict:\n rsc_dict: Dict[str, Any] = {}\n for elem in resource:\n if elem.tag == \"data\":\n # Sometimes elem.text is None\n rsc_dict[elem.tag] = b64decode(elem.text) if elem.text else b\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/evernote.html"} {"id": "854ff646c58b-2", "text": "rsc_dict[\"hash\"] = hashlib.md5(rsc_dict[elem.tag]).hexdigest()\n else:\n rsc_dict[elem.tag] = elem.text\n return rsc_dict\n @staticmethod\n def _parse_note(note: List, prefix: Optional[str] = None) -> dict:\n note_dict: Dict[str, Any] = {}\n resources = []\n def add_prefix(element_tag: str) -> str:\n if prefix is None:\n return element_tag\n return f\"{prefix}.{element_tag}\"\n for elem in note:\n if elem.tag == \"content\":\n note_dict[elem.tag] = EverNoteLoader._parse_content(elem.text)\n # A copy of original content\n note_dict[\"content-raw\"] = elem.text\n elif elem.tag == \"resource\":\n resources.append(EverNoteLoader._parse_resource(elem))\n elif elem.tag == \"created\" or elem.tag == \"updated\":\n note_dict[elem.tag] = strptime(elem.text, \"%Y%m%dT%H%M%SZ\")\n elif elem.tag == \"note-attributes\":\n additional_attributes = EverNoteLoader._parse_note(\n elem, elem.tag\n ) # Recursively enter the note-attributes tag\n note_dict.update(additional_attributes)\n else:\n note_dict[elem.tag] = elem.text\n if len(resources) > 0:\n note_dict[\"resource\"] = resources\n return {add_prefix(key): value for key, value in note_dict.items()}\n @staticmethod\n def _parse_note_xml(xml_file: str) -> Iterator[Dict[str, Any]]:\n \"\"\"Parse Evernote xml.\"\"\"\n # Without huge_tree set to True, parser may complain about huge text node", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/evernote.html"} {"id": "854ff646c58b-3", "text": "# Without huge_tree set to True, parser may complain about huge text node\n # Try to recover, because there may be \" \", which will cause\n # \"XMLSyntaxError: Entity 'nbsp' not defined\"\n try:\n from lxml import etree\n except ImportError as e:\n logging.error(\n \"Could not import `lxml`. Although it is not a required package to use \"\n \"Langchain, using the EverNote loader requires `lxml`. 
Please install \"\n \"`lxml` via `pip install lxml` and try again.\"\n )\n raise e\n context = etree.iterparse(\n xml_file, encoding=\"utf-8\", strip_cdata=False, huge_tree=True, recover=True\n )\n for action, elem in context:\n if elem.tag == \"note\":\n yield EverNoteLoader._parse_note(elem)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/evernote.html"} {"id": "88d3ee568a33-0", "text": "Source code for langchain.document_loaders.toml\nimport json\nfrom pathlib import Path\nfrom typing import Iterator, List, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class TomlLoader(BaseLoader):\n \"\"\"\n A TOML document loader that inherits from the BaseLoader class.\n This class can be initialized with either a single source file or a source\n directory containing TOML files.\n \"\"\"\n def __init__(self, source: Union[str, Path]):\n \"\"\"Initialize the TomlLoader with a source file or directory.\"\"\"\n self.source = Path(source)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load and return all documents.\"\"\"\n return list(self.lazy_load())\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"Lazily load the TOML documents from the source file or directory.\"\"\"\n import tomli\n if self.source.is_file() and self.source.suffix == \".toml\":\n files = [self.source]\n elif self.source.is_dir():\n files = list(self.source.glob(\"**/*.toml\"))\n else:\n raise ValueError(\"Invalid source path or file type\")\n for file_path in files:\n with file_path.open(\"r\", encoding=\"utf-8\") as file:\n content = file.read()\n try:\n data = tomli.loads(content)\n doc = Document(\n page_content=json.dumps(data),\n metadata={\"source\": str(file_path)},\n )\n yield doc\n except tomli.TOMLDecodeError as e:\n print(f\"Error parsing TOML file {file_path}: {e}\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/toml.html"} {"id": "c848d85f3c42-0", "text": "Source code for langchain.document_loaders.fauna\nfrom typing import Iterator, List, Optional, Sequence\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class FaunaLoader(BaseLoader):\n \"\"\"FaunaDB Loader.\n Attributes:\n query (str): The FQL query string to execute.\n page_content_field (str): The field that contains the content of each page.\n secret (str): The secret key for authenticating to FaunaDB.\n metadata_fields (Optional[Sequence[str]]):\n Optional list of field names to include in metadata.\n \"\"\"\n def __init__(\n self,\n query: str,\n page_content_field: str,\n secret: str,\n metadata_fields: Optional[Sequence[str]] = None,\n ):\n self.query = query\n self.page_content_field = page_content_field\n self.secret = secret\n self.metadata_fields = metadata_fields\n[docs] def load(self) -> List[Document]:\n return list(self.lazy_load())\n[docs] def lazy_load(self) -> Iterator[Document]:\n try:\n from fauna import Page, fql\n from fauna.client import Client\n from fauna.encoding import QuerySuccess\n except ImportError:\n raise ImportError(\n \"Could not import fauna python package. 
\"\n \"Please install it with `pip install fauna`.\"\n )\n # Create Fauna Client\n client = Client(secret=self.secret)\n # Run FQL Query\n response: QuerySuccess = client.query(fql(self.query))\n page: Page = response.data\n for result in page:\n if result is not None:\n document_dict = dict(result.items())\n page_content = \"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/fauna.html"} {"id": "c848d85f3c42-1", "text": "document_dict = dict(result.items())\n page_content = \"\"\n for key, value in document_dict.items():\n if key == self.page_content_field:\n page_content = value\n document: Document = Document(\n page_content=page_content,\n metadata={\"id\": result.id, \"ts\": result.ts},\n )\n yield document\n if page.after is not None:\n yield Document(\n page_content=\"Next Page Exists\",\n metadata={\"after\": page.after},\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/fauna.html"} {"id": "9050c69f402e-0", "text": "Source code for langchain.document_loaders.embaas\nimport base64\nimport warnings\nfrom typing import Any, Dict, Iterator, List, Optional\nimport requests\nfrom pydantic import BaseModel, root_validator, validator\nfrom typing_extensions import NotRequired, TypedDict\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseBlobParser, BaseLoader\nfrom langchain.document_loaders.blob_loaders import Blob\nfrom langchain.text_splitter import TextSplitter\nfrom langchain.utils import get_from_dict_or_env\nEMBAAS_DOC_API_URL = \"https://api.embaas.io/v1/document/extract-text/bytes/\"\n[docs]class EmbaasDocumentExtractionParameters(TypedDict):\n \"\"\"Parameters for the embaas document extraction API.\"\"\"\n mime_type: NotRequired[str]\n \"\"\"The mime type of the document.\"\"\"\n file_extension: NotRequired[str]\n \"\"\"The file extension of the document.\"\"\"\n file_name: NotRequired[str]\n \"\"\"The file name of the document.\"\"\"\n should_chunk: NotRequired[bool]\n \"\"\"Whether to chunk the document into pages.\"\"\"\n chunk_size: NotRequired[int]\n \"\"\"The maximum size of the text chunks.\"\"\"\n chunk_overlap: NotRequired[int]\n \"\"\"The maximum overlap allowed between chunks.\"\"\"\n chunk_splitter: NotRequired[str]\n \"\"\"The text splitter class name for creating chunks.\"\"\"\n separators: NotRequired[List[str]]\n \"\"\"The separators for chunks.\"\"\"\n should_embed: NotRequired[bool]\n \"\"\"Whether to create embeddings for the document in the response.\"\"\"\n model: NotRequired[str]\n \"\"\"The model to pass to the Embaas document extraction API.\"\"\"\n instruction: NotRequired[str]\n \"\"\"The instruction to pass to the Embaas document extraction API.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/embaas.html"} {"id": "9050c69f402e-1", "text": "\"\"\"The instruction to pass to the Embaas document extraction API.\"\"\"\n[docs]class EmbaasDocumentExtractionPayload(EmbaasDocumentExtractionParameters):\n \"\"\"Payload for the Embaas document extraction API.\"\"\"\n bytes: str\n \"\"\"The base64 encoded bytes of the document to extract text from.\"\"\"\n[docs]class BaseEmbaasLoader(BaseModel):\n \"\"\"Base class for embedding a model into an Embaas document extraction API.\"\"\"\n embaas_api_key: Optional[str] = None\n \"\"\"The API key for the embaas document extraction API.\"\"\"\n api_url: str = EMBAAS_DOC_API_URL\n \"\"\"The URL of the embaas document extraction 
API.\"\"\"\n params: EmbaasDocumentExtractionParameters = EmbaasDocumentExtractionParameters()\n \"\"\"Additional parameters to pass to the embaas document extraction API.\"\"\"\n[docs] @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n embaas_api_key = get_from_dict_or_env(\n values, \"embaas_api_key\", \"EMBAAS_API_KEY\"\n )\n values[\"embaas_api_key\"] = embaas_api_key\n return values\n[docs]class EmbaasBlobLoader(BaseEmbaasLoader, BaseBlobParser):\n \"\"\"Embaas's document byte loader.\n To use, you should have the\n environment variable ``EMBAAS_API_KEY`` set with your API key, or pass\n it as a named parameter to the constructor.\n Example:\n .. code-block:: python\n # Default parsing\n from langchain.document_loaders.embaas import EmbaasBlobLoader\n loader = EmbaasBlobLoader()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/embaas.html"} {"id": "9050c69f402e-2", "text": "loader = EmbaasBlobLoader()\n blob = Blob.from_path(path=\"example.mp3\")\n documents = loader.parse(blob=blob)\n # Custom api parameters (create embeddings automatically)\n from langchain.document_loaders.embaas import EmbaasBlobLoader\n loader = EmbaasBlobLoader(\n params={\n \"should_embed\": True,\n \"model\": \"e5-large-v2\",\n \"chunk_size\": 256,\n \"chunk_splitter\": \"CharacterTextSplitter\"\n }\n )\n blob = Blob.from_path(path=\"example.pdf\")\n documents = loader.parse(blob=blob)\n \"\"\"\n[docs] def lazy_parse(self, blob: Blob) -> Iterator[Document]:\n \"\"\"Parses the blob lazily.\n Args:\n blob: The blob to parse.\n \"\"\"\n yield from self._get_documents(blob=blob)\n @staticmethod\n def _api_response_to_documents(chunks: List[Dict[str, Any]]) -> List[Document]:\n \"\"\"Convert the API response to a list of documents.\"\"\"\n docs = []\n for chunk in chunks:\n metadata = chunk[\"metadata\"]\n if chunk.get(\"embedding\", None) is not None:\n metadata[\"embedding\"] = chunk[\"embedding\"]\n doc = Document(page_content=chunk[\"text\"], metadata=metadata)\n docs.append(doc)\n return docs\n def _generate_payload(self, blob: Blob) -> EmbaasDocumentExtractionPayload:\n \"\"\"Generates payload for the API request.\"\"\"\n base64_byte_str = base64.b64encode(blob.as_bytes()).decode()\n payload: EmbaasDocumentExtractionPayload = EmbaasDocumentExtractionPayload(\n bytes=base64_byte_str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/embaas.html"} {"id": "9050c69f402e-3", "text": "bytes=base64_byte_str,\n # Workaround for mypy issue: https://github.com/python/mypy/issues/9408\n # type: ignore\n **self.params,\n )\n if blob.mimetype is not None and payload.get(\"mime_type\", None) is None:\n payload[\"mime_type\"] = blob.mimetype\n return payload\n def _handle_request(\n self, payload: EmbaasDocumentExtractionPayload\n ) -> List[Document]:\n \"\"\"Sends a request to the embaas API and handles the response.\"\"\"\n headers = {\n \"Authorization\": f\"Bearer {self.embaas_api_key}\",\n \"Content-Type\": \"application/json\",\n }\n response = requests.post(self.api_url, headers=headers, json=payload)\n response.raise_for_status()\n parsed_response = response.json()\n return EmbaasBlobLoader._api_response_to_documents(\n chunks=parsed_response[\"data\"][\"chunks\"]\n )\n def _get_documents(self, blob: Blob) -> Iterator[Document]:\n \"\"\"Get the documents from the blob.\"\"\"\n payload = self._generate_payload(blob=blob)\n 
try:\n documents = self._handle_request(payload=payload)\n except requests.exceptions.RequestException as e:\n if e.response is None or not e.response.text:\n raise ValueError(\n f\"Error raised by embaas document text extraction API: {e}\"\n )\n parsed_response = e.response.json()\n if \"message\" in parsed_response:\n raise ValueError(\n f\"Validation Error raised by embaas document text extraction API:\"\n f\" {parsed_response['message']}\"\n )\n raise\n yield from documents", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/embaas.html"} {"id": "9050c69f402e-4", "text": ")\n raise\n yield from documents\n[docs]class EmbaasLoader(BaseEmbaasLoader, BaseLoader):\n \"\"\"Embaas's document loader.\n To use, you should have the\n environment variable ``EMBAAS_API_KEY`` set with your API key, or pass\n it as a named parameter to the constructor.\n Example:\n .. code-block:: python\n # Default parsing\n from langchain.document_loaders.embaas import EmbaasLoader\n loader = EmbaasLoader(file_path=\"example.mp3\")\n documents = loader.load()\n # Custom api parameters (create embeddings automatically)\n from langchain.document_loaders.embaas import EmbaasBlobLoader\n loader = EmbaasBlobLoader(\n file_path=\"example.pdf\",\n params={\n \"should_embed\": True,\n \"model\": \"e5-large-v2\",\n \"chunk_size\": 256,\n \"chunk_splitter\": \"CharacterTextSplitter\"\n }\n )\n documents = loader.load()\n \"\"\"\n file_path: str\n \"\"\"The path to the file to load.\"\"\"\n blob_loader: Optional[EmbaasBlobLoader]\n \"\"\"The blob loader to use. If not provided, a default one will be created.\"\"\"\n[docs] @validator(\"blob_loader\", always=True)\n def validate_blob_loader(\n cls, v: EmbaasBlobLoader, values: Dict\n ) -> EmbaasBlobLoader:\n return v or EmbaasBlobLoader(\n embaas_api_key=values[\"embaas_api_key\"],\n api_url=values[\"api_url\"],\n params=values[\"params\"],\n )\n[docs] def lazy_load(self) -> Iterator[Document]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/embaas.html"} {"id": "9050c69f402e-5", "text": ")\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"Load the documents from the file path lazily.\"\"\"\n blob = Blob.from_path(path=self.file_path)\n assert self.blob_loader is not None\n # Should never be None, but mypy doesn't know that.\n yield from self.blob_loader.lazy_parse(blob=blob)\n[docs] def load(self) -> List[Document]:\n return list(self.lazy_load())\n[docs] def load_and_split(\n self, text_splitter: Optional[TextSplitter] = None\n ) -> List[Document]:\n if self.params.get(\"should_embed\", False):\n warnings.warn(\n \"Embeddings are not supported with load_and_split.\"\n \" Use the API splitter to properly generate embeddings.\"\n \" For more information see embaas.io docs.\"\n )\n return super().load_and_split(text_splitter=text_splitter)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/embaas.html"} {"id": "b6e91b2c1fa1-0", "text": "Source code for langchain.document_loaders.docugami\n\"\"\"Loads processed documents from Docugami.\"\"\"\nimport io\nimport logging\nimport os\nimport re\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Mapping, Optional, Sequence, Union\nimport requests\nfrom pydantic import BaseModel, root_validator\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nTD_NAME = \"{http://www.w3.org/1999/xhtml}td\"\nTABLE_NAME = 
\"{http://www.w3.org/1999/xhtml}table\"\nXPATH_KEY = \"xpath\"\nDOCUMENT_ID_KEY = \"id\"\nDOCUMENT_NAME_KEY = \"name\"\nSTRUCTURE_KEY = \"structure\"\nTAG_KEY = \"tag\"\nPROJECTS_KEY = \"projects\"\nDEFAULT_API_ENDPOINT = \"https://api.docugami.com/v1preview1\"\nlogger = logging.getLogger(__name__)\n[docs]class DocugamiLoader(BaseLoader, BaseModel):\n \"\"\"Loads processed docs from Docugami.\n To use, you should have the ``lxml`` python package installed.\n \"\"\"\n api: str = DEFAULT_API_ENDPOINT\n \"\"\"The Docugami API endpoint to use.\"\"\"\n access_token: Optional[str] = os.environ.get(\"DOCUGAMI_API_KEY\")\n \"\"\"The Docugami API access token to use.\"\"\"\n docset_id: Optional[str]\n \"\"\"The Docugami API docset ID to use.\"\"\"\n document_ids: Optional[Sequence[str]]\n \"\"\"The Docugami API document IDs to use.\"\"\"\n file_paths: Optional[Sequence[Union[Path, str]]]\n \"\"\"The local file paths to use.\"\"\"\n min_chunk_size: int = 32 # appended to the next chunk to avoid over-chunking", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/docugami.html"} {"id": "b6e91b2c1fa1-1", "text": "\"\"\"The minimum chunk size to use when parsing DGML. Defaults to 32.\"\"\"\n[docs] @root_validator\n def validate_local_or_remote(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Validate that either local file paths are given, or remote API docset ID.\n Args:\n values: The values to validate.\n Returns:\n The validated values.\n \"\"\"\n if values.get(\"file_paths\") and values.get(\"docset_id\"):\n raise ValueError(\"Cannot specify both file_paths and remote API docset_id\")\n if not values.get(\"file_paths\") and not values.get(\"docset_id\"):\n raise ValueError(\"Must specify either file_paths or remote API docset_id\")\n if values.get(\"docset_id\") and not values.get(\"access_token\"):\n raise ValueError(\"Must specify access token if using remote API docset_id\")\n return values\n def _parse_dgml(\n self, document: Mapping, content: bytes, doc_metadata: Optional[Mapping] = None\n ) -> List[Document]:\n \"\"\"Parse a single DGML document into a list of Documents.\"\"\"\n try:\n from lxml import etree\n except ImportError:\n raise ImportError(\n \"Could not import lxml python package. 
\"\n \"Please install it with `pip install lxml`.\"\n )\n # helpers\n def _xpath_qname_for_chunk(chunk: Any) -> str:\n \"\"\"Get the xpath qname for a chunk.\"\"\"\n qname = f\"{chunk.prefix}:{chunk.tag.split('}')[-1]}\"\n parent = chunk.getparent()\n if parent is not None:\n doppelgangers = [x for x in parent if x.tag == chunk.tag]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/docugami.html"} {"id": "b6e91b2c1fa1-2", "text": "doppelgangers = [x for x in parent if x.tag == chunk.tag]\n if len(doppelgangers) > 1:\n idx_of_self = doppelgangers.index(chunk)\n qname = f\"{qname}[{idx_of_self + 1}]\"\n return qname\n def _xpath_for_chunk(chunk: Any) -> str:\n \"\"\"Get the xpath for a chunk.\"\"\"\n ancestor_chain = chunk.xpath(\"ancestor-or-self::*\")\n return \"/\" + \"/\".join(_xpath_qname_for_chunk(x) for x in ancestor_chain)\n def _structure_value(node: Any) -> str:\n \"\"\"Get the structure value for a node.\"\"\"\n structure = (\n \"table\"\n if node.tag == TABLE_NAME\n else node.attrib[\"structure\"]\n if \"structure\" in node.attrib\n else None\n )\n return structure\n def _is_structural(node: Any) -> bool:\n \"\"\"Check if a node is structural.\"\"\"\n return _structure_value(node) is not None\n def _is_heading(node: Any) -> bool:\n \"\"\"Check if a node is a heading.\"\"\"\n structure = _structure_value(node)\n return structure is not None and structure.lower().startswith(\"h\")\n def _get_text(node: Any) -> str:\n \"\"\"Get the text of a node.\"\"\"\n return \" \".join(node.itertext()).strip()\n def _has_structural_descendant(node: Any) -> bool:\n \"\"\"Check if a node has a structural descendant.\"\"\"\n for child in node:\n if _is_structural(child) or _has_structural_descendant(child):\n return True\n return False\n def _leaf_structural_nodes(node: Any) -> List:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/docugami.html"} {"id": "b6e91b2c1fa1-3", "text": "return False\n def _leaf_structural_nodes(node: Any) -> List:\n \"\"\"Get the leaf structural nodes of a node.\"\"\"\n if _is_structural(node) and not _has_structural_descendant(node):\n return [node]\n else:\n leaf_nodes = []\n for child in node:\n leaf_nodes.extend(_leaf_structural_nodes(child))\n return leaf_nodes\n def _create_doc(node: Any, text: str) -> Document:\n \"\"\"Create a Document from a node and text.\"\"\"\n metadata = {\n XPATH_KEY: _xpath_for_chunk(node),\n DOCUMENT_ID_KEY: document[\"id\"],\n DOCUMENT_NAME_KEY: document[\"name\"],\n STRUCTURE_KEY: node.attrib.get(\"structure\", \"\"),\n TAG_KEY: re.sub(r\"\\{.*\\}\", \"\", node.tag),\n }\n if doc_metadata:\n metadata.update(doc_metadata)\n return Document(\n page_content=text,\n metadata=metadata,\n )\n # parse the tree and return chunks\n tree = etree.parse(io.BytesIO(content))\n root = tree.getroot()\n chunks: List[Document] = []\n prev_small_chunk_text = None\n for node in _leaf_structural_nodes(root):\n text = _get_text(node)\n if prev_small_chunk_text:\n text = prev_small_chunk_text + \" \" + text\n prev_small_chunk_text = None\n if _is_heading(node) or len(text) < self.min_chunk_size:\n # Save headings or other small chunks to be appended to the next chunk\n prev_small_chunk_text = text\n else:\n chunks.append(_create_doc(node, text))\n if prev_small_chunk_text and len(chunks) > 0:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/docugami.html"} {"id": "b6e91b2c1fa1-4", "text": "if prev_small_chunk_text and len(chunks) > 
0:\n # small chunk at the end left over, just append to last chunk\n chunks[-1].page_content += \" \" + prev_small_chunk_text\n return chunks\n def _document_details_for_docset_id(self, docset_id: str) -> List[Dict]:\n \"\"\"Gets all document details for the given docset ID\"\"\"\n url = f\"{self.api}/docsets/{docset_id}/documents\"\n all_documents = []\n while url:\n response = requests.get(\n url,\n headers={\"Authorization\": f\"Bearer {self.access_token}\"},\n )\n if response.ok:\n data = response.json()\n all_documents.extend(data[\"documents\"])\n url = data.get(\"next\", None)\n else:\n raise Exception(\n f\"Failed to download {url} (status: {response.status_code})\"\n )\n return all_documents\n def _project_details_for_docset_id(self, docset_id: str) -> List[Dict]:\n \"\"\"Gets all project details for the given docset ID\"\"\"\n url = f\"{self.api}/projects?docset.id={docset_id}\"\n all_projects = []\n while url:\n response = requests.request(\n \"GET\",\n url,\n headers={\"Authorization\": f\"Bearer {self.access_token}\"},\n data={},\n )\n if response.ok:\n data = response.json()\n all_projects.extend(data[\"projects\"])\n url = data.get(\"next\", None)\n else:\n raise Exception(\n f\"Failed to download {url} (status: {response.status_code})\"\n )\n return all_projects", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/docugami.html"} {"id": "b6e91b2c1fa1-5", "text": ")\n return all_projects\n def _metadata_for_project(self, project: Dict) -> Dict:\n \"\"\"Gets project metadata for all files\"\"\"\n project_id = project.get(\"id\")\n url = f\"{self.api}/projects/{project_id}/artifacts/latest\"\n all_artifacts = []\n while url:\n response = requests.request(\n \"GET\",\n url,\n headers={\"Authorization\": f\"Bearer {self.access_token}\"},\n data={},\n )\n if response.ok:\n data = response.json()\n all_artifacts.extend(data[\"artifacts\"])\n url = data.get(\"next\", None)\n else:\n raise Exception(\n f\"Failed to download {url} (status: {response.status_code})\"\n )\n per_file_metadata = {}\n for artifact in all_artifacts:\n artifact_name = artifact.get(\"name\")\n artifact_url = artifact.get(\"url\")\n artifact_doc = artifact.get(\"document\")\n if artifact_name == \"report-values.xml\" and artifact_url and artifact_doc:\n doc_id = artifact_doc[\"id\"]\n metadata: Dict = {}\n # the evaluated XML for each document is named after the project\n response = requests.request(\n \"GET\",\n f\"{artifact_url}/content\",\n headers={\"Authorization\": f\"Bearer {self.access_token}\"},\n data={},\n )\n if response.ok:\n try:\n from lxml import etree\n except ImportError:\n raise ImportError(\n \"Could not import lxml python package. 
\"\n \"Please install it with `pip install lxml`.\"\n )\n artifact_tree = etree.parse(io.BytesIO(response.content))\n artifact_root = artifact_tree.getroot()\n ns = artifact_root.nsmap", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/docugami.html"} {"id": "b6e91b2c1fa1-6", "text": "artifact_root = artifact_tree.getroot()\n ns = artifact_root.nsmap\n entries = artifact_root.xpath(\"//pr:Entry\", namespaces=ns)\n for entry in entries:\n heading = entry.xpath(\"./pr:Heading\", namespaces=ns)[0].text\n value = \" \".join(\n entry.xpath(\"./pr:Value\", namespaces=ns)[0].itertext()\n ).strip()\n metadata[heading] = value\n per_file_metadata[doc_id] = metadata\n else:\n raise Exception(\n f\"Failed to download {artifact_url}/content \"\n + \"(status: {response.status_code})\"\n )\n return per_file_metadata\n def _load_chunks_for_document(\n self, docset_id: str, document: Dict, doc_metadata: Optional[Dict] = None\n ) -> List[Document]:\n \"\"\"Load chunks for a document.\"\"\"\n document_id = document[\"id\"]\n url = f\"{self.api}/docsets/{docset_id}/documents/{document_id}/dgml\"\n response = requests.request(\n \"GET\",\n url,\n headers={\"Authorization\": f\"Bearer {self.access_token}\"},\n data={},\n )\n if response.ok:\n return self._parse_dgml(document, response.content, doc_metadata)\n else:\n raise Exception(\n f\"Failed to download {url} (status: {response.status_code})\"\n )\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n chunks: List[Document] = []\n if self.access_token and self.docset_id:\n # remote mode\n _document_details = self._document_details_for_docset_id(self.docset_id)\n if self.document_ids:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/docugami.html"} {"id": "b6e91b2c1fa1-7", "text": "if self.document_ids:\n _document_details = [\n d for d in _document_details if d[\"id\"] in self.document_ids\n ]\n _project_details = self._project_details_for_docset_id(self.docset_id)\n combined_project_metadata = {}\n if _project_details:\n # if there are any projects for this docset, load project metadata\n for project in _project_details:\n metadata = self._metadata_for_project(project)\n combined_project_metadata.update(metadata)\n for doc in _document_details:\n doc_metadata = combined_project_metadata.get(doc[\"id\"])\n chunks += self._load_chunks_for_document(\n self.docset_id, doc, doc_metadata\n )\n elif self.file_paths:\n # local mode (for integration testing, or pre-downloaded XML)\n for path in self.file_paths:\n path = Path(path)\n with open(path, \"rb\") as file:\n chunks += self._parse_dgml(\n {\n DOCUMENT_ID_KEY: path.name,\n DOCUMENT_NAME_KEY: path.name,\n },\n file.read(),\n )\n return chunks", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/docugami.html"} {"id": "194a9142a6e6-0", "text": "Source code for langchain.document_loaders.text\nimport logging\nfrom typing import List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.helpers import detect_file_encodings\nlogger = logging.getLogger(__name__)\n[docs]class TextLoader(BaseLoader):\n \"\"\"Load text files.\n Args:\n file_path: Path to the file to load.\n encoding: File encoding to use. 
If `None`, the file will be loaded\n with the default system encoding.\n autodetect_encoding: Whether to try to autodetect the file encoding\n if the specified encoding fails.\n \"\"\"\n def __init__(\n self,\n file_path: str,\n encoding: Optional[str] = None,\n autodetect_encoding: bool = False,\n ):\n \"\"\"Initialize with file path.\"\"\"\n self.file_path = file_path\n self.encoding = encoding\n self.autodetect_encoding = autodetect_encoding\n[docs] def load(self) -> List[Document]:\n \"\"\"Load from file path.\"\"\"\n text = \"\"\n try:\n with open(self.file_path, encoding=self.encoding) as f:\n text = f.read()\n except UnicodeDecodeError as e:\n if self.autodetect_encoding:\n detected_encodings = detect_file_encodings(self.file_path)\n for encoding in detected_encodings:\n logger.debug(\"Trying encoding: %s\", encoding.encoding)\n try:\n with open(self.file_path, encoding=encoding.encoding) as f:\n text = f.read()\n break\n except UnicodeDecodeError:\n continue\n else:\n raise RuntimeError(f\"Error loading {self.file_path}\") from e\n except Exception as e:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/text.html"} {"id": "194a9142a6e6-1", "text": "except Exception as e:\n raise RuntimeError(f\"Error loading {self.file_path}\") from e\n metadata = {\"source\": self.file_path}\n return [Document(page_content=text, metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/text.html"} {"id": "342b24ac27a4-0", "text": "Source code for langchain.document_loaders.blockchain\nimport os\nimport re\nimport time\nfrom enum import Enum\nfrom typing import List, Optional\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class BlockchainType(Enum):\n \"\"\"Enumerator of the supported blockchains.\"\"\"\n ETH_MAINNET = \"eth-mainnet\"\n ETH_GOERLI = \"eth-goerli\"\n POLYGON_MAINNET = \"polygon-mainnet\"\n POLYGON_MUMBAI = \"polygon-mumbai\"\n[docs]class BlockchainDocumentLoader(BaseLoader):\n \"\"\"Loads elements from a blockchain smart contract into Langchain documents.\n The supported blockchains are: Ethereum mainnet, Ethereum Goerli testnet,\n Polygon mainnet, and Polygon Mumbai testnet.\n If no BlockchainType is specified, the default is Ethereum mainnet.\n The Loader uses the Alchemy API to interact with the blockchain.\n ALCHEMY_API_KEY environment variable must be set to use this loader.\n The API returns 100 NFTs per request and can be paginated using the\n startToken parameter.\n If get_all_tokens is set to True, the loader will get all tokens\n on the contract. Note that for contracts with a large number of tokens,\n this may take a long time (e.g. 10k tokens is 100 requests).\n Default value is False for this reason.\n The max_execution_time (sec) can be set to limit the execution time\n of the loader.\n Future versions of this loader can:\n - Support additional Alchemy APIs (e.g. getTransactions, etc.)\n - Support additional blockchain APIs (e.g. 
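# --- Usage sketch (illustrative, placeholder path): read a file and, if the
# declared encoding fails, retry with chardet-detected candidates, as load()
# above implements.
from langchain.document_loaders.text import TextLoader

loader = TextLoader("notes.txt", encoding="utf-8", autodetect_encoding=True)
docs = loader.load()  # a single Document with {"source": "notes.txt"}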
Infura, Opensea, etc.)\n \"\"\"\n def __init__(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blockchain.html"} {"id": "342b24ac27a4-1", "text": "\"\"\"\n def __init__(\n self,\n contract_address: str,\n blockchainType: BlockchainType = BlockchainType.ETH_MAINNET,\n api_key: str = \"docs-demo\",\n startToken: str = \"\",\n get_all_tokens: bool = False,\n max_execution_time: Optional[int] = None,\n ):\n \"\"\"\n Args:\n contract_address: The address of the smart contract.\n blockchainType: The blockchain type.\n api_key: The Alchemy API key.\n startToken: The start token for pagination.\n get_all_tokens: Whether to get all tokens on the contract.\n max_execution_time: The maximum execution time (sec).\n \"\"\"\n self.contract_address = contract_address\n self.blockchainType = blockchainType.value\n self.api_key = os.environ.get(\"ALCHEMY_API_KEY\") or api_key\n self.startToken = startToken\n self.get_all_tokens = get_all_tokens\n self.max_execution_time = max_execution_time\n if not self.api_key:\n raise ValueError(\"Alchemy API key not provided.\")\n if not re.match(r\"^0x[a-fA-F0-9]{40}$\", self.contract_address):\n raise ValueError(f\"Invalid contract address {self.contract_address}\")\n[docs] def load(self) -> List[Document]:\n result = []\n current_start_token = self.startToken\n start_time = time.time()\n while True:\n url = (\n f\"https://{self.blockchainType}.g.alchemy.com/nft/v2/\"\n f\"{self.api_key}/getNFTsForCollection?withMetadata=\"\n f\"True&contractAddress={self.contract_address}\"\n f\"&startToken={current_start_token}\"\n )\n response = requests.get(url)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blockchain.html"} {"id": "342b24ac27a4-2", "text": ")\n response = requests.get(url)\n if response.status_code != 200:\n raise ValueError(\n f\"Request failed with status code {response.status_code}\"\n )\n items = response.json()[\"nfts\"]\n if not items:\n break\n for item in items:\n content = str(item)\n tokenId = item[\"id\"][\"tokenId\"]\n metadata = {\n \"source\": self.contract_address,\n \"blockchain\": self.blockchainType,\n \"tokenId\": tokenId,\n }\n result.append(Document(page_content=content, metadata=metadata))\n # exit after the first API call if get_all_tokens is False\n if not self.get_all_tokens:\n break\n # get the start token for the next API call from the last item in array\n current_start_token = self._get_next_tokenId(result[-1].metadata[\"tokenId\"])\n if (\n self.max_execution_time is not None\n and (time.time() - start_time) > self.max_execution_time\n ):\n raise RuntimeError(\"Execution time exceeded the allowed time limit.\")\n if not result:\n raise ValueError(\n f\"No NFTs found for contract address {self.contract_address}\"\n )\n return result\n # add one to the tokenId, ensuring the correct tokenId format is used\n def _get_next_tokenId(self, tokenId: str) -> str:\n value_type = self._detect_value_type(tokenId)\n if value_type == \"hex_0x\":\n value_int = int(tokenId, 16)\n elif value_type == \"hex_0xbf\":\n value_int = int(tokenId[2:], 16)\n else:\n value_int = int(tokenId)\n result = value_int + 1", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blockchain.html"} {"id": "342b24ac27a4-3", "text": "value_int = int(tokenId)\n result = value_int + 1\n if value_type == \"hex_0x\":\n return \"0x\" + format(result, \"0\" + str(len(tokenId) - 2) + \"x\")\n elif value_type == \"hex_0xbf\":\n return \"0xbf\" + 
format(result, \"0\" + str(len(tokenId) - 4) + \"x\")\n else:\n return str(result)\n # A smart contract can use different formats for the tokenId\n @staticmethod\n def _detect_value_type(tokenId: str) -> str:\n if isinstance(tokenId, int):\n return \"int\"\n elif tokenId.startswith(\"0x\"):\n return \"hex_0x\"\n elif tokenId.startswith(\"0xbf\"):\n return \"hex_0xbf\"\n else:\n return \"hex_0xbf\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blockchain.html"} {"id": "e676fd47244b-0", "text": "Source code for langchain.document_loaders.html_bs\n\"\"\"Loader that uses bs4 to load HTML files, enriching metadata with page title.\"\"\"\nimport logging\nfrom typing import Dict, List, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__name__)\n[docs]class BSHTMLLoader(BaseLoader):\n \"\"\"Loader that uses beautiful soup to parse HTML files.\"\"\"\n def __init__(\n self,\n file_path: str,\n open_encoding: Union[str, None] = None,\n bs_kwargs: Union[dict, None] = None,\n get_text_separator: str = \"\",\n ) -> None:\n \"\"\"Initialise with path, and optionally, file encoding to use, and any kwargs\n to pass to the BeautifulSoup object.\n Args:\n file_path: The path to the file to load.\n open_encoding: The encoding to use when opening the file.\n bs_kwargs: Any kwargs to pass to the BeautifulSoup object.\n get_text_separator: The separator to use when calling get_text on the soup.\n \"\"\"\n try:\n import bs4 # noqa:F401\n except ImportError:\n raise ImportError(\n \"beautifulsoup4 package not found, please install it with \"\n \"`pip install beautifulsoup4`\"\n )\n self.file_path = file_path\n self.open_encoding = open_encoding\n if bs_kwargs is None:\n bs_kwargs = {\"features\": \"lxml\"}\n self.bs_kwargs = bs_kwargs\n self.get_text_separator = get_text_separator\n[docs] def load(self) -> List[Document]:\n \"\"\"Load HTML document into document objects.\"\"\"\n from bs4 import BeautifulSoup", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/html_bs.html"} {"id": "e676fd47244b-1", "text": "\"\"\"Load HTML document into document objects.\"\"\"\n from bs4 import BeautifulSoup\n with open(self.file_path, \"r\", encoding=self.open_encoding) as f:\n soup = BeautifulSoup(f, **self.bs_kwargs)\n text = soup.get_text(self.get_text_separator)\n if soup.title:\n title = str(soup.title.string)\n else:\n title = \"\"\n metadata: Dict[str, Union[str, None]] = {\n \"source\": self.file_path,\n \"title\": title,\n }\n return [Document(page_content=text, metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/html_bs.html"} {"id": "d23c41749e7d-0", "text": "Source code for langchain.document_loaders.helpers\n\"\"\"Document loader helpers.\"\"\"\nimport concurrent.futures\nfrom typing import List, NamedTuple, Optional, cast\n[docs]class FileEncoding(NamedTuple):\n \"\"\"A file encoding as the NamedTuple.\"\"\"\n encoding: Optional[str]\n \"\"\"The encoding of the file.\"\"\"\n confidence: float\n \"\"\"The confidence of the encoding.\"\"\"\n language: Optional[str]\n \"\"\"The language of the file.\"\"\"\n[docs]def detect_file_encodings(file_path: str, timeout: int = 5) -> List[FileEncoding]:\n \"\"\"Try to detect the file encoding.\n Returns a list of `FileEncoding` tuples with the detected encodings ordered\n by confidence.\n Args:\n file_path: The path to the file to detect the 
Source code for langchain.document_loaders.url

"""Loader that uses unstructured to load HTML files."""
import logging
from typing import Any, List

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader

logger = logging.getLogger(__name__)


class UnstructuredURLLoader(BaseLoader):
    """Loader that uses unstructured to load HTML files."""

    def __init__(
        self,
        urls: List[str],
        continue_on_failure: bool = True,
        mode: str = "single",
        show_progress_bar: bool = False,
        **unstructured_kwargs: Any,
    ):
        """Initialize with file path."""
        try:
            import unstructured  # noqa:F401
            from unstructured.__version__ import __version__ as __unstructured_version__

            self.__version = __unstructured_version__
        except ImportError:
            raise ValueError(
                "unstructured package not found, please install it with "
                "`pip install unstructured`"
            )
        self._validate_mode(mode)
        self.mode = mode

        headers = unstructured_kwargs.pop("headers", {})
        if len(headers.keys()) != 0:
            warn_about_headers = False
            if self.__is_non_html_available():
                warn_about_headers = not self.__is_headers_available_for_non_html()
            else:
                warn_about_headers = not self.__is_headers_available_for_html()

            if warn_about_headers:
                logger.warning(
                    "You are using an old version of unstructured. "
                    "The headers parameter is ignored"
                )

        self.urls = urls
        self.continue_on_failure = continue_on_failure
        self.headers = headers
        self.unstructured_kwargs = unstructured_kwargs
        self.show_progress_bar = show_progress_bar

    def _validate_mode(self, mode: str) -> None:
        _valid_modes = {"single", "elements"}
        if mode not in _valid_modes:
            raise ValueError(
                f"Got {mode} for `mode`, but should be one of `{_valid_modes}`"
            )

    def __is_headers_available_for_html(self) -> bool:
        _unstructured_version = self.__version.split("-")[0]
        unstructured_version = tuple([int(x) for x in _unstructured_version.split(".")])
        return unstructured_version >= (0, 5, 7)

    def __is_headers_available_for_non_html(self) -> bool:
        _unstructured_version = self.__version.split("-")[0]
        unstructured_version = tuple([int(x) for x in _unstructured_version.split(".")])
        return unstructured_version >= (0, 5, 13)

    def __is_non_html_available(self) -> bool:
        _unstructured_version = self.__version.split("-")[0]
        unstructured_version = tuple([int(x) for x in _unstructured_version.split(".")])
        return unstructured_version >= (0, 5, 12)

    def load(self) -> List[Document]:
        """Load file."""
        from unstructured.partition.auto import partition
        from unstructured.partition.html import partition_html

        docs: List[Document] = list()
        if self.show_progress_bar:
            try:
                from tqdm import tqdm
            except ImportError as e:
                raise ImportError(
                    "Package tqdm must be installed if show_progress_bar=True. "
                    "Please install with 'pip install tqdm' or set "
                    "show_progress_bar=False."
                ) from e

            urls = tqdm(self.urls)
        else:
            urls = self.urls

        for url in urls:
            try:
                if self.__is_non_html_available():
                    if self.__is_headers_available_for_non_html():
                        elements = partition(
                            url=url, headers=self.headers, **self.unstructured_kwargs
                        )
                    else:
                        elements = partition(url=url, **self.unstructured_kwargs)
                else:
                    if self.__is_headers_available_for_html():
                        elements = partition_html(
                            url=url, headers=self.headers, **self.unstructured_kwargs
                        )
                    else:
                        elements = partition_html(url=url, **self.unstructured_kwargs)
            except Exception as e:
                if self.continue_on_failure:
                    logger.error(f"Error fetching or processing {url}, exception: {e}")
                    continue
                else:
                    raise e

            if self.mode == "single":
                text = "\n\n".join([str(el) for el in elements])
                metadata = {"source": url}
                docs.append(Document(page_content=text, metadata=metadata))
            elif self.mode == "elements":
                for element in elements:
                    metadata = element.metadata.to_dict()
                    metadata["category"] = element.category
                    docs.append(Document(page_content=str(element), metadata=metadata))

        return docs
Source code for langchain.document_loaders.notiondb

"""Notion DB loader for langchain"""
from typing import Any, Dict, List, Optional

import requests

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader

NOTION_BASE_URL = "https://api.notion.com/v1"
DATABASE_URL = NOTION_BASE_URL + "/databases/{database_id}/query"
PAGE_URL = NOTION_BASE_URL + "/pages/{page_id}"
BLOCK_URL = NOTION_BASE_URL + "/blocks/{block_id}/children"


class NotionDBLoader(BaseLoader):
    """Notion DB Loader.

    Reads content from pages within a Notion Database.

    Args:
        integration_token (str): Notion integration token.
        database_id (str): Notion database id.
        request_timeout_sec (int): Timeout for Notion requests in seconds.
    """

    def __init__(
        self,
        integration_token: str,
        database_id: str,
        request_timeout_sec: Optional[int] = 10,
    ) -> None:
        """Initialize with parameters."""
        if not integration_token:
            raise ValueError("integration_token must be provided")
        if not database_id:
            raise ValueError("database_id must be provided")

        self.token = integration_token
        self.database_id = database_id
        self.headers = {
            "Authorization": "Bearer " + self.token,
            "Content-Type": "application/json",
            "Notion-Version": "2022-06-28",
        }
        self.request_timeout_sec = request_timeout_sec

    def load(self) -> List[Document]:
        """Load documents from the Notion database.

        Returns:
            List[Document]: List of documents.
        """
        page_summaries = self._retrieve_page_summaries()
        return list(self.load_page(page_summary) for page_summary in page_summaries)

    def _retrieve_page_summaries(
        self, query_dict: Dict[str, Any] = {"page_size": 100}
    ) -> List[Dict[str, Any]]:
        """Get all the pages from a Notion database."""
        pages: List[Dict[str, Any]] = []

        while True:
            data = self._request(
                DATABASE_URL.format(database_id=self.database_id),
                method="POST",
                query_dict=query_dict,
            )

            pages.extend(data.get("results"))

            if not data.get("has_more"):
                break

            query_dict["start_cursor"] = data.get("next_cursor")

        return pages

    def load_page(self, page_summary: Dict[str, Any]) -> Document:
        """Read a page."""
        page_id = page_summary["id"]

        # load properties as metadata
        metadata: Dict[str, Any] = {}

        for prop_name, prop_data in page_summary["properties"].items():
            prop_type = prop_data["type"]

            if prop_type == "rich_text":
                value = (
                    prop_data["rich_text"][0]["plain_text"]
                    if prop_data["rich_text"]
                    else None
                )
            elif prop_type == "title":
                value = (
                    prop_data["title"][0]["plain_text"] if prop_data["title"] else None
                )
            elif prop_type == "multi_select":
                value = (
                    [item["name"] for item in prop_data["multi_select"]]
                    if prop_data["multi_select"]
                    else []
                )
            elif prop_type == "url":
                value = prop_data["url"]
            else:
                value = None

            metadata[prop_name.lower()] = value

        metadata["id"] = page_id

        return Document(page_content=self._load_blocks(page_id), metadata=metadata)

    def _load_blocks(self, block_id: str, num_tabs: int = 0) -> str:
        """Read a block and its children."""
        result_lines_arr: List[str] = []
        cur_block_id: str = block_id

        while cur_block_id:
            data = self._request(BLOCK_URL.format(block_id=cur_block_id))

            for result in data["results"]:
                result_obj = result[result["type"]]

                if "rich_text" not in result_obj:
                    continue

                cur_result_text_arr: List[str] = []

                for rich_text in result_obj["rich_text"]:
                    if "text" in rich_text:
                        cur_result_text_arr.append(
                            "\t" * num_tabs + rich_text["text"]["content"]
                        )

                if result["has_children"]:
                    children_text = self._load_blocks(
                        result["id"], num_tabs=num_tabs + 1
                    )
                    cur_result_text_arr.append(children_text)

                result_lines_arr.append("\n".join(cur_result_text_arr))

            cur_block_id = data.get("next_cursor")

        return "\n".join(result_lines_arr)

    def _request(
        self, url: str, method: str = "GET", query_dict: Dict[str, Any] = {}
    ) -> Any:
        res = requests.request(
            method,
            url,
            headers=self.headers,
            json=query_dict,
            timeout=self.request_timeout_sec,
        )
        res.raise_for_status()
        return res.json()
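A minimal usage sketch for the loader above (not part of the module source; both values are placeholders, and the Notion integration must be shared with the database):

```python
from langchain.document_loaders import NotionDBLoader

loader = NotionDBLoader(
    integration_token="secret_...",  # placeholder internal integration token
    database_id="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",  # placeholder database id
    request_timeout_sec=30,
)
docs = loader.load()  # page properties become lower-cased metadata keys
```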
Source code for langchain.document_loaders.merge

from typing import Iterator, List

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader


class MergedDataLoader(BaseLoader):
    """Merge documents from a list of loaders"""

    def __init__(self, loaders: List):
        """Initialize with a list of loaders"""
        self.loaders = loaders

    def lazy_load(self) -> Iterator[Document]:
        """Lazy load docs from each individual loader."""
        for loader in self.loaders:
            # Check if lazy_load is implemented
            try:
                data = loader.lazy_load()
            except NotImplementedError:
                data = loader.load()
            for document in data:
                yield document

    def load(self) -> List[Document]:
        """Load docs."""
        return list(self.lazy_load())

Source code for langchain.document_loaders.hn

"""Loader that loads Hacker News."""
from typing import Any, List

from langchain.docstore.document import Document
from langchain.document_loaders.web_base import WebBaseLoader


class HNLoader(WebBaseLoader):
    """Load Hacker News data from either main page results or the comments page."""

    def load(self) -> List[Document]:
        """Get important HN webpage information.

        HN webpage components are:
            - title
            - content
            - source url
            - time of post
            - author of the post
            - number of comments
            - rank of the post
        """
        soup_info = self.scrape()
        if "item" in self.web_path:
            return self.load_comments(soup_info)
        else:
            return self.load_results(soup_info)

    def load_comments(self, soup_info: Any) -> List[Document]:
        """Load comments from a HN post."""
        comments = soup_info.select("tr[class='athing comtr']")
        title = soup_info.select_one("tr[id='pagespace']").get("title")
        return [
            Document(
                page_content=comment.text.strip(),
                metadata={"source": self.web_path, "title": title},
            )
            for comment in comments
        ]

    def load_results(self, soup: Any) -> List[Document]:
        """Load items from an HN page."""
        items = soup.select("tr[class='athing']")
        documents = []
        for lineItem in items:
            ranking = lineItem.select_one("span[class='rank']").text
            link = lineItem.find("span", {"class": "titleline"}).find("a").get("href")
            title = lineItem.find("span", {"class": "titleline"}).text.strip()
            metadata = {
                "source": self.web_path,
                "title": title,
                "link": link,
                "ranking": ranking,
            }
            # link and ranking already live in metadata; Document does not
            # take them as separate keyword arguments
            documents.append(Document(page_content=title, metadata=metadata))
        return documents

Source code for langchain.document_loaders.git

import os
from typing import Callable, List, Optional

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader


class GitLoader(BaseLoader):
    """Loads files from a Git repository into a list of documents.

    The Repository can be local on disk available at `repo_path`,
    or remote at `clone_url` that will be cloned to `repo_path`.
    Currently, supports only text files.

    Each document represents one file in the repository. The `repo_path` points to
    the local Git repository, and the `branch` specifies the branch to load
    files from. By default, it loads from the `main` branch.
    """

    def __init__(
        self,
        repo_path: str,
        clone_url: Optional[str] = None,
        branch: Optional[str] = "main",
        file_filter: Optional[Callable[[str], bool]] = None,
    ):
        """
        Args:
            repo_path: The path to the Git repository.
            clone_url: Optional. The URL to clone the repository from.
            branch: Optional. The branch to load files from. Defaults to `main`.
            file_filter: Optional. A function that takes a file path and returns
                a boolean indicating whether to load the file. Defaults to None.
        """
        self.repo_path = repo_path
        self.clone_url = clone_url
        self.branch = branch
        self.file_filter = file_filter

    def load(self) -> List[Document]:
        try:
            from git import Blob, Repo  # type: ignore
        except ImportError as ex:
            raise ImportError(
                "Could not import git python package. "
                "Please install it with `pip install GitPython`."
            ) from ex

        if not os.path.exists(self.repo_path) and self.clone_url is None:
            raise ValueError(f"Path {self.repo_path} does not exist")
        elif self.clone_url:
            repo = Repo.clone_from(self.clone_url, self.repo_path)
            repo.git.checkout(self.branch)
        else:
            repo = Repo(self.repo_path)
            repo.git.checkout(self.branch)

        docs: List[Document] = []

        for item in repo.tree().traverse():
            if not isinstance(item, Blob):
                continue

            file_path = os.path.join(self.repo_path, item.path)

            ignored_files = repo.ignored([file_path])  # type: ignore
            if len(ignored_files):
                continue

            # uses filter to skip files
            if self.file_filter and not self.file_filter(file_path):
                continue

            rel_file_path = os.path.relpath(file_path, self.repo_path)
            try:
                with open(file_path, "rb") as f:
                    content = f.read()
                    file_type = os.path.splitext(item.name)[1]

                    # loads only text files
                    try:
                        text_content = content.decode("utf-8")
                    except UnicodeDecodeError:
                        continue

                    metadata = {
                        "source": rel_file_path,
                        "file_path": rel_file_path,
                        "file_name": item.name,
                        "file_type": file_type,
                    }
                    doc = Document(page_content=text_content, metadata=metadata)
                    docs.append(doc)
            except Exception as e:
                print(f"Error reading file {file_path}: {e}")

        return docs

Source code for langchain.document_loaders.roam

"""Loader that loads Roam directory dump."""
from pathlib import Path
from typing import List

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader


class RoamLoader(BaseLoader):
    """Loader that loads Roam files from disk."""

    def __init__(self, path: str):
        """Initialize with path."""
        self.file_path = path

    def load(self) -> List[Document]:
        """Load documents."""
        ps = list(Path(self.file_path).glob("**/*.md"))
        docs = []
        for p in ps:
            with open(p) as f:
                text = f.read()
            metadata = {"source": str(p)}
            docs.append(Document(page_content=text, metadata=metadata))
        return docs
Source code for langchain.document_loaders.googledrive

"""Loader that loads data from Google Drive."""

# Prerequisites:
# 1. Create a Google Cloud project
# 2. Enable the Google Drive API:
#    https://console.cloud.google.com/flows/enableapi?apiid=drive.googleapis.com
# 3. Authorize credentials for desktop app:
#    https://developers.google.com/drive/api/quickstart/python#authorize_credentials_for_a_desktop_application # noqa: E501
# 4. For service accounts visit
#    https://cloud.google.com/iam/docs/service-accounts-create
import os
from pathlib import Path
from typing import Any, Dict, List, Optional, Sequence, Union

from pydantic import BaseModel, root_validator, validator

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader

SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]


class GoogleDriveLoader(BaseLoader, BaseModel):
    """Loads Google Docs from Google Drive."""

    service_account_key: Path = Path.home() / ".credentials" / "keys.json"
    """Path to the service account key file."""
    credentials_path: Path = Path.home() / ".credentials" / "credentials.json"
    """Path to the credentials file."""
    token_path: Path = Path.home() / ".credentials" / "token.json"
    """Path to the token file."""
    folder_id: Optional[str] = None
    """The folder id to load from."""
    document_ids: Optional[List[str]] = None
    """The document ids to load from."""
    file_ids: Optional[List[str]] = None
    """The file ids to load from."""
    recursive: bool = False
    """Whether to load recursively. Only applies when folder_id is given."""
    file_types: Optional[Sequence[str]] = None
    """The file types to load. Only applies when folder_id is given."""
    load_trashed_files: bool = False
    """Whether to load trashed files. Only applies when folder_id is given."""
    # NOTE(MthwRobinson) - changing the file_loader_cls to type here currently
    # results in pydantic validation errors
    file_loader_cls: Any = None
    """The file loader class to use."""
    file_loader_kwargs: Dict[str, Any] = {}
    """The file loader kwargs to use."""

    @root_validator
    def validate_inputs(cls, values: Dict[str, Any]) -> Dict[str, Any]:
        """Validate that either folder_id or document_ids is set, but not both."""
        if values.get("folder_id") and (
            values.get("document_ids") or values.get("file_ids")
        ):
            raise ValueError(
                "Cannot specify both folder_id and document_ids nor "
                "folder_id and file_ids"
            )
        if (
            not values.get("folder_id")
            and not values.get("document_ids")
            and not values.get("file_ids")
        ):
            raise ValueError("Must specify either folder_id, document_ids, or file_ids")

        file_types = values.get("file_types")
        if file_types:
            if values.get("document_ids") or values.get("file_ids"):
                raise ValueError(
                    "file_types can only be given when folder_id is given,"
                    " (not when document_ids or file_ids are given)."
                )
            type_mapping = {
                "document": "application/vnd.google-apps.document",
                "sheet": "application/vnd.google-apps.spreadsheet",
                "pdf": "application/pdf",
            }
            allowed_types = list(type_mapping.keys()) + list(type_mapping.values())
            short_names = ", ".join([f"'{x}'" for x in type_mapping.keys()])
            full_names = ", ".join([f"'{x}'" for x in type_mapping.values()])
            for file_type in file_types:
                if file_type not in allowed_types:
                    raise ValueError(
                        f"Given file type {file_type} is not supported. "
                        f"Supported values are: {short_names}; and "
                        f"their full-form names: {full_names}"
                    )

            # replace short-form file types by full-form file types
            def full_form(x: str) -> str:
                return type_mapping[x] if x in type_mapping else x

            values["file_types"] = [full_form(file_type) for file_type in file_types]
        return values

    @validator("credentials_path")
    def validate_credentials_path(cls, v: Any, **kwargs: Any) -> Any:
        """Validate that credentials_path exists."""
        if not v.exists():
            raise ValueError(f"credentials_path {v} does not exist")
        return v

    def _load_credentials(self) -> Any:
        """Load credentials."""
        # Adapted from https://developers.google.com/drive/api/v3/quickstart/python
        try:
            from google.auth import default
            from google.auth.transport.requests import Request
            from google.oauth2 import service_account
            from google.oauth2.credentials import Credentials
            from google_auth_oauthlib.flow import InstalledAppFlow
        except ImportError:
            raise ImportError(
                "You must run "
                "`pip install --upgrade "
                "google-api-python-client google-auth-httplib2 "
                "google-auth-oauthlib` "
                "to use the Google Drive loader."
            )

        creds = None
        if self.service_account_key.exists():
            return service_account.Credentials.from_service_account_file(
                str(self.service_account_key), scopes=SCOPES
            )

        if self.token_path.exists():
            creds = Credentials.from_authorized_user_file(str(self.token_path), SCOPES)

        if not creds or not creds.valid:
            if creds and creds.expired and creds.refresh_token:
                creds.refresh(Request())
            elif "GOOGLE_APPLICATION_CREDENTIALS" not in os.environ:
                creds, project = default()
                creds = creds.with_scopes(SCOPES)
                # no need to write to file
                if creds:
                    return creds
            else:
                flow = InstalledAppFlow.from_client_secrets_file(
                    str(self.credentials_path), SCOPES
                )
                creds = flow.run_local_server(port=0)
            with open(self.token_path, "w") as token:
                token.write(creds.to_json())

        return creds

    def _load_sheet_from_id(self, id: str) -> List[Document]:
        """Load a sheet and all tabs from an ID."""
        from googleapiclient.discovery import build

        creds = self._load_credentials()
        sheets_service = build("sheets", "v4", credentials=creds)
        spreadsheet = sheets_service.spreadsheets().get(spreadsheetId=id).execute()
        sheets = spreadsheet.get("sheets", [])

        documents = []
        for sheet in sheets:
            sheet_name = sheet["properties"]["title"]
            result = (
                sheets_service.spreadsheets()
                .values()
                .get(spreadsheetId=id, range=sheet_name)
                .execute()
            )
            values = result.get("values", [])

            header = values[0]
            for i, row in enumerate(values[1:], start=1):
                metadata = {
                    "source": (
                        f"https://docs.google.com/spreadsheets/d/{id}/"
                        f"edit?gid={sheet['properties']['sheetId']}"
                    ),
                    "title": f"{spreadsheet['properties']['title']} - {sheet_name}",
                    "row": i,
                }
                content = []
                for j, v in enumerate(row):
                    title = header[j].strip() if len(header) > j else ""
                    content.append(f"{title}: {v.strip()}")

                page_content = "\n".join(content)
                documents.append(Document(page_content=page_content, metadata=metadata))

        return documents

    def _load_document_from_id(self, id: str) -> Document:
        """Load a document from an ID."""
        from io import BytesIO

        from googleapiclient.discovery import build
        from googleapiclient.errors import HttpError
        from googleapiclient.http import MediaIoBaseDownload

        creds = self._load_credentials()
        service = build("drive", "v3", credentials=creds)

        file = service.files().get(fileId=id, supportsAllDrives=True).execute()
        request = service.files().export_media(fileId=id, mimeType="text/plain")
        fh = BytesIO()
        downloader = MediaIoBaseDownload(fh, request)
        done = False
        try:
            while done is False:
                status, done = downloader.next_chunk()
        except HttpError as e:
            if e.resp.status == 404:
                print("File not found: {}".format(id))
            else:
                print("An error occurred: {}".format(e))

        text = fh.getvalue().decode("utf-8")
        metadata = {
            "source": f"https://docs.google.com/document/d/{id}/edit",
            "title": f"{file.get('name')}",
        }
        return Document(page_content=text, metadata=metadata)

    def _load_documents_from_folder(
        self, folder_id: str, *, file_types: Optional[Sequence[str]] = None
    ) -> List[Document]:
        """Load documents from a folder."""
        from googleapiclient.discovery import build

        creds = self._load_credentials()
        service = build("drive", "v3", credentials=creds)
        files = self._fetch_files_recursive(service, folder_id)
        # If file types filter is provided, we'll filter by the file type.
        if file_types:
            _files = [f for f in files if f["mimeType"] in file_types]  # type: ignore
        else:
            _files = files

        returns = []
        for file in _files:
            if file["trashed"] and not self.load_trashed_files:
                continue
            elif file["mimeType"] == "application/vnd.google-apps.document":
                returns.append(self._load_document_from_id(file["id"]))  # type: ignore
            elif file["mimeType"] == "application/vnd.google-apps.spreadsheet":
                returns.extend(self._load_sheet_from_id(file["id"]))  # type: ignore
            elif (
                file["mimeType"] == "application/pdf"
                or self.file_loader_cls is not None
            ):
                returns.extend(self._load_file_from_id(file["id"]))  # type: ignore
            else:
                pass
        return returns

    def _fetch_files_recursive(
        self, service: Any, folder_id: str
    ) -> List[Dict[str, Union[str, List[str]]]]:
        """Fetch all files and subfolders recursively."""
        results = (
            service.files()
            .list(
                q=f"'{folder_id}' in parents",
                pageSize=1000,
                includeItemsFromAllDrives=True,
                supportsAllDrives=True,
                fields="nextPageToken, files(id, name, mimeType, parents, trashed)",
            )
            .execute()
        )
        files = results.get("files", [])
        returns = []
        for file in files:
            if file["mimeType"] == "application/vnd.google-apps.folder":
                if self.recursive:
                    returns.extend(self._fetch_files_recursive(service, file["id"]))
            else:
                returns.append(file)

        return returns

    def _load_documents_from_ids(self) -> List[Document]:
        """Load documents from a list of IDs."""
        if not self.document_ids:
            raise ValueError("document_ids must be set")

        return [self._load_document_from_id(doc_id) for doc_id in self.document_ids]

    def _load_file_from_id(self, id: str) -> List[Document]:
        """Load a file from an ID."""
        from io import BytesIO

        from googleapiclient.discovery import build
        from googleapiclient.http import MediaIoBaseDownload

        creds = self._load_credentials()
        service = build("drive", "v3", credentials=creds)

        file = service.files().get(fileId=id, supportsAllDrives=True).execute()
        request = service.files().get_media(fileId=id)
        fh = BytesIO()
        downloader = MediaIoBaseDownload(fh, request)
        done = False
        while done is False:
            status, done = downloader.next_chunk()

        if self.file_loader_cls is not None:
            fh.seek(0)
            loader = self.file_loader_cls(file=fh, **self.file_loader_kwargs)
            docs = loader.load()
            for doc in docs:
                doc.metadata["source"] = f"https://drive.google.com/file/d/{id}/view"
            return docs
        else:
            from PyPDF2 import PdfReader

            content = fh.getvalue()
            pdf_reader = PdfReader(BytesIO(content))

            return [
                Document(
                    page_content=page.extract_text(),
                    metadata={
                        "source": f"https://drive.google.com/file/d/{id}/view",
                        "title": f"{file.get('name')}",
                        "page": i,
                    },
                )
                for i, page in enumerate(pdf_reader.pages)
            ]

    def _load_file_from_ids(self) -> List[Document]:
        """Load files from a list of IDs."""
        if not self.file_ids:
            raise ValueError("file_ids must be set")
        docs = []
        for file_id in self.file_ids:
            docs.extend(self._load_file_from_id(file_id))
        return docs

    def load(self) -> List[Document]:
        """Load documents."""
        if self.folder_id:
            return self._load_documents_from_folder(
                self.folder_id, file_types=self.file_types
            )
        elif self.document_ids:
            return self._load_documents_from_ids()
        else:
            return self._load_file_from_ids()
\"\n \"Partitioning .ppt files is only supported in unstructured>=0.4.11. \"\n \"Please upgrade the unstructured package and try again.\"\n )\n if is_ppt:\n from unstructured.partition.ppt import partition_ppt\n return partition_ppt(filename=self.file_path, **self.unstructured_kwargs)\n else:\n from unstructured.partition.pptx import partition_pptx\n return partition_pptx(filename=self.file_path, **self.unstructured_kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/powerpoint.html"} {"id": "dc8eb2083f56-0", "text": "Source code for langchain.document_loaders.unstructured\n\"\"\"Loader that uses unstructured to load files.\"\"\"\nimport collections\nfrom abc import ABC, abstractmethod\nfrom typing import IO, Any, Dict, List, Sequence, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]def satisfies_min_unstructured_version(min_version: str) -> bool:\n \"\"\"Checks to see if the installed unstructured version exceeds the minimum version\n for the feature in question.\"\"\"\n from unstructured.__version__ import __version__ as __unstructured_version__\n min_version_tuple = tuple([int(x) for x in min_version.split(\".\")])\n # NOTE(MthwRobinson) - enables the loader to work when you're using pre-release\n # versions of unstructured like 0.4.17-dev1\n _unstructured_version = __unstructured_version__.split(\"-\")[0]\n unstructured_version_tuple = tuple(\n [int(x) for x in _unstructured_version.split(\".\")]\n )\n return unstructured_version_tuple >= min_version_tuple\n[docs]def validate_unstructured_version(min_unstructured_version: str) -> None:\n \"\"\"Raises an error if the unstructured version does not exceed the\n specified minimum.\"\"\"\n if not satisfies_min_unstructured_version(min_unstructured_version):\n raise ValueError(\n f\"unstructured>={min_unstructured_version} is required in this loader.\"\n )\n[docs]class UnstructuredBaseLoader(BaseLoader, ABC):\n \"\"\"Loader that uses unstructured to load files.\"\"\"\n def __init__(self, mode: str = \"single\", **unstructured_kwargs: Any):\n \"\"\"Initialize with file path.\"\"\"\n try:\n import unstructured # noqa:F401\n except ImportError:\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/unstructured.html"} {"id": "dc8eb2083f56-1", "text": "import unstructured # noqa:F401\n except ImportError:\n raise ValueError(\n \"unstructured package not found, please install it with \"\n \"`pip install unstructured`\"\n )\n _valid_modes = {\"single\", \"elements\", \"paged\"}\n if mode not in _valid_modes:\n raise ValueError(\n f\"Got {mode} for `mode`, but should be one of `{_valid_modes}`\"\n )\n self.mode = mode\n if not satisfies_min_unstructured_version(\"0.5.4\"):\n if \"strategy\" in unstructured_kwargs:\n unstructured_kwargs.pop(\"strategy\")\n self.unstructured_kwargs = unstructured_kwargs\n @abstractmethod\n def _get_elements(self) -> List:\n \"\"\"Get elements.\"\"\"\n @abstractmethod\n def _get_metadata(self) -> dict:\n \"\"\"Get metadata.\"\"\"\n[docs] def load(self) -> List[Document]:\n \"\"\"Load file.\"\"\"\n elements = self._get_elements()\n if self.mode == \"elements\":\n docs: List[Document] = list()\n for element in elements:\n metadata = self._get_metadata()\n # NOTE(MthwRobinson) - the attribute check is for backward compatibility\n # with unstructured<0.4.9. 
The metadata attributed was added in 0.4.9.\n if hasattr(element, \"metadata\"):\n metadata.update(element.metadata.to_dict())\n if hasattr(element, \"category\"):\n metadata[\"category\"] = element.category\n docs.append(Document(page_content=str(element), metadata=metadata))\n elif self.mode == \"paged\":\n text_dict: Dict[int, str] = {}\n meta_dict: Dict[int, Dict] = {}\n for idx, element in enumerate(elements):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/unstructured.html"} {"id": "dc8eb2083f56-2", "text": "for idx, element in enumerate(elements):\n metadata = self._get_metadata()\n if hasattr(element, \"metadata\"):\n metadata.update(element.metadata.to_dict())\n page_number = metadata.get(\"page_number\", 1)\n # Check if this page_number already exists in docs_dict\n if page_number not in text_dict:\n # If not, create new entry with initial text and metadata\n text_dict[page_number] = str(element) + \"\\n\\n\"\n meta_dict[page_number] = metadata\n else:\n # If exists, append to text and update the metadata\n text_dict[page_number] += str(element) + \"\\n\\n\"\n meta_dict[page_number].update(metadata)\n # Convert the dict to a list of Document objects\n docs = [\n Document(page_content=text_dict[key], metadata=meta_dict[key])\n for key in text_dict.keys()\n ]\n elif self.mode == \"single\":\n metadata = self._get_metadata()\n text = \"\\n\\n\".join([str(el) for el in elements])\n docs = [Document(page_content=text, metadata=metadata)]\n else:\n raise ValueError(f\"mode of {self.mode} not supported.\")\n return docs\n[docs]class UnstructuredFileLoader(UnstructuredBaseLoader):\n \"\"\"UnstructuredFileLoader uses unstructured to load files. The file loader uses the\n unstructured partition function and will automatically detect the file\n type. You can run the loader in one of two modes: \"single\" and \"elements\".\n If you use \"single\" mode, the document will be returned as a single\n langchain Document object. 
If you use \"elements\" mode, the unstructured\n library will split the document into elements such as Title and NarrativeText.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/unstructured.html"} {"id": "dc8eb2083f56-3", "text": "library will split the document into elements such as Title and NarrativeText.\n You can pass in additional unstructured kwargs after mode to apply\n different unstructured settings.\n Examples\n --------\n ```python\n from langchain.document_loaders import UnstructuredFileLoader\n loader = UnstructuredFileLoader(\n \"example.pdf\", mode=\"elements\", strategy=\"fast\",\n )\n docs = loader.load()\n ```\n References\n ----------\n https://unstructured-io.github.io/unstructured/bricks.html#partition\n \"\"\"\n def __init__(\n self,\n file_path: Union[str, List[str]],\n mode: str = \"single\",\n **unstructured_kwargs: Any,\n ):\n \"\"\"Initialize with file path.\"\"\"\n self.file_path = file_path\n super().__init__(mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n from unstructured.partition.auto import partition\n return partition(filename=self.file_path, **self.unstructured_kwargs)\n def _get_metadata(self) -> dict:\n return {\"source\": self.file_path}\n[docs]def get_elements_from_api(\n file_path: Union[str, List[str], None] = None,\n file: Union[IO, Sequence[IO], None] = None,\n api_url: str = \"https://api.unstructured.io/general/v0/general\",\n api_key: str = \"\",\n **unstructured_kwargs: Any,\n) -> List:\n \"\"\"Retrieves a list of elements from the Unstructured API.\"\"\"\n if isinstance(file, collections.abc.Sequence) or isinstance(file_path, list):\n from unstructured.partition.api import partition_multiple_via_api\n _doc_elements = partition_multiple_via_api(\n filenames=file_path,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/unstructured.html"} {"id": "dc8eb2083f56-4", "text": "_doc_elements = partition_multiple_via_api(\n filenames=file_path,\n files=file,\n api_key=api_key,\n api_url=api_url,\n **unstructured_kwargs,\n )\n elements = []\n for _elements in _doc_elements:\n elements.extend(_elements)\n return elements\n else:\n from unstructured.partition.api import partition_via_api\n return partition_via_api(\n filename=file_path,\n file=file,\n api_key=api_key,\n api_url=api_url,\n **unstructured_kwargs,\n )\n[docs]class UnstructuredAPIFileLoader(UnstructuredFileLoader):\n \"\"\"UnstructuredAPIFileLoader uses the Unstructured API to load files.\n By default, the loader makes a call to the hosted Unstructured API.\n If you are running the unstructured API locally, you can change the\n API rule by passing in the url parameter when you initialize the loader.\n The hosted Unstructured API requires an API key. See\n https://www.unstructured.io/api-key/ if you need to generate a key.\n You can run the loader in one of two modes: \"single\" and \"elements\".\n If you use \"single\" mode, the document will be returned as a single\n langchain Document object. 
If you use \"elements\" mode, the unstructured\n library will split the document into elements such as Title and NarrativeText.\n You can pass in additional unstructured kwargs after mode to apply\n different unstructured settings.\n Examples\n --------\n ```python\n from langchain.document_loaders import UnstructuredAPIFileLoader\n loader = UnstructuredFileAPILoader(\n \"example.pdf\", mode=\"elements\", strategy=\"fast\", api_key=\"MY_API_KEY\",\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/unstructured.html"} {"id": "dc8eb2083f56-5", "text": ")\n docs = loader.load()\n ```\n References\n ----------\n https://unstructured-io.github.io/unstructured/bricks.html#partition\n https://www.unstructured.io/api-key/\n https://github.com/Unstructured-IO/unstructured-api\n \"\"\"\n def __init__(\n self,\n file_path: Union[str, List[str]] = \"\",\n mode: str = \"single\",\n url: str = \"https://api.unstructured.io/general/v0/general\",\n api_key: str = \"\",\n **unstructured_kwargs: Any,\n ):\n \"\"\"Initialize with file path.\"\"\"\n if isinstance(file_path, str):\n validate_unstructured_version(min_unstructured_version=\"0.6.2\")\n else:\n validate_unstructured_version(min_unstructured_version=\"0.6.3\")\n self.url = url\n self.api_key = api_key\n super().__init__(file_path=file_path, mode=mode, **unstructured_kwargs)\n def _get_metadata(self) -> dict:\n return {\"source\": self.file_path}\n def _get_elements(self) -> List:\n return get_elements_from_api(\n file_path=self.file_path,\n api_key=self.api_key,\n api_url=self.url,\n **self.unstructured_kwargs,\n )\n[docs]class UnstructuredFileIOLoader(UnstructuredBaseLoader):\n \"\"\"UnstructuredFileIOLoader uses unstructured to load files. The file loader\n uses the unstructured partition function and will automatically detect the file\n type. You can run the loader in one of two modes: \"single\" and \"elements\".\n If you use \"single\" mode, the document will be returned as a single", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/unstructured.html"} {"id": "dc8eb2083f56-6", "text": "If you use \"single\" mode, the document will be returned as a single\n langchain Document object. 
If you use \"elements\" mode, the unstructured\n library will split the document into elements such as Title and NarrativeText.\n You can pass in additional unstructured kwargs after mode to apply\n different unstructured settings.\n Examples\n --------\n ```python\n from langchain.document_loaders import UnstructuredFileIOLoader\n with open(\"example.pdf\", \"rb\") as f:\n loader = UnstructuredFileIOLoader(\n f, mode=\"elements\", strategy=\"fast\",\n )\n docs = loader.load()\n ```\n References\n ----------\n https://unstructured-io.github.io/unstructured/bricks.html#partition\n \"\"\"\n def __init__(\n self,\n file: Union[IO, Sequence[IO]],\n mode: str = \"single\",\n **unstructured_kwargs: Any,\n ):\n \"\"\"Initialize with file path.\"\"\"\n self.file = file\n super().__init__(mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n from unstructured.partition.auto import partition\n return partition(file=self.file, **self.unstructured_kwargs)\n def _get_metadata(self) -> dict:\n return {}\n[docs]class UnstructuredAPIFileIOLoader(UnstructuredFileIOLoader):\n \"\"\"UnstructuredAPIFileIOLoader uses the Unstructured API to load files.\n By default, the loader makes a call to the hosted Unstructured API.\n If you are running the unstructured API locally, you can change the\n API rule by passing in the url parameter when you initialize the loader.\n The hosted Unstructured API requires an API key. See", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/unstructured.html"} {"id": "dc8eb2083f56-7", "text": "The hosted Unstructured API requires an API key. See\n https://www.unstructured.io/api-key/ if you need to generate a key.\n You can run the loader in one of two modes: \"single\" and \"elements\".\n If you use \"single\" mode, the document will be returned as a single\n langchain Document object. 
If you use \"elements\" mode, the unstructured\n library will split the document into elements such as Title and NarrativeText.\n You can pass in additional unstructured kwargs after mode to apply\n different unstructured settings.\n Examples\n --------\n ```python\n from langchain.document_loaders import UnstructuredAPIFileLoader\n with open(\"example.pdf\", \"rb\") as f:\n loader = UnstructuredFileAPILoader(\n f, mode=\"elements\", strategy=\"fast\", api_key=\"MY_API_KEY\",\n )\n docs = loader.load()\n ```\n References\n ----------\n https://unstructured-io.github.io/unstructured/bricks.html#partition\n https://www.unstructured.io/api-key/\n https://github.com/Unstructured-IO/unstructured-api\n \"\"\"\n def __init__(\n self,\n file: Union[IO, Sequence[IO]],\n mode: str = \"single\",\n url: str = \"https://api.unstructured.io/general/v0/general\",\n api_key: str = \"\",\n **unstructured_kwargs: Any,\n ):\n \"\"\"Initialize with file path.\"\"\"\n if isinstance(file, collections.abc.Sequence):\n validate_unstructured_version(min_unstructured_version=\"0.6.3\")\n if file:\n validate_unstructured_version(min_unstructured_version=\"0.6.2\")\n self.url = url\n self.api_key = api_key", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/unstructured.html"} {"id": "dc8eb2083f56-8", "text": "self.url = url\n self.api_key = api_key\n super().__init__(file=file, mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n return get_elements_from_api(\n file=self.file,\n api_key=self.api_key,\n api_url=self.url,\n **self.unstructured_kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/unstructured.html"} {"id": "9b73282f8e92-0", "text": "Source code for langchain.document_loaders.facebook_chat\n\"\"\"Loader that loads Facebook chat json dump.\"\"\"\nimport datetime\nimport json\nfrom pathlib import Path\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]def concatenate_rows(row: dict) -> str:\n \"\"\"Combine message information in a readable format ready to be used.\n Args:\n row: dictionary containing message information.\n \"\"\"\n sender = row[\"sender_name\"]\n text = row[\"content\"]\n date = datetime.datetime.fromtimestamp(row[\"timestamp_ms\"] / 1000).strftime(\n \"%Y-%m-%d %H:%M:%S\"\n )\n return f\"{sender} on {date}: {text}\\n\\n\"\n[docs]class FacebookChatLoader(BaseLoader):\n \"\"\"Loads Facebook messages json directory dump.\"\"\"\n def __init__(self, path: str):\n \"\"\"Initialize with a path.\"\"\"\n self.file_path = path\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n p = Path(self.file_path)\n with open(p, encoding=\"utf8\") as f:\n d = json.load(f)\n text = \"\".join(\n concatenate_rows(message)\n for message in d[\"messages\"]\n if message.get(\"content\") and isinstance(message[\"content\"], str)\n )\n metadata = {\"source\": str(p)}\n return [Document(page_content=text, metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/facebook_chat.html"} {"id": "ad466dadf680-0", "text": "Source code for langchain.document_loaders.chatgpt\nimport datetime\nimport json\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]def concatenate_rows(message: dict, title: str) -> str:\n \"\"\"\n Combine message information in a readable format ready 
Source code for langchain.document_loaders.chatgpt

import datetime
import json
from typing import List

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader


def concatenate_rows(message: dict, title: str) -> str:
    """
    Combine message information in a readable format ready to be used.

    Args:
        message: Message to be concatenated
        title: Title of the conversation

    Returns:
        Concatenated message
    """
    if not message:
        return ""

    sender = message["author"]["role"] if message["author"] else "unknown"
    text = message["content"]["parts"][0]
    date = datetime.datetime.fromtimestamp(message["create_time"]).strftime(
        "%Y-%m-%d %H:%M:%S"
    )
    return f"{title} - {sender} on {date}: {text}\n\n"


class ChatGPTLoader(BaseLoader):
    """Load conversations from exported ChatGPT data."""

    def __init__(self, log_file: str, num_logs: int = -1):
        """
        Args:
            log_file: Path to the log file
            num_logs: Number of logs to load. If -1 (the default) or any
                non-positive value, load all logs.
        """
        self.log_file = log_file
        self.num_logs = num_logs

    def load(self) -> List[Document]:
        with open(self.log_file, encoding="utf8") as f:
            # read the file once; slicing with the default of -1 would silently
            # drop the last conversation, so only slice when num_logs is positive
            data = json.load(f)
            if self.num_logs > 0:
                data = data[: self.num_logs]

        documents = []
        for d in data:
            title = d["title"]
            messages = d["mapping"]
            text = "".join(
                [
                    concatenate_rows(messages[key]["message"], title)
                    for idx, key in enumerate(messages)
                    if not (
                        idx == 0
                        and messages[key]["message"]["author"]["role"] == "system"
                    )
                ]
            )
            metadata = {"source": str(self.log_file)}
            documents.append(Document(page_content=text, metadata=metadata))

        return documents

Source code for langchain.document_loaders.diffbot

"""Loader that uses Diffbot to load webpages in text format."""
import logging
from typing import Any, List

import requests

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader

logger = logging.getLogger(__name__)


class DiffbotLoader(BaseLoader):
    """Loads Diffbot file json."""

    def __init__(
        self, api_token: str, urls: List[str], continue_on_failure: bool = True
    ):
        """Initialize with API token, ids, and key.

        Args:
            api_token: Diffbot API token.
            urls: List of URLs to load.
            continue_on_failure: Whether to continue loading other URLs if one fails.
                Defaults to True.
        """
        self.api_token = api_token
        self.urls = urls
        self.continue_on_failure = continue_on_failure

    def _diffbot_api_url(self, diffbot_api: str) -> str:
        return f"https://api.diffbot.com/v3/{diffbot_api}"

    def _get_diffbot_data(self, url: str) -> Any:
        """Get Diffbot file from Diffbot REST API."""
        # TODO: Add support for other Diffbot APIs
        diffbot_url = self._diffbot_api_url("article")
        params = {
            "token": self.api_token,
            "url": url,
        }
        response = requests.get(diffbot_url, params=params, timeout=10)

        # TODO: handle non-ok errors
        return response.json() if response.ok else {}

    def load(self) -> List[Document]:
        """Extract text from Diffbot on all the URLs and return Documents"""
        docs: List[Document] = list()

        for url in self.urls:
            try:
                data = self._get_diffbot_data(url)
                text = data["objects"][0]["text"] if "objects" in data else ""
                metadata = {"source": url}
                docs.append(Document(page_content=text, metadata=metadata))
            except Exception as e:
                if self.continue_on_failure:
                    logger.error(f"Error fetching or processing {url}, exception: {e}")
                else:
                    raise e

        return docs
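A minimal usage sketch for the `DiffbotLoader` above (not part of the module source; the environment variable name and URL are placeholders):

```python
import os

from langchain.document_loaders import DiffbotLoader

loader = DiffbotLoader(
    api_token=os.environ["DIFFBOT_API_TOKEN"],  # hypothetical env var
    urls=["https://www.example.com/some-article"],
    continue_on_failure=True,
)
docs = loader.load()  # one Document per successfully extracted URL
```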
Source code for langchain.document_loaders.email

"""Loads email files."""
import os
from typing import Any, List

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader
from langchain.document_loaders.unstructured import (
    UnstructuredFileLoader,
    satisfies_min_unstructured_version,
)


class UnstructuredEmailLoader(UnstructuredFileLoader):
    """Loader that uses unstructured to load email files. Works with both
    .eml and .msg files. You can process attachments in addition to the
    e-mail message itself by passing process_attachments=True into the
    constructor for the loader. By default, attachments will be processed
    with the unstructured partition function. If you already know the document
    types of the attachments, you can specify another partitioning function
    with the attachment partitioner kwarg.

    Example
    -------
    from langchain.document_loaders import UnstructuredEmailLoader

    loader = UnstructuredEmailLoader("example_data/fake-email.eml", mode="elements")
    loader.load()

    Example
    -------
    from langchain.document_loaders import UnstructuredEmailLoader

    loader = UnstructuredEmailLoader(
        "example_data/fake-email-attachment.eml",
        mode="elements",
        process_attachments=True,
    )
    loader.load()
    """

    def __init__(
        self, file_path: str, mode: str = "single", **unstructured_kwargs: Any
    ):
        process_attachments = unstructured_kwargs.get("process_attachments")
        attachment_partitioner = unstructured_kwargs.get("attachment_partitioner")

        if process_attachments and attachment_partitioner is None:
            from unstructured.partition.auto import partition

            unstructured_kwargs["attachment_partitioner"] = partition

        super().__init__(file_path=file_path, mode=mode, **unstructured_kwargs)

    def _get_elements(self) -> List:
        from unstructured.file_utils.filetype import FileType, detect_filetype

        filetype = detect_filetype(self.file_path)

        if filetype == FileType.EML:
            from unstructured.partition.email import partition_email

            return partition_email(filename=self.file_path, **self.unstructured_kwargs)
        elif satisfies_min_unstructured_version("0.5.8") and filetype == FileType.MSG:
            from unstructured.partition.msg import partition_msg

            return partition_msg(filename=self.file_path, **self.unstructured_kwargs)
        else:
            raise ValueError(
                f"Filetype {filetype} is not supported in UnstructuredEmailLoader."
            )


class OutlookMessageLoader(BaseLoader):
    """
    Loads Outlook Message files using extract_msg.

    https://github.com/TeamMsgExtractor/msg-extractor
    """

    def __init__(self, file_path: str):
        """Initialize with a file path.

        Args:
            file_path: The path to the Outlook Message file.
        """
        self.file_path = file_path

        if not os.path.isfile(self.file_path):
            raise ValueError("File path %s is not a valid file" % self.file_path)

        try:
            import extract_msg  # noqa:F401
        except ImportError:
            raise ImportError(
                "extract_msg is not installed. Please install it with "
                "`pip install extract_msg`"
            )

    def load(self) -> List[Document]:
        """Load data into document objects."""
        import extract_msg

        msg = extract_msg.Message(self.file_path)

        return [
            Document(
                page_content=msg.body,
                metadata={
                    "subject": msg.subject,
                    "sender": msg.sender,
                    "date": msg.date,
                },
            )
        ]

Source code for langchain.document_loaders.epub

"""Loader that loads EPub files."""
from typing import List

from langchain.document_loaders.unstructured import (
    UnstructuredFileLoader,
    satisfies_min_unstructured_version,
)


class UnstructuredEPubLoader(UnstructuredFileLoader):
    """Loader that uses unstructured to load epub files."""

    def _get_elements(self) -> List:
        min_unstructured_version = "0.5.4"
        if not satisfies_min_unstructured_version(min_unstructured_version):
            raise ValueError(
                "Partitioning epub files is only supported in "
                f"unstructured>={min_unstructured_version}."
            )
        from unstructured.partition.epub import partition_epub

        return partition_epub(filename=self.file_path, **self.unstructured_kwargs)
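A minimal usage sketch for the `UnstructuredEPubLoader` above (not part of the module source; the path is a placeholder and unstructured>=0.5.4 is required, as enforced in `_get_elements`):

```python
from langchain.document_loaders import UnstructuredEPubLoader

loader = UnstructuredEPubLoader("./example_data/sample.epub", mode="elements")
docs = loader.load()
```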
code-block:: python\n from langchain.document_loaders import BlackboardLoader\n loader = BlackboardLoader(\n blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1",\n bbrouter="expires:12345...",\n )\n documents = loader.load()\n \"\"\"\n base_url: str\n \"\"\"Base url of the blackboard course.\"\"\"\n folder_path: str\n \"\"\"Path to the folder containing the documents.\"\"\"\n load_all_recursively: bool\n \"\"\"If True, load all documents recursively.\"\"\"\n def __init__(\n self,\n blackboard_course_url: str,\n bbrouter: str,\n load_all_recursively: bool = True,\n basic_auth: Optional[Tuple[str, str]] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blackboard.html"} {"id": "426a8461c50e-1", "text": "basic_auth: Optional[Tuple[str, str]] = None,\n cookies: Optional[dict] = None,\n ):\n \"\"\"Initialize with blackboard course url.\n The BbRouter cookie is required for most blackboard courses.\n Args:\n blackboard_course_url: Blackboard course url.\n bbrouter: BbRouter cookie.\n load_all_recursively: If True, load all documents recursively.\n basic_auth: Basic auth credentials.\n cookies: Cookies.\n Raises:\n ValueError: If blackboard course url is invalid.\n \"\"\"\n super().__init__(blackboard_course_url)\n # Get base url\n try:\n self.base_url = blackboard_course_url.split(\"/webapps/blackboard\")[0]\n except IndexError:\n raise IndexError(\n \"Invalid blackboard course url. \"\n \"Please provide a url that starts with \"\n \"https://<blackboard-domain>/webapps/blackboard\"\n )\n if basic_auth is not None:\n self.session.auth = basic_auth\n # Combine cookies\n if cookies is None:\n cookies = {}\n cookies.update({\"BbRouter\": bbrouter})\n self.session.cookies.update(cookies)\n self.load_all_recursively = load_all_recursively\n self.check_bs4()\n[docs] def check_bs4(self) -> None:\n \"\"\"Check if BeautifulSoup4 is installed.\n Raises:\n ImportError: If BeautifulSoup4 is not installed.\n \"\"\"\n try:\n import bs4 # noqa: F401\n except ImportError:\n raise ImportError(\n \"BeautifulSoup4 is required for BlackboardLoader. 
\"\n \"Please install it with `pip install beautifulsoup4`.\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blackboard.html"} {"id": "426a8461c50e-2", "text": "\"Please install it with `pip install beautifulsoup4`.\"\n )\n[docs] def load(self) -> List[Document]:\n \"\"\"Load data into Document objects.\n Returns:\n List of Documents.\n \"\"\"\n if self.load_all_recursively:\n soup_info = self.scrape()\n self.folder_path = self._get_folder_path(soup_info)\n relative_paths = self._get_paths(soup_info)\n documents = []\n for path in relative_paths:\n url = self.base_url + path\n print(f\"Fetching documents from {url}\")\n soup_info = self._scrape(url)\n with contextlib.suppress(ValueError):\n documents.extend(self._get_documents(soup_info))\n return documents\n else:\n print(f\"Fetching documents from {self.web_path}\")\n soup_info = self.scrape()\n self.folder_path = self._get_folder_path(soup_info)\n return self._get_documents(soup_info)\n def _get_folder_path(self, soup: Any) -> str:\n \"\"\"Get the folder path to save the Documents in.\n Args:\n soup: BeautifulSoup4 soup object.\n Returns:\n Folder path.\n \"\"\"\n # Get the course name\n course_name = soup.find(\"span\", {\"id\": \"crumb_1\"})\n if course_name is None:\n raise ValueError(\"No course name found.\")\n course_name = course_name.text.strip()\n # Prepare the folder path\n course_name_clean = (\n unquote(course_name)\n .replace(\" \", \"_\")\n .replace(\"/\", \"_\")\n .replace(\":\", \"_\")\n .replace(\",\", \"_\")\n .replace(\"?\", \"_\")\n .replace(\"'\", \"_\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blackboard.html"} {"id": "426a8461c50e-3", "text": ".replace(\"?\", \"_\")\n .replace(\"'\", \"_\")\n .replace(\"!\", \"_\")\n .replace('\"', \"_\")\n )\n # Get the folder path\n folder_path = Path(\".\") / course_name_clean\n return str(folder_path)\n def _get_documents(self, soup: Any) -> List[Document]:\n \"\"\"Fetch content from page and return Documents.\n Args:\n soup: BeautifulSoup4 soup object.\n Returns:\n List of documents.\n \"\"\"\n attachments = self._get_attachments(soup)\n self._download_attachments(attachments)\n documents = self._load_documents()\n return documents\n def _get_attachments(self, soup: Any) -> List[str]:\n \"\"\"Get all attachments from a page.\n Args:\n soup: BeautifulSoup4 soup object.\n Returns:\n List of attachments.\n \"\"\"\n from bs4 import BeautifulSoup, Tag\n # Get content list\n content_list = soup.find(\"ul\", {\"class\": \"contentList\"})\n if content_list is None:\n raise ValueError(\"No content list found.\")\n content_list: BeautifulSoup # type: ignore\n # Get all attachments\n attachments = []\n for attachment in content_list.find_all(\"ul\", {\"class\": \"attachments\"}):\n attachment: Tag # type: ignore\n for link in attachment.find_all(\"a\"):\n link: Tag # type: ignore\n href = link.get(\"href\")\n # Only add if href is not None and does not start with #\n if href is not None and not href.startswith(\"#\"):\n attachments.append(href)\n return attachments\n def _download_attachments(self, attachments: List[str]) -> None:\n \"\"\"Download all attachments.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blackboard.html"} {"id": "426a8461c50e-4", "text": "\"\"\"Download all attachments.\n Args:\n attachments: List of attachments.\n \"\"\"\n # Make sure the folder exists\n Path(self.folder_path).mkdir(parents=True, exist_ok=True)\n 
# Download all attachments\n for attachment in attachments:\n self.download(attachment)\n def _load_documents(self) -> List[Document]:\n \"\"\"Load all documents in the folder.\n Returns:\n List of documents.\n \"\"\"\n # Create the document loader\n loader = DirectoryLoader(\n path=self.folder_path, glob=\"*.pdf\", loader_cls=PyPDFLoader # type: ignore\n )\n # Load the documents\n documents = loader.load()\n # Return all documents\n return documents\n def _get_paths(self, soup: Any) -> List[str]:\n \"\"\"Get all relative paths in the navbar.\"\"\"\n relative_paths = []\n course_menu = soup.find(\"ul\", {\"class\": \"courseMenu\"})\n if course_menu is None:\n raise ValueError(\"No course menu found.\")\n for link in course_menu.find_all(\"a\"):\n href = link.get(\"href\")\n if href is not None and href.startswith(\"/\"):\n relative_paths.append(href)\n return relative_paths\n[docs] def download(self, path: str) -> None:\n \"\"\"Download a file from an url.\n Args:\n path: Path to the file.\n \"\"\"\n # Get the file content\n response = self.session.get(self.base_url + path, allow_redirects=True)\n # Get the filename\n filename = self.parse_filename(response.url)\n # Write the file to disk\n with open(Path(self.folder_path) / filename, \"wb\") as f:\n f.write(response.content)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blackboard.html"} {"id": "426a8461c50e-5", "text": "f.write(response.content)\n[docs] def parse_filename(self, url: str) -> str:\n \"\"\"Parse the filename from an url.\n Args:\n url: Url to parse the filename from.\n Returns:\n The filename.\n \"\"\"\n if (url_path := Path(url)) and url_path.suffix == \".pdf\":\n return url_path.name\n else:\n return self._parse_filename_from_url(url)\n def _parse_filename_from_url(self, url: str) -> str:\n \"\"\"Parse the filename from an url.\n Args:\n url: Url to parse the filename from.\n Returns:\n The filename.\n Raises:\n ValueError: If the filename could not be parsed.\n \"\"\"\n filename_matches = re.search(r\"filename%2A%3DUTF-8%27%27(.+)\", url)\n if filename_matches:\n filename = filename_matches.group(1)\n else:\n raise ValueError(f\"Could not parse filename from {url}\")\n if \".pdf\" not in filename:\n raise ValueError(f\"Incorrect file type: {filename}\")\n filename = filename.split(\".pdf\")[0] + \".pdf\"\n filename = unquote(filename)\n filename = filename.replace(\"%20\", \" \")\n return filename\nif __name__ == \"__main__\":\n loader = BlackboardLoader(\n \"https://<blackboard-domain>/webapps/blackboard/content/listContent.jsp?course_id=_<course_id>_1&content_id=_<content_id>_1&mode=reset\",\n \"<bbrouter-cookie>\",\n load_all_recursively=True,\n )\n documents = loader.load()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blackboard.html"} {"id": "426a8461c50e-6", "text": "load_all_recursively=True,\n )\n documents = loader.load()\n print(f\"Loaded {len(documents)} pages of PDFs from {loader.web_path}\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blackboard.html"} {"id": "9f0cea9bed6d-0", "text": "Source code for langchain.document_loaders.odt\n\"\"\"Loader that loads Open Office ODT files.\"\"\"\nfrom typing import Any, List\nfrom langchain.document_loaders.unstructured import (\n UnstructuredFileLoader,\n validate_unstructured_version,\n)\n[docs]class UnstructuredODTLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load open office ODT files.\"\"\"\n def __init__(\n self, file_path: str, mode: str = \"single\", 
**unstructured_kwargs: Any\n ):\n validate_unstructured_version(min_unstructured_version=\"0.6.3\")\n super().__init__(file_path=file_path, mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n from unstructured.partition.odt import partition_odt\n return partition_odt(filename=self.file_path, **self.unstructured_kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/odt.html"} {"id": "f1c03a9de835-0", "text": "Source code for langchain.document_loaders.azure_blob_storage_container\n\"\"\"Loading logic for loading documents from an Azure Blob Storage container.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.azure_blob_storage_file import (\n AzureBlobStorageFileLoader,\n)\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class AzureBlobStorageContainerLoader(BaseLoader):\n \"\"\"Loading Documents from Azure Blob Storage.\"\"\"\n def __init__(self, conn_str: str, container: str, prefix: str = \"\"):\n \"\"\"Initialize with connection string, container and blob prefix.\"\"\"\n self.conn_str = conn_str\n \"\"\"Connection string for Azure Blob Storage.\"\"\"\n self.container = container\n \"\"\"Container name.\"\"\"\n self.prefix = prefix\n \"\"\"Prefix for blob names.\"\"\"\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n from azure.storage.blob import ContainerClient\n except ImportError as exc:\n raise ImportError(\n \"Could not import azure storage blob python package. \"\n \"Please install it with `pip install azure-storage-blob`.\"\n ) from exc\n container = ContainerClient.from_connection_string(\n conn_str=self.conn_str, container_name=self.container\n )\n docs = []\n blob_list = container.list_blobs(name_starts_with=self.prefix)\n for blob in blob_list:\n loader = AzureBlobStorageFileLoader(\n self.conn_str, self.container, blob.name # type: ignore\n )\n docs.extend(loader.load())\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/azure_blob_storage_container.html"} {"id": "b6bb924063bc-0", "text": "Source code for langchain.document_loaders.azure_blob_storage_file\nimport os\nimport tempfile\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\n[docs]class AzureBlobStorageFileLoader(BaseLoader):\n \"\"\"Loading Documents from Azure Blob Storage.\"\"\"\n def __init__(self, conn_str: str, container: str, blob_name: str):\n \"\"\"Initialize with connection string, container and blob name.\"\"\"\n self.conn_str = conn_str\n \"\"\"Connection string for Azure Blob Storage.\"\"\"\n self.container = container\n \"\"\"Container name.\"\"\"\n self.blob = blob_name\n \"\"\"Blob name.\"\"\"\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n from azure.storage.blob import BlobClient\n except ImportError as exc:\n raise ImportError(\n \"Could not import azure storage blob python package. 
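A minimal usage sketch for AzureBlobStorageContainerLoader, matching the constructor above; the connection string, container name, and prefix are hypothetical placeholders, and `azure-storage-blob` must be installed:

.. code-block:: python

    from langchain.document_loaders import AzureBlobStorageContainerLoader

    # Hypothetical credentials and names; prefix limits which blobs are loaded.
    loader = AzureBlobStorageContainerLoader(
        conn_str="<my-connection-string>",
        container="<my-container>",
        prefix="reports/",
    )
    docs = loader.load()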
\"\n \"Please install it with `pip install azure-storage-blob`.\"\n ) from exc\n client = BlobClient.from_connection_string(\n conn_str=self.conn_str, container_name=self.container, blob_name=self.blob\n )\n with tempfile.TemporaryDirectory() as temp_dir:\n file_path = f\"{temp_dir}/{self.container}/{self.blob}\"\n os.makedirs(os.path.dirname(file_path), exist_ok=True)\n with open(f\"{file_path}\", \"wb\") as file:\n blob_data = client.download_blob()\n blob_data.readinto(file)\n loader = UnstructuredFileLoader(file_path)\n return loader.load()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/azure_blob_storage_file.html"} {"id": "d121b3e88907-0", "text": "Source code for langchain.document_loaders.s3_directory\n\"\"\"Loading logic for loading documents from an s3 directory.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.s3_file import S3FileLoader\n[docs]class S3DirectoryLoader(BaseLoader):\n \"\"\"Loading logic for loading documents from s3.\"\"\"\n def __init__(self, bucket: str, prefix: str = \"\"):\n \"\"\"Initialize with bucket and key name.\"\"\"\n self.bucket = bucket\n self.prefix = prefix\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n import boto3\n except ImportError:\n raise ImportError(\n \"Could not import boto3 python package. \"\n \"Please install it with `pip install boto3`.\"\n )\n s3 = boto3.resource(\"s3\")\n bucket = s3.Bucket(self.bucket)\n docs = []\n for obj in bucket.objects.filter(Prefix=self.prefix):\n loader = S3FileLoader(self.bucket, obj.key)\n docs.extend(loader.load())\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/s3_directory.html"} {"id": "84e3f24f21c1-0", "text": "Source code for langchain.document_loaders.whatsapp_chat\nimport re\nfrom pathlib import Path\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]def concatenate_rows(date: str, sender: str, text: str) -> str:\n \"\"\"Combine message information in a readable format ready to be used.\"\"\"\n return f\"{sender} on {date}: {text}\\n\\n\"\n[docs]class WhatsAppChatLoader(BaseLoader):\n \"\"\"Loader that loads WhatsApp messages text file.\"\"\"\n def __init__(self, path: str):\n \"\"\"Initialize with path.\"\"\"\n self.file_path = path\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n p = Path(self.file_path)\n text_content = \"\"\n with open(p, encoding=\"utf8\") as f:\n lines = f.readlines()\n message_line_regex = r\"\"\"\n \\[?\n (\n \\d{1,4}\n [\\/.]\n \\d{1,2}\n [\\/.]\n \\d{1,4}\n ,\\s\n \\d{1,2}\n :\\d{2}\n (?:\n :\\d{2}\n )?\n (?:[\\s_](?:AM|PM))?\n )\n \\]?\n [\\s-]*\n ([~\\w\\s]+)\n [:]+\n \\s\n (.+)\n \"\"\"\n ignore_lines = [\"This message was deleted\", \"\"]\n for line in lines:\n result = re.match(\n message_line_regex, line.strip(), flags=re.VERBOSE | re.IGNORECASE\n )\n if result:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/whatsapp_chat.html"} {"id": "84e3f24f21c1-1", "text": ")\n if result:\n date, sender, text = result.groups()\n if text not in ignore_lines:\n text_content += concatenate_rows(date, sender, text)\n metadata = {\"source\": str(p)}\n return [Document(page_content=text_content, metadata=metadata)]", "source": 
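A minimal usage sketch for S3DirectoryLoader, matching the constructor above; the bucket and prefix are hypothetical, and boto3 resolves AWS credentials from the environment:

.. code-block:: python

    from langchain.document_loaders import S3DirectoryLoader

    # Hypothetical bucket; only keys under the prefix are loaded.
    loader = S3DirectoryLoader("my-bucket", prefix="invoices/")
    docs = loader.load()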
"https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/whatsapp_chat.html"} {"id": "0eee48328d0f-0", "text": "Source code for langchain.document_loaders.brave_search\nfrom typing import Iterator, List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utilities.brave_search import BraveSearchWrapper\n[docs]class BraveSearchLoader(BaseLoader):\n \"\"\"Loads a query result from Brave Search engine into a list of Documents.\"\"\"\n def __init__(self, query: str, api_key: str, search_kwargs: Optional[dict] = None):\n \"\"\"Initializes the BraveLoader.\n Args:\n query: The query to search for.\n api_key: The API key to use.\n search_kwargs: The search kwargs to use.\n \"\"\"\n self.query = query\n self.api_key = api_key\n self.search_kwargs = search_kwargs or {}\n[docs] def load(self) -> List[Document]:\n brave_client = BraveSearchWrapper(\n api_key=self.api_key,\n search_kwargs=self.search_kwargs,\n )\n return brave_client.download_documents(self.query)\n[docs] def lazy_load(self) -> Iterator[Document]:\n for doc in self.load():\n yield doc", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/brave_search.html"} {"id": "5f98edc866ee-0", "text": "Source code for langchain.document_loaders.notion\n\"\"\"Loader that loads Notion directory dump.\"\"\"\nfrom pathlib import Path\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class NotionDirectoryLoader(BaseLoader):\n \"\"\"Loader that loads Notion directory dump.\"\"\"\n def __init__(self, path: str):\n \"\"\"Initialize with path.\"\"\"\n self.file_path = path\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n ps = list(Path(self.file_path).glob(\"**/*.md\"))\n docs = []\n for p in ps:\n with open(p) as f:\n text = f.read()\n metadata = {\"source\": str(p)}\n docs.append(Document(page_content=text, metadata=metadata))\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/notion.html"} {"id": "fefee3a1469a-0", "text": "Source code for langchain.document_loaders.excel\n\"\"\"Loader that loads Microsoft Excel files.\"\"\"\nfrom typing import Any, List\nfrom langchain.document_loaders.unstructured import (\n UnstructuredFileLoader,\n validate_unstructured_version,\n)\n[docs]class UnstructuredExcelLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load Microsoft Excel files.\"\"\"\n def __init__(\n self, file_path: str, mode: str = \"single\", **unstructured_kwargs: Any\n ):\n \"\"\"\n Args:\n file_path: The path to the Microsoft Excel file.\n mode: The mode to use when partitioning the file. See unstructured docs\n for more info. Optional. 
Defaults to \"single\".\n **unstructured_kwargs: Keyword arguments to pass to unstructured.\n \"\"\"\n validate_unstructured_version(min_unstructured_version=\"0.6.7\")\n super().__init__(file_path=file_path, mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n from unstructured.partition.xlsx import partition_xlsx\n return partition_xlsx(filename=self.file_path, **self.unstructured_kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/excel.html"} {"id": "37fa98ea0559-0", "text": "Source code for langchain.document_loaders.conllu\n\"\"\"Load CoNLL-U files.\"\"\"\nimport csv\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class CoNLLULoader(BaseLoader):\n \"\"\"Load CoNLL-U files.\"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with a file path.\"\"\"\n self.file_path = file_path\n[docs] def load(self) -> List[Document]:\n \"\"\"Load from a file path.\"\"\"\n with open(self.file_path, encoding=\"utf8\") as f:\n tsv = list(csv.reader(f, delimiter=\"\\t\"))\n # If len(line) > 1, the line is not a comment\n lines = [line for line in tsv if len(line) > 1]\n text = \"\"\n for i, line in enumerate(lines):\n # Do not add a space after a punctuation mark or at the end of the sentence\n if line[9] == \"SpaceAfter=No\" or i == len(lines) - 1:\n text += line[1]\n else:\n text += line[1] + \" \"\n metadata = {\"source\": self.file_path}\n return [Document(page_content=text, metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/conllu.html"} {"id": "5e70b881ef2d-0", "text": "Source code for langchain.document_loaders.joplin\nimport json\nimport urllib\nfrom datetime import datetime\nfrom typing import Iterator, List, Optional\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.schema import Document\nfrom langchain.utils import get_from_env\nLINK_NOTE_TEMPLATE = \"joplin://x-callback-url/openNote?id={id}\"\n[docs]class JoplinLoader(BaseLoader):\n \"\"\"\n Loader that fetches notes from Joplin.\n In order to use this loader, you need to have Joplin running with the\n Web Clipper enabled (look for \"Web Clipper\" in the app settings).\n To get the access token, you need to go to the Web Clipper options and\n under \"Advanced Options\" you will find the access token.\n You can find more information about the Web Clipper service here:\n https://joplinapp.org/clipper/\n \"\"\"\n def __init__(\n self,\n access_token: Optional[str] = None,\n port: int = 41184,\n host: str = \"localhost\",\n ) -> None:\n \"\"\"\n Args:\n access_token: The access token to use.\n port: The port where the Web Clipper service is running. 
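A minimal usage sketch for the UnstructuredExcelLoader above, assuming a local .xlsx file (hypothetical path) and unstructured>=0.6.7 installed; in "elements" mode each partitioned element becomes its own Document:

.. code-block:: python

    from langchain.document_loaders import UnstructuredExcelLoader

    # Hypothetical workbook path; mode="elements" keeps per-element granularity.
    loader = UnstructuredExcelLoader("example_data/stanley-cups.xlsx", mode="elements")
    docs = loader.load()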
Default is 41184.\n host: The host where the Web Clipper service is running.\n Default is localhost.\n \"\"\"\n access_token = access_token or get_from_env(\n \"access_token\", \"JOPLIN_ACCESS_TOKEN\"\n )\n base_url = f\"http://{host}:{port}\"\n self._get_note_url = (\n f\"{base_url}/notes?token={access_token}\"\n f\"&fields=id,parent_id,title,body,created_time,updated_time&page={{page}}\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/joplin.html"} {"id": "5e70b881ef2d-1", "text": "f\"&fields=id,parent_id,title,body,created_time,updated_time&page={{page}}\"\n )\n self._get_folder_url = (\n f\"{base_url}/folders/{{id}}?token={access_token}&fields=title\"\n )\n self._get_tag_url = (\n f\"{base_url}/notes/{{id}}/tags?token={access_token}&fields=title\"\n )\n def _get_notes(self) -> Iterator[Document]:\n has_more = True\n page = 1\n while has_more:\n req_note = urllib.request.Request(self._get_note_url.format(page=page))\n with urllib.request.urlopen(req_note) as response:\n json_data = json.loads(response.read().decode())\n for note in json_data[\"items\"]:\n metadata = {\n \"source\": LINK_NOTE_TEMPLATE.format(id=note[\"id\"]),\n \"folder\": self._get_folder(note[\"parent_id\"]),\n \"tags\": self._get_tags(note[\"id\"]),\n \"title\": note[\"title\"],\n \"created_time\": self._convert_date(note[\"created_time\"]),\n \"updated_time\": self._convert_date(note[\"updated_time\"]),\n }\n yield Document(page_content=note[\"body\"], metadata=metadata)\n has_more = json_data[\"has_more\"]\n page += 1\n def _get_folder(self, folder_id: str) -> str:\n req_folder = urllib.request.Request(self._get_folder_url.format(id=folder_id))\n with urllib.request.urlopen(req_folder) as response:\n json_data = json.loads(response.read().decode())\n return json_data[\"title\"]\n def _get_tags(self, note_id: str) -> List[str]:\n req_tag = urllib.request.Request(self._get_tag_url.format(id=note_id))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/joplin.html"} {"id": "5e70b881ef2d-2", "text": "req_tag = urllib.request.Request(self._get_tag_url.format(id=note_id))\n with urllib.request.urlopen(req_tag) as response:\n json_data = json.loads(response.read().decode())\n return [tag[\"title\"] for tag in json_data[\"items\"]]\n def _convert_date(self, date: int) -> str:\n return datetime.fromtimestamp(date / 1000).strftime(\"%Y-%m-%d %H:%M:%S\")\n[docs] def lazy_load(self) -> Iterator[Document]:\n yield from self._get_notes()\n[docs] def load(self) -> List[Document]:\n return list(self.lazy_load())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/joplin.html"} {"id": "6479d1c96fd3-0", "text": "Source code for langchain.document_loaders.modern_treasury\n\"\"\"Loader that fetches data from Modern Treasury\"\"\"\nimport json\nimport urllib.request\nfrom base64 import b64encode\nfrom typing import List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utils import get_from_env, stringify_value\nMODERN_TREASURY_ENDPOINTS = {\n \"payment_orders\": \"https://app.moderntreasury.com/api/payment_orders\",\n \"expected_payments\": \"https://app.moderntreasury.com/api/expected_payments\",\n \"returns\": \"https://app.moderntreasury.com/api/returns\",\n \"incoming_payment_details\": \"https://app.moderntreasury.com/api/\\\nincoming_payment_details\",\n \"counterparties\": 
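A minimal usage sketch for JoplinLoader, assuming Joplin is running locally with the Web Clipper service enabled; the token is a hypothetical placeholder:

.. code-block:: python

    from langchain.document_loaders import JoplinLoader

    # Token from Joplin's Web Clipper options; falls back to the
    # JOPLIN_ACCESS_TOKEN environment variable when omitted.
    loader = JoplinLoader(access_token="<my-token>", host="localhost", port=41184)
    docs = loader.load()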
\"https://app.moderntreasury.com/api/counterparties\",\n \"internal_accounts\": \"https://app.moderntreasury.com/api/internal_accounts\",\n \"external_accounts\": \"https://app.moderntreasury.com/api/external_accounts\",\n \"transactions\": \"https://app.moderntreasury.com/api/transactions\",\n \"ledgers\": \"https://app.moderntreasury.com/api/ledgers\",\n \"ledger_accounts\": \"https://app.moderntreasury.com/api/ledger_accounts\",\n \"ledger_transactions\": \"https://app.moderntreasury.com/api/ledger_transactions\",\n \"events\": \"https://app.moderntreasury.com/api/events\",\n \"invoices\": \"https://app.moderntreasury.com/api/invoices\",\n}\n[docs]class ModernTreasuryLoader(BaseLoader):\n \"\"\"Loader that fetches data from Modern Treasury.\"\"\"\n def __init__(\n self,\n resource: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/modern_treasury.html"} {"id": "6479d1c96fd3-1", "text": "def __init__(\n self,\n resource: str,\n organization_id: Optional[str] = None,\n api_key: Optional[str] = None,\n ) -> None:\n self.resource = resource\n organization_id = organization_id or get_from_env(\n \"organization_id\", \"MODERN_TREASURY_ORGANIZATION_ID\"\n )\n api_key = api_key or get_from_env(\"api_key\", \"MODERN_TREASURY_API_KEY\")\n credentials = f\"{organization_id}:{api_key}\".encode(\"utf-8\")\n basic_auth_token = b64encode(credentials).decode(\"utf-8\")\n self.headers = {\"Authorization\": f\"Basic {basic_auth_token}\"}\n def _make_request(self, url: str) -> List[Document]:\n request = urllib.request.Request(url, headers=self.headers)\n with urllib.request.urlopen(request) as response:\n json_data = json.loads(response.read().decode())\n text = stringify_value(json_data)\n metadata = {\"source\": url}\n return [Document(page_content=text, metadata=metadata)]\n def _get_resource(self) -> List[Document]:\n endpoint = MODERN_TREASURY_ENDPOINTS.get(self.resource)\n if endpoint is None:\n return []\n return self._make_request(endpoint)\n[docs] def load(self) -> List[Document]:\n return self._get_resource()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/modern_treasury.html"} {"id": "3f07c5b613b2-0", "text": "Source code for langchain.document_loaders.html\n\"\"\"Loader that uses unstructured to load HTML files.\"\"\"\nfrom typing import List\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\n[docs]class UnstructuredHTMLLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load HTML files.\"\"\"\n def _get_elements(self) -> List:\n from unstructured.partition.html import partition_html\n return partition_html(filename=self.file_path, **self.unstructured_kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/html.html"} {"id": "4d187c15c852-0", "text": "Source code for langchain.document_loaders.youtube\n\"\"\"Loader that loads YouTube transcript.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Optional, Sequence, Union\nfrom urllib.parse import parse_qs, urlparse\nfrom pydantic import root_validator\nfrom pydantic.dataclasses import dataclass\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__name__)\nSCOPES = [\"https://www.googleapis.com/auth/youtube.readonly\"]\n@dataclass\nclass GoogleApiClient:\n \"\"\"A Generic Google Api Client.\n To use, you should 
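A minimal usage sketch for the ModernTreasuryLoader defined above; "payment_orders" is one of the MODERN_TREASURY_ENDPOINTS keys, and the credentials are hypothetical (they can also come from the MODERN_TREASURY_ORGANIZATION_ID and MODERN_TREASURY_API_KEY environment variables):

.. code-block:: python

    from langchain.document_loaders import ModernTreasuryLoader

    # The resource name must be a key of MODERN_TREASURY_ENDPOINTS.
    loader = ModernTreasuryLoader(
        "payment_orders",
        organization_id="<org-id>",
        api_key="<api-key>",
    )
    docs = loader.load()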
have the ``google_auth_oauthlib,youtube_transcript_api,google``\n python packages installed.\n As the Google API expects credentials, you need to set up a Google account and\n register your service: https://developers.google.com/docs/api/quickstart/python\n Example:\n .. code-block:: python\n from langchain.document_loaders import GoogleApiClient\n google_api_client = GoogleApiClient(\n service_account_path=Path(\"path_to_your_sec_file.json\")\n )\n \"\"\"\n credentials_path: Path = Path.home() / \".credentials\" / \"credentials.json\"\n service_account_path: Path = Path.home() / \".credentials\" / \"credentials.json\"\n token_path: Path = Path.home() / \".credentials\" / \"token.json\"\n def __post_init__(self) -> None:\n self.creds = self._load_credentials()\n @root_validator\n def validate_channel_or_videoIds_is_set(\n cls, values: Dict[str, Any]\n ) -> Dict[str, Any]:\n \"\"\"Validate that either credentials_path or service_account_path is set.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"} {"id": "4d187c15c852-1", "text": "\"\"\"Validate that either credentials_path or service_account_path is set.\"\"\"\n if not values.get(\"credentials_path\") and not values.get(\n \"service_account_path\"\n ):\n raise ValueError(\"Must specify either credentials_path or service_account_path\")\n return values\n def _load_credentials(self) -> Any:\n \"\"\"Load credentials.\"\"\"\n # Adapted from https://developers.google.com/drive/api/v3/quickstart/python\n try:\n from google.auth.transport.requests import Request\n from google.oauth2 import service_account\n from google.oauth2.credentials import Credentials\n from google_auth_oauthlib.flow import InstalledAppFlow\n from youtube_transcript_api import YouTubeTranscriptApi # noqa: F401\n except ImportError:\n raise ImportError(\n \"You must run \"\n \"`pip install --upgrade \"\n \"google-api-python-client google-auth-httplib2 \"\n \"google-auth-oauthlib \"\n \"youtube-transcript-api` \"\n \"to use the YouTube loader\"\n )\n creds = None\n if self.service_account_path.exists():\n return service_account.Credentials.from_service_account_file(\n str(self.service_account_path)\n )\n if self.token_path.exists():\n creds = Credentials.from_authorized_user_file(str(self.token_path), SCOPES)\n if not creds or not creds.valid:\n if creds and creds.expired and creds.refresh_token:\n creds.refresh(Request())\n else:\n flow = InstalledAppFlow.from_client_secrets_file(\n str(self.credentials_path), SCOPES\n )\n creds = flow.run_local_server(port=0)\n with open(self.token_path, \"w\") as token:\n token.write(creds.to_json())\n return creds", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"} {"id": "4d187c15c852-2", "text": "token.write(creds.to_json())\n return creds\nALLOWED_SCHEMAS = {\"http\", \"https\"}\nALLOWED_NETLOCK = {\n \"youtu.be\",\n \"m.youtube.com\",\n \"youtube.com\",\n \"www.youtube.com\",\n \"www.youtube-nocookie.com\",\n \"vid.plus\",\n}\ndef _parse_video_id(url: str) -> Optional[str]:\n \"\"\"Parse a youtube url and return the video id if valid, otherwise None.\"\"\"\n parsed_url = urlparse(url)\n if parsed_url.scheme not in ALLOWED_SCHEMAS:\n return None\n if parsed_url.netloc not in ALLOWED_NETLOCK:\n return None\n path = parsed_url.path\n if path.endswith(\"/watch\"):\n query = parsed_url.query\n parsed_query = parse_qs(query)\n if \"v\" in parsed_query:\n ids = parsed_query[\"v\"]\n video_id = ids if isinstance(ids, str) else ids[0]\n 
else:\n return None\n else:\n path = parsed_url.path.lstrip(\"/\")\n video_id = path.split(\"/\")[-1]\n if len(video_id) != 11: # Video IDs are 11 characters long\n return None\n return video_id\n[docs]class YoutubeLoader(BaseLoader):\n \"\"\"Loader that loads Youtube transcripts.\"\"\"\n def __init__(\n self,\n video_id: str,\n add_video_info: bool = False,\n language: Union[str, Sequence[str]] = \"en\",\n translation: str = \"en\",\n continue_on_failure: bool = False,\n ):\n \"\"\"Initialize with YouTube video ID.\"\"\"\n self.video_id = video_id\n self.add_video_info = add_video_info\n self.language = language", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"} {"id": "4d187c15c852-3", "text": "self.add_video_info = add_video_info\n self.language = language\n if isinstance(language, str):\n self.language = [language]\n else:\n self.language = language\n self.translation = translation\n self.continue_on_failure = continue_on_failure\n[docs] @staticmethod\n def extract_video_id(youtube_url: str) -> str:\n \"\"\"Extract video id from common YT urls.\"\"\"\n video_id = _parse_video_id(youtube_url)\n if not video_id:\n raise ValueError(\n f\"Could not determine the video ID for the URL {youtube_url}\"\n )\n return video_id\n[docs] @classmethod\n def from_youtube_url(cls, youtube_url: str, **kwargs: Any) -> YoutubeLoader:\n \"\"\"Given youtube URL, load video.\"\"\"\n video_id = cls.extract_video_id(youtube_url)\n return cls(video_id, **kwargs)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n from youtube_transcript_api import (\n NoTranscriptFound,\n TranscriptsDisabled,\n YouTubeTranscriptApi,\n )\n except ImportError:\n raise ImportError(\n \"Could not import youtube_transcript_api python package. \"\n \"Please install it with `pip install youtube-transcript-api`.\"\n )\n metadata = {\"source\": self.video_id}\n if self.add_video_info:\n # Get more video meta info\n # Such as title, description, thumbnail url, publish_date\n video_info = self._get_video_info()\n metadata.update(video_info)\n try:\n transcript_list = YouTubeTranscriptApi.list_transcripts(self.video_id)\n except TranscriptsDisabled:\n return []\n try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"} {"id": "4d187c15c852-4", "text": "except TranscriptsDisabled:\n return []\n try:\n transcript = transcript_list.find_transcript(self.language)\n except NoTranscriptFound:\n en_transcript = transcript_list.find_transcript([\"en\"])\n transcript = en_transcript.translate(self.translation)\n transcript_pieces = transcript.fetch()\n transcript = \" \".join([t[\"text\"].strip(\" \") for t in transcript_pieces])\n return [Document(page_content=transcript, metadata=metadata)]\n def _get_video_info(self) -> dict:\n \"\"\"Get important video information.\n Components are:\n - title\n - description\n - thumbnail url,\n - publish_date\n - channel_author\n - and more.\n \"\"\"\n try:\n from pytube import YouTube\n except ImportError:\n raise ImportError(\n \"Could not import pytube python package. 
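A minimal usage sketch for the YoutubeLoader above, using its from_youtube_url constructor; the URL is an arbitrary example, youtube-transcript-api is required, and add_video_info additionally needs pytube:

.. code-block:: python

    from langchain.document_loaders import YoutubeLoader

    # Any watch URL accepted by _parse_video_id works here.
    loader = YoutubeLoader.from_youtube_url(
        "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
        add_video_info=True,
        language=["en"],
    )
    docs = loader.load()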
\"\n \"Please install it with `pip install pytube`.\"\n )\n yt = YouTube(f\"https://www.youtube.com/watch?v={self.video_id}\")\n video_info = {\n \"title\": yt.title or \"Unknown\",\n \"description\": yt.description or \"Unknown\",\n \"view_count\": yt.views or 0,\n \"thumbnail_url\": yt.thumbnail_url or \"Unknown\",\n \"publish_date\": yt.publish_date.strftime(\"%Y-%m-%d %H:%M:%S\")\n if yt.publish_date\n else \"Unknown\",\n \"length\": yt.length or 0,\n \"author\": yt.author or \"Unknown\",\n }\n return video_info\n[docs]@dataclass\nclass GoogleApiYoutubeLoader(BaseLoader):\n \"\"\"Loader that loads all videos from a channel.\n To use, you should have the ``googleapiclient,youtube_transcript_api``", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"} {"id": "4d187c15c852-5", "text": "To use, you should have the ``googleapiclient,youtube_transcript_api``\n python packages installed.\n As the service needs a google_api_client, you first have to initialize\n the GoogleApiClient.\n Additionally, you have to either provide a channel name or a list of video ids:\n https://developers.google.com/docs/api/quickstart/python\n Example:\n .. code-block:: python\n from langchain.document_loaders import GoogleApiClient\n from langchain.document_loaders import GoogleApiYoutubeLoader\n google_api_client = GoogleApiClient(\n service_account_path=Path(\"path_to_your_sec_file.json\")\n )\n loader = GoogleApiYoutubeLoader(\n google_api_client=google_api_client,\n channel_name=\"CodeAesthetic\",\n )\n loader.load()\n \"\"\"\n google_api_client: GoogleApiClient\n channel_name: Optional[str] = None\n video_ids: Optional[List[str]] = None\n add_video_info: bool = True\n captions_language: str = \"en\"\n continue_on_failure: bool = False\n def __post_init__(self) -> None:\n self.youtube_client = self._build_youtube_client(self.google_api_client.creds)\n def _build_youtube_client(self, creds: Any) -> Any:\n try:\n from googleapiclient.discovery import build\n from youtube_transcript_api import YouTubeTranscriptApi # noqa: F401\n except ImportError:\n raise ImportError(\n \"You must run \"\n \"`pip install --upgrade \"\n \"google-api-python-client google-auth-httplib2 \"\n \"google-auth-oauthlib \"\n \"youtube-transcript-api` \"\n \"to use the YouTube loader\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"} {"id": "4d187c15c852-6", "text": "\"to use the YouTube loader\"\n )\n return build(\"youtube\", \"v3\", credentials=creds)\n[docs] @root_validator\n def validate_channel_or_videoIds_is_set(\n cls, values: Dict[str, Any]\n ) -> Dict[str, Any]:\n \"\"\"Validate that either channel_name or video_ids is set.\"\"\"\n if not values.get(\"channel_name\") and not values.get(\"video_ids\"):\n raise ValueError(\"Must specify either channel_name or video_ids\")\n return values\n def _get_transcripe_for_video_id(self, video_id: str) -> str:\n from youtube_transcript_api import NoTranscriptFound, YouTubeTranscriptApi\n transcript_list = YouTubeTranscriptApi.list_transcripts(video_id)\n try:\n transcript = transcript_list.find_transcript([self.captions_language])\n except NoTranscriptFound:\n for available_transcript in transcript_list:\n transcript = available_transcript.translate(self.captions_language)\n continue\n transcript_pieces = transcript.fetch()\n return \" \".join([t[\"text\"].strip(\" \") for t in transcript_pieces])\n def _get_document_for_video_id(self, video_id: str, 
**kwargs: Any) -> Document:\n captions = self._get_transcripe_for_video_id(video_id)\n video_response = (\n self.youtube_client.videos()\n .list(\n part=\"id,snippet\",\n id=video_id,\n )\n .execute()\n )\n return Document(\n page_content=captions,\n metadata=video_response.get(\"items\")[0],\n )\n def _get_channel_id(self, channel_name: str) -> str:\n request = self.youtube_client.search().list(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"} {"id": "4d187c15c852-7", "text": "request = self.youtube_client.search().list(\n part=\"id\",\n q=channel_name,\n type=\"channel\",\n maxResults=1, # we only need one result since channel names are unique\n )\n response = request.execute()\n channel_id = response[\"items\"][0][\"id\"][\"channelId\"]\n return channel_id\n def _get_document_for_channel(self, channel: str, **kwargs: Any) -> List[Document]:\n try:\n from youtube_transcript_api import (\n NoTranscriptFound,\n TranscriptsDisabled,\n )\n except ImportError:\n raise ImportError(\n \"You must run \"\n \"`pip install --upgrade \"\n \"youtube-transcript-api` \"\n \"to use the youtube loader\"\n )\n channel_id = self._get_channel_id(channel)\n request = self.youtube_client.search().list(\n part=\"id,snippet\",\n channelId=channel_id,\n maxResults=50, # adjust this value to retrieve more or fewer videos\n )\n video_ids = []\n while request is not None:\n response = request.execute()\n # Add each video ID to the list\n for item in response[\"items\"]:\n if not item[\"id\"].get(\"videoId\"):\n continue\n meta_data = {\"videoId\": item[\"id\"][\"videoId\"]}\n if self.add_video_info:\n item[\"snippet\"].pop(\"thumbnails\")\n meta_data.update(item[\"snippet\"])\n try:\n page_content = self._get_transcripe_for_video_id(\n item[\"id\"][\"videoId\"]\n )\n video_ids.append(\n Document(\n page_content=page_content,\n metadata=meta_data,\n )\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"} {"id": "4d187c15c852-8", "text": "metadata=meta_data,\n )\n )\n except (TranscriptsDisabled, NoTranscriptFound) as e:\n if self.continue_on_failure:\n logger.error(\n \"Error fetching transcript \"\n + f\"{item['id']['videoId']}, exception: {e}\"\n )\n else:\n raise e\n pass\n request = self.youtube_client.search().list_next(request, response)\n return video_ids\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n document_list = []\n if self.channel_name:\n document_list.extend(self._get_document_for_channel(self.channel_name))\n elif self.video_ids:\n document_list.extend(\n [\n self._get_document_for_video_id(video_id)\n for video_id in self.video_ids\n ]\n )\n else:\n raise ValueError(\"Must specify either channel_name or video_ids\")\n return document_list", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"} {"id": "95a82129a8d3-0", "text": "Source code for langchain.document_loaders.wikipedia\nfrom typing import List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utilities.wikipedia import WikipediaAPIWrapper\n[docs]class WikipediaLoader(BaseLoader):\n \"\"\"Loads a query result from www.wikipedia.org into a list of Documents.\n The hard limit on the number of downloaded Documents is 300 for now.\n Each wiki page represents one Document.\n \"\"\"\n def __init__(\n self,\n query: str,\n lang: str = \"en\",\n load_max_docs: Optional[int] = 
100,\n load_all_available_meta: Optional[bool] = False,\n doc_content_chars_max: Optional[int] = 4000,\n ):\n \"\"\"\n Initializes a new instance of the WikipediaLoader class.\n Args:\n query (str): The query string to search on Wikipedia.\n lang (str, optional): The language code for the Wikipedia language edition.\n Defaults to \"en\".\n load_max_docs (int, optional): The maximum number of documents to load.\n Defaults to 100.\n load_all_available_meta (bool, optional): Indicates whether to load all\n available metadata for each document. Defaults to False.\n doc_content_chars_max (int, optional): The maximum number of characters\n for the document content. Defaults to 4000.\n \"\"\"\n self.query = query\n self.lang = lang\n self.load_max_docs = load_max_docs\n self.load_all_available_meta = load_all_available_meta\n self.doc_content_chars_max = doc_content_chars_max\n[docs] def load(self) -> List[Document]:\n \"\"\"\n Loads the query result from Wikipedia into a list of Documents.\n Returns:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/wikipedia.html"} {"id": "95a82129a8d3-1", "text": "Loads the query result from Wikipedia into a list of Documents.\n Returns:\n List[Document]: A list of Document objects representing the loaded\n Wikipedia pages.\n \"\"\"\n client = WikipediaAPIWrapper(\n lang=self.lang,\n top_k_results=self.load_max_docs,\n load_all_available_meta=self.load_all_available_meta,\n doc_content_chars_max=self.doc_content_chars_max,\n )\n docs = client.load(self.query)\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/wikipedia.html"} {"id": "7c5cbbe31368-0", "text": "Source code for langchain.document_loaders.airtable\nfrom typing import Iterator, List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class AirtableLoader(BaseLoader):\n \"\"\"Loader for Airtable tables.\"\"\"\n def __init__(self, api_token: str, table_id: str, base_id: str):\n \"\"\"Initialize with API token and the IDs for table and base\"\"\"\n self.api_token = api_token\n \"\"\"Airtable API token.\"\"\"\n self.table_id = table_id\n \"\"\"Airtable table ID.\"\"\"\n self.base_id = base_id\n \"\"\"Airtable base ID.\"\"\"\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"Lazy load Documents from table.\"\"\"\n from pyairtable import Table\n table = Table(self.api_token, self.base_id, self.table_id)\n records = table.all()\n for record in records:\n # Need to convert record from dict to str\n yield Document(\n page_content=str(record),\n metadata={\n \"source\": self.base_id + \"_\" + self.table_id,\n \"base_id\": self.base_id,\n \"table_id\": self.table_id,\n },\n )\n[docs] def load(self) -> List[Document]:\n \"\"\"Load Documents from table.\"\"\"\n return list(self.lazy_load())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/airtable.html"} {"id": "3d21918ec66c-0", "text": "Source code for langchain.document_loaders.dataframe\n\"\"\"Load from a Dataframe object\"\"\"\nfrom typing import Any, Iterator, List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class DataFrameLoader(BaseLoader):\n \"\"\"Load Pandas DataFrame.\"\"\"\n def __init__(self, data_frame: Any, page_content_column: str = \"text\"):\n \"\"\"Initialize with dataframe object.\n Args:\n data_frame: Pandas DataFrame object.\n page_content_column: Name of the column 
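A minimal usage sketch for the WikipediaLoader above; the query is arbitrary, and the underlying WikipediaAPIWrapper needs the `wikipedia` package installed:

.. code-block:: python

    from langchain.document_loaders import WikipediaLoader

    # load_max_docs caps how many pages are fetched (hard limit of 300).
    docs = WikipediaLoader(query="Alan Turing", lang="en", load_max_docs=2).load()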
containing the page content.\n Defaults to \"text\".\n \"\"\"\n import pandas as pd\n if not isinstance(data_frame, pd.DataFrame):\n raise ValueError(\n f\"Expected data_frame to be a pd.DataFrame, got {type(data_frame)}\"\n )\n self.data_frame = data_frame\n self.page_content_column = page_content_column\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"Lazy load records from dataframe.\"\"\"\n for _, row in self.data_frame.iterrows():\n text = row[self.page_content_column]\n metadata = row.to_dict()\n metadata.pop(self.page_content_column)\n yield Document(page_content=text, metadata=metadata)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load full dataframe.\"\"\"\n return list(self.lazy_load())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/dataframe.html"} {"id": "b099c63d88b2-0", "text": "Source code for langchain.document_loaders.directory\n\"\"\"Load documents from a directory.\"\"\"\nimport concurrent\nimport logging\nfrom pathlib import Path\nfrom typing import Any, List, Optional, Type, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.html_bs import BSHTMLLoader\nfrom langchain.document_loaders.text import TextLoader\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\nFILE_LOADER_TYPE = Union[\n Type[UnstructuredFileLoader], Type[TextLoader], Type[BSHTMLLoader]\n]\nlogger = logging.getLogger(__name__)\ndef _is_visible(p: Path) -> bool:\n parts = p.parts\n for _p in parts:\n if _p.startswith(\".\"):\n return False\n return True\n[docs]class DirectoryLoader(BaseLoader):\n \"\"\"Load documents from a directory.\"\"\"\n def __init__(\n self,\n path: str,\n glob: str = \"**/[!.]*\",\n silent_errors: bool = False,\n load_hidden: bool = False,\n loader_cls: FILE_LOADER_TYPE = UnstructuredFileLoader,\n loader_kwargs: Union[dict, None] = None,\n recursive: bool = False,\n show_progress: bool = False,\n use_multithreading: bool = False,\n max_concurrency: int = 4,\n ):\n \"\"\"Initialize with a path to directory and how to glob over it.\n Args:\n path: Path to directory.\n glob: Glob pattern to use to find files. Defaults to \"**/[!.]*\"\n (all files except hidden).\n silent_errors: Whether to silently ignore errors. Defaults to False.\n load_hidden: Whether to load hidden files. Defaults to False.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/directory.html"} {"id": "b099c63d88b2-1", "text": "load_hidden: Whether to load hidden files. Defaults to False.\n loader_cls: Loader class to use for loading files.\n Defaults to UnstructuredFileLoader.\n loader_kwargs: Keyword arguments to pass to loader_cls. Defaults to None.\n recursive: Whether to recursively search for files. Defaults to False.\n show_progress: Whether to show a progress bar. Defaults to False.\n use_multithreading: Whether to use multithreading. Defaults to False.\n max_concurrency: The maximum number of threads to use. 
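A minimal usage sketch for the DataFrameLoader above with a small pandas DataFrame; every column other than page_content_column ends up in the Document metadata:

.. code-block:: python

    import pandas as pd

    from langchain.document_loaders import DataFrameLoader

    # "author" becomes metadata; "text" becomes the page content.
    df = pd.DataFrame({"text": ["hello", "world"], "author": ["alice", "bob"]})
    docs = DataFrameLoader(df, page_content_column="text").load()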
Defaults to 4.\n \"\"\"\n if loader_kwargs is None:\n loader_kwargs = {}\n self.path = path\n self.glob = glob\n self.load_hidden = load_hidden\n self.loader_cls = loader_cls\n self.loader_kwargs = loader_kwargs\n self.silent_errors = silent_errors\n self.recursive = recursive\n self.show_progress = show_progress\n self.use_multithreading = use_multithreading\n self.max_concurrency = max_concurrency\n[docs] def load_file(\n self, item: Path, path: Path, docs: List[Document], pbar: Optional[Any]\n ) -> None:\n \"\"\"Load a file.\n Args:\n item: File path.\n path: Directory path.\n docs: List of documents to append to.\n pbar: Progress bar. Defaults to None.\n \"\"\"\n if item.is_file():\n if _is_visible(item.relative_to(path)) or self.load_hidden:\n try:\n sub_docs = self.loader_cls(str(item), **self.loader_kwargs).load()\n docs.extend(sub_docs)\n except Exception as e:\n if self.silent_errors:\n logger.warning(e)\n else:\n raise e\n finally:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/directory.html"} {"id": "b099c63d88b2-2", "text": "logger.warning(e)\n else:\n raise e\n finally:\n if pbar:\n pbar.update(1)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n p = Path(self.path)\n if not p.exists():\n raise FileNotFoundError(f\"Directory not found: '{self.path}'\")\n if not p.is_dir():\n raise ValueError(f\"Expected directory, got file: '{self.path}'\")\n docs: List[Document] = []\n items = list(p.rglob(self.glob) if self.recursive else p.glob(self.glob))\n pbar = None\n if self.show_progress:\n try:\n from tqdm import tqdm\n pbar = tqdm(total=len(items))\n except ImportError as e:\n logger.warning(\n \"To log the progress of DirectoryLoader you need to install tqdm, \"\n \"`pip install tqdm`\"\n )\n if self.silent_errors:\n logger.warning(e)\n else:\n raise e\n if self.use_multithreading:\n with concurrent.futures.ThreadPoolExecutor(\n max_workers=self.max_concurrency\n ) as executor:\n executor.map(lambda i: self.load_file(i, p, docs, pbar), items)\n else:\n for i in items:\n self.load_file(i, p, docs, pbar)\n if pbar:\n pbar.close()\n return docs\n#", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/directory.html"} {"id": "2139bf996b8b-0", "text": "Source code for langchain.document_loaders.spreedly\n\"\"\"Loader that fetches data from Spreedly API.\"\"\"\nimport json\nimport urllib.request\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utils import stringify_dict\nSPREEDLY_ENDPOINTS = {\n \"gateways_options\": \"https://core.spreedly.com/v1/gateways_options.json\",\n \"gateways\": \"https://core.spreedly.com/v1/gateways.json\",\n \"receivers_options\": \"https://core.spreedly.com/v1/receivers_options.json\",\n \"receivers\": \"https://core.spreedly.com/v1/receivers.json\",\n \"payment_methods\": \"https://core.spreedly.com/v1/payment_methods.json\",\n \"certificates\": \"https://core.spreedly.com/v1/certificates.json\",\n \"transactions\": \"https://core.spreedly.com/v1/transactions.json\",\n \"environments\": \"https://core.spreedly.com/v1/environments.json\",\n}\n[docs]class SpreedlyLoader(BaseLoader):\n \"\"\"Loader that fetches data from Spreedly API.\"\"\"\n def __init__(self, access_token: str, resource: str) -> None:\n self.access_token = access_token\n self.resource = resource\n self.headers = {\n \"Authorization\": f\"Bearer {self.access_token}\",\n \"Accept\": 
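A minimal usage sketch for the DirectoryLoader above; the directory and glob are hypothetical, and TextLoader is swapped in for the default UnstructuredFileLoader so plain-text files load without extra dependencies:

.. code-block:: python

    from langchain.document_loaders import DirectoryLoader, TextLoader

    # Recursively pick up all Markdown files under docs/ (hypothetical path).
    loader = DirectoryLoader(
        "docs/",
        glob="**/*.md",
        loader_cls=TextLoader,
        recursive=True,
        show_progress=True,  # requires `pip install tqdm`
    )
    docs = loader.load()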
\"application/json\",\n }\n def _make_request(self, url: str) -> List[Document]:\n request = urllib.request.Request(url, headers=self.headers)\n with urllib.request.urlopen(request) as response:\n json_data = json.loads(response.read().decode())\n text = stringify_dict(json_data)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/spreedly.html"} {"id": "2139bf996b8b-1", "text": "text = stringify_dict(json_data)\n metadata = {\"source\": url}\n return [Document(page_content=text, metadata=metadata)]\n def _get_resource(self) -> List[Document]:\n endpoint = SPREEDLY_ENDPOINTS.get(self.resource)\n if endpoint is None:\n return []\n return self._make_request(endpoint)\n[docs] def load(self) -> List[Document]:\n return self._get_resource()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/spreedly.html"} {"id": "60afd97800f7-0", "text": "Source code for langchain.document_loaders.web_base\n\"\"\"Web base loader class.\"\"\"\nimport asyncio\nimport logging\nimport warnings\nfrom typing import Any, Dict, Iterator, List, Optional, Union\nimport aiohttp\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__name__)\ndefault_header_template = {\n \"User-Agent\": \"\",\n \"Accept\": \"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*\"\n \";q=0.8\",\n \"Accept-Language\": \"en-US,en;q=0.5\",\n \"Referer\": \"https://www.google.com/\",\n \"DNT\": \"1\",\n \"Connection\": \"keep-alive\",\n \"Upgrade-Insecure-Requests\": \"1\",\n}\ndef _build_metadata(soup: Any, url: str) -> dict:\n \"\"\"Build metadata from BeautifulSoup output.\"\"\"\n metadata = {\"source\": url}\n if title := soup.find(\"title\"):\n metadata[\"title\"] = title.get_text()\n if description := soup.find(\"meta\", attrs={\"name\": \"description\"}):\n metadata[\"description\"] = description.get(\"content\", None)\n if html := soup.find(\"html\"):\n metadata[\"language\"] = html.get(\"lang\", None)\n return metadata\n[docs]class WebBaseLoader(BaseLoader):\n \"\"\"Loader that uses requests and Beautiful Soup to load webpages.\"\"\"\n web_paths: List[str]\n requests_per_second: int = 2\n \"\"\"Max number of concurrent requests to make.\"\"\"\n default_parser: str = \"html.parser\"\n \"\"\"Default parser to use for BeautifulSoup.\"\"\"\n requests_kwargs: Dict[str, Any] = {}\n \"\"\"kwargs for requests\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/web_base.html"} {"id": "60afd97800f7-1", "text": "requests_kwargs: Dict[str, Any] = {}\n \"\"\"kwargs for requests\"\"\"\n raise_for_status: bool = False\n \"\"\"Raise an exception if http status code denotes an error.\"\"\"\n bs_get_text_kwargs: Dict[str, Any] = {}\n \"\"\"kwargs for beautifulsoup4 get_text\"\"\"\n def __init__(\n self,\n web_path: Union[str, List[str]],\n header_template: Optional[dict] = None,\n verify_ssl: Optional[bool] = True,\n proxies: Optional[dict] = None,\n ):\n \"\"\"Initialize with webpage path.\"\"\"\n # TODO: Deprecate web_path in favor of web_paths, and remove this\n # left like this because there are a number of loaders that expect single\n # urls\n if isinstance(web_path, str):\n self.web_paths = [web_path]\n elif isinstance(web_path, List):\n self.web_paths = web_path\n try:\n import bs4 # noqa:F401\n except ImportError:\n raise ValueError(\n \"bs4 package not found, please install it with \" \"`pip install bs4`\"\n 
)\n headers = header_template or default_header_template\n if not headers.get(\"User-Agent\"):\n try:\n from fake_useragent import UserAgent\n headers[\"User-Agent\"] = UserAgent().random\n except ImportError:\n logger.info(\n \"fake_useragent not found, using default user agent.\"\n \"To get a realistic header for requests, \"\n \"`pip install fake_useragent`.\"\n )\n self.session = requests.Session()\n self.session.headers = dict(headers)\n self.session.verify = verify_ssl\n if proxies:\n self.session.proxies.update(proxies)\n @property", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/web_base.html"} {"id": "60afd97800f7-2", "text": "if proxies:\n self.session.proxies.update(proxies)\n @property\n def web_path(self) -> str:\n if len(self.web_paths) > 1:\n raise ValueError(\"Multiple webpaths found.\")\n return self.web_paths[0]\n async def _fetch(\n self, url: str, retries: int = 3, cooldown: int = 2, backoff: float = 1.5\n ) -> str:\n async with aiohttp.ClientSession() as session:\n for i in range(retries):\n try:\n async with session.get(\n url,\n headers=self.session.headers,\n ssl=None if self.session.verify else False,\n ) as response:\n return await response.text()\n except aiohttp.ClientConnectionError as e:\n if i == retries - 1:\n raise\n else:\n logger.warning(\n f\"Error fetching {url} with attempt \"\n f\"{i + 1}/{retries}: {e}. Retrying...\"\n )\n await asyncio.sleep(cooldown * backoff**i)\n raise ValueError(\"retry count exceeded\")\n async def _fetch_with_rate_limit(\n self, url: str, semaphore: asyncio.Semaphore\n ) -> str:\n async with semaphore:\n return await self._fetch(url)\n[docs] async def fetch_all(self, urls: List[str]) -> Any:\n \"\"\"Fetch all urls concurrently with rate limiting.\"\"\"\n semaphore = asyncio.Semaphore(self.requests_per_second)\n tasks = []\n for url in urls:\n task = asyncio.ensure_future(self._fetch_with_rate_limit(url, semaphore))\n tasks.append(task)\n try:\n from tqdm.asyncio import tqdm_asyncio", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/web_base.html"} {"id": "60afd97800f7-3", "text": "tasks.append(task)\n try:\n from tqdm.asyncio import tqdm_asyncio\n return await tqdm_asyncio.gather(\n *tasks, desc=\"Fetching pages\", ascii=True, mininterval=1\n )\n except ImportError:\n warnings.warn(\"For better logging of progress, `pip install tqdm`\")\n return await asyncio.gather(*tasks)\n @staticmethod\n def _check_parser(parser: str) -> None:\n \"\"\"Check that parser is valid for bs4.\"\"\"\n valid_parsers = [\"html.parser\", \"lxml\", \"xml\", \"lxml-xml\", \"html5lib\"]\n if parser not in valid_parsers:\n raise ValueError(\n \"`parser` must be one of \" + \", \".join(valid_parsers) + \".\"\n )\n[docs] def scrape_all(self, urls: List[str], parser: Union[str, None] = None) -> List[Any]:\n \"\"\"Fetch all urls, then return soups for all results.\"\"\"\n from bs4 import BeautifulSoup\n results = asyncio.run(self.fetch_all(urls))\n final_results = []\n for i, result in enumerate(results):\n url = urls[i]\n if parser is None:\n if url.endswith(\".xml\"):\n parser = \"xml\"\n else:\n parser = self.default_parser\n self._check_parser(parser)\n final_results.append(BeautifulSoup(result, parser))\n return final_results\n def _scrape(self, url: str, parser: Union[str, None] = None) -> Any:\n from bs4 import BeautifulSoup\n if parser is None:\n if url.endswith(\".xml\"):\n parser = \"xml\"\n else:\n parser = self.default_parser\n self._check_parser(parser)", 
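A minimal usage sketch for the WebBaseLoader above; the URLs are placeholders, and aload() uses the rate-limited concurrent fetching implemented below:

.. code-block:: python

    from langchain.document_loaders import WebBaseLoader

    # A single string or a list of URLs is accepted (hypothetical addresses).
    loader = WebBaseLoader(["https://example.com", "https://example.org"])
    loader.requests_per_second = 1  # throttle concurrent fetches
    docs = loader.aload()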
"source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/web_base.html"} {"id": "60afd97800f7-4", "text": "else:\n parser = self.default_parser\n self._check_parser(parser)\n html_doc = self.session.get(url, **self.requests_kwargs)\n if self.raise_for_status:\n html_doc.raise_for_status()\n html_doc.encoding = html_doc.apparent_encoding\n return BeautifulSoup(html_doc.text, parser)\n[docs] def scrape(self, parser: Union[str, None] = None) -> Any:\n \"\"\"Scrape data from webpage and return it in BeautifulSoup format.\"\"\"\n if parser is None:\n parser = self.default_parser\n return self._scrape(self.web_path, parser)\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"Lazy load text from the url(s) in web_path.\"\"\"\n for path in self.web_paths:\n soup = self._scrape(path)\n text = soup.get_text(**self.bs_get_text_kwargs)\n metadata = _build_metadata(soup, path)\n yield Document(page_content=text, metadata=metadata)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load text from the url(s) in web_path.\"\"\"\n return list(self.lazy_load())\n[docs] def aload(self) -> List[Document]:\n \"\"\"Load text from the urls in web_path async into Documents.\"\"\"\n results = self.scrape_all(self.web_paths)\n docs = []\n for i in range(len(results)):\n soup = results[i]\n text = soup.get_text(**self.bs_get_text_kwargs)\n metadata = _build_metadata(soup, self.web_paths[i])\n docs.append(Document(page_content=text, metadata=metadata))\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/web_base.html"} {"id": "1345fb3ed7a8-0", "text": "Source code for langchain.document_loaders.ifixit\n\"\"\"Loader that loads iFixit data.\"\"\"\nfrom typing import List, Optional\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.web_base import WebBaseLoader\nIFIXIT_BASE_URL = \"https://www.ifixit.com/api/2.0\"\n[docs]class IFixitLoader(BaseLoader):\n \"\"\"Load iFixit repair guides, device wikis and answers.\n iFixit is the largest, open repair community on the web. 
The site contains nearly\n 100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is\n licensed under CC-BY.\n This loader will allow you to download the text of a repair guide, text of Q&A's\n and wikis from devices on iFixit using their open APIs and web scraping.\n \"\"\"\n def __init__(self, web_path: str):\n \"\"\"Initialize with a web path.\"\"\"\n if not web_path.startswith(\"https://www.ifixit.com\"):\n raise ValueError(\"web path must start with 'https://www.ifixit.com'\")\n path = web_path.replace(\"https://www.ifixit.com\", \"\")\n allowed_paths = [\"/Device\", \"/Guide\", \"/Answers\", \"/Teardown\"]\n \"\"\" TODO: Add /Wiki \"\"\"\n if not any(path.startswith(allowed_path) for allowed_path in allowed_paths):\n raise ValueError(\n \"web path must start with /Device, /Guide, /Teardown or /Answers\"\n )\n pieces = [x for x in path.split(\"/\") if x]\n \"\"\"Teardowns are just guides by a different name\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/ifixit.html"} {"id": "1345fb3ed7a8-1", "text": "\"\"\"Teardowns are just guides by a different name\"\"\"\n self.page_type = pieces[0] if pieces[0] != \"Teardown\" else \"Guide\"\n if self.page_type == \"Guide\" or self.page_type == \"Answers\":\n self.id = pieces[2]\n else:\n self.id = pieces[1]\n self.web_path = web_path\n[docs] def load(self) -> List[Document]:\n if self.page_type == \"Device\":\n return self.load_device()\n elif self.page_type == \"Guide\" or self.page_type == \"Teardown\":\n return self.load_guide()\n elif self.page_type == \"Answers\":\n return self.load_questions_and_answers()\n else:\n raise ValueError(\"Unknown page type: \" + self.page_type)\n[docs] @staticmethod\n def load_suggestions(query: str = \"\", doc_type: str = \"all\") -> List[Document]:\n \"\"\"Load suggestions.\n Args:\n query: A query string\n doc_type: The type of document to search for. 
Can be one of \"all\",\n \"device\", \"guide\", \"teardown\", \"answer\", \"wiki\".\n Returns:\n \"\"\"\n res = requests.get(\n IFIXIT_BASE_URL + \"/suggest/\" + query + \"?doctypes=\" + doc_type\n )\n if res.status_code != 200:\n raise ValueError(\n 'Could not load suggestions for \"' + query + '\"\\n' + res.json()\n )\n data = res.json()\n results = data[\"results\"]\n output = []\n for result in results:\n try:\n loader = IFixitLoader(result[\"url\"])\n if loader.page_type == \"Device\":", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/ifixit.html"} {"id": "1345fb3ed7a8-2", "text": "if loader.page_type == \"Device\":\n output += loader.load_device(include_guides=False)\n else:\n output += loader.load()\n except ValueError:\n continue\n return output\n[docs] def load_questions_and_answers(\n self, url_override: Optional[str] = None\n ) -> List[Document]:\n \"\"\"Load a list of questions and answers.\n Args:\n url_override: A URL to override the default URL.\n Returns: List[Document]\n \"\"\"\n loader = WebBaseLoader(self.web_path if url_override is None else url_override)\n soup = loader.scrape()\n output = []\n title = soup.find(\"h1\", \"post-title\").text\n output.append(\"# \" + title)\n output.append(soup.select_one(\".post-content .post-text\").text.strip())\n answersHeader = soup.find(\"div\", \"post-answers-header\")\n if answersHeader:\n output.append(\"\\n## \" + answersHeader.text.strip())\n for answer in soup.select(\".js-answers-list .post.post-answer\"):\n if answer.has_attr(\"itemprop\") and \"acceptedAnswer\" in answer[\"itemprop\"]:\n output.append(\"\\n### Accepted Answer\")\n elif \"post-helpful\" in answer[\"class\"]:\n output.append(\"\\n### Most Helpful Answer\")\n else:\n output.append(\"\\n### Other Answer\")\n output += [\n a.text.strip() for a in answer.select(\".post-content .post-text\")\n ]\n output.append(\"\\n\")\n text = \"\\n\".join(output).strip()\n metadata = {\"source\": self.web_path, \"title\": title}\n return [Document(page_content=text, metadata=metadata)]\n[docs] def load_device(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/ifixit.html"} {"id": "1345fb3ed7a8-3", "text": "[docs] def load_device(\n self, url_override: Optional[str] = None, include_guides: bool = True\n ) -> List[Document]:\n \"\"\"Loads a device\n Args:\n url_override: A URL to override the default URL.\n include_guides: Whether to include guides linked to from the device.\n Defaults to True.\n Returns:\n \"\"\"\n documents = []\n if url_override is None:\n url = IFIXIT_BASE_URL + \"/wikis/CATEGORY/\" + self.id\n else:\n url = url_override\n res = requests.get(url)\n data = res.json()\n text = \"\\n\".join(\n [\n data[key]\n for key in [\"title\", \"description\", \"contents_raw\"]\n if key in data\n ]\n ).strip()\n metadata = {\"source\": self.web_path, \"title\": data[\"title\"]}\n documents.append(Document(page_content=text, metadata=metadata))\n if include_guides:\n \"\"\"Load and return documents for each guide linked to from the device\"\"\"\n guide_urls = [guide[\"url\"] for guide in data[\"guides\"]]\n for guide_url in guide_urls:\n documents.append(IFixitLoader(guide_url).load()[0])\n return documents\n[docs] def load_guide(self, url_override: Optional[str] = None) -> List[Document]:\n \"\"\"Load a guide\n Args:\n url_override: A URL to override the default URL.\n Returns: List[Document]\n \"\"\"\n if url_override is None:\n url = IFIXIT_BASE_URL + \"/guides/\" + self.id\n else:\n url 
= url_override\n res = requests.get(url)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/ifixit.html"} {"id": "1345fb3ed7a8-4", "text": "else:\n url = url_override\n res = requests.get(url)\n if res.status_code != 200:\n raise ValueError(\n \"Could not load guide: \" + self.web_path + \"\\n\" + res.json()\n )\n data = res.json()\n doc_parts = [\"# \" + data[\"title\"], data[\"introduction_raw\"]]\n doc_parts.append(\"\\n\\n###Tools Required:\")\n if len(data[\"tools\"]) == 0:\n doc_parts.append(\"\\n - None\")\n else:\n for tool in data[\"tools\"]:\n doc_parts.append(\"\\n - \" + tool[\"text\"])\n doc_parts.append(\"\\n\\n###Parts Required:\")\n if len(data[\"parts\"]) == 0:\n doc_parts.append(\"\\n - None\")\n else:\n for part in data[\"parts\"]:\n doc_parts.append(\"\\n - \" + part[\"text\"])\n for row in data[\"steps\"]:\n doc_parts.append(\n \"\\n\\n## \"\n + (\n row[\"title\"]\n if row[\"title\"] != \"\"\n else \"Step {}\".format(row[\"orderby\"])\n )\n )\n for line in row[\"lines\"]:\n doc_parts.append(line[\"text_raw\"])\n doc_parts.append(data[\"conclusion_raw\"])\n text = \"\\n\".join(doc_parts)\n metadata = {\"source\": self.web_path, \"title\": data[\"title\"]}\n return [Document(page_content=text, metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/ifixit.html"} {"id": "06a31f56208f-0", "text": "Source code for langchain.document_loaders.arxiv\nfrom typing import List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utilities.arxiv import ArxivAPIWrapper\n[docs]class ArxivLoader(BaseLoader):\n \"\"\"Loads a query result from arxiv.org into a list of Documents.\n Each document represents one Document.\n The loader converts the original PDF format into the text.\n \"\"\"\n def __init__(\n self,\n query: str,\n load_max_docs: Optional[int] = 100,\n load_all_available_meta: Optional[bool] = False,\n ):\n self.query = query\n \"\"\"The query to be passed to the arxiv.org API.\"\"\"\n self.load_max_docs = load_max_docs\n \"\"\"The maximum number of documents to load.\"\"\"\n self.load_all_available_meta = load_all_available_meta\n \"\"\"Whether to load all available metadata.\"\"\"\n[docs] def load(self) -> List[Document]:\n arxiv_client = ArxivAPIWrapper(\n load_max_docs=self.load_max_docs,\n load_all_available_meta=self.load_all_available_meta,\n )\n docs = arxiv_client.load(self.query)\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/arxiv.html"} {"id": "25b372d994e5-0", "text": "Source code for langchain.document_loaders.parsers.pdf\n\"\"\"Module contains common parsers for PDFs.\"\"\"\nfrom typing import Any, Iterator, Mapping, Optional, Union\nfrom langchain.document_loaders.base import BaseBlobParser\nfrom langchain.document_loaders.blob_loaders import Blob\nfrom langchain.schema import Document\n[docs]class PyPDFParser(BaseBlobParser):\n \"\"\"Loads a PDF with pypdf and chunks at character level.\"\"\"\n def __init__(self, password: Optional[Union[str, bytes]] = None):\n self.password = password\n[docs] def lazy_parse(self, blob: Blob) -> Iterator[Document]:\n \"\"\"Lazily parse the blob.\"\"\"\n import pypdf\n with blob.as_bytes_io() as pdf_file_obj:\n pdf_reader = pypdf.PdfReader(pdf_file_obj, password=self.password)\n yield from [\n Document(\n page_content=page.extract_text(),\n metadata={\"source\": blob.source, \"page\": 
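Following the same pattern, a usage sketch for IFixitLoader; the guide URL is illustrative, and any /Device, /Guide, /Teardown or /Answers path works:

.. code-block:: python

    from langchain.document_loaders import IFixitLoader

    # One repair guide becomes a single Document with a title in its metadata.
    loader = IFixitLoader(
        "https://www.ifixit.com/Guide/iPhone+6+Battery+Replacement/29424"
    )
    docs = loader.load()

    # Keyword search via the suggest endpoint, loading every usable hit.
    docs = IFixitLoader.load_suggestions("iPhone 6", doc_type="guide")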
page_number},\n )\n for page_number, page in enumerate(pdf_reader.pages)\n ]\n[docs]class PDFMinerParser(BaseBlobParser):\n \"\"\"Parse PDFs with PDFMiner.\"\"\"\n[docs] def lazy_parse(self, blob: Blob) -> Iterator[Document]:\n \"\"\"Lazily parse the blob.\"\"\"\n from pdfminer.high_level import extract_text\n with blob.as_bytes_io() as pdf_file_obj:\n text = extract_text(pdf_file_obj)\n metadata = {\"source\": blob.source}\n yield Document(page_content=text, metadata=metadata)\n[docs]class PyMuPDFParser(BaseBlobParser):\n \"\"\"Parse PDFs with PyMuPDF.\"\"\"\n def __init__(self, text_kwargs: Optional[Mapping[str, Any]] = None) -> None:\n \"\"\"Initialize the parser.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/parsers/pdf.html"} {"id": "25b372d994e5-1", "text": "\"\"\"Initialize the parser.\n Args:\n text_kwargs: Keyword arguments to pass to ``fitz.Page.get_text()``.\n \"\"\"\n self.text_kwargs = text_kwargs or {}\n[docs] def lazy_parse(self, blob: Blob) -> Iterator[Document]:\n \"\"\"Lazily parse the blob.\"\"\"\n import fitz\n with blob.as_bytes_io() as file_path:\n doc = fitz.open(file_path) # open document\n yield from [\n Document(\n page_content=page.get_text(**self.text_kwargs),\n metadata=dict(\n {\n \"source\": blob.source,\n \"file_path\": blob.source,\n \"page\": page.number,\n \"total_pages\": len(doc),\n },\n **{\n k: doc.metadata[k]\n for k in doc.metadata\n if type(doc.metadata[k]) in [str, int]\n },\n ),\n )\n for page in doc\n ]\n[docs]class PyPDFium2Parser(BaseBlobParser):\n \"\"\"Parse PDFs with PyPDFium2.\"\"\"\n def __init__(self) -> None:\n \"\"\"Initialize the parser.\"\"\"\n try:\n import pypdfium2 # noqa:F401\n except ImportError:\n raise ValueError(\n \"pypdfium2 package not found, please install it with\"\n \" `pip install pypdfium2`\"\n )\n[docs] def lazy_parse(self, blob: Blob) -> Iterator[Document]:\n \"\"\"Lazily parse the blob.\"\"\"\n import pypdfium2\n # pypdfium2 is really finicky with respect to closing things,\n # if done incorrectly creates seg faults.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/parsers/pdf.html"} {"id": "25b372d994e5-2", "text": "# if done incorrectly creates seg faults.\n with blob.as_bytes_io() as file_path:\n pdf_reader = pypdfium2.PdfDocument(file_path, autoclose=True)\n try:\n for page_number, page in enumerate(pdf_reader):\n text_page = page.get_textpage()\n content = text_page.get_text_range()\n text_page.close()\n page.close()\n metadata = {\"source\": blob.source, \"page\": page_number}\n yield Document(page_content=content, metadata=metadata)\n finally:\n pdf_reader.close()\n[docs]class PDFPlumberParser(BaseBlobParser):\n \"\"\"Parse PDFs with PDFPlumber.\"\"\"\n def __init__(self, text_kwargs: Optional[Mapping[str, Any]] = None) -> None:\n \"\"\"Initialize the parser.\n Args:\n text_kwargs: Keyword arguments to pass to ``pdfplumber.Page.extract_text()``\n \"\"\"\n self.text_kwargs = text_kwargs or {}\n[docs] def lazy_parse(self, blob: Blob) -> Iterator[Document]:\n \"\"\"Lazily parse the blob.\"\"\"\n import pdfplumber\n with blob.as_bytes_io() as file_path:\n doc = pdfplumber.open(file_path) # open document\n yield from [\n Document(\n page_content=page.extract_text(**self.text_kwargs),\n metadata=dict(\n {\n \"source\": blob.source,\n \"file_path\": blob.source,\n \"page\": page.page_number,\n \"total_pages\": len(doc.pages),\n },\n **{\n k: doc.metadata[k]\n for k in doc.metadata\n if type(doc.metadata[k]) in [str, int]\n },\n 
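All of the PDF parsers above share the BaseBlobParser interface, so swapping extraction engines is a one-line change. A sketch with PyMuPDFParser (assumes `pip install pymupdf`; the file path is a placeholder):

.. code-block:: python

    from langchain.document_loaders.blob_loaders import Blob
    from langchain.document_loaders.parsers.pdf import PyMuPDFParser

    blob = Blob.from_path("paper.pdf")
    for doc in PyMuPDFParser().lazy_parse(blob):
        # one Document per page, with page/total_pages in the metadata
        print(doc.metadata["page"], len(doc.page_content))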
),\n )\n for page in doc.pages", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/parsers/pdf.html"} {"id": "25b372d994e5-3", "text": "},\n ),\n )\n for page in doc.pages\n ]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/parsers/pdf.html"} {"id": "3bfcd037c47e-0", "text": "Source code for langchain.document_loaders.parsers.generic\n\"\"\"Code for generic / auxiliary parsers.\nThis module contains some logic to help assemble more sophisticated parsers.\n\"\"\"\nfrom typing import Iterator, Mapping, Optional\nfrom langchain.document_loaders.base import BaseBlobParser\nfrom langchain.document_loaders.blob_loaders.schema import Blob\nfrom langchain.schema import Document\n[docs]class MimeTypeBasedParser(BaseBlobParser):\n \"\"\"A parser that uses mime-types to determine how to parse a blob.\n This parser is useful for simple pipelines where the mime-type is sufficient\n to determine how to parse a blob.\n To use, configure handlers based on mime-types and pass them to the initializer.\n Example:\n .. code-block:: python\n from langchain.document_loaders.parsers.generic import MimeTypeBasedParser\n parser = MimeTypeBasedParser(\n handlers={\n \"application/pdf\": ...,\n },\n fallback_parser=...,\n )\n \"\"\"\n def __init__(\n self,\n handlers: Mapping[str, BaseBlobParser],\n *,\n fallback_parser: Optional[BaseBlobParser] = None,\n ) -> None:\n \"\"\"Define a parser that uses mime-types to determine how to parse a blob.\n Args:\n handlers: A mapping from mime-types to functions that take a blob, parse it\n and return a document.\n fallback_parser: A fallback_parser parser to use if the mime-type is not\n found in the handlers. If provided, this parser will be\n used to parse blobs with all mime-types not found in\n the handlers.\n If not provided, a ValueError will be raised if the\n mime-type is not found in the handlers.\n \"\"\"\n self.handlers = handlers\n self.fallback_parser = fallback_parser", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/parsers/generic.html"} {"id": "3bfcd037c47e-1", "text": "\"\"\"\n self.handlers = handlers\n self.fallback_parser = fallback_parser\n[docs] def lazy_parse(self, blob: Blob) -> Iterator[Document]:\n \"\"\"Load documents from a blob.\"\"\"\n mimetype = blob.mimetype\n if mimetype is None:\n raise ValueError(f\"{blob} does not have a mimetype.\")\n if mimetype in self.handlers:\n handler = self.handlers[mimetype]\n yield from handler.lazy_parse(blob)\n else:\n if self.fallback_parser is not None:\n yield from self.fallback_parser.lazy_parse(blob)\n else:\n raise ValueError(f\"Unsupported mime type: {mimetype}\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/parsers/generic.html"} {"id": "f5ae3ddde591-0", "text": "Source code for langchain.document_loaders.parsers.audio\nfrom typing import Iterator, Optional\nfrom langchain.document_loaders.base import BaseBlobParser\nfrom langchain.document_loaders.blob_loaders import Blob\nfrom langchain.schema import Document\n[docs]class OpenAIWhisperParser(BaseBlobParser):\n \"\"\"Transcribe and parse audio files.\n Audio transcription is with OpenAI Whisper model.\"\"\"\n def __init__(self, api_key: Optional[str] = None):\n self.api_key = api_key\n[docs] def lazy_parse(self, blob: Blob) -> Iterator[Document]:\n \"\"\"Lazily parse the blob.\"\"\"\n import io\n try:\n import openai\n except ImportError:\n raise ValueError(\n \"openai package not 
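To make the MimeTypeBasedParser docstring example concrete (the handler choice here is illustrative, not prescribed by the source):

.. code-block:: python

    from langchain.document_loaders.blob_loaders import Blob
    from langchain.document_loaders.parsers.generic import MimeTypeBasedParser
    from langchain.document_loaders.parsers.pdf import PDFMinerParser
    from langchain.document_loaders.parsers.txt import TextParser

    parser = MimeTypeBasedParser(
        handlers={
            "application/pdf": PDFMinerParser(),
            "text/plain": TextParser(),
        },
        fallback_parser=None,  # unknown mime-types raise ValueError
    )
    # Blob.from_path guesses "text/plain" from the .txt suffix.
    docs = list(parser.lazy_parse(Blob.from_path("notes.txt")))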
found, please install it with \"\n \"`pip install openai`\"\n )\n try:\n from pydub import AudioSegment\n except ImportError:\n raise ValueError(\n \"pydub package not found, please install it with \" \"`pip install pydub`\"\n )\n # Set the API key if provided\n if self.api_key:\n openai.api_key = self.api_key\n # Audio file from disk\n audio = AudioSegment.from_file(blob.path)\n # Define the duration of each chunk in minutes\n # Need to meet 25MB size limit for Whisper API\n chunk_duration = 20\n chunk_duration_ms = chunk_duration * 60 * 1000\n # Split the audio into chunk_duration_ms chunks\n for split_number, i in enumerate(range(0, len(audio), chunk_duration_ms)):\n # Audio chunk\n chunk = audio[i : i + chunk_duration_ms]\n file_obj = io.BytesIO(chunk.export(format=\"mp3\").read())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/parsers/audio.html"} {"id": "f5ae3ddde591-1", "text": "file_obj = io.BytesIO(chunk.export(format=\"mp3\").read())\n if blob.source is not None:\n file_obj.name = blob.source + f\"_part_{split_number}.mp3\"\n else:\n file_obj.name = f\"part_{split_number}.mp3\"\n # Transcribe\n print(f\"Transcribing part {split_number+1}!\")\n transcript = openai.Audio.transcribe(\"whisper-1\", file_obj)\n yield Document(\n page_content=transcript.text,\n metadata={\"source\": blob.source, \"chunk\": split_number},\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/parsers/audio.html"} {"id": "a4b78bffd36f-0", "text": "Source code for langchain.document_loaders.parsers.registry\n\"\"\"Module includes a registry of default parser configurations.\"\"\"\nfrom langchain.document_loaders.base import BaseBlobParser\nfrom langchain.document_loaders.parsers.generic import MimeTypeBasedParser\nfrom langchain.document_loaders.parsers.pdf import PyMuPDFParser\nfrom langchain.document_loaders.parsers.txt import TextParser\ndef _get_default_parser() -> BaseBlobParser:\n \"\"\"Get default mime-type based parser.\"\"\"\n return MimeTypeBasedParser(\n handlers={\n \"application/pdf\": PyMuPDFParser(),\n \"text/plain\": TextParser(),\n },\n fallback_parser=None,\n )\n_REGISTRY = {\n \"default\": _get_default_parser,\n}\n# PUBLIC API\n[docs]def get_parser(parser_name: str) -> BaseBlobParser:\n \"\"\"Get a parser by parser name.\"\"\"\n if parser_name not in _REGISTRY:\n raise ValueError(f\"Unknown parser combination: {parser_name}\")\n return _REGISTRY[parser_name]()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/parsers/registry.html"} {"id": "267be0d10ba5-0", "text": "Source code for langchain.document_loaders.parsers.grobid\nfrom typing import Dict, Iterator, List, Union\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseBlobParser\nfrom langchain.document_loaders.blob_loaders import Blob\n[docs]class ServerUnavailableException(Exception):\n pass\n[docs]class GrobidParser(BaseBlobParser):\n \"\"\"Loader that uses Grobid to load article PDF files.\"\"\"\n def __init__(\n self,\n segment_sentences: bool,\n grobid_server: str = \"http://localhost:8070/api/processFulltextDocument\",\n ) -> None:\n self.segment_sentences = segment_sentences\n self.grobid_server = grobid_server\n try:\n requests.get(grobid_server)\n except requests.exceptions.RequestException:\n print(\n \"GROBID server does not appear up and running, \\\n please ensure Grobid is installed and the server is running\"\n )\n raise 
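The registry is the usual entry point to the handlers wired above. A sketch, assuming parse() from BaseBlobParser (the eager counterpart of lazy_parse(), which collects the iterator into a list):

.. code-block:: python

    from langchain.document_loaders.blob_loaders import Blob
    from langchain.document_loaders.parsers.registry import get_parser

    parser = get_parser("default")  # PyMuPDF for PDFs, TextParser for plain text
    docs = parser.parse(Blob.from_data(b"hello world", mime_type="text/plain"))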
ServerUnavailableException\n[docs] def process_xml(\n self, file_path: str, xml_data: str, segment_sentences: bool\n ) -> Iterator[Document]:\n \"\"\"Process the XML file from Grobin.\"\"\"\n try:\n from bs4 import BeautifulSoup\n except ImportError:\n raise ImportError(\n \"`bs4` package not found, please install it with \" \"`pip install bs4`\"\n )\n soup = BeautifulSoup(xml_data, \"xml\")\n sections = soup.find_all(\"div\")\n title = soup.find_all(\"title\")[0].text\n chunks = []\n for section in sections:\n sect = section.find(\"head\")\n if sect is not None:\n for i, paragraph in enumerate(section.find_all(\"p\")):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/parsers/grobid.html"} {"id": "267be0d10ba5-1", "text": "for i, paragraph in enumerate(section.find_all(\"p\")):\n chunk_bboxes = []\n paragraph_text = []\n for i, sentence in enumerate(paragraph.find_all(\"s\")):\n paragraph_text.append(sentence.text)\n sbboxes = []\n for bbox in sentence.get(\"coords\").split(\";\"):\n box = bbox.split(\",\")\n sbboxes.append(\n {\n \"page\": box[0],\n \"x\": box[1],\n \"y\": box[2],\n \"h\": box[3],\n \"w\": box[4],\n }\n )\n chunk_bboxes.append(sbboxes)\n if segment_sentences is True:\n fpage, lpage = sbboxes[0][\"page\"], sbboxes[-1][\"page\"]\n sentence_dict = {\n \"text\": sentence.text,\n \"para\": str(i),\n \"bboxes\": [sbboxes],\n \"section_title\": sect.text,\n \"section_number\": sect.get(\"n\"),\n \"pages\": (fpage, lpage),\n }\n chunks.append(sentence_dict)\n if segment_sentences is not True:\n fpage, lpage = (\n chunk_bboxes[0][0][\"page\"],\n chunk_bboxes[-1][-1][\"page\"],\n )\n paragraph_dict = {\n \"text\": \"\".join(paragraph_text),\n \"para\": str(i),\n \"bboxes\": chunk_bboxes,\n \"section_title\": sect.text,\n \"section_number\": sect.get(\"n\"),\n \"pages\": (fpage, lpage),\n }\n chunks.append(paragraph_dict)\n yield from [\n Document(\n page_content=chunk[\"text\"],\n metadata=dict(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/parsers/grobid.html"} {"id": "267be0d10ba5-2", "text": "Document(\n page_content=chunk[\"text\"],\n metadata=dict(\n {\n \"text\": str(chunk[\"text\"]),\n \"para\": str(chunk[\"para\"]),\n \"bboxes\": str(chunk[\"bboxes\"]),\n \"pages\": str(chunk[\"pages\"]),\n \"section_title\": str(chunk[\"section_title\"]),\n \"section_number\": str(chunk[\"section_number\"]),\n \"paper_title\": str(title),\n \"file_path\": str(file_path),\n }\n ),\n )\n for chunk in chunks\n ]\n[docs] def lazy_parse(self, blob: Blob) -> Iterator[Document]:\n file_path = blob.source\n if file_path is None:\n raise ValueError(\"blob.source cannot be None.\")\n pdf = open(file_path, \"rb\")\n files = {\"input\": (file_path, pdf, \"application/pdf\", {\"Expires\": \"0\"})}\n try:\n data: Dict[str, Union[str, List[str]]] = {}\n for param in [\"generateIDs\", \"consolidateHeader\", \"segmentSentences\"]:\n data[param] = \"1\"\n data[\"teiCoordinates\"] = [\"head\", \"s\"]\n files = files or {}\n r = requests.request(\n \"POST\",\n self.grobid_server,\n headers=None,\n params=None,\n files=files,\n data=data,\n timeout=60,\n )\n xml_data = r.text\n except requests.exceptions.ReadTimeout:\n xml_data = None\n if xml_data is None:\n return iter([])\n else:\n return self.process_xml(file_path, xml_data, self.segment_sentences)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/parsers/grobid.html"} {"id": "9fc4710e2006-0", "text": "Source code for 
langchain.document_loaders.parsers.txt\n\"\"\"Module for parsing text files..\"\"\"\nfrom typing import Iterator\nfrom langchain.document_loaders.base import BaseBlobParser\nfrom langchain.document_loaders.blob_loaders import Blob\nfrom langchain.schema import Document\n[docs]class TextParser(BaseBlobParser):\n \"\"\"Parser for text blobs.\"\"\"\n[docs] def lazy_parse(self, blob: Blob) -> Iterator[Document]:\n \"\"\"Lazily parse the blob.\"\"\"\n yield Document(page_content=blob.as_string(), metadata={\"source\": blob.source})", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/parsers/txt.html"} {"id": "cb9b885267df-0", "text": "Source code for langchain.document_loaders.parsers.language.code_segmenter\nfrom abc import ABC, abstractmethod\nfrom typing import List\n[docs]class CodeSegmenter(ABC):\n \"\"\"The abstract class for the code segmenter.\"\"\"\n def __init__(self, code: str):\n self.code = code\n[docs] def is_valid(self) -> bool:\n return True\n[docs] @abstractmethod\n def simplify_code(self) -> str:\n raise NotImplementedError # pragma: no cover\n[docs] @abstractmethod\n def extract_functions_classes(self) -> List[str]:\n raise NotImplementedError # pragma: no cover", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/parsers/language/code_segmenter.html"} {"id": "8c0fd5658871-0", "text": "Source code for langchain.document_loaders.parsers.language.python\nimport ast\nfrom typing import Any, List\nfrom langchain.document_loaders.parsers.language.code_segmenter import CodeSegmenter\n[docs]class PythonSegmenter(CodeSegmenter):\n \"\"\"The code segmenter for Python.\"\"\"\n def __init__(self, code: str):\n super().__init__(code)\n self.source_lines = self.code.splitlines()\n[docs] def is_valid(self) -> bool:\n try:\n ast.parse(self.code)\n return True\n except SyntaxError:\n return False\n def _extract_code(self, node: Any) -> str:\n start = node.lineno - 1\n end = node.end_lineno\n return \"\\n\".join(self.source_lines[start:end])\n[docs] def extract_functions_classes(self) -> List[str]:\n tree = ast.parse(self.code)\n functions_classes = []\n for node in ast.iter_child_nodes(tree):\n if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):\n functions_classes.append(self._extract_code(node))\n return functions_classes\n[docs] def simplify_code(self) -> str:\n tree = ast.parse(self.code)\n simplified_lines = self.source_lines[:]\n for node in ast.iter_child_nodes(tree):\n if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):\n start = node.lineno - 1\n simplified_lines[start] = f\"# Code for: {simplified_lines[start]}\"\n assert isinstance(node.end_lineno, int)\n for line_num in range(start + 1, node.end_lineno):\n simplified_lines[line_num] = None # type: ignore\n return \"\\n\".join(line for line in simplified_lines if line is not None)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/parsers/language/python.html"} {"id": "38b7ebbea584-0", "text": "Source code for langchain.document_loaders.parsers.language.javascript\nfrom typing import Any, List\nfrom langchain.document_loaders.parsers.language.code_segmenter import CodeSegmenter\n[docs]class JavaScriptSegmenter(CodeSegmenter):\n \"\"\"The code segmenter for JavaScript.\"\"\"\n def __init__(self, code: str):\n super().__init__(code)\n self.source_lines = self.code.splitlines()\n try:\n import esprima # noqa: F401\n except ImportError:\n raise ImportError(\n \"Could 
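PythonSegmenter is usable on its own with nothing beyond the standard-library ast module; a self-contained sketch of both outputs:

.. code-block:: python

    from langchain.document_loaders.parsers.language.python import PythonSegmenter

    code = 'def add(a, b):\n    return a + b\n\nTOTAL = add(1, 2)\n'
    segmenter = PythonSegmenter(code)
    assert segmenter.is_valid()
    segmenter.extract_functions_classes()
    # ['def add(a, b):\n    return a + b']
    segmenter.simplify_code()
    # '# Code for: def add(a, b):\n\nTOTAL = add(1, 2)'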
not import esprima Python package. \"\n \"Please install it with `pip install esprima`.\"\n )\n[docs] def is_valid(self) -> bool:\n import esprima\n try:\n esprima.parseScript(self.code)\n return True\n except esprima.Error:\n return False\n def _extract_code(self, node: Any) -> str:\n start = node.loc.start.line - 1\n end = node.loc.end.line\n return \"\\n\".join(self.source_lines[start:end])\n[docs] def extract_functions_classes(self) -> List[str]:\n import esprima\n tree = esprima.parseScript(self.code, loc=True)\n functions_classes = []\n for node in tree.body:\n if isinstance(\n node,\n (esprima.nodes.FunctionDeclaration, esprima.nodes.ClassDeclaration),\n ):\n functions_classes.append(self._extract_code(node))\n return functions_classes\n[docs] def simplify_code(self) -> str:\n import esprima\n tree = esprima.parseScript(self.code, loc=True)\n simplified_lines = self.source_lines[:]\n for node in tree.body:\n if isinstance(\n node,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/parsers/language/javascript.html"} {"id": "38b7ebbea584-1", "text": "for node in tree.body:\n if isinstance(\n node,\n (esprima.nodes.FunctionDeclaration, esprima.nodes.ClassDeclaration),\n ):\n start = node.loc.start.line - 1\n simplified_lines[start] = f\"// Code for: {simplified_lines[start]}\"\n for line_num in range(start + 1, node.loc.end.line):\n simplified_lines[line_num] = None # type: ignore\n return \"\\n\".join(line for line in simplified_lines if line is not None)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/parsers/language/javascript.html"} {"id": "fc76e0c31d47-0", "text": "Source code for langchain.document_loaders.parsers.language.language_parser\nfrom typing import Any, Dict, Iterator, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseBlobParser\nfrom langchain.document_loaders.blob_loaders import Blob\nfrom langchain.document_loaders.parsers.language.javascript import JavaScriptSegmenter\nfrom langchain.document_loaders.parsers.language.python import PythonSegmenter\nfrom langchain.text_splitter import Language\nLANGUAGE_EXTENSIONS: Dict[str, str] = {\n \"py\": Language.PYTHON,\n \"js\": Language.JS,\n}\nLANGUAGE_SEGMENTERS: Dict[str, Any] = {\n Language.PYTHON: PythonSegmenter,\n Language.JS: JavaScriptSegmenter,\n}\n[docs]class LanguageParser(BaseBlobParser):\n \"\"\"\n Language parser that split code using the respective language syntax.\n Each top-level function and class in the code is loaded into separate documents.\n Furthermore, an extra document is generated, containing the remaining top-level code\n that excludes the already segmented functions and classes.\n This approach can potentially improve the accuracy of QA models over source code.\n Currently, the supported languages for code parsing are Python and JavaScript.\n The language used for parsing can be configured, along with the minimum number of\n lines required to activate the splitting based on syntax.\n Examples:\n .. 
code-block:: python\n from langchain.text_splitter import Language\n from langchain.document_loaders.generic import GenericLoader\n from langchain.document_loaders.parsers import LanguageParser\n loader = GenericLoader.from_filesystem(\n \"./code\",\n glob=\"**/*\",\n suffixes=[\".py\", \".js\"],\n parser=LanguageParser()\n )\n docs = loader.load()\n Example instantiations to manually select the language:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/parsers/language/language_parser.html"} {"id": "fc76e0c31d47-1", "text": "docs = loader.load()\n Example instantiations to manually select the language:\n .. code-block:: python\n from langchain.text_splitter import Language\n loader = GenericLoader.from_filesystem(\n \"./code\",\n glob=\"**/*\",\n suffixes=[\".py\"],\n parser=LanguageParser(language=Language.PYTHON)\n )\n Example instantiations to set number of lines threshold:\n .. code-block:: python\n loader = GenericLoader.from_filesystem(\n \"./code\",\n glob=\"**/*\",\n suffixes=[\".py\"],\n parser=LanguageParser(parser_threshold=200)\n )\n \"\"\"\n def __init__(self, language: Optional[Language] = None, parser_threshold: int = 0):\n \"\"\"\n Language parser that splits code using the respective language syntax.\n Args:\n language: If None (default), it will try to infer language from source.\n parser_threshold: Minimum lines needed to activate parsing (0 by default).\n \"\"\"\n self.language = language\n self.parser_threshold = parser_threshold\n[docs] def lazy_parse(self, blob: Blob) -> Iterator[Document]:\n code = blob.as_string()\n language = self.language or (\n LANGUAGE_EXTENSIONS.get(blob.source.rsplit(\".\", 1)[-1])\n if isinstance(blob.source, str)\n else None\n )\n if language is None:\n yield Document(\n page_content=code,\n metadata={\n \"source\": blob.source,\n },\n )\n return\n if self.parser_threshold >= len(code.splitlines()):\n yield Document(\n page_content=code,\n metadata={\n \"source\": blob.source,\n \"language\": language,\n },\n )
HTML files.\"\"\"\n def __init__(\n self,\n *,\n features: str = \"lxml\",\n get_text_separator: str = \"\",\n **kwargs: Any,\n ) -> None:\n \"\"\"Initialize a bs4 based HTML parser.\"\"\"\n try:\n import bs4 # noqa:F401\n except ImportError:\n raise ValueError(\n \"beautifulsoup4 package not found, please install it with \"\n \"`pip install beautifulsoup4`\"\n )\n self.bs_kwargs = {\"features\": features, **kwargs}\n self.get_text_separator = get_text_separator\n[docs] def lazy_parse(self, blob: Blob) -> Iterator[Document]:\n \"\"\"Load HTML document into document objects.\"\"\"\n from bs4 import BeautifulSoup\n with blob.as_bytes_io() as f:\n soup = BeautifulSoup(f, **self.bs_kwargs)\n text = soup.get_text(self.get_text_separator)\n if soup.title:\n title = str(soup.title.string)\n else:\n title = \"\"\n metadata: Dict[str, Union[str, None]] = {\n \"source\": blob.source,\n \"title\": title,\n }\n yield Document(page_content=text, metadata=metadata)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/parsers/html/bs4.html"} {"id": "57cd1d29eaa7-0", "text": "Source code for langchain.document_loaders.blob_loaders.schema\n\"\"\"Schema for Blobs and Blob Loaders.\nThe goal is to facilitate decoupling of content loading from content parsing code.\nIn addition, content loading code should provide a lazy loading interface by default.\n\"\"\"\nfrom __future__ import annotations\nimport contextlib\nimport mimetypes\nfrom abc import ABC, abstractmethod\nfrom io import BufferedReader, BytesIO\nfrom pathlib import PurePath\nfrom typing import Any, Generator, Iterable, Mapping, Optional, Union\nfrom pydantic import BaseModel, root_validator\nPathLike = Union[str, PurePath]\n[docs]class Blob(BaseModel):\n \"\"\"A blob is used to represent raw data by either reference or value.\n Provides an interface to materialize the blob in different representations, and\n help to decouple the development of data loaders from the downstream parsing of\n the raw data.\n Inspired by: https://developer.mozilla.org/en-US/docs/Web/API/Blob\n \"\"\"\n data: Union[bytes, str, None] # Raw data\n mimetype: Optional[str] = None # Not to be confused with a file extension\n encoding: str = \"utf-8\" # Use utf-8 as default encoding, if decoding to string\n # Location where the original content was found\n # Represent location on the local file system\n # Useful for situations where downstream code assumes it must work with file paths\n # rather than in-memory content.\n path: Optional[PathLike] = None\n[docs] class Config:\n arbitrary_types_allowed = True\n frozen = True\n @property\n def source(self) -> Optional[str]:\n \"\"\"The source location of the blob as string if known otherwise none.\"\"\"\n return str(self.path) if self.path else None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blob_loaders/schema.html"} {"id": "57cd1d29eaa7-1", "text": "return str(self.path) if self.path else None\n[docs] @root_validator(pre=True)\n def check_blob_is_valid(cls, values: Mapping[str, Any]) -> Mapping[str, Any]:\n \"\"\"Verify that either data or path is provided.\"\"\"\n if \"data\" not in values and \"path\" not in values:\n raise ValueError(\"Either data or path must be provided\")\n return values\n[docs] def as_string(self) -> str:\n \"\"\"Read data as a string.\"\"\"\n if self.data is None and self.path:\n with open(str(self.path), \"r\", encoding=self.encoding) as f:\n return f.read()\n elif isinstance(self.data, bytes):\n return 
self.data.decode(self.encoding)\n elif isinstance(self.data, str):\n return self.data\n else:\n raise ValueError(f\"Unable to get string for blob {self}\")\n[docs] def as_bytes(self) -> bytes:\n \"\"\"Read data as bytes.\"\"\"\n if isinstance(self.data, bytes):\n return self.data\n elif isinstance(self.data, str):\n return self.data.encode(self.encoding)\n elif self.data is None and self.path:\n with open(str(self.path), \"rb\") as f:\n return f.read()\n else:\n raise ValueError(f\"Unable to get bytes for blob {self}\")\n[docs] @contextlib.contextmanager\n def as_bytes_io(self) -> Generator[Union[BytesIO, BufferedReader], None, None]:\n \"\"\"Read data as a byte stream.\"\"\"\n if isinstance(self.data, bytes):\n yield BytesIO(self.data)\n elif self.data is None and self.path:\n with open(str(self.path), \"rb\") as f:\n yield f\n else:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blob_loaders/schema.html"} {"id": "57cd1d29eaa7-2", "text": "yield f\n else:\n raise NotImplementedError(f\"Unable to convert blob {self}\")\n[docs] @classmethod\n def from_path(\n cls,\n path: PathLike,\n *,\n encoding: str = \"utf-8\",\n mime_type: Optional[str] = None,\n guess_type: bool = True,\n ) -> Blob:\n \"\"\"Load the blob from a path like object.\n Args:\n path: path like object to file to be read\n encoding: Encoding to use if decoding the bytes into a string\n mime_type: if provided, will be set as the mime-type of the data\n guess_type: If True, the mimetype will be guessed from the file extension,\n if a mime-type was not provided\n Returns:\n Blob instance\n \"\"\"\n if mime_type is None and guess_type:\n _mimetype = mimetypes.guess_type(path)[0] if guess_type else None\n else:\n _mimetype = mime_type\n # We do not load the data immediately, instead we treat the blob as a\n # reference to the underlying data.\n return cls(data=None, mimetype=_mimetype, encoding=encoding, path=path)\n[docs] @classmethod\n def from_data(\n cls,\n data: Union[str, bytes],\n *,\n encoding: str = \"utf-8\",\n mime_type: Optional[str] = None,\n path: Optional[str] = None,\n ) -> Blob:\n \"\"\"Initialize the blob from in-memory data.\n Args:\n data: the in-memory data associated with the blob\n encoding: Encoding to use if decoding the bytes into a string\n mime_type: if provided, will be set as the mime-type of the data", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blob_loaders/schema.html"} {"id": "57cd1d29eaa7-3", "text": "mime_type: if provided, will be set as the mime-type of the data\n path: if provided, will be set as the source from which the data came\n Returns:\n Blob instance\n \"\"\"\n return cls(data=data, mimetype=mime_type, encoding=encoding, path=path)\n def __repr__(self) -> str:\n \"\"\"Define the blob representation.\"\"\"\n str_repr = f\"Blob {id(self)}\"\n if self.source:\n str_repr += f\" {self.source}\"\n return str_repr\n[docs]class BlobLoader(ABC):\n \"\"\"Abstract interface for blob loaders implementation.\n Implementer should be able to load raw content from a storage system according\n to some criteria and return the raw content lazily as a stream of blobs.\n \"\"\"\n[docs] @abstractmethod\n def yield_blobs(\n self,\n ) -> Iterable[Blob]:\n \"\"\"A lazy loader for raw data represented by LangChain's Blob object.\n Returns:\n A generator over blobs\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blob_loaders/schema.html"} {"id": "d413a5a1014f-0", 
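A sketch pairing Blob with the BS4HTMLParser defined above. Since as_bytes_io() serves either in-memory bytes or an on-disk path, the parser never cares where the blob came from (features="html.parser" here just avoids the lxml dependency; file names are placeholders):

.. code-block:: python

    from langchain.document_loaders.blob_loaders import Blob
    from langchain.document_loaders.parsers.html.bs4 import BS4HTMLParser

    # By value: the bytes stay in memory; path is recorded only as provenance.
    blob = Blob.from_data(
        b"<html><title>Hi</title><body>Hello</body></html>",
        mime_type="text/html",
        path="inline.html",
    )
    doc = next(BS4HTMLParser(features="html.parser").lazy_parse(blob))
    doc.metadata["title"]  # 'Hi'

    # By reference: nothing is read until as_string()/as_bytes()/as_bytes_io().
    blob = Blob.from_path("page.html")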
"text": "Source code for langchain.document_loaders.blob_loaders.file_system\n\"\"\"Use to load blobs from the local file system.\"\"\"\nfrom pathlib import Path\nfrom typing import Callable, Iterable, Iterator, Optional, Sequence, TypeVar, Union\nfrom langchain.document_loaders.blob_loaders.schema import Blob, BlobLoader\nT = TypeVar(\"T\")\ndef _make_iterator(\n length_func: Callable[[], int], show_progress: bool = False\n) -> Callable[[Iterable[T]], Iterator[T]]:\n \"\"\"Create a function that optionally wraps an iterable in tqdm.\"\"\"\n if show_progress:\n try:\n from tqdm.auto import tqdm\n except ImportError:\n raise ImportError(\n \"You must install tqdm to use show_progress=True.\"\n \"You can install tqdm with `pip install tqdm`.\"\n )\n # Make sure to provide `total` here so that tqdm can show\n # a progress bar that takes into account the total number of files.\n def _with_tqdm(iterable: Iterable[T]) -> Iterator[T]:\n \"\"\"Wrap an iterable in a tqdm progress bar.\"\"\"\n return tqdm(iterable, total=length_func())\n iterator = _with_tqdm\n else:\n iterator = iter # type: ignore\n return iterator\n# PUBLIC API\n[docs]class FileSystemBlobLoader(BlobLoader):\n \"\"\"Blob loader for the local file system.\n Example:\n .. code-block:: python\n from langchain.document_loaders.blob_loaders import FileSystemBlobLoader\n loader = FileSystemBlobLoader(\"/path/to/directory\")\n for blob in loader.yield_blobs():\n print(blob)\n \"\"\"\n def __init__(\n self,\n path: Union[str, Path],\n *,\n glob: str = \"**/[!.]*\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blob_loaders/file_system.html"} {"id": "d413a5a1014f-1", "text": "*,\n glob: str = \"**/[!.]*\",\n suffixes: Optional[Sequence[str]] = None,\n show_progress: bool = False,\n ) -> None:\n \"\"\"Initialize with path to directory and how to glob over it.\n Args:\n path: Path to directory to load from\n glob: Glob pattern relative to the specified path\n by default set to pick up all non-hidden files\n suffixes: Provide to keep only files with these suffixes\n Useful when wanting to keep files with different suffixes\n Suffixes must include the dot, e.g. \".txt\"\n show_progress: If true, will show a progress bar as the files are loaded.\n This forces an iteration through all matching files\n to count them prior to loading them.\n Examples:\n ... 
code-block:: python\n # Recursively load all text files in a directory.\n loader = FileSystemBlobLoader(\"/path/to/directory\", glob=\"**/*.txt\")\n # Recursively load all non-hidden files in a directory.\n loader = FileSystemBlobLoader(\"/path/to/directory\", glob=\"**/[!.]*\")\n # Load all files in a directory without recursion.\n loader = FileSystemBlobLoader(\"/path/to/directory\", glob=\"*\")\n \"\"\"\n if isinstance(path, Path):\n _path = path\n elif isinstance(path, str):\n _path = Path(path)\n else:\n raise TypeError(f\"Expected str or Path, got {type(path)}\")\n self.path = _path\n self.glob = glob\n self.suffixes = set(suffixes or [])\n self.show_progress = show_progress\n[docs] def yield_blobs(\n self,\n ) -> Iterable[Blob]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blob_loaders/file_system.html"} {"id": "d413a5a1014f-2", "text": "self,\n ) -> Iterable[Blob]:\n \"\"\"Yield blobs that match the requested pattern.\"\"\"\n iterator = _make_iterator(\n length_func=self.count_matching_files, show_progress=self.show_progress\n )\n for path in iterator(self._yield_paths()):\n yield Blob.from_path(path)\n def _yield_paths(self) -> Iterable[Path]:\n \"\"\"Yield paths that match the requested pattern.\"\"\"\n paths = self.path.glob(self.glob)\n for path in paths:\n if path.is_file():\n if self.suffixes and path.suffix not in self.suffixes:\n continue\n yield path\n[docs] def count_matching_files(self) -> int:\n \"\"\"Count files that match the pattern without loading them.\"\"\"\n # Carry out a full iteration to count the files without\n # materializing anything expensive in memory.\n num = 0\n for _ in self._yield_paths():\n num += 1\n return num", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blob_loaders/file_system.html"} {"id": "0a7335eef2fd-0", "text": "Source code for langchain.document_loaders.blob_loaders.youtube_audio\nfrom typing import Iterable, List\nfrom langchain.document_loaders.blob_loaders import FileSystemBlobLoader\nfrom langchain.document_loaders.blob_loaders.schema import Blob, BlobLoader\n[docs]class YoutubeAudioLoader(BlobLoader):\n \"\"\"Load YouTube urls as audio file(s).\"\"\"\n def __init__(self, urls: List[str], save_dir: str):\n if not isinstance(urls, list):\n raise TypeError(\"urls must be a list\")\n self.urls = urls\n self.save_dir = save_dir\n[docs] def yield_blobs(self) -> Iterable[Blob]:\n \"\"\"Yield audio blobs for each url.\"\"\"\n try:\n import yt_dlp\n except ImportError:\n raise ValueError(\n \"yt_dlp package not found, please install it with \"\n \"`pip install yt_dlp`\"\n )\n # Use yt_dlp to download audio given a YouTube url\n ydl_opts = {\n \"format\": \"m4a/bestaudio/best\",\n \"noplaylist\": True,\n \"outtmpl\": self.save_dir + \"/%(title)s.%(ext)s\",\n \"postprocessors\": [\n {\n \"key\": \"FFmpegExtractAudio\",\n \"preferredcodec\": \"m4a\",\n }\n ],\n }\n for url in self.urls:\n # Download file\n with yt_dlp.YoutubeDL(ydl_opts) as ydl:\n ydl.download(url)\n # Yield the written blobs\n loader = FileSystemBlobLoader(self.save_dir, glob=\"*.m4a\")\n for blob in loader.yield_blobs():\n yield blob", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blob_loaders/youtube_audio.html"} {"id": "8dda6f6ab2f0-0", "text": "Source code for langchain.experimental.plan_and_execute.agent_executor\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import 
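The blob loaders compose directly with the parsers earlier in this reference. A pipeline sketch: download the audio for each video (needs yt_dlp plus ffmpeg), then transcribe every ~20-minute chunk through the Whisper API (needs openai and pydub); the URL and API key are placeholders:

.. code-block:: python

    from langchain.document_loaders.blob_loaders.youtube_audio import YoutubeAudioLoader
    from langchain.document_loaders.parsers.audio import OpenAIWhisperParser

    loader = YoutubeAudioLoader(
        ["https://www.youtube.com/watch?v=..."], save_dir="./audio"
    )
    parser = OpenAIWhisperParser(api_key="sk-...")
    docs = [doc for blob in loader.yield_blobs() for doc in parser.lazy_parse(blob)]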
CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.experimental.plan_and_execute.executors.base import BaseExecutor\nfrom langchain.experimental.plan_and_execute.planners.base import BasePlanner\nfrom langchain.experimental.plan_and_execute.schema import (\n BaseStepContainer,\n ListStepContainer,\n)\n[docs]class PlanAndExecute(Chain):\n planner: BasePlanner\n executor: BaseExecutor\n step_container: BaseStepContainer = Field(default_factory=ListStepContainer)\n input_key: str = \"input\"\n output_key: str = \"output\"\n @property\n def input_keys(self) -> List[str]:\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n return [self.output_key]\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n plan = self.planner.plan(\n inputs,\n callbacks=run_manager.get_child() if run_manager else None,\n )\n if run_manager:\n run_manager.on_text(str(plan), verbose=self.verbose)\n for step in plan.steps:\n _new_inputs = {\n \"previous_steps\": self.step_container,\n \"current_step\": step,\n \"objective\": inputs[self.input_key],\n }\n new_inputs = {**_new_inputs, **inputs}\n response = self.executor.step(\n new_inputs,\n callbacks=run_manager.get_child() if run_manager else None,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/plan_and_execute/agent_executor.html"} {"id": "8dda6f6ab2f0-1", "text": "callbacks=run_manager.get_child() if run_manager else None,\n )\n if run_manager:\n run_manager.on_text(\n f\"*****\\n\\nStep: {step.value}\", verbose=self.verbose\n )\n run_manager.on_text(\n f\"\\n\\nResponse: {response.response}\", verbose=self.verbose\n )\n self.step_container.add_step(step, response)\n return {self.output_key: self.step_container.get_final_response()}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/plan_and_execute/agent_executor.html"} {"id": "48d786d280f5-0", "text": "Source code for langchain.experimental.plan_and_execute.schema\nfrom abc import abstractmethod\nfrom typing import List, Tuple\nfrom pydantic import BaseModel, Field\nfrom langchain.schema import BaseOutputParser\n[docs]class Step(BaseModel):\n value: str\n[docs]class Plan(BaseModel):\n steps: List[Step]\n[docs]class StepResponse(BaseModel):\n response: str\n[docs]class BaseStepContainer(BaseModel):\n[docs] @abstractmethod\n def add_step(self, step: Step, step_response: StepResponse) -> None:\n \"\"\"Add step and step response to the container.\"\"\"\n[docs] @abstractmethod\n def get_final_response(self) -> str:\n \"\"\"Return the final response based on steps taken.\"\"\"\n[docs]class ListStepContainer(BaseModel):\n steps: List[Tuple[Step, StepResponse]] = Field(default_factory=list)\n[docs] def add_step(self, step: Step, step_response: StepResponse) -> None:\n self.steps.append((step, step_response))\n[docs] def get_steps(self) -> List[Tuple[Step, StepResponse]]:\n return self.steps\n[docs] def get_final_response(self) -> str:\n return self.steps[-1][1].response\n[docs]class PlanOutputParser(BaseOutputParser):\n[docs] @abstractmethod\n def parse(self, text: str) -> Plan:\n \"\"\"Parse into a plan.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/plan_and_execute/schema.html"} {"id": "94e2b541a543-0", "text": "Source code for langchain.experimental.plan_and_execute.planners.base\nfrom abc import abstractmethod\nfrom typing import Any, List, Optional\nfrom 
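The schema types above are plain pydantic models, so the step container can be exercised directly; note that get_final_response() simply returns the response of the last step:

.. code-block:: python

    from langchain.experimental.plan_and_execute.schema import (
        ListStepContainer,
        Step,
        StepResponse,
    )

    container = ListStepContainer()
    container.add_step(
        Step(value="Look up both populations"),
        StepResponse(response="France 67M, Spain 47M"),
    )
    container.add_step(Step(value="Add them"), StepResponse(response="114M"))
    container.get_final_response()  # '114M'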
pydantic import BaseModel\nfrom langchain.callbacks.manager import Callbacks\nfrom langchain.chains.llm import LLMChain\nfrom langchain.experimental.plan_and_execute.schema import Plan, PlanOutputParser\n[docs]class BasePlanner(BaseModel):\n[docs] @abstractmethod\n def plan(self, inputs: dict, callbacks: Callbacks = None, **kwargs: Any) -> Plan:\n \"\"\"Given input, decide what to do.\"\"\"\n[docs] @abstractmethod\n async def aplan(\n self, inputs: dict, callbacks: Callbacks = None, **kwargs: Any\n ) -> Plan:\n \"\"\"Given input, decide what to do.\"\"\"\n[docs]class LLMPlanner(BasePlanner):\n llm_chain: LLMChain\n output_parser: PlanOutputParser\n stop: Optional[List] = None\n[docs] def plan(self, inputs: dict, callbacks: Callbacks = None, **kwargs: Any) -> Plan:\n \"\"\"Given input, decide what to do.\"\"\"\n llm_response = self.llm_chain.run(**inputs, stop=self.stop, callbacks=callbacks)\n return self.output_parser.parse(llm_response)\n[docs] async def aplan(\n self, inputs: dict, callbacks: Callbacks = None, **kwargs: Any\n ) -> Plan:\n \"\"\"Given input, decide what to do.\"\"\"\n llm_response = await self.llm_chain.arun(\n **inputs, stop=self.stop, callbacks=callbacks\n )\n return self.output_parser.parse(llm_response)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/plan_and_execute/planners/base.html"} {"id": "1178fb99dea6-0", "text": "Source code for langchain.experimental.plan_and_execute.planners.chat_planner\nimport re\nfrom langchain.chains import LLMChain\nfrom langchain.experimental.plan_and_execute.planners.base import LLMPlanner\nfrom langchain.experimental.plan_and_execute.schema import (\n Plan,\n PlanOutputParser,\n Step,\n)\nfrom langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.schema.messages import SystemMessage\nSYSTEM_PROMPT = (\n \"Let's first understand the problem and devise a plan to solve the problem.\"\n \" Please output the plan starting with the header 'Plan:' \"\n \"and then followed by a numbered list of steps. \"\n \"Please make the plan the minimum number of steps required \"\n \"to accurately complete the task. If the task is a question, \"\n \"the final step should almost always be 'Given the above steps taken, \"\n \"please respond to the users original question'. \"\n \"At the end of your plan, say '<END_OF_PLAN>'\"\n)\n[docs]class PlanningOutputParser(PlanOutputParser):\n[docs] def parse(self, text: str) -> Plan:\n steps = [Step(value=v) for v in re.split(\"\\n\\s*\\d+\\. \", text)[1:]]\n return Plan(steps=steps)\n[docs]def load_chat_planner(\n llm: BaseLanguageModel, system_prompt: str = SYSTEM_PROMPT\n) -> LLMPlanner:\n \"\"\"\n Load a chat planner.\n Args:\n llm: Language model.\n system_prompt: System prompt.\n Returns:\n LLMPlanner\n \"\"\"\n prompt_template = ChatPromptTemplate.from_messages(\n [", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/plan_and_execute/planners/chat_planner.html"} {"id": "1178fb99dea6-1", "text": "\"\"\"\n prompt_template = ChatPromptTemplate.from_messages(\n [\n SystemMessage(content=system_prompt),\n HumanMessagePromptTemplate.from_template(\"{input}\"),\n ]\n )\n llm_chain = LLMChain(llm=llm, prompt=prompt_template)\n return LLMPlanner(\n llm_chain=llm_chain,\n output_parser=PlanningOutputParser(),\n stop=[\"<END_OF_PLAN>\"],\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/plan_and_execute/planners/chat_planner.html"}
Defaults to False.\n Returns:\n ChainExecutor\n \"\"\"\n input_variables = [\"previous_steps\", \"current_step\", \"agent_scratchpad\"]\n template = HUMAN_MESSAGE_TEMPLATE\n if include_task_in_prompt:\n input_variables.append(\"objective\")\n template = TASK_PREFIX + template\n agent = StructuredChatAgent.from_llm_and_tools(\n llm,\n tools,\n human_message_template=template,\n input_variables=input_variables,\n )\n agent_executor = AgentExecutor.from_agent_and_tools(\n agent=agent, tools=tools, verbose=verbose\n )\n return ChainExecutor(chain=agent_executor)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/plan_and_execute/executors/agent_executor.html"} {"id": "930a42c4b86e-0", "text": "Source code for langchain.experimental.plan_and_execute.executors.base\nfrom abc import abstractmethod\nfrom typing import Any\nfrom pydantic import BaseModel\nfrom langchain.callbacks.manager import Callbacks\nfrom langchain.chains.base import Chain\nfrom langchain.experimental.plan_and_execute.schema import StepResponse\n[docs]class BaseExecutor(BaseModel):\n[docs] @abstractmethod\n def step(\n self, inputs: dict, callbacks: Callbacks = None, **kwargs: Any\n ) -> StepResponse:\n \"\"\"Take step.\"\"\"\n[docs] @abstractmethod\n async def astep(\n self, inputs: dict, callbacks: Callbacks = None, **kwargs: Any\n ) -> StepResponse:\n \"\"\"Take step.\"\"\"\n[docs]class ChainExecutor(BaseExecutor):\n chain: Chain\n[docs] def step(\n self, inputs: dict, callbacks: Callbacks = None, **kwargs: Any\n ) -> StepResponse:\n \"\"\"Take step.\"\"\"\n response = self.chain.run(**inputs, callbacks=callbacks)\n return StepResponse(response=response)\n[docs] async def astep(\n self, inputs: dict, callbacks: Callbacks = None, **kwargs: Any\n ) -> StepResponse:\n \"\"\"Take step.\"\"\"\n response = await self.chain.arun(**inputs, callbacks=callbacks)\n return StepResponse(response=response)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/plan_and_execute/executors/base.html"} {"id": "f051eb967f8d-0", "text": "Source code for langchain.experimental.autonomous_agents.autogpt.prompt\nimport time\nfrom typing import Any, Callable, List\nfrom pydantic import BaseModel\nfrom langchain.experimental.autonomous_agents.autogpt.prompt_generator import get_prompt\nfrom langchain.prompts.chat import (\n BaseChatPromptTemplate,\n)\nfrom langchain.schema.messages import BaseMessage, HumanMessage, SystemMessage\nfrom langchain.tools.base import BaseTool\nfrom langchain.vectorstores.base import VectorStoreRetriever\n[docs]class AutoGPTPrompt(BaseChatPromptTemplate, BaseModel):\n ai_name: str\n ai_role: str\n tools: List[BaseTool]\n token_counter: Callable[[str], int]\n send_token_limit: int = 4196\n[docs] def construct_full_prompt(self, goals: List[str]) -> str:\n prompt_start = (\n \"Your decisions must always be made independently \"\n \"without seeking user assistance.\\n\"\n \"Play to your strengths as an LLM and pursue simple \"\n \"strategies with no legal complications.\\n\"\n \"If you have completed all your tasks, make sure to \"\n 'use the \"finish\" command.'\n )\n # Construct full prompt\n full_prompt = (\n f\"You are {self.ai_name}, {self.ai_role}\\n{prompt_start}\\n\\nGOALS:\\n\\n\"\n )\n for i, goal in enumerate(goals):\n full_prompt += f\"{i+1}. 
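Putting the pieces together. First, the PlanningOutputParser regex in isolation, then a minimal end-to-end assembly (the echo tool and the question are illustrative, and an OpenAI key is assumed):

.. code-block:: python

    from langchain.experimental.plan_and_execute.planners.chat_planner import (
        PlanningOutputParser,
    )

    text = "Plan:\n1. Find the population of France.\n2. Find the population of Spain.\n3. Add the two numbers."
    [s.value for s in PlanningOutputParser().parse(text).steps]
    # ['Find the population of France.', 'Find the population of Spain.', 'Add the two numbers.']

.. code-block:: python

    from langchain.agents import Tool
    from langchain.chat_models import ChatOpenAI
    from langchain.experimental.plan_and_execute import (
        PlanAndExecute,
        load_agent_executor,
        load_chat_planner,
    )

    model = ChatOpenAI(temperature=0)
    tools = [Tool(name="Echo", func=lambda q: q, description="Echoes the input back.")]
    agent = PlanAndExecute(
        planner=load_chat_planner(model),
        executor=load_agent_executor(model, tools, verbose=True),
    )
    agent.run("What is the population of France plus the population of Spain?")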
{goal}\\n\"\n full_prompt += f\"\\n\\n{get_prompt(self.tools)}\"\n return full_prompt\n[docs] def format_messages(self, **kwargs: Any) -> List[BaseMessage]:\n base_prompt = SystemMessage(content=self.construct_full_prompt(kwargs[\"goals\"]))\n time_prompt = SystemMessage(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/autogpt/prompt.html"} {"id": "f051eb967f8d-1", "text": "time_prompt = SystemMessage(\n content=f\"The current time and date is {time.strftime('%c')}\"\n )\n used_tokens = self.token_counter(base_prompt.content) + self.token_counter(\n time_prompt.content\n )\n memory: VectorStoreRetriever = kwargs[\"memory\"]\n previous_messages = kwargs[\"messages\"]\n relevant_docs = memory.get_relevant_documents(str(previous_messages[-10:]))\n relevant_memory = [d.page_content for d in relevant_docs]\n relevant_memory_tokens = sum(\n [self.token_counter(doc) for doc in relevant_memory]\n )\n while used_tokens + relevant_memory_tokens > 2500:\n relevant_memory = relevant_memory[:-1]\n relevant_memory_tokens = sum(\n [self.token_counter(doc) for doc in relevant_memory]\n )\n content_format = (\n f\"This reminds you of these events \"\n f\"from your past:\\n{relevant_memory}\\n\\n\"\n )\n memory_message = SystemMessage(content=content_format)\n used_tokens += self.token_counter(memory_message.content)\n historical_messages: List[BaseMessage] = []\n for message in previous_messages[-10:][::-1]:\n message_tokens = self.token_counter(message.content)\n if used_tokens + message_tokens > self.send_token_limit - 1000:\n break\n historical_messages = [message] + historical_messages\n used_tokens += message_tokens\n input_message = HumanMessage(content=kwargs[\"user_input\"])\n messages: List[BaseMessage] = [base_prompt, time_prompt, memory_message]\n messages += historical_messages\n messages.append(input_message)\n return messages", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/autogpt/prompt.html"} {"id": "094234293ac3-0", "text": "Source code for langchain.experimental.autonomous_agents.autogpt.memory\nfrom typing import Any, Dict, List\nfrom pydantic import Field\nfrom langchain.memory.chat_memory import BaseChatMemory, get_prompt_input_key\nfrom langchain.vectorstores.base import VectorStoreRetriever\n[docs]class AutoGPTMemory(BaseChatMemory):\n retriever: VectorStoreRetriever = Field(exclude=True)\n \"\"\"VectorStoreRetriever object to connect to.\"\"\"\n @property\n def memory_variables(self) -> List[str]:\n return [\"chat_history\", \"relevant_context\"]\n def _get_prompt_input_key(self, inputs: Dict[str, Any]) -> str:\n \"\"\"Get the input key for the prompt.\"\"\"\n if self.input_key is None:\n return get_prompt_input_key(inputs, self.memory_variables)\n return self.input_key\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:\n input_key = self._get_prompt_input_key(inputs)\n query = inputs[input_key]\n docs = self.retriever.get_relevant_documents(query)\n return {\n \"chat_history\": self.chat_memory.messages[-10:],\n \"relevant_context\": docs,\n }", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/autogpt/memory.html"} {"id": "ec4f932dbf07-0", "text": "Source code for langchain.experimental.autonomous_agents.autogpt.prompt_generator\nimport json\nfrom typing import List\nfrom langchain.tools.base import BaseTool\nFINISH_NAME = \"finish\"\nclass PromptGenerator:\n \"\"\"A class for 
generating custom prompt strings.\n Does this based on constraints, commands, resources, and performance evaluations.\n \"\"\"\n def __init__(self) -> None:\n \"\"\"Initialize the PromptGenerator object.\n Starts with empty lists of constraints, commands, resources,\n and performance evaluations.\n \"\"\"\n self.constraints: List[str] = []\n self.commands: List[BaseTool] = []\n self.resources: List[str] = []\n self.performance_evaluation: List[str] = []\n self.response_format = {\n \"thoughts\": {\n \"text\": \"thought\",\n \"reasoning\": \"reasoning\",\n \"plan\": \"- short bulleted\\n- list that conveys\\n- long-term plan\",\n \"criticism\": \"constructive self-criticism\",\n \"speak\": \"thoughts summary to say to user\",\n },\n \"command\": {\"name\": \"command name\", \"args\": {\"arg name\": \"value\"}},\n }\n def add_constraint(self, constraint: str) -> None:\n \"\"\"\n Add a constraint to the constraints list.\n Args:\n constraint (str): The constraint to be added.\n \"\"\"\n self.constraints.append(constraint)\n def add_tool(self, tool: BaseTool) -> None:\n self.commands.append(tool)\n def _generate_command_string(self, tool: BaseTool) -> str:\n output = f\"{tool.name}: {tool.description}\"\n output += f\", args json schema: {json.dumps(tool.args)}\"\n return output", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/autogpt/prompt_generator.html"} {"id": "ec4f932dbf07-1", "text": "return output\n def add_resource(self, resource: str) -> None:\n \"\"\"\n Add a resource to the resources list.\n Args:\n resource (str): The resource to be added.\n \"\"\"\n self.resources.append(resource)\n def add_performance_evaluation(self, evaluation: str) -> None:\n \"\"\"\n Add a performance evaluation item to the performance_evaluation list.\n Args:\n evaluation (str): The evaluation item to be added.\n \"\"\"\n self.performance_evaluation.append(evaluation)\n def _generate_numbered_list(self, items: list, item_type: str = \"list\") -> str:\n \"\"\"\n Generate a numbered list from given items based on the item_type.\n Args:\n items (list): A list of items to be numbered.\n item_type (str, optional): The type of items in the list.\n Defaults to 'list'.\n Returns:\n str: The formatted numbered list.\n \"\"\"\n if item_type == \"command\":\n command_strings = [\n f\"{i + 1}. {self._generate_command_string(item)}\"\n for i, item in enumerate(items)\n ]\n finish_description = (\n \"use this to signal that you have finished all your objectives\"\n )\n finish_args = (\n '\"response\": \"final response to let '\n 'people know you have finished your objectives\"'\n )\n finish_string = (\n f\"{len(items) + 1}. {FINISH_NAME}: \"\n f\"{finish_description}, args: {finish_args}\"\n )\n return \"\\n\".join(command_strings + [finish_string])\n else:\n return \"\\n\".join(f\"{i+1}. 
{item}\" for i, item in enumerate(items))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/autogpt/prompt_generator.html"} {"id": "ec4f932dbf07-2", "text": "def generate_prompt_string(self) -> str:\n \"\"\"Generate a prompt string.\n Returns:\n str: The generated prompt string.\n \"\"\"\n formatted_response_format = json.dumps(self.response_format, indent=4)\n prompt_string = (\n f\"Constraints:\\n{self._generate_numbered_list(self.constraints)}\\n\\n\"\n f\"Commands:\\n\"\n f\"{self._generate_numbered_list(self.commands, item_type='command')}\\n\\n\"\n f\"Resources:\\n{self._generate_numbered_list(self.resources)}\\n\\n\"\n f\"Performance Evaluation:\\n\"\n f\"{self._generate_numbered_list(self.performance_evaluation)}\\n\\n\"\n f\"You should only respond in JSON format as described below \"\n f\"\\nResponse Format: \\n{formatted_response_format} \"\n f\"\\nEnsure the response can be parsed by Python json.loads\"\n )\n return prompt_string\n[docs]def get_prompt(tools: List[BaseTool]) -> str:\n \"\"\"This function generates a prompt string.\n It includes various constraints, commands, resources, and performance evaluations.\n Returns:\n str: The generated prompt string.\n \"\"\"\n # Initialize the PromptGenerator object\n prompt_generator = PromptGenerator()\n # Add constraints to the PromptGenerator object\n prompt_generator.add_constraint(\n \"~4000 word limit for short term memory. \"\n \"Your short term memory is short, \"\n \"so immediately save important information to files.\"\n )\n prompt_generator.add_constraint(\n \"If you are unsure how you previously did something \"\n \"or want to recall past events, \"\n \"thinking about similar events will help you remember.\"\n )\n prompt_generator.add_constraint(\"No user assistance\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/autogpt/prompt_generator.html"} {"id": "ec4f932dbf07-3", "text": ")\n prompt_generator.add_constraint(\"No user assistance\")\n prompt_generator.add_constraint(\n 'Exclusively use the commands listed in double quotes e.g. \"command name\"'\n )\n # Add commands to the PromptGenerator object\n for tool in tools:\n prompt_generator.add_tool(tool)\n # Add resources to the PromptGenerator object\n prompt_generator.add_resource(\n \"Internet access for searches and information gathering.\"\n )\n prompt_generator.add_resource(\"Long Term memory management.\")\n prompt_generator.add_resource(\n \"GPT-3.5 powered Agents for delegation of simple tasks.\"\n )\n prompt_generator.add_resource(\"File output.\")\n # Add performance evaluations to the PromptGenerator object\n prompt_generator.add_performance_evaluation(\n \"Continuously review and analyze your actions \"\n \"to ensure you are performing to the best of your abilities.\"\n )\n prompt_generator.add_performance_evaluation(\n \"Constructively self-criticize your big-picture behavior constantly.\"\n )\n prompt_generator.add_performance_evaluation(\n \"Reflect on past decisions and strategies to refine your approach.\"\n )\n prompt_generator.add_performance_evaluation(\n \"Every command has a cost, so be smart and efficient. 
\"\n \"Aim to complete tasks in the least number of steps.\"\n )\n # Generate the prompt string\n prompt_string = prompt_generator.generate_prompt_string()\n return prompt_string", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/autogpt/prompt_generator.html"} {"id": "cba869b47ed2-0", "text": "Source code for langchain.experimental.autonomous_agents.autogpt.output_parser\nimport json\nimport re\nfrom abc import abstractmethod\nfrom typing import Dict, NamedTuple\nfrom langchain.schema import BaseOutputParser\n[docs]class AutoGPTAction(NamedTuple):\n name: str\n args: Dict\n[docs]class BaseAutoGPTOutputParser(BaseOutputParser):\n[docs] @abstractmethod\n def parse(self, text: str) -> AutoGPTAction:\n \"\"\"Return AutoGPTAction\"\"\"\n[docs]def preprocess_json_input(input_str: str) -> str:\n \"\"\"Preprocesses a string to be parsed as json.\n Replace single backslashes with double backslashes,\n while leaving already escaped ones intact.\n Args:\n input_str: String to be preprocessed\n Returns:\n Preprocessed string\n \"\"\"\n corrected_str = re.sub(\n r'(? AutoGPTAction:\n try:\n parsed = json.loads(text, strict=False)\n except json.JSONDecodeError:\n preprocessed_text = preprocess_json_input(text)\n try:\n parsed = json.loads(preprocessed_text, strict=False)\n except Exception:\n return AutoGPTAction(\n name=\"ERROR\",\n args={\"error\": f\"Could not parse invalid json: {text}\"},\n )\n try:\n return AutoGPTAction(\n name=parsed[\"command\"][\"name\"],\n args=parsed[\"command\"][\"args\"],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/autogpt/output_parser.html"} {"id": "cba869b47ed2-1", "text": "name=parsed[\"command\"][\"name\"],\n args=parsed[\"command\"][\"args\"],\n )\n except (KeyError, TypeError):\n # If the command is null or incomplete, return an erroneous tool\n return AutoGPTAction(\n name=\"ERROR\", args={\"error\": f\"Incomplete command args: {parsed}\"}\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/autogpt/output_parser.html"} {"id": "e68d930c6b9e-0", "text": "Source code for langchain.experimental.autonomous_agents.baby_agi.task_execution\nfrom langchain import LLMChain, PromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]class TaskExecutionChain(LLMChain):\n \"\"\"Chain to execute tasks.\"\"\"\n[docs] @classmethod\n def from_llm(cls, llm: BaseLanguageModel, verbose: bool = True) -> LLMChain:\n \"\"\"Get the response parser.\"\"\"\n execution_template = (\n \"You are an AI who performs one task based on the following objective: \"\n \"{objective}.\"\n \"Take into account these previously completed tasks: {context}.\"\n \" Your task: {task}. 
Response:\"\n )\n prompt = PromptTemplate(\n template=execution_template,\n input_variables=[\"objective\", \"context\", \"task\"],\n )\n return cls(prompt=prompt, llm=llm, verbose=verbose)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/task_execution.html"} {"id": "ff76217e0bcd-0", "text": "Source code for langchain.experimental.autonomous_agents.baby_agi.task_creation\nfrom langchain import LLMChain, PromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]class TaskCreationChain(LLMChain):\n \"\"\"Chain to generates tasks.\"\"\"\n[docs] @classmethod\n def from_llm(cls, llm: BaseLanguageModel, verbose: bool = True) -> LLMChain:\n \"\"\"Get the response parser.\"\"\"\n task_creation_template = (\n \"You are an task creation AI that uses the result of an execution agent\"\n \" to create new tasks with the following objective: {objective},\"\n \" The last completed task has the result: {result}.\"\n \" This result was based on this task description: {task_description}.\"\n \" These are incomplete tasks: {incomplete_tasks}.\"\n \" Based on the result, create new tasks to be completed\"\n \" by the AI system that do not overlap with incomplete tasks.\"\n \" Return the tasks as an array.\"\n )\n prompt = PromptTemplate(\n template=task_creation_template,\n input_variables=[\n \"result\",\n \"task_description\",\n \"incomplete_tasks\",\n \"objective\",\n ],\n )\n return cls(prompt=prompt, llm=llm, verbose=verbose)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/task_creation.html"} {"id": "6ade31fb027c-0", "text": "Source code for langchain.experimental.autonomous_agents.baby_agi.baby_agi\n\"\"\"BabyAGI agent.\"\"\"\nfrom collections import deque\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.experimental.autonomous_agents.baby_agi.task_creation import (\n TaskCreationChain,\n)\nfrom langchain.experimental.autonomous_agents.baby_agi.task_execution import (\n TaskExecutionChain,\n)\nfrom langchain.experimental.autonomous_agents.baby_agi.task_prioritization import (\n TaskPrioritizationChain,\n)\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.vectorstores.base import VectorStore\n[docs]class BabyAGI(Chain, BaseModel):\n \"\"\"Controller model for the BabyAGI agent.\"\"\"\n task_list: deque = Field(default_factory=deque)\n task_creation_chain: Chain = Field(...)\n task_prioritization_chain: Chain = Field(...)\n execution_chain: Chain = Field(...)\n task_id_counter: int = Field(1)\n vectorstore: VectorStore = Field(init=False)\n max_iterations: Optional[int] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] def add_task(self, task: Dict) -> None:\n self.task_list.append(task)\n[docs] def print_task_list(self) -> None:\n print(\"\\033[95m\\033[1m\" + \"\\n*****TASK LIST*****\\n\" + \"\\033[0m\\033[0m\")\n for t in self.task_list:\n print(str(t[\"task_id\"]) + \": \" + t[\"task_name\"])", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/baby_agi.html"} {"id": "6ade31fb027c-1", "text": "print(str(t[\"task_id\"]) + \": \" + t[\"task_name\"])\n[docs] def print_next_task(self, task: Dict) -> None:\n 
print(\"\\033[92m\\033[1m\" + \"\\n*****NEXT TASK*****\\n\" + \"\\033[0m\\033[0m\")\n print(str(task[\"task_id\"]) + \": \" + task[\"task_name\"])\n[docs] def print_task_result(self, result: str) -> None:\n print(\"\\033[93m\\033[1m\" + \"\\n*****TASK RESULT*****\\n\" + \"\\033[0m\\033[0m\")\n print(result)\n @property\n def input_keys(self) -> List[str]:\n return [\"objective\"]\n @property\n def output_keys(self) -> List[str]:\n return []\n[docs] def get_next_task(\n self, result: str, task_description: str, objective: str\n ) -> List[Dict]:\n \"\"\"Get the next task.\"\"\"\n task_names = [t[\"task_name\"] for t in self.task_list]\n incomplete_tasks = \", \".join(task_names)\n response = self.task_creation_chain.run(\n result=result,\n task_description=task_description,\n incomplete_tasks=incomplete_tasks,\n objective=objective,\n )\n new_tasks = response.split(\"\\n\")\n return [\n {\"task_name\": task_name} for task_name in new_tasks if task_name.strip()\n ]\n[docs] def prioritize_tasks(self, this_task_id: int, objective: str) -> List[Dict]:\n \"\"\"Prioritize tasks.\"\"\"\n task_names = [t[\"task_name\"] for t in list(self.task_list)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/baby_agi.html"} {"id": "6ade31fb027c-2", "text": "task_names = [t[\"task_name\"] for t in list(self.task_list)]\n next_task_id = int(this_task_id) + 1\n response = self.task_prioritization_chain.run(\n task_names=\", \".join(task_names),\n next_task_id=str(next_task_id),\n objective=objective,\n )\n new_tasks = response.split(\"\\n\")\n prioritized_task_list = []\n for task_string in new_tasks:\n if not task_string.strip():\n continue\n task_parts = task_string.strip().split(\".\", 1)\n if len(task_parts) == 2:\n task_id = task_parts[0].strip()\n task_name = task_parts[1].strip()\n prioritized_task_list.append(\n {\"task_id\": task_id, \"task_name\": task_name}\n )\n return prioritized_task_list\n def _get_top_tasks(self, query: str, k: int) -> List[str]:\n \"\"\"Get the top k tasks based on the query.\"\"\"\n results = self.vectorstore.similarity_search(query, k=k)\n if not results:\n return []\n return [str(item.metadata[\"task\"]) for item in results]\n[docs] def execute_task(self, objective: str, task: str, k: int = 5) -> str:\n \"\"\"Execute a task.\"\"\"\n context = self._get_top_tasks(query=objective, k=k)\n return self.execution_chain.run(\n objective=objective, context=\"\\n\".join(context), task=task\n )\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"Run the agent.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/baby_agi.html"} {"id": "6ade31fb027c-3", "text": ") -> Dict[str, Any]:\n \"\"\"Run the agent.\"\"\"\n objective = inputs[\"objective\"]\n first_task = inputs.get(\"first_task\", \"Make a todo list\")\n self.add_task({\"task_id\": 1, \"task_name\": first_task})\n num_iters = 0\n while True:\n if self.task_list:\n self.print_task_list()\n # Step 1: Pull the first task\n task = self.task_list.popleft()\n self.print_next_task(task)\n # Step 2: Execute the task\n result = self.execute_task(objective, task[\"task_name\"])\n this_task_id = int(task[\"task_id\"])\n self.print_task_result(result)\n # Step 3: Store the result in Pinecone\n result_id = f\"result_{task['task_id']}\"\n self.vectorstore.add_texts(\n texts=[result],\n metadatas=[{\"task\": 
task[\"task_name\"]}],\n ids=[result_id],\n )\n # Step 4: Create new tasks and reprioritize task list\n new_tasks = self.get_next_task(result, task[\"task_name\"], objective)\n for new_task in new_tasks:\n self.task_id_counter += 1\n new_task.update({\"task_id\": self.task_id_counter})\n self.add_task(new_task)\n self.task_list = deque(self.prioritize_tasks(this_task_id, objective))\n num_iters += 1\n if self.max_iterations is not None and num_iters == self.max_iterations:\n print(\n \"\\033[91m\\033[1m\" + \"\\n*****TASK ENDING*****\\n\" + \"\\033[0m\\033[0m\"\n )\n break\n return {}\n[docs] @classmethod", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/baby_agi.html"} {"id": "6ade31fb027c-4", "text": ")\n break\n return {}\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n vectorstore: VectorStore,\n verbose: bool = False,\n task_execution_chain: Optional[Chain] = None,\n **kwargs: Dict[str, Any],\n ) -> \"BabyAGI\":\n \"\"\"Initialize the BabyAGI Controller.\"\"\"\n task_creation_chain = TaskCreationChain.from_llm(llm, verbose=verbose)\n task_prioritization_chain = TaskPrioritizationChain.from_llm(\n llm, verbose=verbose\n )\n if task_execution_chain is None:\n execution_chain: Chain = TaskExecutionChain.from_llm(llm, verbose=verbose)\n else:\n execution_chain = task_execution_chain\n return cls(\n task_creation_chain=task_creation_chain,\n task_prioritization_chain=task_prioritization_chain,\n execution_chain=execution_chain,\n vectorstore=vectorstore,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/baby_agi.html"} {"id": "a6da1d8e8a0e-0", "text": "Source code for langchain.experimental.autonomous_agents.baby_agi.task_prioritization\nfrom langchain import LLMChain, PromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]class TaskPrioritizationChain(LLMChain):\n \"\"\"Chain to prioritize tasks.\"\"\"\n[docs] @classmethod\n def from_llm(cls, llm: BaseLanguageModel, verbose: bool = True) -> LLMChain:\n \"\"\"Get the response parser.\"\"\"\n task_prioritization_template = (\n \"You are a task prioritization AI tasked with cleaning the formatting of \"\n \"and reprioritizing the following tasks: {task_names}.\"\n \" Consider the ultimate objective of your team: {objective}.\"\n \" Do not remove any tasks. Return the result as a numbered list, like:\"\n \" #. First task\"\n \" #. 
Second task\"\n \" Start the task list with number {next_task_id}.\"\n )\n prompt = PromptTemplate(\n template=task_prioritization_template,\n input_variables=[\"task_names\", \"next_task_id\", \"objective\"],\n )\n return cls(prompt=prompt, llm=llm, verbose=verbose)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/task_prioritization.html"} {"id": "66a08f0a3f63-0", "text": "Source code for langchain.experimental.llms.jsonformer_decoder\n\"\"\"Experimental implementation of jsonformer wrapped LLM.\"\"\"\nfrom __future__ import annotations\nimport json\nfrom typing import TYPE_CHECKING, Any, List, Optional, cast\nfrom pydantic import Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.huggingface_pipeline import HuggingFacePipeline\nif TYPE_CHECKING:\n import jsonformer\n[docs]def import_jsonformer() -> jsonformer:\n \"\"\"Lazily import jsonformer.\"\"\"\n try:\n import jsonformer\n except ImportError:\n raise ValueError(\n \"Could not import jsonformer python package. \"\n \"Please install it with `pip install jsonformer`.\"\n )\n return jsonformer\n[docs]class JsonFormer(HuggingFacePipeline):\n json_schema: dict = Field(..., description=\"The JSON Schema to complete.\")\n max_new_tokens: int = Field(\n default=200, description=\"Maximum number of new tokens to generate.\"\n )\n debug: bool = Field(default=False, description=\"Debug mode.\")\n[docs] @root_validator\n def check_jsonformer_installation(cls, values: dict) -> dict:\n import_jsonformer()\n return values\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n jsonformer = import_jsonformer()\n from transformers import Text2TextGenerationPipeline\n pipeline = cast(Text2TextGenerationPipeline, self.pipeline)\n model = jsonformer.Jsonformer(\n model=pipeline.model,\n tokenizer=pipeline.tokenizer,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/llms/jsonformer_decoder.html"} {"id": "66a08f0a3f63-1", "text": "model=pipeline.model,\n tokenizer=pipeline.tokenizer,\n json_schema=self.json_schema,\n prompt=prompt,\n max_number_tokens=self.max_new_tokens,\n debug=self.debug,\n )\n text = model()\n return json.dumps(text)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/llms/jsonformer_decoder.html"} {"id": "5efdc361853e-0", "text": "Source code for langchain.experimental.llms.rellm_decoder\n\"\"\"Experimental implementation of RELLM wrapped LLM.\"\"\"\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, Any, List, Optional, cast\nfrom pydantic import Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.huggingface_pipeline import HuggingFacePipeline\nfrom langchain.llms.utils import enforce_stop_tokens\nif TYPE_CHECKING:\n import rellm\n from regex import Pattern as RegexPattern\nelse:\n try:\n from regex import Pattern as RegexPattern\n except ImportError:\n pass\n[docs]def import_rellm() -> rellm:\n \"\"\"Lazily import rellm.\"\"\"\n try:\n import rellm\n except ImportError:\n raise ValueError(\n \"Could not import rellm python package. 
\"\n \"Please install it with `pip install rellm`.\"\n )\n return rellm\n[docs]class RELLM(HuggingFacePipeline):\n regex: RegexPattern = Field(..., description=\"The structured format to complete.\")\n max_new_tokens: int = Field(\n default=200, description=\"Maximum number of new tokens to generate.\"\n )\n[docs] @root_validator\n def check_rellm_installation(cls, values: dict) -> dict:\n import_rellm()\n return values\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n rellm = import_rellm()\n from transformers import Text2TextGenerationPipeline", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/llms/rellm_decoder.html"} {"id": "5efdc361853e-1", "text": "from transformers import Text2TextGenerationPipeline\n pipeline = cast(Text2TextGenerationPipeline, self.pipeline)\n text = rellm.complete_re(\n prompt,\n self.regex,\n tokenizer=pipeline.tokenizer,\n model=pipeline.model,\n max_new_tokens=self.max_new_tokens,\n )\n if stop is not None:\n # This is a bit hacky, but I can't figure out a better way to enforce\n # stop tokens when making calls to huggingface_hub.\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/llms/rellm_decoder.html"} {"id": "8c728e8f751f-0", "text": "Source code for langchain.experimental.generative_agents.generative_agent\nimport re\nfrom datetime import datetime\nfrom typing import Any, Dict, List, Optional, Tuple\nfrom pydantic import BaseModel, Field\nfrom langchain import LLMChain\nfrom langchain.experimental.generative_agents.memory import GenerativeAgentMemory\nfrom langchain.prompts import PromptTemplate\nfrom langchain.schema.language_model import BaseLanguageModel\n[docs]class GenerativeAgent(BaseModel):\n \"\"\"A character with memory and innate characteristics.\"\"\"\n name: str\n \"\"\"The character's name.\"\"\"\n age: Optional[int] = None\n \"\"\"The optional age of the character.\"\"\"\n traits: str = \"N/A\"\n \"\"\"Permanent traits to ascribe to the character.\"\"\"\n status: str\n \"\"\"The traits of the character you wish not to change.\"\"\"\n memory: GenerativeAgentMemory\n \"\"\"The memory object that combines relevance, recency, and 'importance'.\"\"\"\n llm: BaseLanguageModel\n \"\"\"The underlying language model.\"\"\"\n verbose: bool = False\n summary: str = \"\" #: :meta private:\n \"\"\"Stateful self-summary generated via reflection on the character's memory.\"\"\"\n summary_refresh_seconds: int = 3600 #: :meta private:\n \"\"\"How frequently to re-generate the summary.\"\"\"\n last_refreshed: datetime = Field(default_factory=datetime.now) # : :meta private:\n \"\"\"The last time the character's summary was regenerated.\"\"\"\n daily_summaries: List[str] = Field(default_factory=list) # : :meta private:\n \"\"\"Summary of the events in the plan that the agent took.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n # LLM-related methods\n @staticmethod", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html"} {"id": "8c728e8f751f-1", "text": "arbitrary_types_allowed = True\n # LLM-related methods\n @staticmethod\n def _parse_list(text: str) -> List[str]:\n \"\"\"Parse a newline-separated string into a list of strings.\"\"\"\n lines = re.split(r\"\\n\", text.strip())\n 
return [re.sub(r\"^\\s*\\d+\\.\\s*\", \"\", line).strip() for line in lines]\n[docs] def chain(self, prompt: PromptTemplate) -> LLMChain:\n return LLMChain(\n llm=self.llm, prompt=prompt, verbose=self.verbose, memory=self.memory\n )\n def _get_entity_from_observation(self, observation: str) -> str:\n prompt = PromptTemplate.from_template(\n \"What is the observed entity in the following observation? {observation}\"\n + \"\\nEntity=\"\n )\n return self.chain(prompt).run(observation=observation).strip()\n def _get_entity_action(self, observation: str, entity_name: str) -> str:\n prompt = PromptTemplate.from_template(\n \"What is the {entity} doing in the following observation? {observation}\"\n + \"\\nThe {entity} is\"\n )\n return (\n self.chain(prompt).run(entity=entity_name, observation=observation).strip()\n )\n[docs] def summarize_related_memories(self, observation: str) -> str:\n \"\"\"Summarize memories that are most relevant to an observation.\"\"\"\n prompt = PromptTemplate.from_template(\n \"\"\"\n{q1}?\nContext from memory:\n{relevant_memories}\nRelevant context: \n\"\"\"\n )\n entity_name = self._get_entity_from_observation(observation)\n entity_action = self._get_entity_action(observation, entity_name)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html"} {"id": "8c728e8f751f-2", "text": "entity_action = self._get_entity_action(observation, entity_name)\n q1 = f\"What is the relationship between {self.name} and {entity_name}\"\n q2 = f\"{entity_name} is {entity_action}\"\n return self.chain(prompt=prompt).run(q1=q1, queries=[q1, q2]).strip()\n def _generate_reaction(\n self, observation: str, suffix: str, now: Optional[datetime] = None\n ) -> str:\n \"\"\"React to a given observation or dialogue act.\"\"\"\n prompt = PromptTemplate.from_template(\n \"{agent_summary_description}\"\n + \"\\nIt is {current_time}.\"\n + \"\\n{agent_name}'s status: {agent_status}\"\n + \"\\nSummary of relevant context from {agent_name}'s memory:\"\n + \"\\n{relevant_memories}\"\n + \"\\nMost recent observations: {most_recent_memories}\"\n + \"\\nObservation: {observation}\"\n + \"\\n\\n\"\n + suffix\n )\n agent_summary_description = self.get_summary(now=now)\n relevant_memories_str = self.summarize_related_memories(observation)\n current_time_str = (\n datetime.now().strftime(\"%B %d, %Y, %I:%M %p\")\n if now is None\n else now.strftime(\"%B %d, %Y, %I:%M %p\")\n )\n kwargs: Dict[str, Any] = dict(\n agent_summary_description=agent_summary_description,\n current_time=current_time_str,\n relevant_memories=relevant_memories_str,\n agent_name=self.name,\n observation=observation,\n agent_status=self.status,\n )\n consumed_tokens = self.llm.get_num_tokens(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html"} {"id": "8c728e8f751f-3", "text": ")\n consumed_tokens = self.llm.get_num_tokens(\n prompt.format(most_recent_memories=\"\", **kwargs)\n )\n kwargs[self.memory.most_recent_memories_token_key] = consumed_tokens\n return self.chain(prompt=prompt).run(**kwargs).strip()\n def _clean_response(self, text: str) -> str:\n return re.sub(f\"^{self.name} \", \"\", text.strip()).strip()\n[docs] def generate_reaction(\n self, observation: str, now: Optional[datetime] = None\n ) -> Tuple[bool, str]:\n \"\"\"React to a given observation.\"\"\"\n call_to_action_template = (\n \"Should {agent_name} react to the observation, and if so,\"\n + \" what would be an 
appropriate reaction? Respond in one line.\"\n + ' If the action is to engage in dialogue, write:\\nSAY: \"what to say\"'\n + \"\\notherwise, write:\\nREACT: {agent_name}'s reaction (if anything).\"\n + \"\\nEither do nothing, react, or say something but not both.\\n\\n\"\n )\n full_result = self._generate_reaction(\n observation, call_to_action_template, now=now\n )\n result = full_result.strip().split(\"\\n\")[0]\n # AAA\n self.memory.save_context(\n {},\n {\n self.memory.add_memory_key: f\"{self.name} observed \"\n f\"{observation} and reacted by {result}\",\n self.memory.now_key: now,\n },\n )\n if \"REACT:\" in result:\n reaction = self._clean_response(result.split(\"REACT:\")[-1])\n return False, f\"{self.name} {reaction}\"\n if \"SAY:\" in result:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html"} {"id": "8c728e8f751f-4", "text": "if \"SAY:\" in result:\n said_value = self._clean_response(result.split(\"SAY:\")[-1])\n return True, f\"{self.name} said {said_value}\"\n else:\n return False, result\n[docs] def generate_dialogue_response(\n self, observation: str, now: Optional[datetime] = None\n ) -> Tuple[bool, str]:\n \"\"\"React to a given observation.\"\"\"\n call_to_action_template = (\n \"What would {agent_name} say? To end the conversation, write:\"\n ' GOODBYE: \"what to say\". Otherwise to continue the conversation,'\n ' write: SAY: \"what to say next\"\\n\\n'\n )\n full_result = self._generate_reaction(\n observation, call_to_action_template, now=now\n )\n result = full_result.strip().split(\"\\n\")[0]\n if \"GOODBYE:\" in result:\n farewell = self._clean_response(result.split(\"GOODBYE:\")[-1])\n self.memory.save_context(\n {},\n {\n self.memory.add_memory_key: f\"{self.name} observed \"\n f\"{observation} and said {farewell}\",\n self.memory.now_key: now,\n },\n )\n return False, f\"{self.name} said {farewell}\"\n if \"SAY:\" in result:\n response_text = self._clean_response(result.split(\"SAY:\")[-1])\n self.memory.save_context(\n {},\n {\n self.memory.add_memory_key: f\"{self.name} observed \"\n f\"{observation} and said {response_text}\",\n self.memory.now_key: now,\n },\n )\n return True, f\"{self.name} said {response_text}\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html"} {"id": "8c728e8f751f-5", "text": ")\n return True, f\"{self.name} said {response_text}\"\n else:\n return False, result\n ######################################################\n # Agent stateful' summary methods. #\n # Each dialog or response prompt includes a header #\n # summarizing the agent's self-description. 
This is #\n # updated periodically through probing its memories #\n ######################################################\n def _compute_agent_summary(self) -> str:\n \"\"\"\"\"\"\n prompt = PromptTemplate.from_template(\n \"How would you summarize {name}'s core characteristics given the\"\n + \" following statements:\\n\"\n + \"{relevant_memories}\"\n + \"Do not embellish.\"\n + \"\\n\\nSummary: \"\n )\n # The agent seeks to think about their core characteristics.\n return (\n self.chain(prompt)\n .run(name=self.name, queries=[f\"{self.name}'s core characteristics\"])\n .strip()\n )\n[docs] def get_summary(\n self, force_refresh: bool = False, now: Optional[datetime] = None\n ) -> str:\n \"\"\"Return a descriptive summary of the agent.\"\"\"\n current_time = datetime.now() if now is None else now\n since_refresh = (current_time - self.last_refreshed).seconds\n if (\n not self.summary\n or since_refresh >= self.summary_refresh_seconds\n or force_refresh\n ):\n self.summary = self._compute_agent_summary()\n self.last_refreshed = current_time\n age = self.age if self.age is not None else \"N/A\"\n return (\n f\"Name: {self.name} (age: {age})\"\n + f\"\\nInnate traits: {self.traits}\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html"} {"id": "8c728e8f751f-6", "text": "+ f\"\\nInnate traits: {self.traits}\"\n + f\"\\n{self.summary}\"\n )\n[docs] def get_full_header(\n self, force_refresh: bool = False, now: Optional[datetime] = None\n ) -> str:\n \"\"\"Return a full header of the agent's status, summary, and current time.\"\"\"\n now = datetime.now() if now is None else now\n summary = self.get_summary(force_refresh=force_refresh, now=now)\n current_time_str = now.strftime(\"%B %d, %Y, %I:%M %p\")\n return (\n f\"{summary}\\nIt is {current_time_str}.\\n{self.name}'s status: {self.status}\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html"} {"id": "8579cdba7421-0", "text": "Source code for langchain.experimental.generative_agents.memory\nimport logging\nimport re\nfrom datetime import datetime\nfrom typing import Any, Dict, List, Optional\nfrom langchain import LLMChain\nfrom langchain.prompts import PromptTemplate\nfrom langchain.retrievers import TimeWeightedVectorStoreRetriever\nfrom langchain.schema import BaseMemory, Document\nfrom langchain.schema.language_model import BaseLanguageModel\nfrom langchain.utils import mock_now\nlogger = logging.getLogger(__name__)\n[docs]class GenerativeAgentMemory(BaseMemory):\n llm: BaseLanguageModel\n \"\"\"The core language model.\"\"\"\n memory_retriever: TimeWeightedVectorStoreRetriever\n \"\"\"The retriever to fetch related memories.\"\"\"\n verbose: bool = False\n reflection_threshold: Optional[float] = None\n \"\"\"When aggregate_importance exceeds reflection_threshold, stop to reflect.\"\"\"\n current_plan: List[str] = []\n \"\"\"The current plan of the agent.\"\"\"\n # A weight of 0.15 makes this less important than it\n # would be otherwise, relative to salience and time\n importance_weight: float = 0.15\n \"\"\"How much weight to assign the memory importance.\"\"\"\n aggregate_importance: float = 0.0 # : :meta private:\n \"\"\"Track the sum of the 'importance' of recent memories.\n Triggers reflection when it reaches reflection_threshold.\"\"\"\n max_tokens_limit: int = 1200 # : :meta private:\n # input keys\n queries_key: str = \"queries\"\n most_recent_memories_token_key: str = 
\"recent_memories_token\"\n add_memory_key: str = \"add_memory\"\n # output keys\n relevant_memories_key: str = \"relevant_memories\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html"} {"id": "8579cdba7421-1", "text": "# output keys\n relevant_memories_key: str = \"relevant_memories\"\n relevant_memories_simple_key: str = \"relevant_memories_simple\"\n most_recent_memories_key: str = \"most_recent_memories\"\n now_key: str = \"now\"\n reflecting: bool = False\n[docs] def chain(self, prompt: PromptTemplate) -> LLMChain:\n return LLMChain(llm=self.llm, prompt=prompt, verbose=self.verbose)\n @staticmethod\n def _parse_list(text: str) -> List[str]:\n \"\"\"Parse a newline-separated string into a list of strings.\"\"\"\n lines = re.split(r\"\\n\", text.strip())\n lines = [line for line in lines if line.strip()] # remove empty lines\n return [re.sub(r\"^\\s*\\d+\\.\\s*\", \"\", line).strip() for line in lines]\n def _get_topics_of_reflection(self, last_k: int = 50) -> List[str]:\n \"\"\"Return the 3 most salient high-level questions about recent observations.\"\"\"\n prompt = PromptTemplate.from_template(\n \"{observations}\\n\\n\"\n \"Given only the information above, what are the 3 most salient \"\n \"high-level questions we can answer about the subjects in the statements?\\n\"\n \"Provide each question on a new line.\"\n )\n observations = self.memory_retriever.memory_stream[-last_k:]\n observation_str = \"\\n\".join(\n [self._format_memory_detail(o) for o in observations]\n )\n result = self.chain(prompt).run(observations=observation_str)\n return self._parse_list(result)\n def _get_insights_on_topic(\n self, topic: str, now: Optional[datetime] = None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html"} {"id": "8579cdba7421-2", "text": "self, topic: str, now: Optional[datetime] = None\n ) -> List[str]:\n \"\"\"Generate 'insights' on a topic of reflection, based on pertinent memories.\"\"\"\n prompt = PromptTemplate.from_template(\n \"Statements relevant to: '{topic}'\\n\"\n \"---\\n\"\n \"{related_statements}\\n\"\n \"---\\n\"\n \"What 5 high-level novel insights can you infer from the above statements \"\n \"that are relevant for answering the following question?\\n\"\n \"Do not include any insights that are not relevant to the question.\\n\"\n \"Do not repeat any insights that have already been made.\\n\\n\"\n \"Question: {topic}\\n\\n\"\n \"(example format: insight (because of 1, 5, 3))\\n\"\n )\n related_memories = self.fetch_memories(topic, now=now)\n related_statements = \"\\n\".join(\n [\n self._format_memory_detail(memory, prefix=f\"{i+1}. 
\")\n for i, memory in enumerate(related_memories)\n ]\n )\n result = self.chain(prompt).run(\n topic=topic, related_statements=related_statements\n )\n # TODO: Parse the connections between memories and insights\n return self._parse_list(result)\n[docs] def pause_to_reflect(self, now: Optional[datetime] = None) -> List[str]:\n \"\"\"Reflect on recent observations and generate 'insights'.\"\"\"\n if self.verbose:\n logger.info(\"Character is reflecting\")\n new_insights = []\n topics = self._get_topics_of_reflection()\n for topic in topics:\n insights = self._get_insights_on_topic(topic, now=now)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html"} {"id": "8579cdba7421-3", "text": "insights = self._get_insights_on_topic(topic, now=now)\n for insight in insights:\n self.add_memory(insight, now=now)\n new_insights.extend(insights)\n return new_insights\n def _score_memory_importance(self, memory_content: str) -> float:\n \"\"\"Score the absolute importance of the given memory.\"\"\"\n prompt = PromptTemplate.from_template(\n \"On the scale of 1 to 10, where 1 is purely mundane\"\n + \" (e.g., brushing teeth, making bed) and 10 is\"\n + \" extremely poignant (e.g., a break up, college\"\n + \" acceptance), rate the likely poignancy of the\"\n + \" following piece of memory. Respond with a single integer.\"\n + \"\\nMemory: {memory_content}\"\n + \"\\nRating: \"\n )\n score = self.chain(prompt).run(memory_content=memory_content).strip()\n if self.verbose:\n logger.info(f\"Importance score: {score}\")\n match = re.search(r\"^\\D*(\\d+)\", score)\n if match:\n return (float(match.group(1)) / 10) * self.importance_weight\n else:\n return 0.0\n def _score_memories_importance(self, memory_content: str) -> List[float]:\n \"\"\"Score the absolute importance of the given memory.\"\"\"\n prompt = PromptTemplate.from_template(\n \"On the scale of 1 to 10, where 1 is purely mundane\"\n + \" (e.g., brushing teeth, making bed) and 10 is\"\n + \" extremely poignant (e.g., a break up, college\"\n + \" acceptance), rate the likely poignancy of the\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html"} {"id": "8579cdba7421-4", "text": "+ \" acceptance), rate the likely poignancy of the\"\n + \" following piece of memory. 
Always answer with only a list of numbers.\"\n + \" If just given one memory still respond in a list.\"\n + \" Memories are separated by semi colans (;)\"\n + \"\\Memories: {memory_content}\"\n + \"\\nRating: \"\n )\n scores = self.chain(prompt).run(memory_content=memory_content).strip()\n if self.verbose:\n logger.info(f\"Importance scores: {scores}\")\n # Split into list of strings and convert to floats\n scores_list = [float(x) for x in scores.split(\";\")]\n return scores_list\n[docs] def add_memories(\n self, memory_content: str, now: Optional[datetime] = None\n ) -> List[str]:\n \"\"\"Add an observations or memories to the agent's memory.\"\"\"\n importance_scores = self._score_memories_importance(memory_content)\n self.aggregate_importance += max(importance_scores)\n memory_list = memory_content.split(\";\")\n documents = []\n for i in range(len(memory_list)):\n documents.append(\n Document(\n page_content=memory_list[i],\n metadata={\"importance\": importance_scores[i]},\n )\n )\n result = self.memory_retriever.add_documents(documents, current_time=now)\n # After an agent has processed a certain amount of memories (as measured by\n # aggregate importance), it is time to reflect on recent events to add\n # more synthesized memories to the agent's memory stream.\n if (\n self.reflection_threshold is not None\n and self.aggregate_importance > self.reflection_threshold\n and not self.reflecting\n ):\n self.reflecting = True", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html"} {"id": "8579cdba7421-5", "text": "and not self.reflecting\n ):\n self.reflecting = True\n self.pause_to_reflect(now=now)\n # Hack to clear the importance from reflection\n self.aggregate_importance = 0.0\n self.reflecting = False\n return result\n[docs] def add_memory(\n self, memory_content: str, now: Optional[datetime] = None\n ) -> List[str]:\n \"\"\"Add an observation or memory to the agent's memory.\"\"\"\n importance_score = self._score_memory_importance(memory_content)\n self.aggregate_importance += importance_score\n document = Document(\n page_content=memory_content, metadata={\"importance\": importance_score}\n )\n result = self.memory_retriever.add_documents([document], current_time=now)\n # After an agent has processed a certain amount of memories (as measured by\n # aggregate importance), it is time to reflect on recent events to add\n # more synthesized memories to the agent's memory stream.\n if (\n self.reflection_threshold is not None\n and self.aggregate_importance > self.reflection_threshold\n and not self.reflecting\n ):\n self.reflecting = True\n self.pause_to_reflect(now=now)\n # Hack to clear the importance from reflection\n self.aggregate_importance = 0.0\n self.reflecting = False\n return result\n[docs] def fetch_memories(\n self, observation: str, now: Optional[datetime] = None\n ) -> List[Document]:\n \"\"\"Fetch related memories.\"\"\"\n if now is not None:\n with mock_now(now):\n return self.memory_retriever.get_relevant_documents(observation)\n else:\n return self.memory_retriever.get_relevant_documents(observation)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html"} {"id": "8579cdba7421-6", "text": "else:\n return self.memory_retriever.get_relevant_documents(observation)\n[docs] def format_memories_detail(self, relevant_memories: List[Document]) -> str:\n content = []\n for mem in relevant_memories:\n content.append(self._format_memory_detail(mem, 
prefix=\"- \"))\n return \"\\n\".join([f\"{mem}\" for mem in content])\n def _format_memory_detail(self, memory: Document, prefix: str = \"\") -> str:\n created_time = memory.metadata[\"created_at\"].strftime(\"%B %d, %Y, %I:%M %p\")\n return f\"{prefix}[{created_time}] {memory.page_content.strip()}\"\n[docs] def format_memories_simple(self, relevant_memories: List[Document]) -> str:\n return \"; \".join([f\"{mem.page_content}\" for mem in relevant_memories])\n def _get_memories_until_limit(self, consumed_tokens: int) -> str:\n \"\"\"Reduce the number of tokens in the documents.\"\"\"\n result = []\n for doc in self.memory_retriever.memory_stream[::-1]:\n if consumed_tokens >= self.max_tokens_limit:\n break\n consumed_tokens += self.llm.get_num_tokens(doc.page_content)\n if consumed_tokens < self.max_tokens_limit:\n result.append(doc)\n return self.format_memories_simple(result)\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Input keys this memory class will load dynamically.\"\"\"\n return []\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:\n \"\"\"Return key-value pairs given the text input to the chain.\"\"\"\n queries = inputs.get(self.queries_key)\n now = inputs.get(self.now_key)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html"} {"id": "8579cdba7421-7", "text": "now = inputs.get(self.now_key)\n if queries is not None:\n relevant_memories = [\n mem for query in queries for mem in self.fetch_memories(query, now=now)\n ]\n return {\n self.relevant_memories_key: self.format_memories_detail(\n relevant_memories\n ),\n self.relevant_memories_simple_key: self.format_memories_simple(\n relevant_memories\n ),\n }\n most_recent_memories_token = inputs.get(self.most_recent_memories_token_key)\n if most_recent_memories_token is not None:\n return {\n self.most_recent_memories_key: self._get_memories_until_limit(\n most_recent_memories_token\n )\n }\n return {}\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, Any]) -> None:\n \"\"\"Save the context of this model run to memory.\"\"\"\n # TODO: fix the save memory key\n mem = outputs.get(self.add_memory_key)\n now = outputs.get(self.now_key)\n if mem:\n self.add_memory(mem, now=now)\n[docs] def clear(self) -> None:\n \"\"\"Clear memory contents.\"\"\"\n # TODO", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html"} {"id": "839f3362320b-0", "text": "langchain.memory.buffer_window.ConversationBufferWindowMemory\u00b6\nclass langchain.memory.buffer_window.ConversationBufferWindowMemory(*, chat_memory: BaseChatMessageHistory = None, output_key: Optional[str] = None, input_key: Optional[str] = None, return_messages: bool = False, human_prefix: str = 'Human', ai_prefix: str = 'AI', memory_key: str = 'history', k: int = 5)[source]\u00b6\nBases: BaseChatMemory\nBuffer for storing conversation memory.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam ai_prefix: str = 'AI'\u00b6\nparam chat_memory: BaseChatMessageHistory [Optional]\u00b6\nparam human_prefix: str = 'Human'\u00b6\nparam input_key: Optional[str] = None\u00b6\nparam k: int = 5\u00b6\nparam output_key: Optional[str] = None\u00b6\nparam return_messages: bool = False\u00b6\nclear() \u2192 None\u00b6\nClear memory contents.\nload_memory_variables(inputs: Dict[str, Any]) \u2192 
Dict[str, str][source]\u00b6\nReturn history buffer.\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None\u00b6\nSave context from this conversation to buffer.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty buffer: List[langchain.schema.messages.BaseMessage]\u00b6\nString buffer of memory.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.buffer_window.ConversationBufferWindowMemory.html"} {"id": "839f3362320b-1", "text": "eg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.buffer_window.ConversationBufferWindowMemory.html"} {"id": "0a7084dec5e6-0", "text": "langchain.memory.summary_buffer.ConversationSummaryBufferMemory\u00b6\nclass langchain.memory.summary_buffer.ConversationSummaryBufferMemory(*, human_prefix: str = 'Human', ai_prefix: str = 'AI', llm: ~langchain.schema.language_model.BaseLanguageModel, prompt: ~langchain.schema.prompt_template.BasePromptTemplate = PromptTemplate(input_variables=['summary', 'new_lines'], output_parser=None, partial_variables={}, template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\\n\\nEXAMPLE\\nCurrent summary:\\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\\n\\nNew lines of conversation:\\nHuman: Why do you think artificial intelligence is a force for good?\\nAI: Because artificial intelligence will help humans reach their full potential.\\n\\nNew summary:\\nThe human asks what the AI thinks of artificial intelligence. 
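A short sketch of the windowing behavior, with k=2 so only the last two exchanges are returned:

from langchain.memory import ConversationBufferWindowMemory

memory = ConversationBufferWindowMemory(k=2)
memory.save_context({"input": "hi"}, {"output": "hello"})
memory.save_context({"input": "how are you?"}, {"output": "good"})
memory.save_context({"input": "what's new?"}, {"output": "not much"})
# The oldest exchange has been dropped from the window:
print(memory.load_memory_variables({})["history"])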
The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\\nEND OF EXAMPLE\\n\\nCurrent summary:\\n{summary}\\n\\nNew lines of conversation:\\n{new_lines}\\n\\nNew summary:', template_format='f-string', validate_template=True), summary_message_cls: ~typing.Type[~langchain.schema.messages.BaseMessage] = <class 'langchain.schema.messages.SystemMessage'>, chat_memory: ~langchain.schema.memory.BaseChatMessageHistory = None, output_key: ~typing.Optional[str] = None, input_key: ~typing.Optional[str] = None, return_messages: bool = False, max_token_limit: int = 2000, moving_summary_buffer: str = '', memory_key: str = 'history')[source]\u00b6\nBases: BaseChatMemory, SummarizerMixin\nBuffer with summarizer for storing conversation memory.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.summary_buffer.ConversationSummaryBufferMemory.html"} {"id": "0a7084dec5e6-1", "text": "Raises ValidationError if the input data cannot be parsed to form a valid model.\nparam ai_prefix: str = 'AI'\u00b6\nparam chat_memory: BaseChatMessageHistory [Optional]\u00b6\nparam human_prefix: str = 'Human'\u00b6\nparam input_key: Optional[str] = None\u00b6\nparam llm: BaseLanguageModel [Required]\u00b6\nparam max_token_limit: int = 2000\u00b6\nparam memory_key: str = 'history'\u00b6\nparam moving_summary_buffer: str = ''\u00b6\nparam output_key: Optional[str] = None\u00b6\nparam prompt: BasePromptTemplate = PromptTemplate(input_variables=['summary', 'new_lines'], output_parser=None, partial_variables={}, template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\\n\\nEXAMPLE\\nCurrent summary:\\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\\n\\nNew lines of conversation:\\nHuman: Why do you think artificial intelligence is a force for good?\\nAI: Because artificial intelligence will help humans reach their full potential.\\n\\nNew summary:\\nThe human asks what the AI thinks of artificial intelligence. 
The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\\nEND OF EXAMPLE\\n\\nCurrent summary:\\n{summary}\\n\\nNew lines of conversation:\\n{new_lines}\\n\\nNew summary:', template_format='f-string', validate_template=True)\u00b6\nparam return_messages: bool = False\u00b6\nparam summary_message_cls: Type[BaseMessage] = <class 'langchain.schema.messages.SystemMessage'>\u00b6\nclear() \u2192 None[source]\u00b6\nClear memory contents.\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, Any][source]\u00b6\nReturn history buffer.\npredict_new_summary(messages: List[BaseMessage], existing_summary: str) \u2192 str\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.summary_buffer.ConversationSummaryBufferMemory.html"} {"id": "0a7084dec5e6-2", "text": "predict_new_summary(messages: List[BaseMessage], existing_summary: str) \u2192 str\u00b6\nprune() \u2192 None[source]\u00b6\nPrune buffer if it exceeds max token limit\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None[source]\u00b6\nSave context from this conversation to buffer.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_prompt_input_variables\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that prompt input variables are consistent.\nproperty buffer: List[langchain.schema.messages.BaseMessage]\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.summary_buffer.ConversationSummaryBufferMemory.html"} {"id": "8ff8a45e2242-0", "text": "langchain.memory.chat_message_histories.dynamodb.DynamoDBChatMessageHistory\u00b6\nclass langchain.memory.chat_message_histories.dynamodb.DynamoDBChatMessageHistory(table_name: str, session_id: str, endpoint_url: Optional[str] = None)[source]\u00b6\nBases: BaseChatMessageHistory\nChat message history that stores history in AWS DynamoDB.\nThis class expects that a DynamoDB table with name table_name\nand a partition Key of SessionId is present.\nParameters\ntable_name \u2013 name of the DynamoDB table\nsession_id \u2013 arbitrary key that is used to store the messages\nof a single chat session.\nendpoint_url \u2013 URL of the AWS endpoint to connect to. 
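A sketch of the summarizing buffer above; with a small max_token_limit, prune() folds older turns into moving_summary_buffer via predict_new_summary (OpenAI credentials assumed):

from langchain.llms import OpenAI
from langchain.memory import ConversationSummaryBufferMemory

memory = ConversationSummaryBufferMemory(llm=OpenAI(temperature=0), max_token_limit=40)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much, you?"}, {"output": "not much"})
# History now starts with a "System:" summary of the pruned turns:
print(memory.load_memory_variables({})["history"])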
This argument\nis optional and useful for test purposes, like using Localstack.\nIf you plan to use AWS cloud service, you normally don\u2019t have to\nworry about setting the endpoint_url.\nMethods\n__init__(table_name,\u00a0session_id[,\u00a0endpoint_url])\nadd_ai_message(message)\nConvenience method for adding an AI message string to the store.\nadd_message(message)\nAppend the message to the record in DynamoDB\nadd_user_message(message)\nConvenience method for adding a human message string to the store.\nclear()\nClear session memory from DynamoDB\nAttributes\nmessages\nRetrieve the messages from DynamoDB\nadd_ai_message(message: str) \u2192 None\u00b6\nConvenience method for adding an AI message string to the store.\nParameters\nmessage \u2013 The string contents of an AI message.\nadd_message(message: BaseMessage) \u2192 None[source]\u00b6\nAppend the message to the record in DynamoDB\nadd_user_message(message: str) \u2192 None\u00b6\nConvenience method for adding a human message string to the store.\nParameters\nmessage \u2013 The string contents of a human message.\nclear() \u2192 None[source]\u00b6\nClear session memory from DynamoDB", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.dynamodb.DynamoDBChatMessageHistory.html"} {"id": "8ff8a45e2242-1", "text": "clear() \u2192 None[source]\u00b6\nClear session memory from DynamoDB\nproperty messages: List[langchain.schema.messages.BaseMessage]\u00b6\nRetrieve the messages from DynamoDB", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.dynamodb.DynamoDBChatMessageHistory.html"} {"id": "d7487608d435-0", "text": "langchain.memory.chat_message_histories.file.FileChatMessageHistory\u00b6\nclass langchain.memory.chat_message_histories.file.FileChatMessageHistory(file_path: str)[source]\u00b6\nBases: BaseChatMessageHistory\nChat message history that stores history in a local file.\nParameters\nfile_path \u2013 path of the local file to store the messages.\nMethods\n__init__(file_path)\nadd_ai_message(message)\nConvenience method for adding an AI message string to the store.\nadd_message(message)\nAppend the message to the record in the local file\nadd_user_message(message)\nConvenience method for adding a human message string to the store.\nclear()\nClear session memory from the local file\nAttributes\nmessages\nRetrieve the messages from the local file\nadd_ai_message(message: str) \u2192 None\u00b6\nConvenience method for adding an AI message string to the store.\nParameters\nmessage \u2013 The string contents of an AI message.\nadd_message(message: BaseMessage) \u2192 None[source]\u00b6\nAppend the message to the record in the local file\nadd_user_message(message: str) \u2192 None\u00b6\nConvenience method for adding a human message string to the store.\nParameters\nmessage \u2013 The string contents of a human message.\nclear() \u2192 None[source]\u00b6\nClear session memory from the local file\nproperty messages: List[langchain.schema.messages.BaseMessage]\u00b6\nRetrieve the messages from the local file", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.file.FileChatMessageHistory.html"} {"id": "110096a514d9-0", "text": "langchain.memory.summary.SummarizerMixin\u00b6\nclass langchain.memory.summary.SummarizerMixin(*, human_prefix: str = 'Human', ai_prefix: str = 'AI', llm: ~langchain.schema.language_model.BaseLanguageModel, prompt: 
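The two chat-message-history classes documented above share the BaseChatMessageHistory interface, so a sketch for one carries over to the other. A minimal file-backed example follows; the DynamoDB variant differs only in its constructor and expects the table to already exist.

from langchain.memory.chat_message_histories.file import FileChatMessageHistory

history = FileChatMessageHistory(file_path="chat_history.json")  # hypothetical path
history.add_user_message("hello")
history.add_ai_message("hi there!")
print(history.messages)  # [HumanMessage(...), AIMessage(...)]
history.clear()          # empties the backing file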
~langchain.schema.prompt_template.BasePromptTemplate = PromptTemplate(input_variables=['summary', 'new_lines'], output_parser=None, partial_variables={}, template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\\n\\nEXAMPLE\\nCurrent summary:\\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\\n\\nNew lines of conversation:\\nHuman: Why do you think artificial intelligence is a force for good?\\nAI: Because artificial intelligence will help humans reach their full potential.\\n\\nNew summary:\\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\\nEND OF EXAMPLE\\n\\nCurrent summary:\\n{summary}\\n\\nNew lines of conversation:\\n{new_lines}\\n\\nNew summary:', template_format='f-string', validate_template=True), summary_message_cls: ~typing.Type[~langchain.schema.messages.BaseMessage] = )[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam ai_prefix: str = 'AI'\u00b6\nparam human_prefix: str = 'Human'\u00b6\nparam llm: langchain.schema.language_model.BaseLanguageModel [Required]\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.summary.SummarizerMixin.html"} {"id": "110096a514d9-1", "text": "param llm: langchain.schema.language_model.BaseLanguageModel [Required]\u00b6\nparam prompt: langchain.schema.prompt_template.BasePromptTemplate = PromptTemplate(input_variables=['summary', 'new_lines'], output_parser=None, partial_variables={}, template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\\n\\nEXAMPLE\\nCurrent summary:\\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\\n\\nNew lines of conversation:\\nHuman: Why do you think artificial intelligence is a force for good?\\nAI: Because artificial intelligence will help humans reach their full potential.\\n\\nNew summary:\\nThe human asks what the AI thinks of artificial intelligence. 
The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\\nEND OF EXAMPLE\\n\\nCurrent summary:\\n{summary}\\n\\nNew lines of conversation:\\n{new_lines}\\n\\nNew summary:', template_format='f-string', validate_template=True)\u00b6\nparam summary_message_cls: Type[langchain.schema.messages.BaseMessage] = <class 'langchain.schema.messages.SystemMessage'>\u00b6\npredict_new_summary(messages: List[BaseMessage], existing_summary: str) \u2192 str[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.summary.SummarizerMixin.html"} {"id": "058ce7339407-0", "text": "langchain.memory.vectorstore.VectorStoreRetrieverMemory\u00b6\nclass langchain.memory.vectorstore.VectorStoreRetrieverMemory(*, retriever: VectorStoreRetriever, memory_key: str = 'history', input_key: Optional[str] = None, return_docs: bool = False)[source]\u00b6\nBases: BaseMemory\nClass for a VectorStore-backed memory object.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam input_key: Optional[str] = None\u00b6\nKey name to index the inputs to load_memory_variables.\nparam memory_key: str = 'history'\u00b6\nKey name to locate the memories in the result of load_memory_variables.\nparam retriever: langchain.vectorstores.base.VectorStoreRetriever [Required]\u00b6\nVectorStoreRetriever object to connect to.\nparam return_docs: bool = False\u00b6\nWhether or not to return the result of querying the database directly.\nclear() \u2192 None[source]\u00b6\nNothing to clear.\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, Union[List[Document], str]][source]\u00b6\nReturn history buffer.\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None[source]\u00b6\nSave context from this conversation to buffer.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.vectorstore.VectorStoreRetrieverMemory.html"} {"id": "058ce7339407-1", "text": "property lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
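For VectorStoreRetrieverMemory above, a hedged sketch using FAISS and OpenAIEmbeddings as stand-in retriever components; both bring their own dependencies (the faiss package, an OPENAI_API_KEY) and any VectorStoreRetriever works in their place.

from langchain.embeddings import OpenAIEmbeddings  # assumption: OPENAI_API_KEY is set
from langchain.vectorstores import FAISS           # assumption: faiss is installed
from langchain.memory.vectorstore import VectorStoreRetrieverMemory

# Seed the store with a placeholder document so the retriever has content.
vectorstore = FAISS.from_texts(["placeholder"], OpenAIEmbeddings())
memory = VectorStoreRetrieverMemory(retriever=vectorstore.as_retriever(search_kwargs={"k": 1}))
memory.save_context({"input": "My favorite sport is curling"}, {"output": "noted"})
# Later turns retrieve the most relevant past snippets rather than the full history.
print(memory.load_memory_variables({"input": "what sport do I like?"})["history"])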
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty memory_variables: List[str]\u00b6\nThe list of keys emitted from the load_memory_variables method.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.vectorstore.VectorStoreRetrieverMemory.html"} {"id": "5695dc20875e-0", "text": "langchain.memory.token_buffer.ConversationTokenBufferMemory\u00b6\nclass langchain.memory.token_buffer.ConversationTokenBufferMemory(*, chat_memory: BaseChatMessageHistory = None, output_key: Optional[str] = None, input_key: Optional[str] = None, return_messages: bool = False, human_prefix: str = 'Human', ai_prefix: str = 'AI', llm: BaseLanguageModel, memory_key: str = 'history', max_token_limit: int = 2000)[source]\u00b6\nBases: BaseChatMemory\nBuffer for storing conversation memory.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam ai_prefix: str = 'AI'\u00b6\nparam chat_memory: BaseChatMessageHistory [Optional]\u00b6\nparam human_prefix: str = 'Human'\u00b6\nparam input_key: Optional[str] = None\u00b6\nparam llm: langchain.schema.language_model.BaseLanguageModel [Required]\u00b6\nparam max_token_limit: int = 2000\u00b6\nparam memory_key: str = 'history'\u00b6\nparam output_key: Optional[str] = None\u00b6\nparam return_messages: bool = False\u00b6\nclear() \u2192 None\u00b6\nClear memory contents.\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, Any][source]\u00b6\nReturn history buffer.\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None[source]\u00b6\nSave context from this conversation to buffer. Pruned.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty buffer: List[langchain.schema.messages.BaseMessage]\u00b6\nString buffer of memory.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.token_buffer.ConversationTokenBufferMemory.html"} {"id": "5695dc20875e-1", "text": "property lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
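A short sketch for ConversationTokenBufferMemory above. The llm is used only to count tokens here; once max_token_limit is exceeded, older turns are pruned rather than summarized. Assumes an OPENAI_API_KEY for the example model.

from langchain.llms import OpenAI
from langchain.memory.token_buffer import ConversationTokenBufferMemory

memory = ConversationTokenBufferMemory(llm=OpenAI(), max_token_limit=60)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
print(memory.load_memory_variables({})["history"])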
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.token_buffer.ConversationTokenBufferMemory.html"} {"id": "5f715d9f15cf-0", "text": "langchain.memory.entity.BaseEntityStore\u00b6\nclass langchain.memory.entity.BaseEntityStore[source]\u00b6\nBases: BaseModel, ABC\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nabstract clear() \u2192 None[source]\u00b6\nDelete all entities from store.\nabstract delete(key: str) \u2192 None[source]\u00b6\nDelete entity value from store.\nabstract exists(key: str) \u2192 bool[source]\u00b6\nCheck if entity exists in store.\nabstract get(key: str, default: Optional[str] = None) \u2192 Optional[str][source]\u00b6\nGet entity value from store.\nabstract set(key: str, value: Optional[str]) \u2192 None[source]\u00b6\nSet entity value in store.", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.entity.BaseEntityStore.html"} {"id": "2da67fc22814-0", "text": "langchain.memory.entity.InMemoryEntityStore\u00b6\nclass langchain.memory.entity.InMemoryEntityStore(*, store: Dict[str, Optional[str]] = {})[source]\u00b6\nBases: BaseEntityStore\nBasic in-memory entity store.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam store: Dict[str, Optional[str]] = {}\u00b6\nclear() \u2192 None[source]\u00b6\nDelete all entities from store.\ndelete(key: str) \u2192 None[source]\u00b6\nDelete entity value from store.\nexists(key: str) \u2192 bool[source]\u00b6\nCheck if entity exists in store.\nget(key: str, default: Optional[str] = None) \u2192 Optional[str][source]\u00b6\nGet entity value from store.\nset(key: str, value: Optional[str]) \u2192 None[source]\u00b6\nSet entity value in store.", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.entity.InMemoryEntityStore.html"} {"id": "6125aab613b7-0", "text": "langchain.memory.chat_message_histories.sql.SQLChatMessageHistory\u00b6\nclass langchain.memory.chat_message_histories.sql.SQLChatMessageHistory(session_id: str, connection_string: str, table_name: str = 'message_store')[source]\u00b6\nBases: BaseChatMessageHistory\nChat message history stored in an SQL database.\nMethods\n__init__(session_id,\u00a0connection_string[,\u00a0...])\nadd_ai_message(message)\nConvenience method for adding an AI message string to the store.\nadd_message(message)\nAppend the message to the record in db\nadd_user_message(message)\nConvenience method for adding a human message string to the store.\nclear()\nClear session memory from db\nAttributes\nmessages\nRetrieve all messages from db\nadd_ai_message(message: str) \u2192 None\u00b6\nConvenience method for adding an AI message string to the store.\nParameters\nmessage \u2013 The string contents of an AI message.\nadd_message(message: BaseMessage) \u2192 None[source]\u00b6\nAppend the message to the record in db\nadd_user_message(message: str) \u2192 None\u00b6\nConvenience method for adding a human message string to the store.\nParameters\nmessage \u2013 The string contents of a human message.\nclear() \u2192 
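BaseEntityStore above defines the store contract; InMemoryEntityStore is its simplest implementation. The sketch below exercises the shared interface and needs no external services.

from langchain.memory.entity import InMemoryEntityStore

store = InMemoryEntityStore()
store.set("Alice", "Alice works on the platform team.")  # hypothetical entity
print(store.exists("Alice"))  # True
print(store.get("Alice"))
store.delete("Alice")
print(store.get("Alice", default="unknown"))  # falls back to the default
store.clear()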
None[source]\u00b6\nClear session memory from db\nproperty messages: List[langchain.schema.messages.BaseMessage]\u00b6\nRetrieve all messages from db", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.sql.SQLChatMessageHistory.html"} {"id": "f891113c9db9-0", "text": "langchain.memory.motorhead_memory.MotorheadMemory\u00b6\nclass langchain.memory.motorhead_memory.MotorheadMemory(*, chat_memory: BaseChatMessageHistory = None, output_key: Optional[str] = None, input_key: Optional[str] = None, return_messages: bool = False, url: str = 'https://api.getmetal.io/v1/motorhead', session_id: str, context: Optional[str] = None, api_key: Optional[str] = None, client_id: Optional[str] = None, timeout: int = 3000, memory_key: str = 'history')[source]\u00b6\nBases: BaseChatMemory\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_key: Optional[str] = None\u00b6\nparam chat_memory: BaseChatMessageHistory [Optional]\u00b6\nparam client_id: Optional[str] = None\u00b6\nparam context: Optional[str] = None\u00b6\nparam input_key: Optional[str] = None\u00b6\nparam output_key: Optional[str] = None\u00b6\nparam return_messages: bool = False\u00b6\nparam session_id: str [Required]\u00b6\nparam url: str = 'https://api.getmetal.io/v1/motorhead'\u00b6\nclear() \u2192 None\u00b6\nClear memory contents.\ndelete_session() \u2192 None[source]\u00b6\nDelete a session\nasync init() \u2192 None[source]\u00b6\nload_memory_variables(values: Dict[str, Any]) \u2192 Dict[str, Any][source]\u00b6\nReturn key-value pairs given the text input to the chain.\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None[source]\u00b6\nSave context from this conversation to buffer.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.motorhead_memory.MotorheadMemory.html"} {"id": "f891113c9db9-1", "text": "to_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
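For SQLChatMessageHistory above, a minimal sketch. The connection string is any SQLAlchemy URL; a local SQLite file keeps the example self-contained.

from langchain.memory.chat_message_histories.sql import SQLChatMessageHistory

history = SQLChatMessageHistory(
    session_id="session-1",                        # hypothetical session key
    connection_string="sqlite:///chat_history.db",
)
history.add_user_message("hello")
history.add_ai_message("hi!")
print(history.messages)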
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty memory_variables: List[str]\u00b6\nThe string keys this memory class will add to chain inputs.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.motorhead_memory.MotorheadMemory.html"} {"id": "81a6884d7530-0", "text": "langchain.memory.chat_message_histories.firestore.FirestoreChatMessageHistory\u00b6\nclass langchain.memory.chat_message_histories.firestore.FirestoreChatMessageHistory(collection_name: str, session_id: str, user_id: str)[source]\u00b6\nBases: BaseChatMessageHistory\nChat history backed by Google Firestore.\nInitialize a new instance of the FirestoreChatMessageHistory class.\nParameters\ncollection_name \u2013 The name of the collection to use.\nsession_id \u2013 The session ID for the chat.\nuser_id \u2013 The user ID for the chat.\nMethods\n__init__(collection_name,\u00a0session_id,\u00a0user_id)\nInitialize a new instance of the FirestoreChatMessageHistory class.\nadd_ai_message(message)\nConvenience method for adding an AI message string to the store.\nadd_message(message)\nAdd a Message object to the store.\nadd_user_message(message)\nConvenience method for adding a human message string to the store.\nclear()\nClear session memory from this memory and Firestore.\nload_messages()\nRetrieve the messages from Firestore\nprepare_firestore()\nPrepare the Firestore client.\nupsert_messages([new_message])\nUpdate the Firestore document.\nAttributes\nmessages\nA list of Messages stored in-memory.\nadd_ai_message(message: str) \u2192 None\u00b6\nConvenience method for adding an AI message string to the store.\nParameters\nmessage \u2013 The string contents of an AI message.\nadd_message(message: BaseMessage) \u2192 None[source]\u00b6\nAdd a Message object to the store.\nParameters\nmessage \u2013 A BaseMessage object to store.\nadd_user_message(message: str) \u2192 None\u00b6\nConvenience method for adding a human message string to the store.\nParameters\nmessage \u2013 The string contents of a human message.\nclear() \u2192 None[source]\u00b6\nClear session memory from this memory and Firestore.\nload_messages() \u2192 None[source]\u00b6\nRetrieve the messages from Firestore", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.firestore.FirestoreChatMessageHistory.html"} {"id": "81a6884d7530-1", "text": "load_messages() \u2192 None[source]\u00b6\nRetrieve the messages from Firestore\nprepare_firestore() \u2192 None[source]\u00b6\nPrepare the Firestore client.\nUse this function to make sure your database is ready.\nupsert_messages(new_message: Optional[BaseMessage] = None) \u2192 None[source]\u00b6\nUpdate the Firestore document.\nmessages: List[BaseMessage]\u00b6\nA list of Messages stored in-memory.", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.firestore.FirestoreChatMessageHistory.html"} {"id": "4969e7535fd6-0", "text": "langchain.memory.chat_message_histories.postgres.PostgresChatMessageHistory\u00b6\nclass langchain.memory.chat_message_histories.postgres.PostgresChatMessageHistory(session_id: str, connection_string: str = 'postgresql://postgres:mypassword@localhost/chat_history', table_name: str = 'message_store')[source]\u00b6\nBases: BaseChatMessageHistory\nChat message 
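For FirestoreChatMessageHistory above, a hedged sketch. It assumes Google Cloud credentials and the Firestore client library are already configured; the collection and IDs are placeholders.

from langchain.memory.chat_message_histories.firestore import FirestoreChatMessageHistory

history = FirestoreChatMessageHistory(
    collection_name="langchain-demo",  # hypothetical collection
    session_id="session-1",
    user_id="user-1",
)
history.add_user_message("hello")
print(history.messages)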
history stored in a Postgres database.\nMethods\n__init__(session_id[,\u00a0connection_string,\u00a0...])\nadd_ai_message(message)\nConvenience method for adding an AI message string to the store.\nadd_message(message)\nAppend the message to the record in PostgreSQL\nadd_user_message(message)\nConvenience method for adding a human message string to the store.\nclear()\nClear session memory from PostgreSQL\nAttributes\nmessages\nRetrieve the messages from PostgreSQL\nadd_ai_message(message: str) \u2192 None\u00b6\nConvenience method for adding an AI message string to the store.\nParameters\nmessage \u2013 The string contents of an AI message.\nadd_message(message: BaseMessage) \u2192 None[source]\u00b6\nAppend the message to the record in PostgreSQL\nadd_user_message(message: str) \u2192 None\u00b6\nConvenience method for adding a human message string to the store.\nParameters\nmessage \u2013 The string contents of a human message.\nclear() \u2192 None[source]\u00b6\nClear session memory from PostgreSQL\nproperty messages: List[langchain.schema.messages.BaseMessage]\u00b6\nRetrieve the messages from PostgreSQL", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.postgres.PostgresChatMessageHistory.html"} {"id": "4ce6a8e5de09-0", "text": "langchain.memory.kg.ConversationKGMemory\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.kg.ConversationKGMemory.html"} {"id": "4ce6a8e5de09-1", "text": "class langchain.memory.kg.ConversationKGMemory(*, chat_memory: ~langchain.schema.memory.BaseChatMessageHistory = None, output_key: ~typing.Optional[str] = None, input_key: ~typing.Optional[str] = None, return_messages: bool = False, k: int = 2, human_prefix: str = 'Human', ai_prefix: str = 'AI', kg: ~langchain.graphs.networkx_graph.NetworkxEntityGraph = None, knowledge_extraction_prompt: ~langchain.schema.prompt_template.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template=\"You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrating them with your knowledge stored within your weights as well as that stored in a knowledge graph. Extract all of the knowledge triples from the last line of conversation. A knowledge triple is a clause that contains a subject, a predicate, and an object. The subject is the entity being described, the predicate is the property of the subject that is being described, and the object is the value of the property.\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: Did you hear aliens landed in Area 51?\\nAI: No, I didn't hear that. What do you know about Area 51?\\nPerson #1: It's a secret military base in Nevada.\\nAI: What do you know about Nevada?\\nLast line of conversation:\\nPerson #1: It's a state in the US. It's also the number 1 producer of gold in the US.\\n\\nOutput: (Nevada, is a, state)<|>(Nevada, is in, US)<|>(Nevada, is the number 1 producer of, gold)\\nEND OF EXAMPLE\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: Hello.\\nAI: Hi! How are", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.kg.ConversationKGMemory.html"} {"id": "4ce6a8e5de09-2", "text": "history:\\nPerson #1: Hello.\\nAI: Hi! How are you?\\nPerson #1: I'm good. 
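PostgresChatMessageHistory above follows the same pattern; the connection string shown in the signature is only a default placeholder and should point at a reachable Postgres instance.

from langchain.memory.chat_message_histories.postgres import PostgresChatMessageHistory

history = PostgresChatMessageHistory(
    session_id="session-1",  # hypothetical session key
    connection_string="postgresql://postgres:mypassword@localhost/chat_history",
)
history.add_user_message("hello")
print(history.messages)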
How are you?\\nAI: I'm good too.\\nLast line of conversation:\\nPerson #1: I'm going to the store.\\n\\nOutput: NONE\\nEND OF EXAMPLE\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: What do you know about Descartes?\\nAI: Descartes was a French philosopher, mathematician, and scientist who lived in the 17th century.\\nPerson #1: The Descartes I'm referring to is a standup comedian and interior designer from Montreal.\\nAI: Oh yes, He is a comedian and an interior designer. He has been in the industry for 30 years. His favorite food is baked bean pie.\\nLast line of conversation:\\nPerson #1: Oh huh. I know Descartes likes to drive antique scooters and play the mandolin.\\nOutput: (Descartes, likes to drive, antique scooters)<|>(Descartes, plays, mandolin)\\nEND OF EXAMPLE\\n\\nConversation history (for reference only):\\n{history}\\nLast line of conversation (for extraction):\\nHuman: {input}\\n\\nOutput:\", template_format='f-string', validate_template=True), entity_extraction_prompt: ~langchain.schema.prompt_template.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\\n\\nThe conversation history is provided just in case of a coreference (e.g. \"What do you know about him\" where \"him\" is defined in a previous line) -- ignore items mentioned there", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.kg.ConversationKGMemory.html"} {"id": "4ce6a8e5de09-3", "text": "know about him\" where \"him\" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\\n\\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: how\\'s it going today?\\nAI: \"It\\'s going great! How about you?\"\\nPerson #1: good! busy working on Langchain. lots to do.\\nAI: \"That sounds like a lot of work! What kind of things are you doing to make Langchain better?\"\\nLast line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\\nOutput: Langchain\\nEND OF EXAMPLE\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: how\\'s it going today?\\nAI: \"It\\'s going great! How about you?\"\\nPerson #1: good! busy working on Langchain. lots to do.\\nAI: \"That sounds like a lot of work! What kind of things are you doing to make Langchain better?\"\\nLast line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. 
I\\'m working with Person #2.\\nOutput: Langchain, Person #2\\nEND OF EXAMPLE\\n\\nConversation history (for reference only):\\n{history}\\nLast line of conversation (for extraction):\\nHuman: {input}\\n\\nOutput:', template_format='f-string', validate_template=True), llm: ~langchain.schema.language_model.BaseLanguageModel, summary_message_cls:", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.kg.ConversationKGMemory.html"} {"id": "4ce6a8e5de09-4", "text": "llm: ~langchain.schema.language_model.BaseLanguageModel, summary_message_cls: ~typing.Type[~langchain.schema.messages.BaseMessage] = , memory_key: str = 'history')[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.kg.ConversationKGMemory.html"} {"id": "4ce6a8e5de09-5", "text": "Bases: BaseChatMemory\nKnowledge graph memory for storing conversation memory.\nIntegrates with external knowledge graph to store and retrieve\ninformation about knowledge triples in the conversation.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam ai_prefix: str = 'AI'\u00b6\nparam chat_memory: BaseChatMessageHistory [Optional]\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.kg.ConversationKGMemory.html"} {"id": "4ce6a8e5de09-6", "text": "param entity_extraction_prompt: langchain.schema.prompt_template.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\\n\\nThe conversation history is provided just in case of a coreference (e.g. \"What do you know about him\" where \"him\" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\\n\\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: how\\'s it going today?\\nAI: \"It\\'s going great! How about you?\"\\nPerson #1: good! busy working on Langchain. lots to do.\\nAI: \"That sounds like a lot of work! What kind of things are you doing to make Langchain better?\"\\nLast line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\\nOutput: Langchain\\nEND OF EXAMPLE\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: how\\'s it going today?\\nAI: \"It\\'s going great! How about you?\"\\nPerson #1: good! busy working on Langchain. lots to do.\\nAI: \"That sounds like a lot of work! What kind of things are you doing to make Langchain better?\"\\nLast line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.kg.ConversationKGMemory.html"} {"id": "4ce6a8e5de09-7", "text": "line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. 
I\\'m working with Person #2.\\nOutput: Langchain, Person #2\\nEND OF EXAMPLE\\n\\nConversation history (for reference only):\\n{history}\\nLast line of conversation (for extraction):\\nHuman: {input}\\n\\nOutput:', template_format='f-string', validate_template=True)\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.kg.ConversationKGMemory.html"} {"id": "4ce6a8e5de09-8", "text": "param human_prefix: str = 'Human'\u00b6\nparam input_key: Optional[str] = None\u00b6\nparam k: int = 2\u00b6\nparam kg: langchain.graphs.networkx_graph.NetworkxEntityGraph [Optional]\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.kg.ConversationKGMemory.html"} {"id": "4ce6a8e5de09-9", "text": "param knowledge_extraction_prompt: langchain.schema.prompt_template.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template=\"You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrating them with your knowledge stored within your weights as well as that stored in a knowledge graph. Extract all of the knowledge triples from the last line of conversation. A knowledge triple is a clause that contains a subject, a predicate, and an object. The subject is the entity being described, the predicate is the property of the subject that is being described, and the object is the value of the property.\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: Did you hear aliens landed in Area 51?\\nAI: No, I didn't hear that. What do you know about Area 51?\\nPerson #1: It's a secret military base in Nevada.\\nAI: What do you know about Nevada?\\nLast line of conversation:\\nPerson #1: It's a state in the US. It's also the number 1 producer of gold in the US.\\n\\nOutput: (Nevada, is a, state)<|>(Nevada, is in, US)<|>(Nevada, is the number 1 producer of, gold)\\nEND OF EXAMPLE\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: Hello.\\nAI: Hi! How are you?\\nPerson #1: I'm good. How are you?\\nAI: I'm good too.\\nLast line of conversation:\\nPerson #1: I'm going to the store.\\n\\nOutput: NONE\\nEND OF EXAMPLE\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: What do you know about Descartes?\\nAI: Descartes was a French philosopher, mathematician, and scientist who lived in the 17th", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.kg.ConversationKGMemory.html"} {"id": "4ce6a8e5de09-10", "text": "Descartes was a French philosopher, mathematician, and scientist who lived in the 17th century.\\nPerson #1: The Descartes I'm referring to is a standup comedian and interior designer from Montreal.\\nAI: Oh yes, He is a comedian and an interior designer. He has been in the industry for 30 years. His favorite food is baked bean pie.\\nLast line of conversation:\\nPerson #1: Oh huh. 
I know Descartes likes to drive antique scooters and play the mandolin.\\nOutput: (Descartes, likes to drive, antique scooters)<|>(Descartes, plays, mandolin)\\nEND OF EXAMPLE\\n\\nConversation history (for reference only):\\n{history}\\nLast line of conversation (for extraction):\\nHuman: {input}\\n\\nOutput:\", template_format='f-string', validate_template=True)\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.kg.ConversationKGMemory.html"} {"id": "4ce6a8e5de09-11", "text": "param llm: langchain.schema.language_model.BaseLanguageModel [Required]\u00b6\nparam output_key: Optional[str] = None\u00b6\nparam return_messages: bool = False\u00b6\nparam summary_message_cls: Type[langchain.schema.messages.BaseMessage] = \u00b6\nNumber of previous utterances to include in the context.\nclear() \u2192 None[source]\u00b6\nClear memory contents.\nget_current_entities(input_string: str) \u2192 List[str][source]\u00b6\nget_knowledge_triplets(input_string: str) \u2192 List[KnowledgeTriple][source]\u00b6\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, Any][source]\u00b6\nReturn history buffer.\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None[source]\u00b6\nSave context from this conversation to buffer.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.kg.ConversationKGMemory.html"} {"id": "3e929ec0cad4-0", "text": "langchain.memory.chat_message_histories.cassandra.CassandraChatMessageHistory\u00b6\nclass langchain.memory.chat_message_histories.cassandra.CassandraChatMessageHistory(session_id: str, session: Session, keyspace: str, table_name: str = 'message_store', ttl_seconds: int | None = None)[source]\u00b6\nBases: BaseChatMessageHistory\nChat message history that stores history in Cassandra.\nParameters\nsession_id \u2013 arbitrary key that is used to store the messages\nof a single chat session.\nsession \u2013 a Cassandra Session object (an open DB connection)\nkeyspace \u2013 name of the keyspace to use.\ntable_name \u2013 name of the table to use.\nttl_seconds \u2013 time-to-live (seconds) for automatic expiration\nof stored entries. 
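Past its long prompt defaults, ConversationKGMemory above is compact to use. A sketch assuming an OPENAI_API_KEY; both triple extraction and entity extraction call the LLM.

from langchain.llms import OpenAI
from langchain.memory.kg import ConversationKGMemory

memory = ConversationKGMemory(llm=OpenAI(temperature=0))
memory.save_context({"input": "say hi to sam"}, {"output": "who is sam"})
memory.save_context({"input": "sam is a friend"}, {"output": "okay"})
# Returns knowledge relevant to the entities mentioned in the input.
print(memory.load_memory_variables({"input": "who is sam"}))
print(memory.get_current_entities("what's Sam up to?"))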
None (default) for no expiration.\nMethods\n__init__(session_id,\u00a0session,\u00a0keyspace[,\u00a0...])\nadd_ai_message(message)\nConvenience method for adding an AI message string to the store.\nadd_message(message)\nWrite a message to the table\nadd_user_message(message)\nConvenience method for adding a human message string to the store.\nclear()\nClear session memory from DB\nAttributes\nmessages\nRetrieve all session messages from DB\nadd_ai_message(message: str) \u2192 None\u00b6\nConvenience method for adding an AI message string to the store.\nParameters\nmessage \u2013 The string contents of an AI message.\nadd_message(message: BaseMessage) \u2192 None[source]\u00b6\nWrite a message to the table\nadd_user_message(message: str) \u2192 None\u00b6\nConvenience method for adding a human message string to the store.\nParameters\nmessage \u2013 The string contents of a human message.\nclear() \u2192 None[source]\u00b6\nClear session memory from DB\nproperty messages: List[langchain.schema.messages.BaseMessage]\u00b6\nRetrieve all session messages from DB", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.cassandra.CassandraChatMessageHistory.html"} {"id": "ee9076cd8174-0", "text": "langchain.memory.entity.RedisEntityStore\u00b6\nclass langchain.memory.entity.RedisEntityStore(session_id: str = 'default', url: str = 'redis://localhost:6379/0', key_prefix: str = 'memory_store', ttl: Optional[int] = 86400, recall_ttl: Optional[int] = 259200, *args: Any, redis_client: Any = None)[source]\u00b6\nBases: BaseEntityStore\nRedis-backed Entity store. Entities get a TTL of 1 day by default, and\nthat TTL is extended by 3 days every time the entity is read back.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam key_prefix: str = 'memory_store'\u00b6\nparam recall_ttl: Optional[int] = 259200\u00b6\nparam redis_client: Any = None\u00b6\nparam session_id: str = 'default'\u00b6\nparam ttl: Optional[int] = 86400\u00b6\nclear() \u2192 None[source]\u00b6\nDelete all entities from store.\ndelete(key: str) \u2192 None[source]\u00b6\nDelete entity value from store.\nexists(key: str) \u2192 bool[source]\u00b6\nCheck if entity exists in store.\nget(key: str, default: Optional[str] = None) \u2192 Optional[str][source]\u00b6\nGet entity value from store.\nset(key: str, value: Optional[str]) \u2192 None[source]\u00b6\nSet entity value in store.\nproperty full_key_prefix: str\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.entity.RedisEntityStore.html"} {"id": "1856a854206e-0", "text": "langchain.memory.chat_message_histories.in_memory.ChatMessageHistory\u00b6\nclass langchain.memory.chat_message_histories.in_memory.ChatMessageHistory(*, messages: List[BaseMessage] = [])[source]\u00b6\nBases: BaseChatMessageHistory, BaseModel\nIn memory implementation of chat message history.\nStores messages in an in memory list.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam messages: List[langchain.schema.messages.BaseMessage] = []\u00b6\nA list of Messages stored in-memory.\nadd_ai_message(message: str) \u2192 None\u00b6\nConvenience method for adding an AI message string to the store.\nParameters\nmessage \u2013 The string contents of an AI message.\nadd_message(message: BaseMessage) \u2192 
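RedisEntityStore above needs a running Redis server. A sketch with the documented defaults; per the class docstring, entries expire after ttl seconds, and reading an entity back extends its lifetime by recall_ttl.

from langchain.memory.entity import RedisEntityStore

store = RedisEntityStore(session_id="demo", url="redis://localhost:6379/0")
store.set("Alice", "Alice works on the platform team.")  # hypothetical entity
print(store.get("Alice"))
store.delete("Alice")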
None[source]\u00b6\nAdd a self-created message to the store\nadd_user_message(message: str) \u2192 None\u00b6\nConvenience method for adding a human message string to the store.\nParameters\nmessage \u2013 The string contents of a human message.\nclear() \u2192 None[source]\u00b6\nRemove all messages from the store", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.in_memory.ChatMessageHistory.html"} {"id": "e5c04ad55b80-0", "text": "langchain.memory.simple.SimpleMemory\u00b6\nclass langchain.memory.simple.SimpleMemory(*, memories: Dict[str, Any] = {})[source]\u00b6\nBases: BaseMemory\nSimple memory for storing context or other bits of information that shouldn\u2019t\never change between prompts.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam memories: Dict[str, Any] = {}\u00b6\nclear() \u2192 None[source]\u00b6\nNothing to clear, got a memory like a vault.\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, str][source]\u00b6\nReturn key-value pairs given the text input to the chain.\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None[source]\u00b6\nNothing should be saved or changed, my memory is set in stone.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty memory_variables: List[str]\u00b6\nThe string keys this memory class will add to chain inputs.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.simple.SimpleMemory.html"} {"id": "ceca5637bcd0-0", "text": "langchain.memory.entity.ConversationEntityMemory\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.entity.ConversationEntityMemory.html"} {"id": "ceca5637bcd0-1", "text": "class langchain.memory.entity.ConversationEntityMemory(*, chat_memory: BaseChatMessageHistory = None, output_key: Optional[str] = None, input_key: Optional[str] = None, return_messages: bool = False, human_prefix: str = 'Human', ai_prefix: str = 'AI', llm: BaseLanguageModel, entity_extraction_prompt: BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\\n\\nThe conversation history is provided just in case of a coreference (e.g. 
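ChatMessageHistory and SimpleMemory above are the two simplest building blocks; neither touches external state. A dependency-free sketch:

from langchain.memory.chat_message_histories.in_memory import ChatMessageHistory
from langchain.memory.simple import SimpleMemory

history = ChatMessageHistory()
history.add_user_message("hi")
history.add_ai_message("hello!")
print(history.messages)

# SimpleMemory injects fixed context into every prompt and never changes.
memory = SimpleMemory(memories={"team": "platform"})  # hypothetical keys
print(memory.load_memory_variables({}))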
\"What do you know about him\" where \"him\" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\\n\\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: how\\'s it going today?\\nAI: \"It\\'s going great! How about you?\"\\nPerson #1: good! busy working on Langchain. lots to do.\\nAI: \"That sounds like a lot of work! What kind of things are you doing to make Langchain better?\"\\nLast line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\\nOutput: Langchain\\nEND OF EXAMPLE\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: how\\'s it going today?\\nAI: \"It\\'s going great! How about you?\"\\nPerson", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.entity.ConversationEntityMemory.html"} {"id": "ceca5637bcd0-2", "text": "going today?\\nAI: \"It\\'s going great! How about you?\"\\nPerson #1: good! busy working on Langchain. lots to do.\\nAI: \"That sounds like a lot of work! What kind of things are you doing to make Langchain better?\"\\nLast line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\\'m working with Person #2.\\nOutput: Langchain, Person #2\\nEND OF EXAMPLE\\n\\nConversation history (for reference only):\\n{history}\\nLast line of conversation (for extraction):\\nHuman: {input}\\n\\nOutput:', template_format='f-string', validate_template=True), entity_summarization_prompt: BasePromptTemplate = PromptTemplate(input_variables=['entity', 'summary', 'history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant helping a human keep track of facts about relevant people, places, and concepts in their life. Update the summary of the provided entity in the \"Entity\" section based on the last line of your conversation with the human. 
If you are writing the summary for the first time, return a single sentence.\\nThe update should only include facts that are relayed in the last line of conversation about the provided entity, and should only contain facts about the provided entity.\\n\\nIf there is no new information about the provided entity or the information is not worth noting (not an important or relevant fact to remember long-term), return the existing summary unchanged.\\n\\nFull conversation history (for context):\\n{history}\\n\\nEntity to summarize:\\n{entity}\\n\\nExisting summary of {entity}:\\n{summary}\\n\\nLast line of conversation:\\nHuman: {input}\\nUpdated summary:', template_format='f-string', validate_template=True), entity_cache:", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.entity.ConversationEntityMemory.html"} {"id": "ceca5637bcd0-3", "text": "{input}\\nUpdated summary:', template_format='f-string', validate_template=True), entity_cache: List[str] = [], k: int = 3, chat_history_key: str = 'history', entity_store: BaseEntityStore = None)[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.entity.ConversationEntityMemory.html"} {"id": "ceca5637bcd0-4", "text": "Bases: BaseChatMemory\nEntity extractor & summarizer memory.\nExtracts named entities from the recent chat history and generates summaries.\nWith a swappable entity store, it persists entities across conversations.\nDefaults to an in-memory entity store, and can be swapped out for a Redis,\nSQLite, or other entity store.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam ai_prefix: str = 'AI'\u00b6\nparam chat_history_key: str = 'history'\u00b6\nparam chat_memory: BaseChatMessageHistory [Optional]\u00b6\nparam entity_cache: List[str] = []\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.entity.ConversationEntityMemory.html"} {"id": "ceca5637bcd0-5", "text": "param entity_extraction_prompt: langchain.schema.prompt_template.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\\n\\nThe conversation history is provided just in case of a coreference (e.g. \"What do you know about him\" where \"him\" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\\n\\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: how\\'s it going today?\\nAI: \"It\\'s going great! How about you?\"\\nPerson #1: good! busy working on Langchain. lots to do.\\nAI: \"That sounds like a lot of work! What kind of things are you doing to make Langchain better?\"\\nLast line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\\nOutput: Langchain\\nEND OF EXAMPLE\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: how\\'s it going today?\\nAI: \"It\\'s going great! How about you?\"\\nPerson #1: good! 
busy working on Langchain. lots to do.\\nAI: \"That sounds like a lot of work! What kind of things are you doing to make Langchain better?\"\\nLast line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.entity.ConversationEntityMemory.html"} {"id": "ceca5637bcd0-6", "text": "line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\\'m working with Person #2.\\nOutput: Langchain, Person #2\\nEND OF EXAMPLE\\n\\nConversation history (for reference only):\\n{history}\\nLast line of conversation (for extraction):\\nHuman: {input}\\n\\nOutput:', template_format='f-string', validate_template=True)\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.entity.ConversationEntityMemory.html"} {"id": "ceca5637bcd0-7", "text": "param entity_store: langchain.memory.entity.BaseEntityStore [Optional]\u00b6\nparam entity_summarization_prompt: langchain.schema.prompt_template.BasePromptTemplate = PromptTemplate(input_variables=['entity', 'summary', 'history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant helping a human keep track of facts about relevant people, places, and concepts in their life. Update the summary of the provided entity in the \"Entity\" section based on the last line of your conversation with the human. If you are writing the summary for the first time, return a single sentence.\\nThe update should only include facts that are relayed in the last line of conversation about the provided entity, and should only contain facts about the provided entity.\\n\\nIf there is no new information about the provided entity or the information is not worth noting (not an important or relevant fact to remember long-term), return the existing summary unchanged.\\n\\nFull conversation history (for context):\\n{history}\\n\\nEntity to summarize:\\n{entity}\\n\\nExisting summary of {entity}:\\n{summary}\\n\\nLast line of conversation:\\nHuman: {input}\\nUpdated summary:', template_format='f-string', validate_template=True)\u00b6\nparam human_prefix: str = 'Human'\u00b6\nparam input_key: Optional[str] = None\u00b6\nparam k: int = 3\u00b6\nparam llm: langchain.schema.language_model.BaseLanguageModel [Required]\u00b6\nparam output_key: Optional[str] = None\u00b6\nparam return_messages: bool = False\u00b6\nclear() \u2192 None[source]\u00b6\nClear memory contents.\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, Any][source]\u00b6\nReturns chat history and all generated entities with summaries if available,\nand updates or clears the recent entity cache.\nNew entity name can be found when calling this method, before the entity", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.entity.ConversationEntityMemory.html"} {"id": "ceca5637bcd0-8", "text": "New entity name can be found when calling this method, before the entity\nsummaries are generated, so the entity cache values may be empty if no entity\ndescriptions are generated yet.\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None[source]\u00b6\nSave context from this conversation history to the entity store.\nGenerates a summary for each entity in the entity cache by prompting\nthe model, and saves these summaries to the entity store.\nto_json() \u2192 Union[SerializedConstructor, 
SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty buffer: List[langchain.schema.messages.BaseMessage]\u00b6\nAccess chat memory messages.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.entity.ConversationEntityMemory.html"} {"id": "32aafe7d7ab3-0", "text": "langchain.memory.combined.CombinedMemory\u00b6\nclass langchain.memory.combined.CombinedMemory(*, memories: List[BaseMemory])[source]\u00b6\nBases: BaseMemory\nClass for combining multiple memories\u2019 data together.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam memories: List[langchain.schema.memory.BaseMemory] [Required]\u00b6\nFor tracking all the memories that should be accessed.\nvalidator check_input_key\u00a0 \u00bb\u00a0 memories[source]\u00b6\nCheck that if memories are of type BaseChatMemory that input keys exist.\nvalidator check_repeated_memory_variable\u00a0 \u00bb\u00a0 memories[source]\u00b6\nclear() \u2192 None[source]\u00b6\nClear context from this session for every memory.\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, str][source]\u00b6\nLoad all vars from sub-memories.\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None[source]\u00b6\nSave context from this session for every memory.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
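A sketch for ConversationEntityMemory above, swapping in the SQLite-backed store so entities persist across sessions, as the docstring permits. Assumes an OPENAI_API_KEY; entity extraction and summarization each call the LLM.

from langchain.llms import OpenAI
from langchain.memory.entity import ConversationEntityMemory, SQLiteEntityStore

memory = ConversationEntityMemory(llm=OpenAI(temperature=0), entity_store=SQLiteEntityStore())
memory.save_context(
    {"input": "Deven & Sam are working on a hackathon project"},
    {"output": "That sounds like fun!"},
)
# The input is required so the memory can extract the entities being asked about.
print(memory.load_memory_variables({"input": "who is Deven?"}))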
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty memory_variables: List[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.combined.CombinedMemory.html"} {"id": "32aafe7d7ab3-1", "text": "Return whether or not the class is serializable.\nproperty memory_variables: List[str]\u00b6\nAll the memory variables that this instance provides.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.combined.CombinedMemory.html"} {"id": "227d006381dd-0", "text": "langchain.memory.entity.SQLiteEntityStore\u00b6\nclass langchain.memory.entity.SQLiteEntityStore(session_id: str = 'default', db_file: str = 'entities.db', table_name: str = 'memory_store', *args: Any)[source]\u00b6\nBases: BaseEntityStore\nSQLite-backed Entity store\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam session_id: str = 'default'\u00b6\nparam table_name: str = 'memory_store'\u00b6\nclear() \u2192 None[source]\u00b6\nDelete all entities from store.\ndelete(key: str) \u2192 None[source]\u00b6\nDelete entity value from store.\nexists(key: str) \u2192 bool[source]\u00b6\nCheck if entity exists in store.\nget(key: str, default: Optional[str] = None) \u2192 Optional[str][source]\u00b6\nGet entity value from store.\nset(key: str, value: Optional[str]) \u2192 None[source]\u00b6\nSet entity value in store.\nproperty full_table_name: str\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.entity.SQLiteEntityStore.html"} {"id": "5ced2a7de7cb-0", "text": "langchain.memory.utils.get_prompt_input_key\u00b6\nlangchain.memory.utils.get_prompt_input_key(inputs: Dict[str, Any], memory_variables: List[str]) \u2192 str[source]\u00b6\nGet the prompt input key.\nParameters\ninputs \u2013 Dict[str, Any]\nmemory_variables \u2013 List[str]\nReturns\nA prompt input key.", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.utils.get_prompt_input_key.html"} {"id": "6d97af194375-0", "text": "langchain.memory.buffer.ConversationBufferMemory\u00b6\nclass langchain.memory.buffer.ConversationBufferMemory(*, chat_memory: BaseChatMessageHistory = None, output_key: Optional[str] = None, input_key: Optional[str] = None, return_messages: bool = False, human_prefix: str = 'Human', ai_prefix: str = 'AI', memory_key: str = 'history')[source]\u00b6\nBases: BaseChatMemory\nBuffer for storing conversation memory.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam ai_prefix: str = 'AI'\u00b6\nparam chat_memory: BaseChatMessageHistory [Optional]\u00b6\nparam human_prefix: str = 'Human'\u00b6\nparam input_key: Optional[str] = None\u00b6\nparam output_key: Optional[str] = None\u00b6\nparam return_messages: bool = False\u00b6\nclear() \u2192 None\u00b6\nClear memory contents.\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, Any][source]\u00b6\nReturn history buffer.\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None\u00b6\nSave context from this conversation to buffer.\nto_json() \u2192 Union[SerializedConstructor, 
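CombinedMemory above simply fans out to its sub-memories; each must expose distinct memory variables. A dependency-free sketch:

from langchain.memory.buffer import ConversationBufferMemory
from langchain.memory.combined import CombinedMemory
from langchain.memory.simple import SimpleMemory

memory = CombinedMemory(memories=[
    ConversationBufferMemory(memory_key="chat_history", input_key="input"),
    SimpleMemory(memories={"deployment": "staging"}),  # hypothetical static context
])
memory.save_context({"input": "hi"}, {"output": "hello"})
print(memory.load_memory_variables({}))  # both 'chat_history' and 'deployment'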
SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty buffer: Any\u00b6\nString buffer of memory.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.buffer.ConversationBufferMemory.html"} {"id": "6d97af194375-1", "text": "Return a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.buffer.ConversationBufferMemory.html"} {"id": "9f86f1d3bd93-0", "text": "langchain.memory.chat_memory.BaseChatMemory\u00b6\nclass langchain.memory.chat_memory.BaseChatMemory(*, chat_memory: BaseChatMessageHistory = None, output_key: Optional[str] = None, input_key: Optional[str] = None, return_messages: bool = False)[source]\u00b6\nBases: BaseMemory, ABC\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam chat_memory: langchain.schema.memory.BaseChatMessageHistory [Optional]\u00b6\nparam input_key: Optional[str] = None\u00b6\nparam output_key: Optional[str] = None\u00b6\nparam return_messages: bool = False\u00b6\nclear() \u2192 None[source]\u00b6\nClear memory contents.\nabstract load_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, Any]\u00b6\nReturn key-value pairs given the text input to the chain.\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None[source]\u00b6\nSave context from this conversation to buffer.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nabstract property memory_variables: List[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_memory.BaseChatMemory.html"} {"id": "9f86f1d3bd93-1", "text": "abstract property memory_variables: List[str]\u00b6\nThe string keys this memory class will add to chain inputs.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_memory.BaseChatMemory.html"} {"id": "fa31629b4d9d-0", "text": "langchain.memory.summary.ConversationSummaryMemory\u00b6\nclass langchain.memory.summary.ConversationSummaryMemory(*, human_prefix: str = 'Human', ai_prefix: str = 'AI', llm: ~langchain.schema.language_model.BaseLanguageModel, prompt: ~langchain.schema.prompt_template.BasePromptTemplate = PromptTemplate(input_variables=['summary', 'new_lines'], output_parser=None, partial_variables={}, template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\\n\\nEXAMPLE\\nCurrent summary:\\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\\n\\nNew lines of conversation:\\nHuman: Why do you think artificial intelligence is a force for good?\\nAI: Because artificial intelligence will help humans reach their full potential.\\n\\nNew summary:\\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\\nEND OF EXAMPLE\\n\\nCurrent summary:\\n{summary}\\n\\nNew lines of conversation:\\n{new_lines}\\n\\nNew summary:', template_format='f-string', validate_template=True), summary_message_cls: ~typing.Type[~langchain.schema.messages.BaseMessage] = , chat_memory: ~langchain.schema.memory.BaseChatMessageHistory = None, output_key: ~typing.Optional[str] = None, input_key: ~typing.Optional[str] = None, return_messages: bool = False, buffer: str = '', memory_key: str = 'history')[source]\u00b6\nBases: BaseChatMemory, SummarizerMixin\nConversation summarizer to memory.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam ai_prefix: str = 'AI'\u00b6\nparam buffer: str = ''\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.summary.ConversationSummaryMemory.html"} {"id": "fa31629b4d9d-1", "text": "param ai_prefix: str = 'AI'\u00b6\nparam buffer: str = ''\u00b6\nparam chat_memory: BaseChatMessageHistory [Optional]\u00b6\nparam human_prefix: str = 'Human'\u00b6\nparam input_key: Optional[str] = None\u00b6\nparam llm: BaseLanguageModel [Required]\u00b6\nparam output_key: Optional[str] = None\u00b6\nparam prompt: BasePromptTemplate = PromptTemplate(input_variables=['summary', 'new_lines'], output_parser=None, partial_variables={}, template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\\n\\nEXAMPLE\\nCurrent summary:\\nThe human asks what the AI thinks of artificial intelligence. 
The AI thinks artificial intelligence is a force for good.\\n\\nNew lines of conversation:\\nHuman: Why do you think artificial intelligence is a force for good?\\nAI: Because artificial intelligence will help humans reach their full potential.\\n\\nNew summary:\\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\\nEND OF EXAMPLE\\n\\nCurrent summary:\\n{summary}\\n\\nNew lines of conversation:\\n{new_lines}\\n\\nNew summary:', template_format='f-string', validate_template=True)\u00b6\nparam return_messages: bool = False\u00b6\nparam summary_message_cls: Type[BaseMessage] = <class 'langchain.schema.messages.SystemMessage'>\u00b6\nclear() \u2192 None[source]\u00b6\nClear memory contents.\nclassmethod from_messages(llm: BaseLanguageModel, chat_memory: BaseChatMessageHistory, *, summarize_step: int = 2, **kwargs: Any) \u2192 ConversationSummaryMemory[source]\u00b6\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, Any][source]\u00b6\nReturn history buffer.\npredict_new_summary(messages: List[BaseMessage], existing_summary: str) \u2192 str\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.summary.ConversationSummaryMemory.html"} {"id": "fa31629b4d9d-2", "text": "predict_new_summary(messages: List[BaseMessage], existing_summary: str) \u2192 str\u00b6\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None[source]\u00b6\nSave context from this conversation to buffer.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_prompt_input_variables\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that prompt input variables are consistent.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.summary.ConversationSummaryMemory.html"} {"id": "31c80b10424e-0", "text": "langchain.memory.buffer.ConversationStringBufferMemory\u00b6\nclass langchain.memory.buffer.ConversationStringBufferMemory(*, human_prefix: str = 'Human', ai_prefix: str = 'AI', buffer: str = '', output_key: Optional[str] = None, input_key: Optional[str] = None, memory_key: str = 'history')[source]\u00b6\nBases: BaseMemory\nBuffer for storing conversation memory.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam ai_prefix: str = 'AI'\u00b6\nPrefix to use for AI generated responses.\nparam buffer: str = ''\u00b6\nparam human_prefix: str = 'Human'\u00b6\nparam input_key: Optional[str] = None\u00b6\nparam output_key: Optional[str] = None\u00b6\nclear() \u2192 None[source]\u00b6\nClear memory contents.\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, str][source]\u00b6\nReturn history buffer.\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None[source]\u00b6\nSave context from this conversation to buffer.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_chains\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that return messages is not True.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.buffer.ConversationStringBufferMemory.html"} {"id": "31c80b10424e-1", "text": "Return a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty memory_variables: List[str]\u00b6\nWill always return list of memory variables.\n:meta private:\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.buffer.ConversationStringBufferMemory.html"} {"id": "a765d120d22a-0", "text": "langchain.memory.readonly.ReadOnlySharedMemory\u00b6\nclass langchain.memory.readonly.ReadOnlySharedMemory(*, memory: BaseMemory)[source]\u00b6\nBases: BaseMemory\nA memory wrapper that is read-only and cannot be changed.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam memory: langchain.schema.memory.BaseMemory [Required]\u00b6\nclear() \u2192 None[source]\u00b6\nNothing to clear, got a memory like a vault.\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, str][source]\u00b6\nLoad memory variables from memory.\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None[source]\u00b6\nNothing should be saved or changed\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty memory_variables: List[str]\u00b6\nReturn memory variables.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.readonly.ReadOnlySharedMemory.html"} {"id": "02c0d994fb63-0", "text": "langchain.memory.chat_message_histories.mongodb.MongoDBChatMessageHistory\u00b6\nclass langchain.memory.chat_message_histories.mongodb.MongoDBChatMessageHistory(connection_string: str, session_id: str, database_name: str = 'chat_history', collection_name: str = 'message_store')[source]\u00b6\nBases: BaseChatMessageHistory\nChat message history that stores history in MongoDB.\nParameters\nconnection_string \u2013 connection string to connect to MongoDB\nsession_id \u2013 arbitrary key that is used to store the messages\nof a single chat session.\ndatabase_name \u2013 name of the database to use\ncollection_name \u2013 name of the collection to use\nMethods\n__init__(connection_string,\u00a0session_id[,\u00a0...])\nadd_ai_message(message)\nConvenience method for adding an AI message string to the store.\nadd_message(message)\nAppend the message to the record in MongoDB\nadd_user_message(message)\nConvenience method for adding a human message string to the store.\nclear()\nClear session memory from MongoDB\nAttributes\nmessages\nRetrieve the messages from MongoDB\nadd_ai_message(message: str) \u2192 None\u00b6\nConvenience method for adding an AI message string to the store.\nParameters\nmessage \u2013 The string contents of an AI message.\nadd_message(message: BaseMessage) \u2192 None[source]\u00b6\nAppend the message to the record in MongoDB\nadd_user_message(message: str) \u2192 None\u00b6\nConvenience method for adding a human message string to the store.\nParameters\nmessage \u2013 The string contents of a human message.\nclear() \u2192 None[source]\u00b6\nClear session memory from MongoDB\nproperty messages: List[langchain.schema.messages.BaseMessage]\u00b6\nRetrieve the messages from MongoDB", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.mongodb.MongoDBChatMessageHistory.html"} {"id": "c97aa06f143a-0", "text": "langchain.memory.chat_message_histories.momento.MomentoChatMessageHistory\u00b6\nclass langchain.memory.chat_message_histories.momento.MomentoChatMessageHistory(session_id: str, cache_client: momento.CacheClient, cache_name: str, *, key_prefix: str = 'message_store:', ttl: Optional[timedelta] = None, ensure_cache_exists: bool = True)[source]\u00b6\nBases: BaseChatMessageHistory\nChat message history cache that uses Momento as a backend.\nSee https://gomomento.com/\nInstantiate a chat message history cache that uses Momento as a backend.\nNote: to instantiate the cache client passed to MomentoChatMessageHistory,\nyou must have a Momento account at https://gomomento.com/.\nParameters\nsession_id (str) \u2013 The session ID to use for this chat session.\ncache_client (CacheClient) \u2013 The Momento cache client.\ncache_name (str) \u2013 The name of the cache to use to store the messages.\nkey_prefix (str, optional) \u2013 The prefix to apply to the cache key.\nDefaults to \u201cmessage_store:\u201d.\nttl (Optional[timedelta], optional) \u2013 The TTL to use for the messages.\nDefaults to None, ie the default TTL of the cache will be 
used.\nensure_cache_exists (bool, optional) \u2013 Create the cache if it doesn\u2019t exist.\nDefaults to True.\nRaises\nImportError \u2013 Momento python package is not installed.\nTypeError \u2013 cache_client is not of type momento.CacheClientObject\nMethods\n__init__(session_id,\u00a0cache_client,\u00a0cache_name,\u00a0*)\nInstantiate a chat message history cache that uses Momento as a backend.\nadd_ai_message(message)\nConvenience method for adding an AI message string to the store.\nadd_message(message)\nStore a message in the cache.\nadd_user_message(message)", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.momento.MomentoChatMessageHistory.html"} {"id": "c97aa06f143a-1", "text": "add_message(message)\nStore a message in the cache.\nadd_user_message(message)\nConvenience method for adding a human message string to the store.\nclear()\nRemove the session's messages from the cache.\nfrom_client_params(session_id,\u00a0cache_name,\u00a0...)\nConstruct cache from CacheClient parameters.\nAttributes\nmessages\nRetrieve the messages from Momento.\nadd_ai_message(message: str) \u2192 None\u00b6\nConvenience method for adding an AI message string to the store.\nParameters\nmessage \u2013 The string contents of an AI message.\nadd_message(message: BaseMessage) \u2192 None[source]\u00b6\nStore a message in the cache.\nParameters\nmessage (BaseMessage) \u2013 The message object to store.\nRaises\nSdkException \u2013 Momento service or network error.\nException \u2013 Unexpected response.\nadd_user_message(message: str) \u2192 None\u00b6\nConvenience method for adding a human message string to the store.\nParameters\nmessage \u2013 The string contents of a human message.\nclear() \u2192 None[source]\u00b6\nRemove the session\u2019s messages from the cache.\nRaises\nSdkException \u2013 Momento service or network error.\nException \u2013 Unexpected response.\nclassmethod from_client_params(session_id: str, cache_name: str, ttl: timedelta, *, configuration: Optional[momento.config.Configuration] = None, auth_token: Optional[str] = None, **kwargs: Any) \u2192 MomentoChatMessageHistory[source]\u00b6\nConstruct cache from CacheClient parameters.\nproperty messages: list[langchain.schema.messages.BaseMessage]\u00b6\nRetrieve the messages from Momento.\nRaises\nSdkException \u2013 Momento service or network error\nException \u2013 Unexpected response\nReturns\nList of cached messages\nReturn type\nlist[BaseMessage]", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.momento.MomentoChatMessageHistory.html"} {"id": "3fe35497059c-0", "text": "langchain.memory.chat_message_histories.redis.RedisChatMessageHistory\u00b6\nclass langchain.memory.chat_message_histories.redis.RedisChatMessageHistory(session_id: str, url: str = 'redis://localhost:6379/0', key_prefix: str = 'message_store:', ttl: Optional[int] = None)[source]\u00b6\nBases: BaseChatMessageHistory\nChat message history stored in a Redis database.\nMethods\n__init__(session_id[,\u00a0url,\u00a0key_prefix,\u00a0ttl])\nadd_ai_message(message)\nConvenience method for adding an AI message string to the store.\nadd_message(message)\nAppend the message to the record in Redis\nadd_user_message(message)\nConvenience method for adding a human message string to the store.\nclear()\nClear session memory from Redis\nAttributes\nkey\nConstruct the record key to use\nmessages\nRetrieve the messages from Redis\nadd_ai_message(message: str) \u2192 
None\u00b6\nConvenience method for adding an AI message string to the store.\nParameters\nmessage \u2013 The string contents of an AI message.\nadd_message(message: BaseMessage) \u2192 None[source]\u00b6\nAppend the message to the record in Redis\nadd_user_message(message: str) \u2192 None\u00b6\nConvenience method for adding a human message string to the store.\nParameters\nmessage \u2013 The string contents of a human message.\nclear() \u2192 None[source]\u00b6\nClear session memory from Redis\nproperty key: str\u00b6\nConstruct the record key to use\nproperty messages: List[langchain.schema.messages.BaseMessage]\u00b6\nRetrieve the messages from Redis", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.redis.RedisChatMessageHistory.html"} {"id": "a29ebbe687ce-0", "text": "langchain.memory.chat_message_histories.cosmos_db.CosmosDBChatMessageHistory\u00b6\nclass langchain.memory.chat_message_histories.cosmos_db.CosmosDBChatMessageHistory(cosmos_endpoint: str, cosmos_database: str, cosmos_container: str, session_id: str, user_id: str, credential: Any = None, connection_string: Optional[str] = None, ttl: Optional[int] = None, cosmos_client_kwargs: Optional[dict] = None)[source]\u00b6\nBases: BaseChatMessageHistory\nChat history backed by Azure CosmosDB.\nInitializes a new instance of the CosmosDBChatMessageHistory class.\nMake sure to call prepare_cosmos or use the context manager to make\nsure your database is ready.\nEither a credential or a connection string must be provided.\nParameters\ncosmos_endpoint \u2013 The connection endpoint for the Azure Cosmos DB account.\ncosmos_database \u2013 The name of the database to use.\ncosmos_container \u2013 The name of the container to use.\nsession_id \u2013 The session ID to use, can be overwritten while loading.\nuser_id \u2013 The user ID to use, can be overwritten while loading.\ncredential \u2013 The credential to use to authenticate to Azure Cosmos DB.\nconnection_string \u2013 The connection string to use to authenticate.\nttl \u2013 The time to live (in seconds) to use for documents in the container.\ncosmos_client_kwargs \u2013 Additional kwargs to pass to the CosmosClient.\nMethods\n__init__(cosmos_endpoint,\u00a0cosmos_database,\u00a0...)\nInitializes a new instance of the CosmosDBChatMessageHistory class.\nadd_ai_message(message)\nConvenience method for adding an AI message string to the store.\nadd_message(message)\nAdd a self-created message to the store\nadd_user_message(message)\nConvenience method for adding a human message string to the store.\nclear()\nClear session memory from this memory and cosmos.", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.cosmos_db.CosmosDBChatMessageHistory.html"} {"id": "a29ebbe687ce-1", "text": "clear()\nClear session memory from this memory and cosmos.\nload_messages()\nRetrieve the messages from Cosmos\nprepare_cosmos()\nPrepare the CosmosDB client.\nupsert_messages()\nUpdate the cosmosdb item.\nAttributes\nmessages\nA list of Messages stored in-memory.\nadd_ai_message(message: str) \u2192 None\u00b6\nConvenience method for adding an AI message string to the store.\nParameters\nmessage \u2013 The string contents of an AI message.\nadd_message(message: BaseMessage) \u2192 None[source]\u00b6\nAdd a self-created message to the store\nadd_user_message(message: str) \u2192 None\u00b6\nConvenience method for adding a human message string to the store.\nParameters\nmessage \u2013 The string contents 
of a human message.\nclear() \u2192 None[source]\u00b6\nClear session memory from this memory and cosmos.\nload_messages() \u2192 None[source]\u00b6\nRetrieve the messages from Cosmos\nprepare_cosmos() \u2192 None[source]\u00b6\nPrepare the CosmosDB client.\nUse this function or the context manager to make sure your database is ready.\nupsert_messages() \u2192 None[source]\u00b6\nUpdate the cosmosdb item.\nmessages: List[BaseMessage]\u00b6\nA list of Messages stored in-memory.", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.cosmos_db.CosmosDBChatMessageHistory.html"} {"id": "1208e37e0f96-0", "text": "langchain.memory.chat_message_histories.zep.ZepChatMessageHistory\u00b6\nclass langchain.memory.chat_message_histories.zep.ZepChatMessageHistory(session_id: str, url: str = 'http://localhost:8000', api_key: Optional[str] = None)[source]\u00b6\nBases: BaseChatMessageHistory\nA ChatMessageHistory implementation that uses Zep as a backend.\nRecommended usage:\n# Set up Zep Chat History\nzep_chat_history = ZepChatMessageHistory(\n session_id=session_id,\n url=ZEP_API_URL,\n api_key=<your_api_key>,\n)\n# Use a standard ConversationBufferMemory to encapsulate the Zep chat history\nmemory = ConversationBufferMemory(\n memory_key=\"chat_history\", chat_memory=zep_chat_history\n)\nZep provides long-term conversation storage for LLM apps. The server stores,\nsummarizes, embeds, indexes, and enriches conversational AI chat\nhistories, and exposes them via simple, low-latency APIs.\nFor server installation instructions and more, see:\nhttps://docs.getzep.com/deployment/quickstart/\nThis class is a thin wrapper around the zep-python package. Additional\nZep functionality is exposed via the zep_summary and zep_messages\nproperties.\nFor more information on the zep-python package, see:\nhttps://github.com/getzep/zep-python\nMethods\n__init__(session_id[,\u00a0url,\u00a0api_key])\nadd_ai_message(message)\nConvenience method for adding an AI message string to the store.\nadd_message(message)\nAppend the message to the Zep memory history\nadd_user_message(message)\nConvenience method for adding a human message string to the store.\nclear()\nClear session memory from Zep.\nsearch(query[,\u00a0metadata,\u00a0limit])", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.zep.ZepChatMessageHistory.html"} {"id": "1208e37e0f96-1", "text": "Clear session memory from Zep.\nsearch(query[,\u00a0metadata,\u00a0limit])\nSearch Zep memory for messages matching the query\nAttributes\nmessages\nRetrieve messages from Zep memory\nzep_messages\nRetrieve messages from Zep memory\nzep_summary\nRetrieve summary from Zep memory\nadd_ai_message(message: str) \u2192 None\u00b6\nConvenience method for adding an AI message string to the store.\nParameters\nmessage \u2013 The string contents of an AI message.\nadd_message(message: BaseMessage) \u2192 None[source]\u00b6\nAppend the message to the Zep memory history\nadd_user_message(message: str) \u2192 None\u00b6\nConvenience method for adding a human message string to the store.\nParameters\nmessage \u2013 The string contents of a human message.\nclear() \u2192 None[source]\u00b6\nClear session memory from Zep. 
Note that Zep is long-term storage for memory\nand this is not advised unless you have specific data retention requirements.\nsearch(query: str, metadata: Optional[Dict] = None, limit: Optional[int] = None) \u2192 List[MemorySearchResult][source]\u00b6\nSearch Zep memory for messages matching the query\nproperty messages: List[langchain.schema.messages.BaseMessage]\u00b6\nRetrieve messages from Zep memory\nproperty zep_messages: List[Message]\u00b6\nRetrieve messages from Zep memory\nproperty zep_summary: Optional[str]\u00b6\nRetrieve summary from Zep memory", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.zep.ZepChatMessageHistory.html"} {"id": "6091813ff6e9-0", "text": "langchain.memory.chat_message_histories.sql.create_message_model\u00b6\nlangchain.memory.chat_message_histories.sql.create_message_model(table_name, DynamicBase)[source]\u00b6\nCreate a message model for a given table name.\n:param table_name: The name of the table to use.\n:param DynamicBase: The base class to use for the model.\nReturns\nThe model class.", "source": "https://api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.sql.create_message_model.html"} {"id": "2b3d881e8bd6-0", "text": "langchain.utils.stringify_dict\u00b6\nlangchain.utils.stringify_dict(data: dict) \u2192 str[source]\u00b6\nStringify a dictionary.\nParameters\ndata \u2013 The dictionary to stringify.\nReturns\nThe stringified dictionary.\nReturn type\nstr", "source": "https://api.python.langchain.com/en/latest/utils/langchain.utils.stringify_dict.html"} {"id": "afefd32e0b97-0", "text": "langchain.utils.get_from_dict_or_env\u00b6\nlangchain.utils.get_from_dict_or_env(data: Dict[str, Any], key: str, env_key: str, default: Optional[str] = None) \u2192 str[source]\u00b6\nGet a value from a dictionary or an environment variable.", "source": "https://api.python.langchain.com/en/latest/utils/langchain.utils.get_from_dict_or_env.html"} {"id": "b9a9f9b37777-0", "text": "langchain.utils.raise_for_status_with_text\u00b6\nlangchain.utils.raise_for_status_with_text(response: Response) \u2192 None[source]\u00b6\nRaise an error with the response text.", "source": "https://api.python.langchain.com/en/latest/utils/langchain.utils.raise_for_status_with_text.html"} {"id": "b3303c264f67-0", "text": "langchain.utils.xor_args\u00b6\nlangchain.utils.xor_args(*arg_groups: Tuple[str, ...]) \u2192 Callable[source]\u00b6\nValidate specified keyword args are mutually exclusive.", "source": "https://api.python.langchain.com/en/latest/utils/langchain.utils.xor_args.html"} {"id": "8cba698c2f92-0", "text": "langchain.utils.stringify_value\u00b6\nlangchain.utils.stringify_value(val: Any) \u2192 str[source]\u00b6\nStringify a value.\nParameters\nval \u2013 The value to stringify.\nReturns\nThe stringified value.\nReturn type\nstr", "source": "https://api.python.langchain.com/en/latest/utils/langchain.utils.stringify_value.html"} {"id": "bd9db7ef914e-0", "text": "langchain.utils.comma_list\u00b6\nlangchain.utils.comma_list(items: List[Any]) \u2192 str[source]\u00b6\nConvert a list of items to a comma-separated string.", "source": "https://api.python.langchain.com/en/latest/utils/langchain.utils.comma_list.html"} {"id": "370c5d20bf49-0", "text": "langchain.utils.get_from_env\u00b6\nlangchain.utils.get_from_env(key: str, env_key: str, default: Optional[str] = None) \u2192 str[source]\u00b6\nGet a value from an environment variable.", "source": "https://api.python.langchain.com/en/latest/utils/langchain.utils.get_from_env.html"}
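The memory classes and chat message history classes documented above share two small interfaces: BaseChatMessageHistory backends (MongoDB, Momento, Redis, Cosmos DB, Zep, SQL) persist raw messages, while BaseMemory subclasses such as ConversationBufferMemory decide what a chain sees. A minimal sketch of wiring the two together, assuming a Redis server is reachable at the default URL and using a placeholder session id:

from langchain.memory import ConversationBufferMemory
from langchain.memory.chat_message_histories.redis import RedisChatMessageHistory

# Any BaseChatMessageHistory backend could be substituted here.
history = RedisChatMessageHistory(
    session_id="example-session",  # arbitrary key grouping one chat session
    url="redis://localhost:6379/0",
)
history.add_user_message("hi!")
history.add_ai_message("hello, how can I help?")

# Wrap the persistent history in a standard memory class.
memory = ConversationBufferMemory(
    memory_key="history", chat_memory=history, return_messages=True
)
print(memory.load_memory_variables({}))

Because every backend implements the same add_message/clear/messages surface, swapping Redis for MongoDBChatMessageHistory or ZepChatMessageHistory only changes the constructor call.
{"id": 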
"632759a6af1e-0", "text": "langchain.utils.guard_import\u00b6\nlangchain.utils.guard_import(module_name: str, *, pip_name: Optional[str] = None, package: Optional[str] = None) \u2192 Any[source]\u00b6\nDynamically imports a module and raises a helpful exception if the module is not\ninstalled.", "source": "https://api.python.langchain.com/en/latest/utils/langchain.utils.guard_import.html"} {"id": "255abe4a1aeb-0", "text": "langchain.utils.mock_now\u00b6\nlangchain.utils.mock_now(dt_value)[source]\u00b6\nContext manager for mocking out datetime.now() in unit tests.\nExample:\nwith mock_now(datetime.datetime(2011, 2, 3, 10, 11)):\nassert datetime.datetime.now() == datetime.datetime(2011, 2, 3, 10, 11)", "source": "https://api.python.langchain.com/en/latest/utils/langchain.utils.mock_now.html"} {"id": "16662ef9856e-0", "text": "langchain.utils.check_package_version\u00b6\nlangchain.utils.check_package_version(package: str, lt_version: Optional[str] = None, lte_version: Optional[str] = None, gt_version: Optional[str] = None, gte_version: Optional[str] = None) \u2192 None[source]\u00b6\nCheck the version of a package.", "source": "https://api.python.langchain.com/en/latest/utils/langchain.utils.check_package_version.html"} {"id": "6cdfaf0a44c8-0", "text": "langchain.tools.scenexplain.tool.SceneXplainInput\u00b6\nclass langchain.tools.scenexplain.tool.SceneXplainInput(*, query: str)[source]\u00b6\nBases: BaseModel\nInput for SceneXplain.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam query: str [Required]\u00b6\nThe link to the image to explain", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.scenexplain.tool.SceneXplainInput.html"} {"id": "76d487ea798f-0", "text": "langchain.tools.playwright.utils.create_async_playwright_browser\u00b6\nlangchain.tools.playwright.utils.create_async_playwright_browser(headless: bool = True) \u2192 AsyncBrowser[source]\u00b6\nCreate an async playwright browser.\nParameters\nheadless \u2013 Whether to run the browser in headless mode. 
Defaults to True.\nReturns\nThe playwright browser.\nReturn type\nAsyncBrowser", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.utils.create_async_playwright_browser.html"} {"id": "f2a41bdc2c3d-0", "text": "langchain.tools.base.ToolMetaclass\u00b6\nclass langchain.tools.base.ToolMetaclass(name: str, bases: Tuple[Type, ...], dct: dict)[source]\u00b6\nBases: ModelMetaclass\nMetaclass for BaseTool to ensure the provided args_schema\nisn\u2019t silently ignored.\nCreate the definition of the new tool class.\nMethods\n__init__(*args,\u00a0**kwargs)\nmro()\nReturn a type's method resolution order.\nregister(subclass)\nRegister a virtual subclass of an ABC.\n__call__(*args, **kwargs)\u00b6\nCall self as a function.\nmro()\u00b6\nReturn a type\u2019s method resolution order.\nregister(subclass)\u00b6\nRegister a virtual subclass of an ABC.\nReturns the subclass, to allow usage as a class decorator.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.base.ToolMetaclass.html"} {"id": "7a96f798aeed-0", "text": "langchain.tools.powerbi.tool.InfoPowerBITool\u00b6\nclass langchain.tools.powerbi.tool.InfoPowerBITool(*, name: str = 'schema_powerbi', description: str = '\\n\u00a0\u00a0\u00a0 Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.\\n\u00a0\u00a0\u00a0 Be sure that the tables actually exist by calling list_tables_powerbi first!\\n\\n\u00a0\u00a0\u00a0 Example Input: \"table1, table2, table3\"\\n\u00a0\u00a0\u00a0 ', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, powerbi: PowerBIDataset)[source]\u00b6\nBases: BaseTool\nTool for getting metadata about a PowerBI Dataset.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = '\\n\u00a0\u00a0\u00a0 Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.\\n\u00a0\u00a0\u00a0 Be sure that the tables actually exist by calling list_tables_powerbi first!\\n\\n\u00a0\u00a0\u00a0 Example Input: \"table1, table2, table3\"\\n\u00a0\u00a0\u00a0 '\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.powerbi.tool.InfoPowerBITool.html"} {"id": "7a96f798aeed-1", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. 
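As a usage illustration for the tool above: InfoPowerBITool is a standard BaseTool, so it is invoked through run() (or arun()) with the comma-separated table list its description asks for. A hedged sketch that assumes an already-configured PowerBIDataset (its constructor arguments are documented with langchain.utilities.powerbi, not here):

from langchain.tools.powerbi.tool import InfoPowerBITool
from langchain.utilities.powerbi import PowerBIDataset

def describe_tables(dataset: PowerBIDataset) -> str:
    """Return schema and sample rows for two (placeholder) tables."""
    tool = InfoPowerBITool(powerbi=dataset)
    # Input format follows the tool description above; an agent would call
    # list_tables_powerbi first to confirm the tables exist.
    return tool.run("table1, table2")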
Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'schema_powerbi'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]\u00b6\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.powerbi.tool.InfoPowerBITool.html"} {"id": "7a96f798aeed-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.powerbi.tool.InfoPowerBITool.html"} {"id": "bdc4682cf764-0", "text": "langchain.tools.gmail.utils.clean_email_body\u00b6\nlangchain.tools.gmail.utils.clean_email_body(body: str) \u2192 str[source]\u00b6\nClean email body.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.utils.clean_email_body.html"} {"id": "006c7a6395f1-0", "text": "langchain.tools.searx_search.tool.SearxSearchResults\u00b6\nclass langchain.tools.searx_search.tool.SearxSearchResults(*, name: str = 'Searx Search Results', description: str = 'A meta search engine.Useful for when you need to answer questions about current events.Input should be a search query. 
Output is a JSON array of the query results', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, wrapper: SearxSearchWrapper, num_results: int = 4, kwargs: dict = None, **extra_data: Any)[source]\u00b6\nBases: BaseTool\nTool that has the capability to query a Searx instance and get back json.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'A meta search engine.Useful for when you need to answer questions about current events.Input should be a search query. Output is a JSON array of the query results'\u00b6\nUsed to tell the model how/when/why to use the tool.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.searx_search.tool.SearxSearchResults.html"} {"id": "006c7a6395f1-1", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam kwargs: dict [Optional]\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'Searx Search Results'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam num_results: int = 4\u00b6\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
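A brief sketch of constructing and calling this tool, assuming a self-hosted Searx/SearxNG instance at a placeholder host (SearxSearchWrapper's own parameters are documented with langchain.utilities.searx_search):

from langchain.tools.searx_search.tool import SearxSearchResults
from langchain.utilities.searx_search import SearxSearchWrapper

# Placeholder host; point this at a reachable Searx/SearxNG instance.
wrapper = SearxSearchWrapper(searx_host="http://127.0.0.1:8888")
tool = SearxSearchResults(wrapper=wrapper, num_results=4)

# Input is a plain search query; output is a JSON array of results,
# per the description above.
print(tool.run("latest langchain release"))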
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\nparam wrapper: langchain.utilities.searx_search.SearxSearchWrapper [Required]\u00b6\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.searx_search.tool.SearxSearchResults.html"} {"id": "006c7a6395f1-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config[source]\u00b6\nBases: object\nPydantic config.\nextra = 'allow'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.searx_search.tool.SearxSearchResults.html"} {"id": "e0453c9d2971-0", "text": "langchain.tools.zapier.tool.ZapierNLARunAction\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.zapier.tool.ZapierNLARunAction.html"} {"id": "e0453c9d2971-1", "text": "class langchain.tools.zapier.tool.ZapierNLARunAction(*, name: str = '', description: str = '', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, api_wrapper: ZapierNLAWrapper = None, action_id: str, params: Optional[dict] = None, base_prompt: str = 'A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example \"get the latest email from my bank\" or \"send a slack message to the #general channel\". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. For example, if the params are [\\'Message_Text\\', \\'Channel\\'], your instruction should be something like \\'send a slack message to the #general channel with the text hello world\\'. Another example: if the params are [\\'Calendar\\', \\'Search_Term\\'], your instruction should be something like \\'find the meeting in my personal calendar at 3pm\\'. Do not make up params, they will be explicitly specified in the tool description. 
If you do not have enough information to fill in the params, just say \\'not enough information provided in the instruction, missing \\'. If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: {zapier_description}, and has params: {params}', zapier_description: str, params_schema: Dict[str,", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.zapier.tool.ZapierNLARunAction.html"} {"id": "e0453c9d2971-2", "text": "and has params: {params}', zapier_description: str, params_schema: Dict[str, str] = None)[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.zapier.tool.ZapierNLARunAction.html"} {"id": "e0453c9d2971-3", "text": "Bases: BaseTool\nExecutes an action that is identified by action_id, must be exposed(enabled) by the current user (associated with the set api_key). Change\nyour exposed actions here: https://nla.zapier.com/demo/start/\nThe return JSON is guaranteed to be less than ~500 words (350\ntokens) making it safe to inject into the prompt of another LLM\ncall.\nParameters\naction_id \u2013 a specific action ID (from list actions) of the action to execute\n(the set api_key must be associated with the action owner)\ninstructions \u2013 a natural language instruction string for using the action\n(eg. \u201cget the latest email from Mike Knoop\u201d for \u201cGmail: find email\u201d action)\nparams \u2013 a dict, optional. Any params provided will override AI guesses\nfrom instructions (see \u201cunderstanding the AI guessing flow\u201d here:\nhttps://nla.zapier.com/docs/using-the-api#ai-guessing)\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam action_id: str [Required]\u00b6\nparam api_wrapper: langchain.utilities.zapier.ZapierNLAWrapper [Optional]\u00b6\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.zapier.tool.ZapierNLARunAction.html"} {"id": "e0453c9d2971-4", "text": "Pydantic model class to validate and parse the tool\u2019s input arguments.\nparam base_prompt: str = 'A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example \"get the latest email from my bank\" or \"send a slack message to the #general channel\". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. For example, if the params are [\\'Message_Text\\', \\'Channel\\'], your instruction should be something like \\'send a slack message to the #general channel with the text hello world\\'. Another example: if the params are [\\'Calendar\\', \\'Search_Term\\'], your instruction should be something like \\'find the meeting in my personal calendar at 3pm\\'. Do not make up params, they will be explicitly specified in the tool description. If you do not have enough information to fill in the params, just say \\'not enough information provided in the instruction, missing \\'. If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: {zapier_description}, and has params: {params}'\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. 
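To make the parameters above concrete, here is a hedged sketch of running one NLA action. It assumes the ZAPIER_NLA_API_KEY environment variable is set (which ZapierNLAWrapper reads) and uses placeholder action details taken from a "list actions" call:

from langchain.tools.zapier.tool import ZapierNLARunAction
from langchain.utilities.zapier import ZapierNLAWrapper

wrapper = ZapierNLAWrapper()  # picks up ZAPIER_NLA_API_KEY from the env
tool = ZapierNLARunAction(
    api_wrapper=wrapper,
    action_id="<action-id-from-list-actions>",  # placeholder
    zapier_description="Gmail: Find Email",     # placeholder
    params_schema={"Search_String": "str"},     # hypothetical schema
)
# The instruction must respect the action's params, per base_prompt above.
print(tool.run("get the latest email from my bank"))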
Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = ''\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.zapier.tool.ZapierNLARunAction.html"} {"id": "e0453c9d2971-5", "text": "Optional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = ''\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam params: Optional[dict] = None\u00b6\nparam params_schema: Dict[str, str] [Optional]\u00b6\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\nparam zapier_description: str [Required]\u00b6\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.zapier.tool.ZapierNLARunAction.html"} {"id": "e0453c9d2971-6", "text": "Raise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nvalidator set_name_description\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.zapier.tool.ZapierNLARunAction.html"} {"id": "644621b54f55-0", "text": "langchain.tools.gmail.utils.import_google\u00b6\nlangchain.tools.gmail.utils.import_google() \u2192 
Tuple[Request, Credentials][source]\u00b6\nImport google libraries.\nReturns\nRequest and Credentials classes.\nReturn type\nTuple[Request, Credentials]", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.utils.import_google.html"} {"id": "f200cfaf7501-0", "text": "langchain.tools.playwright.get_elements.GetElementsTool\u00b6\nclass langchain.tools.playwright.get_elements.GetElementsTool(*, name: str = 'get_elements', description: str = 'Retrieve elements in the current web page matching the given CSS selector', args_schema: ~typing.Type[~pydantic.main.BaseModel] = , return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, sync_browser: Optional['SyncBrowser'] = None, async_browser: Optional['AsyncBrowser'] = None)[source]\u00b6\nBases: BaseBrowserTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Type[BaseModel] = \u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam async_browser: Optional['AsyncBrowser'] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Retrieve elements in the current web page matching the given CSS selector'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.get_elements.GetElementsTool.html"} {"id": "f200cfaf7501-1", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'get_elements'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam sync_browser: Optional['SyncBrowser'] = None\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
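A short sketch of this browser tool in use. It relies on the from_browser constructor shown just below and on create_sync_playwright_browser, the synchronous counterpart of the create_async_playwright_browser helper documented earlier; the selector/attributes input fields are assumed from GetElementsToolInput and should be checked against that schema:

from langchain.tools.playwright.get_elements import GetElementsTool
from langchain.tools.playwright.utils import create_sync_playwright_browser

# from_browser() validates that at least one browser was provided.
browser = create_sync_playwright_browser()
tool = GetElementsTool.from_browser(sync_browser=browser)

# Multi-field input is passed as a dict to run(); a NavigateTool would
# normally load a page first, so a fresh browser returns few elements.
print(tool.run({"selector": "h1", "attributes": ["innerText"]}))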
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.get_elements.GetElementsTool.html"} {"id": "f200cfaf7501-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nclassmethod from_browser(sync_browser: Optional[SyncBrowser] = None, async_browser: Optional[AsyncBrowser] = None) \u2192 BaseBrowserTool\u00b6\nInstantiate the tool.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nvalidator validate_browser_provided\u00a0 \u00bb\u00a0 all fields\u00b6\nCheck that the arguments are valid.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.get_elements.GetElementsTool.html"} {"id": "8318891fd5da-0", "text": "langchain.tools.office365.send_event.O365SendEvent\u00b6\nclass langchain.tools.office365.send_event.O365SendEvent(*, name: str = 'send_event', description: str = 'Use this tool to create and send an event with the provided event fields.', args_schema: ~typing.Type[~langchain.tools.office365.send_event.SendEventSchema] = , return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, account: Account = None)[source]\u00b6\nBases: O365BaseTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam account: Account [Optional]\u00b6\nparam args_schema: Type[langchain.tools.office365.send_event.SendEventSchema] = \u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. 
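A hedged sketch of invoking this tool. The account param above is Optional, so the O365 Account is resolved from the configured credentials when none is passed; the field names below are assumptions based on SendEventSchema and should be verified against that schema:

from langchain.tools.office365.send_event import O365SendEvent

tool = O365SendEvent()  # account resolved from O365 credentials
# Field names assumed from SendEventSchema; verify before relying on them.
tool.run({
    "subject": "Planning sync",
    "body": "Agenda: quarterly roadmap.",
    "start_datetime": "2023-08-01 10:00:00",
    "end_datetime": "2023-08-01 10:30:00",
    "attendees": ["alice@example.com"],
})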
Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Use this tool to create and send an event with the provided event fields.'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.send_event.O365SendEvent.html"} {"id": "8318891fd5da-1", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'send_event'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.send_event.O365SendEvent.html"} {"id": "8318891fd5da-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.send_event.O365SendEvent.html"} {"id": "7ee9724f4674-0", "text": "langchain.tools.spark_sql.tool.InfoSparkSQLTool\u00b6\nclass langchain.tools.spark_sql.tool.InfoSparkSQLTool(*, name: str = 'schema_sql_db', description: str = '\\n\u00a0\u00a0\u00a0 Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.\\n\u00a0\u00a0\u00a0 Be sure that 
the tables actually exist by calling list_tables_sql_db first!\\n\\n\u00a0\u00a0\u00a0 Example Input: \"table1, table2, table3\"\\n\u00a0\u00a0\u00a0 ', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, db: SparkSQL)[source]\u00b6\nBases: BaseSparkSQLTool, BaseTool\nTool for getting metadata about a Spark SQL.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam db: langchain.utilities.spark_sql.SparkSQL [Required]\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.spark_sql.tool.InfoSparkSQLTool.html"} {"id": "7ee9724f4674-1", "text": "param db: langchain.utilities.spark_sql.SparkSQL [Required]\u00b6\nparam description: str = '\\n\u00a0\u00a0\u00a0 Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.\\n\u00a0\u00a0\u00a0 Be sure that the tables actually exist by calling list_tables_sql_db first!\\n\\n\u00a0\u00a0\u00a0 Example Input: \"table1, table2, table3\"\\n\u00a0\u00a0\u00a0 '\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'schema_sql_db'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
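Usage sketch for InfoSparkSQLTool. It assumes an active Spark session and a schema/table name (`langchain_example`, `titanic`) that are purely illustrative:

```python
# Hedged sketch: SparkSQL wraps the active Spark session for the given schema.
from langchain.utilities.spark_sql import SparkSQL
from langchain.tools.spark_sql.tool import InfoSparkSQLTool

spark_sql = SparkSQL(schema="langchain_example")  # hypothetical schema
tool = InfoSparkSQLTool(db=spark_sql)

# Input is a comma-separated list of table names, per the description above.
print(tool.run("titanic"))  # table schema plus sample rows
```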
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.spark_sql.tool.InfoSparkSQLTool.html"} {"id": "7ee9724f4674-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: Config\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.spark_sql.tool.InfoSparkSQLTool.html"} {"id": "441ee5555ab5-0", "text": "langchain.tools.openapi.utils.api_models.APIPropertyLocation\u00b6\nclass langchain.tools.openapi.utils.api_models.APIPropertyLocation(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]\u00b6\nBases: Enum\nThe location of the property.\nMethods\nfrom_str(location)\nParse an APIPropertyLocation.\nAttributes\nQUERY\nPATH\nHEADER\nCOOKIE\nclassmethod from_str(location: str) \u2192 APIPropertyLocation[source]\u00b6\nParse an APIPropertyLocation.\nCOOKIE = 'cookie'\u00b6\nHEADER = 'header'\u00b6\nPATH = 'path'\u00b6\nQUERY = 'query'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.openapi.utils.api_models.APIPropertyLocation.html"} {"id": "f5524eeeacbd-0", "text": "langchain.tools.azure_cognitive_services.speech2text.AzureCogsSpeech2TextTool\u00b6\nclass langchain.tools.azure_cognitive_services.speech2text.AzureCogsSpeech2TextTool(*, name: str = 'azure_cognitive_services_speech2text', description: str = 'A wrapper around Azure Cognitive Services Speech2Text. Useful for when you need to transcribe audio to text. 
Input should be a url to an audio file.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, azure_cogs_key: str = '', azure_cogs_region: str = '', speech_language: str = 'en-US', speech_config: Any = None)[source]\u00b6\nBases: BaseTool\nTool that queries the Azure Cognitive Services Speech2Text API.\nIn order to set this up, follow instructions at:\nhttps://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-speech-to-text?pivots=programming-language-python\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.azure_cognitive_services.speech2text.AzureCogsSpeech2TextTool.html"} {"id": "f5524eeeacbd-1", "text": "param callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'A wrapper around Azure Cognitive Services Speech2Text. Useful for when you need to transcribe audio to text. Input should be a url to an audio file.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'azure_cognitive_services_speech2text'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.azure_cognitive_services.speech2text.AzureCogsSpeech2TextTool.html"} {"id": "f5524eeeacbd-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that api key and endpoint exists in environment.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.azure_cognitive_services.speech2text.AzureCogsSpeech2TextTool.html"} {"id": "5560af442764-0", "text": "langchain.tools.requests.tool.RequestsDeleteTool\u00b6\nclass langchain.tools.requests.tool.RequestsDeleteTool(*, name: str = 'requests_delete', description: str = 'A portal to the internet. Use this when you need to make a DELETE request to a URL. Input should be a specific url, and the output will be the text response of the DELETE request.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, requests_wrapper: TextRequestsWrapper)[source]\u00b6\nBases: BaseRequestsTool, BaseTool\nTool for making a DELETE request to an API endpoint.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'A portal to the internet. Use this when you need to make a DELETE request to a URL. 
Input should be a specific url, and the output will be the text response of the DELETE request.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.requests.tool.RequestsDeleteTool.html"} {"id": "5560af442764-1", "text": "You can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'requests_delete'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam requests_wrapper: langchain.requests.TextRequestsWrapper [Required]\u00b6\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.requests.tool.RequestsDeleteTool.html"} {"id": "5560af442764-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.requests.tool.RequestsDeleteTool.html"} {"id": "564093ddcf84-0", "text": "langchain.tools.file_management.delete.DeleteFileTool\u00b6\nclass langchain.tools.file_management.delete.DeleteFileTool(*, name: str = 'file_delete', description: str = 'Delete a file', args_schema: ~typing.Type[~pydantic.main.BaseModel] = , return_direct: bool = False, verbose: bool = False, callbacks: 
~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, root_dir: ~typing.Optional[str] = None)[source]\u00b6\nBases: BaseFileToolMixin, BaseTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Type[pydantic.main.BaseModel] = \u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Delete a file'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.delete.DeleteFileTool.html"} {"id": "564093ddcf84-1", "text": "You can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'file_delete'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam root_dir: Optional[str] = None\u00b6\nThe final path will be chosen relative to root_dir if specified.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
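Usage sketch for DeleteFileTool. The directory path is hypothetical, and the input field name (`file_path`) is taken from the tool's input schema; root_dir acts as a sandbox so the agent cannot delete files outside it:

```python
# Hedged sketch: file_path is resolved relative to root_dir when it is set.
from langchain.tools.file_management.delete import DeleteFileTool

tool = DeleteFileTool(root_dir="/tmp/agent_workspace")  # hypothetical sandbox
print(tool.run({"file_path": "scratch.txt"}))
# -> a confirmation string, or an error description if the file is missing
```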
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.delete.DeleteFileTool.html"} {"id": "564093ddcf84-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nget_relative_path(file_path: str) \u2192 Path\u00b6\nGet the relative path, returning an error if unsupported.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.delete.DeleteFileTool.html"} {"id": "c8e80a0e582a-0", "text": "langchain.tools.file_management.file_search.FileSearchTool\u00b6\nclass langchain.tools.file_management.file_search.FileSearchTool(*, name: str = 'file_search', description: str = 'Recursively search for files in a subdirectory that match the regex pattern', args_schema: ~typing.Type[~pydantic.main.BaseModel] = , return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, root_dir: ~typing.Optional[str] = None)[source]\u00b6\nBases: BaseFileToolMixin, BaseTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Type[pydantic.main.BaseModel] = \u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. 
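Usage sketch for FileSearchTool, following the same sandboxing convention as the other file-management tools. The field names (`dir_path`, `pattern`) are assumed from the tool's input schema and the pattern is a Unix-style glob:

```python
# Hedged sketch: search is rooted at root_dir; dir_path is relative to it.
from langchain.tools.file_management.file_search import FileSearchTool

tool = FileSearchTool(root_dir="/tmp/agent_workspace")  # hypothetical sandbox
print(tool.run({"dir_path": ".", "pattern": "*.txt"}))
# -> newline-separated matching paths, or a "no files found" message
```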
Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Recursively search for files in a subdirectory that match the regex pattern'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.file_search.FileSearchTool.html"} {"id": "c8e80a0e582a-1", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'file_search'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam root_dir: Optional[str] = None\u00b6\nThe final path will be chosen relative to root_dir if specified.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.file_search.FileSearchTool.html"} {"id": "c8e80a0e582a-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nget_relative_path(file_path: str) \u2192 Path\u00b6\nGet the relative path, returning an error if unsupported.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.file_search.FileSearchTool.html"} {"id": "054471d32853-0", "text": "langchain.tools.youtube.search.YouTubeSearchTool\u00b6\nclass 
langchain.tools.youtube.search.YouTubeSearchTool(*, name: str = 'youtube_search', description: str = 'search for youtube videos associated with a person. the input to this tool should be a comma separated list, the first part contains a person name and the second a number that is the maximum number of video results to return aka num_results. the second part is optional', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False)[source]\u00b6\nBases: BaseTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'search for youtube videos associated with a person. the input to this tool should be a comma separated list, the first part contains a person name and the second a number that is the maximum number of video results to return aka num_results. the second part is optional'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.youtube.search.YouTubeSearchTool.html"} {"id": "054471d32853-1", "text": "You can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'youtube_search'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
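Usage sketch for YouTubeSearchTool. It needs the `youtube_search` package; per the description, the input is "person,num_results" with the count optional:

```python
# Hedged sketch: pip install youtube_search first.
from langchain.tools.youtube.search import YouTubeSearchTool

tool = YouTubeSearchTool()
print(tool.run("lex fridman,3"))  # string containing up to three video links
```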
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.youtube.search.YouTubeSearchTool.html"} {"id": "054471d32853-2", "text": "Run the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.youtube.search.YouTubeSearchTool.html"} {"id": "f4d1f04ad762-0", "text": "langchain.tools.gmail.utils.get_gmail_credentials\u00b6\nlangchain.tools.gmail.utils.get_gmail_credentials(token_file: Optional[str] = None, client_secrets_file: Optional[str] = None, scopes: Optional[List[str]] = None) \u2192 Credentials[source]\u00b6\nGet credentials.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.utils.get_gmail_credentials.html"} {"id": "a44f13bb88d2-0", "text": "langchain.tools.steamship_image_generation.tool.SteamshipImageGenerationTool\u00b6\nclass langchain.tools.steamship_image_generation.tool.SteamshipImageGenerationTool(*, name: str = 'GenerateImage', description: str = 'Useful for when you need to generate an image.Input: A detailed text-2-image prompt describing an imageOutput: the UUID of a generated image', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, model_name: ModelName, size: Optional[str] = '512x512', steamship: Steamship, return_urls: Optional[bool] = False)[source]\u00b6\nBases: BaseTool\nTool used to generate images from a text-prompt.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input 
arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Useful for when you need to generate an image.Input: A detailed text-2-image prompt describing an imageOutput: the UUID of a generated image'\u00b6\nUsed to tell the model how/when/why to use the tool.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.steamship_image_generation.tool.SteamshipImageGenerationTool.html"} {"id": "a44f13bb88d2-1", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam model_name: ModelName [Required]\u00b6\nparam name: str = 'GenerateImage'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam return_urls: Optional[bool] = False\u00b6\nparam size: Optional[str] = '512x512'\u00b6\nparam steamship: Steamship [Required]\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
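The handle_tool_error field documented on every tool in this section accepts a bool, a string, or a callable. A small sketch of the bool form follows; the `flaky_search` tool and its function are hypothetical, built with the generic Tool helper purely to show the behavior:

```python
# Hedged sketch of handle_tool_error: when True, a ToolException raised inside
# the tool is converted into the tool's string output instead of propagating.
from langchain.tools import Tool
from langchain.tools.base import ToolException

def _always_fails(query: str) -> str:
    raise ToolException(f"upstream service rejected {query!r}")

tool = Tool.from_function(
    func=_always_fails,
    name="flaky_search",                      # hypothetical tool name
    description="Demonstrates tool-level error handling.",
    handle_tool_error=True,                   # return the message, don't raise
)
print(tool.run("test"))  # prints the exception text as the observation
```

Passing a string returns that fixed string on failure, and a callable receives the ToolException and returns the observation, which is useful for redacting error details before they reach the model.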
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.steamship_image_generation.tool.SteamshipImageGenerationTool.html"} {"id": "a44f13bb88d2-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that api key and python package exists in environment.\nvalidator validate_size\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.steamship_image_generation.tool.SteamshipImageGenerationTool.html"} {"id": "ea112eabb8b0-0", "text": "langchain.tools.plugin.ApiConfig\u00b6\nclass langchain.tools.plugin.ApiConfig(*, type: str, url: str, has_user_authentication: Optional[bool] = False)[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam has_user_authentication: Optional[bool] = False\u00b6\nparam type: str [Required]\u00b6\nparam url: str [Required]\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.plugin.ApiConfig.html"} {"id": "1f99370b8c68-0", "text": "langchain.tools.file_management.utils.is_relative_to\u00b6\nlangchain.tools.file_management.utils.is_relative_to(path: Path, root: Path) \u2192 bool[source]\u00b6\nCheck if path is relative to root.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.utils.is_relative_to.html"} {"id": "c0de3aa7769b-0", "text": "langchain.tools.dataforseo_api_search.tool.DataForSeoAPISearchResults\u00b6\nclass langchain.tools.dataforseo_api_search.tool.DataForSeoAPISearchResults(*, name: str = 'DataForSeo Results JSON', description: str = 'A comprehensive Google Search API provided by DataForSeo.This tool is useful for obtaining real-time data on current events or popular searches.The input should be a search query and the output is a JSON object of the query results.', args_schema: 
Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, api_wrapper: DataForSeoAPIWrapper = None)[source]\u00b6\nBases: BaseTool\nTool that has capability to query the DataForSeo Google Search API\nand get back json.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_wrapper: langchain.utilities.dataforseo_api_search.DataForSeoAPIWrapper [Optional]\u00b6\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.dataforseo_api_search.tool.DataForSeoAPISearchResults.html"} {"id": "c0de3aa7769b-1", "text": "param callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'A comprehensive Google Search API provided by DataForSeo.This tool is useful for obtaining real-time data on current events or popular searches.The input should be a search query and the output is a JSON object of the query results.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'DataForSeo Results JSON'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
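Usage sketch for DataForSeoAPISearchResults. The api_wrapper field is marked [Optional], so a default DataForSeoAPIWrapper is assumed to be built from the environment; the credential variable names (DATAFORSEO_LOGIN, DATAFORSEO_PASSWORD) are an assumption from the wrapper's documentation, not confirmed by this page:

```python
# Hedged sketch: assumes DataForSeo credentials are available via environment
# variables consumed by DataForSeoAPIWrapper.
from langchain.tools.dataforseo_api_search.tool import DataForSeoAPISearchResults

tool = DataForSeoAPISearchResults()  # api_wrapper built with defaults
print(tool.run("latest langchain release"))  # JSON string of raw query results
```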
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.dataforseo_api_search.tool.DataForSeoAPISearchResults.html"} {"id": "c0de3aa7769b-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.dataforseo_api_search.tool.DataForSeoAPISearchResults.html"} {"id": "e7b20e3b1d53-0", "text": "langchain.tools.sql_database.tool.BaseSQLDatabaseTool\u00b6\nclass langchain.tools.sql_database.tool.BaseSQLDatabaseTool(*, db: SQLDatabase)[source]\u00b6\nBases: BaseModel\nBase tool for interacting with a SQL database.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam db: langchain.sql_database.SQLDatabase [Required]\u00b6\nmodel Config[source]\u00b6\nBases: Config\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.sql_database.tool.BaseSQLDatabaseTool.html"} {"id": "83afc8b9449d-0", "text": "langchain.tools.python.tool.PythonAstREPLTool\u00b6\nclass langchain.tools.python.tool.PythonAstREPLTool(*, name: str = 'python_repl_ast', description: str = 'A Python shell. Use this to execute python commands. Input should be a valid python command. 
When using this tool, sometimes output is abbreviated - make sure it does not look abbreviated before using it in your answer.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, globals: Optional[Dict] = None, locals: Optional[Dict] = None, sanitize_input: bool = True)[source]\u00b6\nBases: BaseTool\nA tool for running python code in a REPL.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'A Python shell. Use this to execute python commands. Input should be a valid python command. When using this tool, sometimes output is abbreviated - make sure it does not look abbreviated before using it in your answer.'\u00b6\nUsed to tell the model how/when/why to use the tool.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.python.tool.PythonAstREPLTool.html"} {"id": "83afc8b9449d-1", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam globals: Optional[Dict] [Optional]\u00b6\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam locals: Optional[Dict] [Optional]\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'python_repl_ast'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam sanitize_input: bool = True\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
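Usage sketch for PythonAstREPLTool. Objects seeded into `locals` are visible to the executed code, which is how the pandas-dataframe agents expose `df` to the model; the dataframe here is illustrative:

```python
# Hedged sketch: the tool parses the command with ast and returns the value
# of the last expression.
import pandas as pd
from langchain.tools.python.tool import PythonAstREPLTool

df = pd.DataFrame({"a": [1, 2, 3]})
tool = PythonAstREPLTool(locals={"df": df})
print(tool.run("df['a'].sum()"))  # value of the last expression: 6
```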
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.python.tool.PythonAstREPLTool.html"} {"id": "83afc8b9449d-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nvalidator validate_python_version\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate valid python version.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.python.tool.PythonAstREPLTool.html"} {"id": "2ffe30136368-0", "text": "langchain.tools.json.tool.JsonSpec\u00b6\nclass langchain.tools.json.tool.JsonSpec(*, dict_: Dict, max_value_length: int = 200)[source]\u00b6\nBases: BaseModel\nBase class for JSON spec.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam dict_: Dict [Required]\u00b6\nparam max_value_length: int = 200\u00b6\nclassmethod from_file(path: Path) \u2192 JsonSpec[source]\u00b6\nCreate a JsonSpec from a file.\nkeys(text: str) \u2192 str[source]\u00b6\nReturn the keys of the dict at the given path.\nParameters\ntext \u2013 Python representation of the path to the dict (e.g. data[\u201ckey1\u201d][0][\u201ckey2\u201d]).\nvalue(text: str) \u2192 str[source]\u00b6\nReturn the value of the dict at the given path.\nParameters\ntext \u2013 Python representation of the path to the dict (e.g. 
data[\u201ckey1\u201d][0][\u201ckey2\u201d]).", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.json.tool.JsonSpec.html"} {"id": "8178d76f163d-0", "text": "langchain.tools.sql_database.tool.InfoSQLDatabaseTool\u00b6\nclass langchain.tools.sql_database.tool.InfoSQLDatabaseTool(*, name: str = 'sql_db_schema', description: str = '\\n\u00a0\u00a0\u00a0 Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.\u00a0\u00a0\u00a0 \\n\\n\u00a0\u00a0\u00a0 Example Input: \"table1, table2, table3\"\\n\u00a0\u00a0\u00a0 ', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, db: SQLDatabase)[source]\u00b6\nBases: BaseSQLDatabaseTool, BaseTool\nTool for getting metadata about a SQL database.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam db: langchain.sql_database.SQLDatabase [Required]\u00b6\nparam description: str = '\\n\u00a0\u00a0\u00a0 Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.\u00a0\u00a0\u00a0 \\n\\n\u00a0\u00a0\u00a0 Example Input: \"table1, table2, table3\"\\n\u00a0\u00a0\u00a0 '\u00b6\nUsed to tell the model how/when/why to use the tool.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.sql_database.tool.InfoSQLDatabaseTool.html"} {"id": "8178d76f163d-1", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'sql_db_schema'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
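Usage sketch for InfoSQLDatabaseTool against a local SQLite file (the path and table names are hypothetical). The input format matches the description above, a comma-separated list of tables:

```python
# Hedged sketch: any SQLAlchemy-compatible URI works with SQLDatabase.from_uri.
from langchain.sql_database import SQLDatabase
from langchain.tools.sql_database.tool import InfoSQLDatabaseTool

db = SQLDatabase.from_uri("sqlite:///Chinook.db")  # hypothetical database file
tool = InfoSQLDatabaseTool(db=db)
print(tool.run("Artist, Album"))  # CREATE TABLE statements plus sample rows
```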
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.sql_database.tool.InfoSQLDatabaseTool.html"} {"id": "8178d76f163d-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: Config\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.sql_database.tool.InfoSQLDatabaseTool.html"} {"id": "be6be1d6e711-0", "text": "langchain.tools.azure_cognitive_services.utils.detect_file_src_type\u00b6\nlangchain.tools.azure_cognitive_services.utils.detect_file_src_type(file_path: str) \u2192 str[source]\u00b6\nDetect if the file is local or remote.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.azure_cognitive_services.utils.detect_file_src_type.html"} {"id": "17c8c6e33b47-0", "text": "langchain.tools.python.tool.PythonREPLTool\u00b6\nclass langchain.tools.python.tool.PythonREPLTool(*, name: str = 'Python_REPL', description: str = 'A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, python_repl: PythonREPL = None, sanitize_input: bool = True)[source]\u00b6\nBases: BaseTool\nA tool for running python code in a REPL.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. 
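Usage sketch for PythonREPLTool. Unlike PythonAstREPLTool, output comes from stdout, so values must be printed, exactly as the tool description instructs:

```python
# Hedged sketch: the REPL captures stdout and returns it as the observation.
from langchain.tools.python.tool import PythonREPLTool

tool = PythonREPLTool()
print(tool.run("print(2 ** 10)"))  # -> "1024\n"
```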
Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.python.tool.PythonREPLTool.html"} {"id": "17c8c6e33b47-1", "text": "You can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'Python_REPL'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam python_repl: langchain.utilities.python.PythonREPL [Optional]\u00b6\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam sanitize_input: bool = True\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.python.tool.PythonREPLTool.html"} {"id": "17c8c6e33b47-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.python.tool.PythonREPLTool.html"} {"id": "3e6a1f944d40-0", "text": "langchain.tools.arxiv.tool.ArxivQueryRun\u00b6\nclass 
langchain.tools.arxiv.tool.ArxivQueryRun(*, name: str = 'arxiv', description: str = 'A wrapper around Arxiv.org Useful for when you need to answer questions about Physics, Mathematics, Computer Science, Quantitative Biology, Quantitative Finance, Statistics, Electrical Engineering, and Economics from scientific articles on arxiv.org. Input should be a search query.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, api_wrapper: ArxivAPIWrapper = None)[source]\u00b6\nBases: BaseTool\nTool that adds the capability to search using the Arxiv API.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_wrapper: langchain.utilities.arxiv.ArxivAPIWrapper [Optional]\u00b6\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.arxiv.tool.ArxivQueryRun.html"} {"id": "3e6a1f944d40-1", "text": "param callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'A wrapper around Arxiv.org Useful for when you need to answer questions about Physics, Mathematics, Computer Science, Quantitative Biology, Quantitative Finance, Statistics, Electrical Engineering, and Economics from scientific articles on arxiv.org. Input should be a search query.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'arxiv'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
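A usage sketch for ArxivQueryRun (illustrative): api_wrapper is marked [Optional] above and is built automatically, but the arxiv client package must be installed for that default to work.

```python
from langchain.tools.arxiv.tool import ArxivQueryRun

tool = ArxivQueryRun()  # default ArxivAPIWrapper; assumes `pip install arxiv`
# Returns a text digest (title, authors, summary) of the top matching papers.
print(tool.run("quantum error correction"))
```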
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.arxiv.tool.ArxivQueryRun.html"} {"id": "3e6a1f944d40-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.arxiv.tool.ArxivQueryRun.html"} {"id": "fc405b04b552-0", "text": "langchain.tools.brave_search.tool.BraveSearch\u00b6\nclass langchain.tools.brave_search.tool.BraveSearch(*, name: str = 'brave_search', description: str = 'a search engine. useful for when you need to answer questions about current events. input should be a search query.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, search_wrapper: BraveSearchWrapper)[source]\u00b6\nBases: BaseTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'a search engine. useful for when you need to answer questions about current events. 
input should be a search query.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.brave_search.tool.BraveSearch.html"} {"id": "fc405b04b552-1", "text": "param metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'brave_search'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam search_wrapper: BraveSearchWrapper [Required]\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nclassmethod from_api_key(api_key: str, search_kwargs: Optional[dict] = None, **kwargs: Any) \u2192 BraveSearch[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.brave_search.tool.BraveSearch.html"} {"id": "fc405b04b552-2", "text": "validator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.brave_search.tool.BraveSearch.html"} {"id": "20aff4c4f090-0", "text": "langchain.tools.playwright.utils.run_async\u00b6\nlangchain.tools.playwright.utils.run_async(coro: Coroutine[Any, Any, T]) \u2192 T[source]\u00b6\nRun an async coroutine.\nParameters\ncoro \u2013 The coroutine to run. 
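A sketch of the from_api_key constructor documented above for BraveSearch (the API key string is a placeholder; search_kwargs are passed through to the underlying BraveSearchWrapper):

```python
from langchain.tools.brave_search.tool import BraveSearch

# from_api_key builds the BraveSearchWrapper for you; replace the
# placeholder key with a real Brave Search API key.
tool = BraveSearch.from_api_key(api_key="<your-key>", search_kwargs={"count": 3})
print(tool.run("latest langchain release"))
```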
Coroutine[Any, Any, T]\nReturns\nThe result of the coroutine.\nReturn type\nT", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.utils.run_async.html"} {"id": "e1336beea560-0", "text": "langchain.tools.playwright.base.BaseBrowserTool\u00b6\nclass langchain.tools.playwright.base.BaseBrowserTool(*, name: str, description: str, args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, sync_browser: Optional['SyncBrowser'] = None, async_browser: Optional['AsyncBrowser'] = None)[source]\u00b6\nBases: BaseTool\nBase class for browser tools.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam async_browser: Optional['AsyncBrowser'] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str [Required]\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.base.BaseBrowserTool.html"} {"id": "e1336beea560-1", "text": "This metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str [Required]\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam sync_browser: Optional['SyncBrowser'] = None\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
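A minimal sketch of the run_async utility documented above (assumes the playwright extra is installed so the module imports cleanly; the add coroutine is a stand-in for real async browser work):

```python
import asyncio
from langchain.tools.playwright.utils import run_async

async def add(a: int, b: int) -> int:
    await asyncio.sleep(0)  # stand-in for real async work
    return a + b

# run_async drives the coroutine to completion and returns its result (type T).
assert run_async(add(2, 3)) == 5
```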
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nclassmethod from_browser(sync_browser: Optional[SyncBrowser] = None, async_browser: Optional[AsyncBrowser] = None) \u2192 BaseBrowserTool[source]\u00b6\nInstantiate the tool.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.base.BaseBrowserTool.html"} {"id": "e1336beea560-2", "text": "Raise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nvalidator validate_browser_provided\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nCheck that the arguments are valid.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.base.BaseBrowserTool.html"} {"id": "bff1df85f2da-0", "text": "langchain.tools.interaction.tool.StdInInquireTool\u00b6\nlangchain.tools.interaction.tool.StdInInquireTool(*args: Any, **kwargs: Any) \u2192 HumanInputRun[source]\u00b6\nTool for asking the user for input.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.interaction.tool.StdInInquireTool.html"} {"id": "85d81de74aa1-0", "text": "langchain.tools.base.ToolException\u00b6\nclass langchain.tools.base.ToolException[source]\u00b6\nBases: Exception\nAn optional exception that a tool throws when an execution error occurs.\nWhen this exception is thrown, the agent will not stop working;\nit will handle the exception according to the handle_tool_error\nvariable of the tool, and the processing result will be returned\nto the agent as an observation and printed in red on the console.\nadd_note()\u00b6\nException.add_note(note) \u2013\nadd a note to the exception\nwith_traceback()\u00b6\nException.with_traceback(tb) \u2013\nset self.__traceback__ to tb and return self.\nargs\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.base.ToolException.html"} {"id": "7b346b5caebc-0", "text": "langchain.tools.azure_cognitive_services.form_recognizer.AzureCogsFormRecognizerTool\u00b6\nclass langchain.tools.azure_cognitive_services.form_recognizer.AzureCogsFormRecognizerTool(*, name: str = 
'azure_cognitive_services_form_recognizer', description: str = 'A wrapper around Azure Cognitive Services Form Recognizer. Useful for when you need to extract text, tables, and key-value pairs from documents. Input should be a url to a document.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, azure_cogs_key: str = '', azure_cogs_endpoint: str = '', doc_analysis_client: Any = None)[source]\u00b6\nBases: BaseTool\nTool that queries the Azure Cognitive Services Form Recognizer API.\nIn order to set this up, follow instructions at:\nhttps://learn.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/quickstarts/get-started-sdks-rest-api?view=form-recog-3.0.0&pivots=programming-language-python\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.azure_cognitive_services.form_recognizer.AzureCogsFormRecognizerTool.html"} {"id": "7b346b5caebc-1", "text": "Deprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'A wrapper around Azure Cognitive Services Form Recognizer. Useful for when you need to extract text, tables, and key-value pairs from documents. Input should be a url to a document.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'azure_cognitive_services_form_recognizer'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
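An illustrative setup for AzureCogsFormRecognizerTool (the key, endpoint, and document URL are placeholders; the azure-ai-formrecognizer client package is assumed to be installed):

```python
from langchain.tools.azure_cognitive_services.form_recognizer import (
    AzureCogsFormRecognizerTool,
)

# The credentials can also come from the environment; validate_environment
# checks for them at construction time.
tool = AzureCogsFormRecognizerTool(
    azure_cogs_key="<your-key>",
    azure_cogs_endpoint="https://<your-resource>.cognitiveservices.azure.com/",
)
# Input is a URL (or local path) to a document; the output summarizes the
# extracted text, tables, and key-value pairs.
print(tool.run("https://example.com/invoice.pdf"))
```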
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.azure_cognitive_services.form_recognizer.AzureCogsFormRecognizerTool.html"} {"id": "7b346b5caebc-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the API key and endpoint exist in the environment.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.azure_cognitive_services.form_recognizer.AzureCogsFormRecognizerTool.html"} {"id": "374649917e2a-0", "text": "langchain.tools.requests.tool.BaseRequestsTool\u00b6\nclass langchain.tools.requests.tool.BaseRequestsTool(*, requests_wrapper: TextRequestsWrapper)[source]\u00b6\nBases: BaseModel\nBase class for requests tools.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam requests_wrapper: langchain.requests.TextRequestsWrapper [Required]\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.requests.tool.BaseRequestsTool.html"} {"id": "f4d6505b180f-0", "text": "langchain.tools.office365.utils.authenticate\u00b6\nlangchain.tools.office365.utils.authenticate() \u2192 Account[source]\u00b6\nAuthenticate using the Microsoft Graph API.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.utils.authenticate.html"} {"id": "05fa10cacb52-0", "text": "langchain.tools.python.tool.sanitize_input\u00b6\nlangchain.tools.python.tool.sanitize_input(query: str) \u2192 str[source]\u00b6\nSanitize input to the Python REPL.\nRemove whitespace, backticks, and a leading python keyword (in case the LLM mistakes the Python console for a terminal).\nParameters\nquery \u2013 The query to sanitize\nReturns\nThe sanitized query\nReturn type\nstr", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.python.tool.sanitize_input.html"} {"id": "80bfc8531db9-0", "text": "langchain.tools.jira.tool.JiraAction\u00b6\nclass 
langchain.tools.jira.tool.JiraAction(*, name: str = '', description: str = '', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, api_wrapper: JiraAPIWrapper = None, mode: str)[source]\u00b6\nBases: BaseTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_wrapper: langchain.utilities.jira.JiraAPIWrapper [Optional]\u00b6\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = ''\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.jira.tool.JiraAction.html"} {"id": "80bfc8531db9-1", "text": "and passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam mode: str [Required]\u00b6\nparam name: str = ''\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
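A sketch of wiring up JiraAction by hand (normally the JiraToolkit does this). The "jql" mode string and the environment-variable names follow JiraAPIWrapper and are assumptions here:

```python
from langchain.utilities.jira import JiraAPIWrapper
from langchain.tools.jira.tool import JiraAction

# JiraAPIWrapper reads JIRA_API_TOKEN, JIRA_USERNAME and JIRA_INSTANCE_URL
# from the environment (assumed to be set).
tool = JiraAction(
    mode="jql",  # which wrapper operation this tool instance dispatches to
    name="jql_query",
    description="Run a JQL query and return matching Jira issues.",
    api_wrapper=JiraAPIWrapper(),
)
print(tool.run('project = "TEST" AND status = "Done"'))
```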
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.jira.tool.JiraAction.html"} {"id": "80bfc8531db9-2", "text": "Raise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.jira.tool.JiraAction.html"} {"id": "cfdbf1bc49d3-0", "text": "langchain.tools.gmail.create_draft.GmailCreateDraft\u00b6\nclass langchain.tools.gmail.create_draft.GmailCreateDraft(*, name: str = 'create_gmail_draft', description: str = 'Use this tool to create a draft email with the provided message fields.', args_schema: ~typing.Type[~langchain.tools.gmail.create_draft.CreateDraftSchema] = , return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, api_resource: Resource = None)[source]\u00b6\nBases: GmailBaseTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_resource: Resource [Optional]\u00b6\nparam args_schema: Type[langchain.tools.gmail.create_draft.CreateDraftSchema] = \u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. 
Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Use this tool to create a draft email with the provided message fields.'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.create_draft.GmailCreateDraft.html"} {"id": "cfdbf1bc49d3-1", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'create_gmail_draft'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.create_draft.GmailCreateDraft.html"} {"id": "cfdbf1bc49d3-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nclassmethod from_api_resource(api_resource: Resource) \u2192 GmailBaseTool\u00b6\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.create_draft.GmailCreateDraft.html"} {"id": "df3dfba2f9b2-0", "text": "langchain.tools.json.tool.JsonListKeysTool\u00b6\nclass langchain.tools.json.tool.JsonListKeysTool(*, name: str = 'json_spec_list_keys', description: str = '\\n\u00a0\u00a0\u00a0 Can be used to list all keys at a given path. 
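A usage sketch for the GmailCreateDraft tool documented above, before the JSON tool's description continues below (the build_resource_service helper and the draft field names are assumptions based on the Gmail toolkit's schema; OAuth credentials come from local token files):

```python
from langchain.tools.gmail.utils import build_resource_service
from langchain.tools.gmail.create_draft import GmailCreateDraft

api_resource = build_resource_service()  # runs the local OAuth flow (assumed helper)
tool = GmailCreateDraft(api_resource=api_resource)

print(
    tool.run(
        {
            "message": "Hello from LangChain!",
            "to": ["recipient@example.com"],
            "subject": "Drafted by a tool",
        }
    )
)
```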
\\n\u00a0\u00a0\u00a0 Before calling this you should be SURE that the path to this exists.\\n\u00a0\u00a0\u00a0 The input is a text representation of the path to the dict in Python syntax (e.g. data[\"key1\"][0][\"key2\"]).\\n\u00a0\u00a0\u00a0 ', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, spec: JsonSpec)[source]\u00b6\nBases: BaseTool\nTool for listing keys in a JSON spec.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.json.tool.JsonListKeysTool.html"} {"id": "df3dfba2f9b2-1", "text": "param callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = '\\n\u00a0\u00a0\u00a0 Can be used to list all keys at a given path. \\n\u00a0\u00a0\u00a0 Before calling this you should be SURE that the path to this exists.\\n\u00a0\u00a0\u00a0 The input is a text representation of the path to the dict in Python syntax (e.g. data[\"key1\"][0][\"key2\"]).\\n\u00a0\u00a0\u00a0 '\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'json_spec_list_keys'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam spec: JsonSpec [Required]\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
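A self-contained sketch of JsonListKeysTool (the dict_ field name on JsonSpec is an assumption; the sample data is made up):

```python
from langchain.tools.json.tool import JsonSpec, JsonListKeysTool

spec = JsonSpec(dict_={"key1": [{"key2": {"name": "Alice", "role": "admin"}}]})
tool = JsonListKeysTool(spec=spec)

# The input is a Python-style path into the data, as the description says;
# this lists the keys "name" and "role".
print(tool.run('data["key1"][0]["key2"]'))
```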
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.json.tool.JsonListKeysTool.html"} {"id": "df3dfba2f9b2-2", "text": "param verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.json.tool.JsonListKeysTool.html"} {"id": "a1722af7608d-0", "text": "langchain.tools.playwright.utils.get_current_page\u00b6\nlangchain.tools.playwright.utils.get_current_page(browser: SyncBrowser) \u2192 SyncPage[source]\u00b6\nGet the current page of the browser.\n:param browser: The browser to get the current page from.\nReturns\nThe current page.\nReturn type\nSyncPage", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.utils.get_current_page.html"} {"id": "d21adac7dd98-0", "text": "langchain.tools.office365.send_message.O365SendMessage\u00b6\nclass langchain.tools.office365.send_message.O365SendMessage(*, name: str = 'send_email', description: str = 'Use this tool to send an email with the provided message fields.', args_schema: ~typing.Type[~langchain.tools.office365.send_message.SendMessageSchema] = , return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, account: Account = None)[source]\u00b6\nBases: O365BaseTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam account: Account [Optional]\u00b6\nparam args_schema: 
Type[langchain.tools.office365.send_message.SendMessageSchema] = \u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Use this tool to send an email with the provided message fields.'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.send_message.O365SendMessage.html"} {"id": "d21adac7dd98-1", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'send_email'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.send_message.O365SendMessage.html"} {"id": "d21adac7dd98-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.send_message.O365SendMessage.html"} {"id": "81d181dbd506-0", "text": "langchain.tools.file_management.copy.FileCopyInput\u00b6\nclass 
langchain.tools.file_management.copy.FileCopyInput(*, source_path: str, destination_path: str)[source]\u00b6\nBases: BaseModel\nInput for CopyFileTool.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam destination_path: str [Required]\u00b6\nPath to save the copied file\nparam source_path: str [Required]\u00b6\nPath of the file to copy", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.copy.FileCopyInput.html"} {"id": "d103aa3bcb60-0", "text": "langchain.tools.azure_cognitive_services.text2speech.AzureCogsText2SpeechTool\u00b6\nclass langchain.tools.azure_cognitive_services.text2speech.AzureCogsText2SpeechTool(*, name: str = 'azure_cognitive_services_text2speech', description: str = 'A wrapper around Azure Cognitive Services Text2Speech. Useful for when you need to convert text to speech. ', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, azure_cogs_key: str = '', azure_cogs_region: str = '', speech_language: str = 'en-US', speech_config: Any = None)[source]\u00b6\nBases: BaseTool\nTool that queries the Azure Cognitive Services Text2Speech API.\nIn order to set this up, follow instructions at:\nhttps://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-text-to-speech?pivots=programming-language-python\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.azure_cognitive_services.text2speech.AzureCogsText2SpeechTool.html"} {"id": "d103aa3bcb60-1", "text": "param callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'A wrapper around Azure Cognitive Services Text2Speech. Useful for when you need to convert text to speech. '\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'azure_cognitive_services_text2speech'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. 
Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.azure_cognitive_services.text2speech.AzureCogsText2SpeechTool.html"} {"id": "d103aa3bcb60-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the API key and endpoint exist in the environment.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.azure_cognitive_services.text2speech.AzureCogsText2SpeechTool.html"} {"id": "0bbeade7993e-0", "text": "langchain.tools.file_management.move.FileMoveInput\u00b6\nclass langchain.tools.file_management.move.FileMoveInput(*, source_path: str, destination_path: str)[source]\u00b6\nBases: BaseModel\nInput for MoveFileTool.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam destination_path: str [Required]\u00b6\nNew path for the moved file\nparam source_path: str [Required]\u00b6\nPath of the file to move", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.move.FileMoveInput.html"} {"id": "8623880de631-0", "text": "langchain.tools.gmail.utils.import_googleapiclient_resource_builder\u00b6\nlangchain.tools.gmail.utils.import_googleapiclient_resource_builder() \u2192 build_resource[source]\u00b6\nImport googleapiclient.discovery.build function.\nReturns\ngoogleapiclient.discovery.build function.\nReturn type\nbuild_resource", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.utils.import_googleapiclient_resource_builder.html"} {"id": "c228bc38bf16-0", "text": "langchain.tools.gmail.utils.import_installed_app_flow\u00b6\nlangchain.tools.gmail.utils.import_installed_app_flow() \u2192 
InstalledAppFlow[source]\u00b6\nImport InstalledAppFlow class.\nReturns\nInstalledAppFlow class.\nReturn type\nInstalledAppFlow", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.utils.import_installed_app_flow.html"} {"id": "9f06a988a7a4-0", "text": "langchain.tools.file_management.write.WriteFileTool\u00b6\nclass langchain.tools.file_management.write.WriteFileTool(*, name: str = 'write_file', description: str = 'Write file to disk', args_schema: ~typing.Type[~pydantic.main.BaseModel] = , return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, root_dir: ~typing.Optional[str] = None)[source]\u00b6\nBases: BaseFileToolMixin, BaseTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Type[pydantic.main.BaseModel] = \u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Write file to disk'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.write.WriteFileTool.html"} {"id": "9f06a988a7a4-1", "text": "You can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'write_file'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam root_dir: Optional[str] = None\u00b6\nThe final path will be chosen relative to root_dir if specified.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
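A sketch of the WriteFileTool documented above, showing the root_dir sandbox (the file_path/text input keys follow the tool's write schema and are assumptions here):

```python
from langchain.tools.file_management.write import WriteFileTool

# file_path is resolved relative to root_dir, which confines agent writes
# to that directory tree.
tool = WriteFileTool(root_dir="/tmp/agent_workspace")
print(tool.run({"file_path": "notes.txt", "text": "hello world"}))
```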
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.write.WriteFileTool.html"} {"id": "9f06a988a7a4-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nget_relative_path(file_path: str) \u2192 Path\u00b6\nGet the relative path, returning an error if unsupported.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.write.WriteFileTool.html"} {"id": "a83690a94293-0", "text": "langchain.tools.office365.send_event.SendEventSchema\u00b6\nclass langchain.tools.office365.send_event.SendEventSchema(*, body: str, attendees: List[str], subject: str, start_datetime: str, end_datetime: str)[source]\u00b6\nBases: BaseModel\nInput for CreateEvent Tool.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam attendees: List[str] [Required]\u00b6\nThe list of attendees for the event.\nparam body: str [Required]\u00b6\nThe message body to include in the event.\nparam end_datetime: str [Required]\u00b6\nThe end datetime for the event in the following format: YYYY-MM-DDTHH:MM:SS\u00b1hh:mm, where \u201cT\u201d separates the date and time components, and the time zone offset is specified as \u00b1hh:mm. For example: \u201c2023-06-09T10:30:00+03:00\u201d represents June 9th, 2023, at 10:30 AM in a time zone with a positive offset of 3 hours from Coordinated Universal Time (UTC).\nparam start_datetime: str [Required]\u00b6\nThe start datetime for the event in the following format: YYYY-MM-DDTHH:MM:SS\u00b1hh:mm, where \u201cT\u201d separates the date and time components, and the time zone offset is specified as \u00b1hh:mm. 
For example: \u201c2023-06-09T10:30:00+03:00\u201d represents June 9th, 2023, at 10:30 AM in a time zone with a positive offset of 3 hours from Coordinated Universal Time (UTC).\nparam subject: str [Required]\u00b6\nThe subject of the event.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.send_event.SendEventSchema.html"} {"id": "22bd942f0496-0", "text": "langchain.tools.wolfram_alpha.tool.WolframAlphaQueryRun\u00b6\nclass langchain.tools.wolfram_alpha.tool.WolframAlphaQueryRun(*, name: str = 'wolfram_alpha', description: str = 'A wrapper around Wolfram Alpha. Useful for when you need to answer questions about Math, Science, Technology, Culture, Society and Everyday Life. Input should be a search query.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, api_wrapper: WolframAlphaAPIWrapper)[source]\u00b6\nBases: BaseTool\nTool that adds the capability to query using the Wolfram Alpha SDK.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_wrapper: langchain.utilities.wolfram_alpha.WolframAlphaAPIWrapper [Required]\u00b6\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'A wrapper around Wolfram Alpha. Useful for when you need to answer questions about Math, Science, Technology, Culture, Society and Everyday Life. Input should be a search query.'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.wolfram_alpha.tool.WolframAlphaQueryRun.html"} {"id": "22bd942f0496-1", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'wolfram_alpha'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
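SendEventSchema, documented above, is a plain pydantic model; constructing it directly shows the required fields and the ±hh:mm datetime format (all values here are invented for illustration):

.. code-block:: python

    from langchain.tools.office365.send_event import SendEventSchema

    event = SendEventSchema(
        subject="Quarterly sync",
        body="Agenda: roadmap review.",
        attendees=["alice@example.com", "bob@example.com"],
        start_datetime="2023-06-09T10:30:00+03:00",  # YYYY-MM-DDTHH:MM:SS±hh:mm
        end_datetime="2023-06-09T11:00:00+03:00",
    )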
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.wolfram_alpha.tool.WolframAlphaQueryRun.html"} {"id": "22bd942f0496-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.wolfram_alpha.tool.WolframAlphaQueryRun.html"} {"id": "49527026f29d-0", "text": "langchain.tools.scenexplain.tool.SceneXplainTool\u00b6\nclass langchain.tools.scenexplain.tool.SceneXplainTool(*, name: str = 'image_explainer', description: str = 'An Image Captioning Tool: Use this tool to generate a detailed caption for an image. The input can be an image file of any format, and the output will be a text description that covers every detail of the image.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, api_wrapper: SceneXplainAPIWrapper = None)[source]\u00b6\nBases: BaseTool\nTool that adds the capability to explain images.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_wrapper: langchain.utilities.scenexplain.SceneXplainAPIWrapper [Optional]\u00b6\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'An Image Captioning Tool: Use this tool to generate a detailed caption for an image. 
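WolframAlphaQueryRun requires its api_wrapper; a sketch assuming a Wolfram Alpha App ID is passed explicitly (the wrapper can usually also pick one up from the WOLFRAM_ALPHA_APPID environment variable):

.. code-block:: python

    from langchain.tools.wolfram_alpha.tool import WolframAlphaQueryRun
    from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper

    wrapper = WolframAlphaAPIWrapper(wolfram_alpha_appid="YOUR-APP-ID")  # placeholder key
    tool = WolframAlphaQueryRun(api_wrapper=wrapper)
    print(tool.run("What is the derivative of x^2?"))  # single string input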
The input can be an image file of any format, and the output will be a text description that covers every detail of the image.'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.scenexplain.tool.SceneXplainTool.html"} {"id": "49527026f29d-1", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'image_explainer'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.scenexplain.tool.SceneXplainTool.html"} {"id": "49527026f29d-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.scenexplain.tool.SceneXplainTool.html"} {"id": "9c6c2fde4181-0", "text": "langchain.tools.gmail.get_message.GmailGetMessage\u00b6\nclass langchain.tools.gmail.get_message.GmailGetMessage(*, name: str = 'get_gmail_message', description: str = 'Use this tool to fetch an email by message ID. 
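Because SceneXplainTool's api_wrapper is [Optional], a wrapper is built from the environment when omitted; a sketch under the assumption that the key is read from a SCENEX_API_KEY environment variable (verify the exact name against SceneXplainAPIWrapper):

.. code-block:: python

    import os
    from langchain.tools.scenexplain.tool import SceneXplainTool

    os.environ["SCENEX_API_KEY"] = "YOUR-KEY"  # assumed variable name; placeholder value
    tool = SceneXplainTool()  # api_wrapper is constructed from the environment
    print(tool.run("https://example.com/image.png"))  # image to caption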
Returns the thread ID, snippet, body, subject, and sender.', args_schema: ~typing.Type[~langchain.tools.gmail.get_message.SearchArgsSchema] = , return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, api_resource: Resource = None)[source]\u00b6\nBases: GmailBaseTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_resource: Resource [Optional]\u00b6\nparam args_schema: Type[langchain.tools.gmail.get_message.SearchArgsSchema] = \u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.get_message.GmailGetMessage.html"} {"id": "9c6c2fde4181-1", "text": "param callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Use this tool to fetch an email by message ID. Returns the thread ID, snippet, body, subject, and sender.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'get_gmail_message'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.get_message.GmailGetMessage.html"} {"id": "9c6c2fde4181-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nclassmethod from_api_resource(api_resource: Resource) \u2192 GmailBaseTool\u00b6\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.get_message.GmailGetMessage.html"} {"id": "0cd6af501625-0", "text": "langchain.tools.office365.utils.clean_body\u00b6\nlangchain.tools.office365.utils.clean_body(body: str) \u2192 str[source]\u00b6\nClean body of a message or event.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.utils.clean_body.html"} {"id": "cac01658ab92-0", "text": "langchain.tools.pubmed.tool.PubmedQueryRun\u00b6\nclass langchain.tools.pubmed.tool.PubmedQueryRun(*, name: str = 'PubMed', description: str = 'A wrapper around PubMed.org Useful for when you need to answer questions about Physics, Mathematics, Computer Science, Quantitative Biology, Quantitative Finance, Statistics, Electrical Engineering, and Economics from scientific articles on PubMed.org. 
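Gmail tools such as GmailGetMessage need an authenticated api_resource; a sketch using the credential helpers in langchain.tools.gmail.utils (file names and the message ID are placeholders):

.. code-block:: python

    from langchain.tools.gmail.get_message import GmailGetMessage
    from langchain.tools.gmail.utils import build_resource_service, get_gmail_credentials

    credentials = get_gmail_credentials(
        token_file="token.json",                # placeholder paths
        client_secrets_file="credentials.json",
        scopes=["https://mail.google.com/"],
    )
    api_resource = build_resource_service(credentials=credentials)

    tool = GmailGetMessage(api_resource=api_resource)
    print(tool.run({"message_id": "MESSAGE-ID"}))  # key per the tool's args schema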
Input should be a search query.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, api_wrapper: PubMedAPIWrapper = None)[source]\u00b6\nBases: BaseTool\nTool that adds the capability to search using the PubMed API.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_wrapper: langchain.utilities.pupmed.PubMedAPIWrapper [Optional]\u00b6\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'A wrapper around PubMed.org Useful for when you need to answer questions about Physics, Mathematics, Computer Science, Quantitative Biology, Quantitative Finance, Statistics, Electrical Engineering, and Economics from scientific articles on PubMed.org. Input should be a search query.'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.pubmed.tool.PubmedQueryRun.html"} {"id": "cac01658ab92-1", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'PubMed'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.pubmed.tool.PubmedQueryRun.html"} {"id": "cac01658ab92-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.pubmed.tool.PubmedQueryRun.html"} {"id": "abbd5ee46021-0", "text": "langchain.tools.dataforseo_api_search.tool.DataForSeoAPISearchRun\u00b6\nclass langchain.tools.dataforseo_api_search.tool.DataForSeoAPISearchRun(*, name: str = 'dataforseo_api_search', description: str = 'A robust Google Search API provided by DataForSeo.This tool is handy when you need information about trending topics or current events.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, api_wrapper: DataForSeoAPIWrapper)[source]\u00b6\nBases: BaseTool\nTool that adds the capability to query the DataForSeo Google search API.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_wrapper: langchain.utilities.dataforseo_api_search.DataForSeoAPIWrapper [Required]\u00b6\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. 
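PubmedQueryRun's api_wrapper is likewise [Optional], so the tool can be constructed with defaults; a minimal sketch:

.. code-block:: python

    from langchain.tools.pubmed.tool import PubmedQueryRun

    tool = PubmedQueryRun()  # a default PubMedAPIWrapper is created when omitted
    print(tool.run("chloroquine clinical trials"))  # plain search-query input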
Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'A robust Google Search API provided by DataForSeo.This tool is handy when you need information about trending topics or current events.'\u00b6\nUsed to tell the model how/when/why to use the tool.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.dataforseo_api_search.tool.DataForSeoAPISearchRun.html"} {"id": "abbd5ee46021-1", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'dataforseo_api_search'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.dataforseo_api_search.tool.DataForSeoAPISearchRun.html"} {"id": "abbd5ee46021-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.dataforseo_api_search.tool.DataForSeoAPISearchRun.html"} {"id": "b6a40f9ec7b5-0", "text": "langchain.tools.playwright.get_elements.GetElementsToolInput\u00b6\nclass langchain.tools.playwright.get_elements.GetElementsToolInput(*, selector: str, 
attributes: List[str] = None)[source]\u00b6\nBases: BaseModel\nInput for GetElementsTool.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam attributes: List[str] [Optional]\u00b6\nSet of attributes to retrieve for each element\nparam selector: str [Required]\u00b6\nCSS selector, such as \u2018*\u2019, \u2018div\u2019, \u2018p\u2019, \u2018a\u2019, #id, .classname", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.get_elements.GetElementsToolInput.html"} {"id": "802c9ad3017b-0", "text": "langchain.tools.gmail.send_message.SendMessageSchema\u00b6\nclass langchain.tools.gmail.send_message.SendMessageSchema(*, message: str, to: Union[str, List[str]], subject: str, cc: Optional[Union[str, List[str]]] = None, bcc: Optional[Union[str, List[str]]] = None)[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam bcc: Optional[Union[str, List[str]]] = None\u00b6\nThe list of BCC recipients.\nparam cc: Optional[Union[str, List[str]]] = None\u00b6\nThe list of CC recipients.\nparam message: str [Required]\u00b6\nThe message to send.\nparam subject: str [Required]\u00b6\nThe subject of the message.\nparam to: Union[str, List[str]] [Required]\u00b6\nThe list of recipients.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.send_message.SendMessageSchema.html"} {"id": "b7b075767734-0", "text": "langchain.tools.sleep.tool.SleepInput\u00b6\nclass langchain.tools.sleep.tool.SleepInput(*, sleep_time: int)[source]\u00b6\nBases: BaseModel\nInput for SleepTool.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam sleep_time: int [Required]\u00b6\nTime to sleep in seconds", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.sleep.tool.SleepInput.html"} {"id": "71d4b04c27aa-0", "text": "langchain.tools.file_management.utils.FileValidationError\u00b6\nclass langchain.tools.file_management.utils.FileValidationError[source]\u00b6\nBases: ValueError\nError for paths outside the root directory.\nadd_note()\u00b6\nException.add_note(note) \u2013\nadd a note to the exception\nwith_traceback()\u00b6\nException.with_traceback(tb) \u2013\nset self.__traceback__ to tb and return self.\nargs\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.utils.FileValidationError.html"} {"id": "451ba649668e-0", "text": "langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksTool\u00b6\nclass langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksTool(*, name: str = 'extract_hyperlinks', description: str = 'Extract all hyperlinks on the current webpage', args_schema: ~typing.Type[~pydantic.main.BaseModel] = , return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, 
~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, sync_browser: Optional['SyncBrowser'] = None, async_browser: Optional['AsyncBrowser'] = None)[source]\u00b6\nBases: BaseBrowserTool\nExtract all hyperlinks on the page.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Type[BaseModel] = \u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam async_browser: Optional['AsyncBrowser'] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksTool.html"} {"id": "451ba649668e-1", "text": "param callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Extract all hyperlinks on the current webpage'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'extract_hyperlinks'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam sync_browser: Optional['SyncBrowser'] = None\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
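The input models above (GetElementsToolInput, SendMessageSchema, SleepInput) are ordinary pydantic models, so malformed input fails at construction rather than at tool runtime; a short sketch:

.. code-block:: python

    from pydantic import ValidationError
    from langchain.tools.gmail.send_message import SendMessageSchema

    # Valid: message, to, and subject are the required fields.
    msg = SendMessageSchema(message="Hi!", to="alice@example.com", subject="Hello")

    try:
        SendMessageSchema(message="Hi!")  # missing required 'to' and 'subject'
    except ValidationError as err:
        print(err)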
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksTool.html"} {"id": "451ba649668e-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator check_bs_import\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nCheck that the arguments are valid.\nclassmethod from_browser(sync_browser: Optional[SyncBrowser] = None, async_browser: Optional[AsyncBrowser] = None) \u2192 BaseBrowserTool\u00b6\nInstantiate the tool.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nstatic scrape_page(page: Any, html_content: str, absolute_urls: bool) \u2192 str[source]\u00b6\nvalidator validate_browser_provided\u00a0 \u00bb\u00a0 all fields\u00b6\nCheck that the arguments are valid.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksTool.html"} {"id": "215989067058-0", "text": "langchain.tools.zapier.tool.ZapierNLAListActions\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.zapier.tool.ZapierNLAListActions.html"} {"id": "215989067058-1", "text": "class langchain.tools.zapier.tool.ZapierNLAListActions(*, name: str = 'ZapierNLA_list_actions', description: str = 'A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example \"get the latest email from my bank\" or \"send a slack message to the #general channel\". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. For example, if the params are [\\'Message_Text\\', \\'Channel\\'], your instruction should be something like \\'send a slack message to the #general channel with the text hello world\\'. Another example: if the params are [\\'Calendar\\', \\'Search_Term\\'], your instruction should be something like \\'find the meeting in my personal calendar at 3pm\\'. Do not make up params, they will be explicitly specified in the tool description. 
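Browser tools such as the ExtractHyperlinksTool just documented are usually obtained from PlayWrightBrowserToolkit rather than constructed by hand; a sketch assuming playwright and its browsers are installed:

.. code-block:: python

    from langchain.agents.agent_toolkits import PlayWrightBrowserToolkit
    from langchain.tools.playwright.utils import create_sync_playwright_browser

    sync_browser = create_sync_playwright_browser()  # launches a local browser
    toolkit = PlayWrightBrowserToolkit.from_browser(sync_browser=sync_browser)
    tools = toolkit.get_tools()
    print([t.name for t in tools])  # 'extract_hyperlinks' should be among them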
If you do not have enough information to fill in the params, just say \\'not enough information provided in the instruction, missing \\'. If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: {zapier_description}, and has params: {params}This tool returns a list of the user\\'s exposed actions.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, api_wrapper: ZapierNLAWrapper = None)[source]\u00b6\nBases: BaseTool", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.zapier.tool.ZapierNLAListActions.html"} {"id": "215989067058-2", "text": "Bases: BaseTool\nReturns a list of all exposed (enabled) actions associated with current user (associated with the set api_key). Change your exposed\nactions here: https://nla.zapier.com/demo/start/\nThe return list can be empty if no actions exposed. Else will contain\na list of action objects:\n[{\u201cid\u201d: str,\n\u201cdescription\u201d: str,\n\u201cparams\u201d: Dict[str, str]\n}]\nparams will always contain an instructions key, the only required\nparam. All others optional and if provided will override any AI guesses\n(see \u201cunderstanding the AI guessing flow\u201d here:\nhttps://nla.zapier.com/docs/using-the-api#ai-guessing)\nParameters\nNone \u2013 \nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_wrapper: langchain.utilities.zapier.ZapierNLAWrapper [Optional]\u00b6\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.zapier.tool.ZapierNLAListActions.html"} {"id": "215989067058-3", "text": "param callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example \"get the latest email from my bank\" or \"send a slack message to the #general channel\". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. For example, if the params are [\\'Message_Text\\', \\'Channel\\'], your instruction should be something like \\'send a slack message to the #general channel with the text hello world\\'. Another example: if the params are [\\'Calendar\\', \\'Search_Term\\'], your instruction should be something like \\'find the meeting in my personal calendar at 3pm\\'. Do not make up params, they will be explicitly specified in the tool description. 
If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: {zapier_description}, and has params: {params}This tool returns a list of the user\\'s exposed actions.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.zapier.tool.ZapierNLAListActions.html"} {"id": "215989067058-4", "text": "and passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'ZapierNLA_list_actions'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.zapier.tool.ZapierNLAListActions.html"} {"id": "215989067058-5", "text": "Raise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.zapier.tool.ZapierNLAListActions.html"} {"id": "ce7e114227e6-0", "text": "langchain.tools.base.StructuredTool\u00b6\nclass langchain.tools.base.StructuredTool(*, name: str, description: str = '', args_schema: Type[BaseModel], return_direct: bool = False, verbose: bool = False, callbacks: 
Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, func: Callable[[...], Any], coroutine: Optional[Callable[[...], Awaitable[Any]]] = None)[source]\u00b6\nBases: BaseTool\nTool that can operate on any number of inputs.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Type[pydantic.main.BaseModel] [Required]\u00b6\nThe input arguments\u2019 schema.\nThe tool schema.\nparam callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None\u00b6\nCallbacks to be called during tool execution.\nparam coroutine: Optional[Callable[[...], Awaitable[Any]]] = None\u00b6\nThe asynchronous version of the function.\nparam description: str = ''\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam func: Callable[[...], Any] [Required]\u00b6\nThe function to run when the tool is called.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.base.StructuredTool.html"} {"id": "ce7e114227e6-1", "text": "The function to run when the tool is called.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str [Required]\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
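ZapierNLAListActions, documented above, is backed by ZapierNLAWrapper; a sketch assuming the NLA API key is passed explicitly (it can typically also come from the ZAPIER_NLA_API_KEY environment variable):

.. code-block:: python

    from langchain.tools.zapier.tool import ZapierNLAListActions
    from langchain.utilities.zapier import ZapierNLAWrapper

    wrapper = ZapierNLAWrapper(zapier_nla_api_key="YOUR-KEY")  # placeholder key
    tool = ZapierNLAListActions(api_wrapper=wrapper)
    print(tool.run(""))  # the instruction text is unused; returns the exposed actions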
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.base.StructuredTool.html"} {"id": "ce7e114227e6-2", "text": "Run the tool asynchronously.\nclassmethod from_function(func: Callable, name: Optional[str] = None, description: Optional[str] = None, return_direct: bool = False, args_schema: Optional[Type[BaseModel]] = None, infer_schema: bool = True, **kwargs: Any) \u2192 StructuredTool[source]\u00b6\nCreate tool from a given function.\nA classmethod that helps to create a tool from a function.\nParameters\nfunc \u2013 The function from which to create a tool\nname \u2013 The name of the tool. Defaults to the function name\ndescription \u2013 The description of the tool. Defaults to the function docstring\nreturn_direct \u2013 Whether to return the result directly or as a callback\nargs_schema \u2013 The schema of the tool\u2019s input arguments\ninfer_schema \u2013 Whether to infer the schema from the function\u2019s signature\n**kwargs \u2013 Additional arguments to pass to the tool\nReturns\nThe tool\nExamples\n.. code-block:: python\n\ndef add(a: int, b: int) -> int:\n    \"\"\"Add two numbers\"\"\"\n    return a + b\n\ntool = StructuredTool.from_function(add)\ntool.run({\"a\": 1, \"b\": 2})  # returns 3\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nThe tool\u2019s input arguments.\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.base.StructuredTool.html"} {"id": "ce7e114227e6-3", "text": "Whether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.base.StructuredTool.html"} {"id": "c9b7abf6bdd7-0", "text": "langchain.tools.file_management.write.WriteFileInput\u00b6\nclass langchain.tools.file_management.write.WriteFileInput(*, file_path: str, text: str, append: bool = False)[source]\u00b6\nBases: BaseModel\nInput for WriteFileTool.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid 
model.\nparam append: bool = False\u00b6\nWhether to append to an existing file.\nparam file_path: str [Required]\u00b6\nname of file\nparam text: str [Required]\u00b6\ntext to write to file", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.write.WriteFileInput.html"} {"id": "aa10c6b56460-0", "text": "langchain.tools.playwright.base.lazy_import_playwright_browsers\u00b6\nlangchain.tools.playwright.base.lazy_import_playwright_browsers() \u2192 Tuple[Type[AsyncBrowser], Type[SyncBrowser]][source]\u00b6\nLazy import playwright browsers.\nReturns\nAsyncBrowser and SyncBrowser classes.\nReturn type\nTuple[Type[AsyncBrowser], Type[SyncBrowser]]", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.base.lazy_import_playwright_browsers.html"} {"id": "4d6afaedadf5-0", "text": "langchain.tools.google_places.tool.GooglePlacesSchema\u00b6\nclass langchain.tools.google_places.tool.GooglePlacesSchema(*, query: str)[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam query: str [Required]\u00b6\nQuery for google maps", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.google_places.tool.GooglePlacesSchema.html"} {"id": "fd63a05ad586-0", "text": "langchain.tools.gmail.base.GmailBaseTool\u00b6\nclass langchain.tools.gmail.base.GmailBaseTool(*, name: str, description: str, args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, api_resource: Resource = None)[source]\u00b6\nBases: BaseTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_resource: Resource [Optional]\u00b6\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str [Required]\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. 
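Beyond inferring the schema from the signature, from_function also accepts an explicit args_schema; a sketch pairing it with a hand-written pydantic model (all names here are illustrative):

.. code-block:: python

    from pydantic import BaseModel
    from langchain.tools.base import StructuredTool

    class MultiplyInput(BaseModel):  # illustrative schema, same role as WriteFileInput
        a: int
        b: int

    def multiply(a: int, b: int) -> int:
        """Multiply two numbers."""
        return a * b

    tool = StructuredTool.from_function(
        multiply,
        name="multiply",
        description="Multiply two integers.",
        args_schema=MultiplyInput,
    )
    print(tool.run({"a": 6, "b": 7}))  # 42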
Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.base.GmailBaseTool.html"} {"id": "fd63a05ad586-1", "text": "and passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str [Required]\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nclassmethod from_api_resource(api_resource: Resource) \u2192 GmailBaseTool[source]\u00b6\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.base.GmailBaseTool.html"} {"id": "fd63a05ad586-2", "text": "Raise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.base.GmailBaseTool.html"} {"id": "e905dd0c16ce-0", "text": "langchain.tools.gmail.search.GmailSearch\u00b6\nclass langchain.tools.gmail.search.GmailSearch(*, name: str = 'search_gmail', description: str = 'Use this tool to search for email messages or threads. The input must be a valid Gmail query. 
The output is a JSON list of the requested resource.', args_schema: ~typing.Type[~langchain.tools.gmail.search.SearchArgsSchema] = , return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, api_resource: Resource = None)[source]\u00b6\nBases: GmailBaseTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_resource: Resource [Optional]\u00b6\nparam args_schema: Type[langchain.tools.gmail.search.SearchArgsSchema] = \u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.search.GmailSearch.html"} {"id": "e905dd0c16ce-1", "text": "param callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Use this tool to search for email messages or threads. The input must be a valid Gmail query. The output is a JSON list of the requested resource.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'search_gmail'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.search.GmailSearch.html"} {"id": "e905dd0c16ce-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nclassmethod from_api_resource(api_resource: Resource) \u2192 GmailBaseTool\u00b6\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.search.GmailSearch.html"} {"id": "eb76ea46dd09-0", "text": "langchain.tools.office365.events_search.SearchEventsInput\u00b6\nclass langchain.tools.office365.events_search.SearchEventsInput(*, start_datetime: str, end_datetime: str, max_results: int = 10, truncate: bool = True)[source]\u00b6\nBases: BaseModel\nInput for SearchEvents Tool.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam end_datetime: str [Required]\u00b6\nThe end datetime for the search query in the following format: YYYY-MM-DDTHH:MM:SS\u00b1hh:mm, where \u201cT\u201d separates the date and time components, and the time zone offset is specified as \u00b1hh:mm. For example: \u201c2023-06-09T10:30:00+03:00\u201d represents June 9th, 2023, at 10:30 AM in a time zone with a positive offset of 3 hours from Coordinated Universal Time (UTC).\nparam max_results: int = 10\u00b6\nThe maximum number of results to return.\nparam start_datetime: str [Required]\u00b6\nThe start datetime for the search query in the following format: YYYY-MM-DDTHH:MM:SS\u00b1hh:mm, where \u201cT\u201d separates the date and time components, and the time zone offset is specified as \u00b1hh:mm. For example: \u201c2023-06-09T10:30:00+03:00\u201d represents June 9th, 2023, at 10:30 AM in a time zone with a positive offset of 3 hours from Coordinated Universal Time (UTC).\nparam truncate: bool = True\u00b6\nWhether the event\u2019s body is truncated to meet token number limits. 
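GmailSearch, whose reference entry appears above, is wired up like the other Gmail tools; a sketch reusing the credential helpers (paths are placeholders, and the query is an ordinary Gmail search string):

.. code-block:: python

    from langchain.tools.gmail.search import GmailSearch
    from langchain.tools.gmail.utils import build_resource_service, get_gmail_credentials

    credentials = get_gmail_credentials(
        token_file="token.json",                # placeholder paths
        client_secrets_file="credentials.json",
        scopes=["https://mail.google.com/"],
    )
    tool = GmailSearch(api_resource=build_resource_service(credentials=credentials))
    print(tool.run({"query": "from:noreply@github.com", "max_results": 5}))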
Set to False for searches that will retrieve very few results, otherwise, set to True.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.events_search.SearchEventsInput.html"} {"id": "18b66dbb59a0-0", "text": "langchain.tools.bing_search.tool.BingSearchRun\u00b6\nclass langchain.tools.bing_search.tool.BingSearchRun(*, name: str = 'bing_search', description: str = 'A wrapper around Bing Search. Useful for when you need to answer questions about current events. Input should be a search query.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, api_wrapper: BingSearchAPIWrapper)[source]\u00b6\nBases: BaseTool\nTool that adds the capability to query the Bing search API.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_wrapper: langchain.utilities.bing_search.BingSearchAPIWrapper [Required]\u00b6\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'A wrapper around Bing Search. Useful for when you need to answer questions about current events. Input should be a search query.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.bing_search.tool.BingSearchRun.html"} {"id": "18b66dbb59a0-1", "text": "You can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'bing_search'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
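To make the datetime format above concrete, a minimal sketch constructing the SearchEventsInput model; every field name and the example offsets come straight from the documentation above.

from langchain.tools.office365.events_search import SearchEventsInput

window = SearchEventsInput(
    start_datetime="2023-06-09T10:30:00+03:00",  # YYYY-MM-DDTHH:MM:SS±hh:mm
    end_datetime="2023-06-09T18:00:00+03:00",
    max_results=10,
    truncate=True,  # leave True unless very few results are expected
)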
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.bing_search.tool.BingSearchRun.html"} {"id": "18b66dbb59a0-2", "text": "Run the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.bing_search.tool.BingSearchRun.html"} {"id": "45ede055d32e-0", "text": "langchain.tools.gmail.get_thread.GetThreadSchema\u00b6\nclass langchain.tools.gmail.get_thread.GetThreadSchema(*, thread_id: str)[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam thread_id: str [Required]\u00b6\nThe thread ID.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.get_thread.GetThreadSchema.html"} {"id": "829e02e0e9bf-0", "text": "langchain.tools.file_management.copy.CopyFileTool\u00b6\nclass langchain.tools.file_management.copy.CopyFileTool(*, name: str = 'copy_file', description: str = 'Create a copy of a file in a specified location', args_schema: ~typing.Type[~pydantic.main.BaseModel] = , return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, root_dir: ~typing.Optional[str] = None)[source]\u00b6\nBases: BaseFileToolMixin, BaseTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Type[pydantic.main.BaseModel] = \u00b6\nPydantic model class to 
validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Create a copy of a file in a specified location'\u00b6\nUsed to tell the model how/when/why to use the tool.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.copy.CopyFileTool.html"} {"id": "829e02e0e9bf-1", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'copy_file'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam root_dir: Optional[str] = None\u00b6\nThe final path will be chosen relative to root_dir if specified.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.copy.CopyFileTool.html"} {"id": "829e02e0e9bf-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nget_relative_path(file_path: str) \u2192 Path\u00b6\nGet the relative path, returning an error if unsupported.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": 
"https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.copy.CopyFileTool.html"} {"id": "0c48c6826f1c-0", "text": "langchain.tools.openapi.utils.api_models.APIRequestBody\u00b6\nclass langchain.tools.openapi.utils.api_models.APIRequestBody(*, description: Optional[str] = None, properties: List[APIRequestBodyProperty], media_type: str)[source]\u00b6\nBases: BaseModel\nA model for a request body.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam description: Optional[str] = None\u00b6\nThe description of the request body.\nparam media_type: str [Required]\u00b6\nThe media type of the request body.\nparam properties: List[langchain.tools.openapi.utils.api_models.APIRequestBodyProperty] [Required]\u00b6\nclassmethod from_request_body(request_body: RequestBody, spec: OpenAPISpec) \u2192 APIRequestBody[source]\u00b6\nInstantiate from an OpenAPI RequestBody.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.openapi.utils.api_models.APIRequestBody.html"} {"id": "892f1efed5e1-0", "text": "langchain.tools.gmail.send_message.GmailSendMessage\u00b6\nclass langchain.tools.gmail.send_message.GmailSendMessage(*, name: str = 'send_gmail_message', description: str = 'Use this tool to send email messages. The input is the message, recipents', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, api_resource: Resource = None)[source]\u00b6\nBases: GmailBaseTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_resource: Resource [Optional]\u00b6\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Use this tool to send email messages. The input is the message, recipents'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.send_message.GmailSendMessage.html"} {"id": "892f1efed5e1-1", "text": "Optional metadata associated with the tool. 
Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'send_gmail_message'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nclassmethod from_api_resource(api_resource: Resource) \u2192 GmailBaseTool\u00b6\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.send_message.GmailSendMessage.html"} {"id": "892f1efed5e1-2", "text": "Raise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.send_message.GmailSendMessage.html"} {"id": "27af750fb08f-0", "text": "langchain.tools.searx_search.tool.SearxSearchRun\u00b6\nclass langchain.tools.searx_search.tool.SearxSearchRun(*, name: str = 'searx_search', description: str = 'A meta search engine. Useful for when you need to answer questions about current events. Input should be a search query.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, wrapper: SearxSearchWrapper, kwargs: dict = None)[source]\u00b6\nBases: BaseTool\nTool that adds the capability to query a Searx instance.\nCreate a new model by parsing and validating input data
cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'A meta search engine. Useful for when you need to answer questions about current events. Input should be a search query.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.searx_search.tool.SearxSearchRun.html"} {"id": "27af750fb08f-1", "text": "Handle the content of the ToolException thrown.\nparam kwargs: dict [Optional]\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'searx_search'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\nparam wrapper: langchain.utilities.searx_search.SearxSearchWrapper [Required]\u00b6\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6
'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.searx_search.tool.SearxSearchRun.html"} {"id": "b59fd78eb4e5-0", "text": "langchain.tools.ifttt.IFTTTWebhook\u00b6\nclass langchain.tools.ifttt.IFTTTWebhook(*, name: str, description: str, args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, url: str)[source]\u00b6\nBases: BaseTool\nIFTTT Webhook.\nParameters\nname \u2013 name of the tool\ndescription \u2013 description of the tool\nurl \u2013 url to hit with the json event.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str [Required]\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.ifttt.IFTTTWebhook.html"} {"id": "b59fd78eb4e5-1", "text": "This metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str [Required]\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
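A minimal sketch for the SearxSearchRun tool documented above, assuming a reachable self-hosted Searx/SearxNG instance; the host URL is a placeholder, and SearxSearchWrapper with its searx_host parameter is taken from langchain.utilities.searx_search as the param annotation states.

from langchain.tools.searx_search.tool import SearxSearchRun
from langchain.utilities.searx_search import SearxSearchWrapper

wrapper = SearxSearchWrapper(searx_host="http://localhost:8888")
tool = SearxSearchRun(wrapper=wrapper)
print(tool.run("current events in AI"))  # input is a plain search query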
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam url: str [Required]\u00b6\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.ifttt.IFTTTWebhook.html"} {"id": "b59fd78eb4e5-2", "text": "Raise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.ifttt.IFTTTWebhook.html"} {"id": "860abdd5753e-0", "text": "langchain.tools.playwright.click.ClickTool\u00b6\nclass langchain.tools.playwright.click.ClickTool(*, name: str = 'click_element', description: str = 'Click on an element with the given CSS selector', args_schema: ~typing.Type[~pydantic.main.BaseModel] = , return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, sync_browser: Optional['SyncBrowser'] = None, async_browser: Optional['AsyncBrowser'] = None, visible_only: bool = True, playwright_strict: bool = False, playwright_timeout: float = 1000)[source]\u00b6\nBases: BaseBrowserTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Type[BaseModel] = \u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam async_browser: Optional['AsyncBrowser'] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. 
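Since IFTTTWebhook takes its name, description, and url as constructor parameters (see the Parameters list in its documentation above), a minimal sketch looks as follows; the event name and webhook key inside the URL are placeholders.

from langchain.tools.ifttt import IFTTTWebhook

tool = IFTTTWebhook(
    name="trigger_event",
    description="Trigger an IFTTT applet with a JSON event.",
    url="https://maker.ifttt.com/trigger/<event>/json/with/key/<key>",
)
tool.run("hello from langchain")  # posted to the webhook URL as the event body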
Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.click.ClickTool.html"} {"id": "860abdd5753e-1", "text": "param callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Click on an element with the given CSS selector'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'click_element'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam playwright_strict: bool = False\u00b6\nWhether to employ Playwright\u2019s strict mode when clicking on elements.\nparam playwright_timeout: float = 1000\u00b6\nTimeout (in ms) for Playwright to wait for element to be ready.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam sync_browser: Optional['SyncBrowser'] = None\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\nparam visible_only: bool = True\u00b6\nWhether to consider only visible elements.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.click.ClickTool.html"} {"id": "860abdd5753e-2", "text": "param visible_only: bool = True\u00b6\nWhether to consider only visible elements.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nclassmethod from_browser(sync_browser: Optional[SyncBrowser] = None, async_browser: Optional[AsyncBrowser] = None) \u2192 BaseBrowserTool\u00b6\nInstantiate the tool.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nvalidator validate_browser_provided\u00a0 \u00bb\u00a0 all fields\u00b6\nCheck 
that the arguments are valid.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.click.ClickTool.html"} {"id": "6cb9952a30ae-0", "text": "langchain.tools.gmail.create_draft.CreateDraftSchema\u00b6\nclass langchain.tools.gmail.create_draft.CreateDraftSchema(*, message: str, to: List[str], subject: str, cc: Optional[List[str]] = None, bcc: Optional[List[str]] = None)[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam bcc: Optional[List[str]] = None\u00b6\nThe list of BCC recipients.\nparam cc: Optional[List[str]] = None\u00b6\nThe list of CC recipients.\nparam message: str [Required]\u00b6\nThe message to include in the draft.\nparam subject: str [Required]\u00b6\nThe subject of the message.\nparam to: List[str] [Required]\u00b6\nThe list of recipients.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.create_draft.CreateDraftSchema.html"} {"id": "7215ca7679df-0", "text": "langchain.tools.requests.tool.RequestsPostTool\u00b6\nclass langchain.tools.requests.tool.RequestsPostTool(*, name: str = 'requests_post', description: str = 'Use this when you want to POST to a website.\\n\u00a0\u00a0\u00a0 Input should be a json string with two keys: \"url\" and \"data\".\\n\u00a0\u00a0\u00a0 The value of \"url\" should be a string, and the value of \"data\" should be a dictionary of \\n\u00a0\u00a0\u00a0 key-value pairs you want to POST to the url.\\n\u00a0\u00a0\u00a0 Be careful to always use double quotes for strings in the json string\\n\u00a0\u00a0\u00a0 The output will be the text response of the POST request.\\n\u00a0\u00a0\u00a0 ', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, requests_wrapper: TextRequestsWrapper)[source]\u00b6\nBases: BaseRequestsTool, BaseTool\nTool for making a POST request to an API endpoint.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. 
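For the Playwright ClickTool completed above, a minimal synchronous sketch. The create_sync_playwright_browser helper in langchain.tools.playwright.utils and the selector input key are assumptions based on the toolkit's conventions; from_browser is the documented classmethod.

from langchain.tools.playwright.click import ClickTool
from langchain.tools.playwright.utils import create_sync_playwright_browser

sync_browser = create_sync_playwright_browser()  # requires `playwright install`
click = ClickTool.from_browser(sync_browser=sync_browser)

# visible_only, playwright_strict and playwright_timeout keep their defaults.
click.run({"selector": "button#submit"})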
Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.requests.tool.RequestsPostTool.html"} {"id": "7215ca7679df-1", "text": "param callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Use this when you want to POST to a website.\\n\u00a0\u00a0\u00a0 Input should be a json string with two keys: \"url\" and \"data\".\\n\u00a0\u00a0\u00a0 The value of \"url\" should be a string, and the value of \"data\" should be a dictionary of \\n\u00a0\u00a0\u00a0 key-value pairs you want to POST to the url.\\n\u00a0\u00a0\u00a0 Be careful to always use double quotes for strings in the json string\\n\u00a0\u00a0\u00a0 The output will be the text response of the POST request.\\n\u00a0\u00a0\u00a0 '\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'requests_post'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam requests_wrapper: langchain.requests.TextRequestsWrapper [Required]\u00b6\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
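The CreateDraftSchema fields documented earlier (message, to, subject, cc, bcc) map directly onto keyword arguments; a minimal sketch, with placeholder addresses:

from langchain.tools.gmail.create_draft import CreateDraftSchema

draft = CreateDraftSchema(
    message="Hi team, the agenda is attached.",
    to=["team@example.com"],
    subject="Monday sync agenda",
    cc=["manager@example.com"],  # cc and bcc stay optional
)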
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.requests.tool.RequestsPostTool.html"} {"id": "7215ca7679df-2", "text": "and passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.requests.tool.RequestsPostTool.html"} {"id": "971ddf1724aa-0", "text": "langchain.tools.sql_database.tool.ListSQLDatabaseTool\u00b6\nclass langchain.tools.sql_database.tool.ListSQLDatabaseTool(*, name: str = 'sql_db_list_tables', description: str = 'Input is an empty string, output is a comma separated list of tables in the database.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, db: SQLDatabase)[source]\u00b6\nBases: BaseSQLDatabaseTool, BaseTool\nTool for getting table names.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated.
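For the RequestsPostTool completed just above, a minimal sketch; TextRequestsWrapper comes from langchain.requests as the param annotation states, the input follows the JSON-string format given in the tool's own description, and the target URL is a placeholder.

from langchain.requests import TextRequestsWrapper
from langchain.tools.requests.tool import RequestsPostTool

post_tool = RequestsPostTool(requests_wrapper=TextRequestsWrapper())

# Input is a JSON string with "url" and "data" keys, double quotes throughout.
print(post_tool.run('{"url": "https://httpbin.org/post", "data": {"greeting": "hello"}}'))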
Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam db: langchain.sql_database.SQLDatabase [Required]\u00b6\nparam description: str = 'Input is an empty string, output is a comma separated list of tables in the database.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.sql_database.tool.ListSQLDatabaseTool.html"} {"id": "971ddf1724aa-1", "text": "Handle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'sql_db_list_tables'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.sql_database.tool.ListSQLDatabaseTool.html"} {"id": "971ddf1724aa-2", "text": "Raise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: Config\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.sql_database.tool.ListSQLDatabaseTool.html"} {"id": "360b59719ca6-0", "text": "langchain.tools.powerbi.tool.QueryPowerBITool\u00b6", "source": 
"https://api.python.langchain.com/en/latest/tools/langchain.tools.powerbi.tool.QueryPowerBITool.html"} {"id": "360b59719ca6-1", "text": "class langchain.tools.powerbi.tool.QueryPowerBITool(*, name: str = 'query_powerbi', description: str = '\\n\u00a0\u00a0\u00a0 Input to this tool is a detailed question about the dataset, output is a result from the dataset. It will try to answer the question using the dataset, and if it cannot, it will ask for clarification.\\n\\n\u00a0\u00a0\u00a0 Example Input: \"How many rows are in table1?\"\\n\u00a0\u00a0\u00a0 ', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, llm_chain: LLMChain, powerbi: PowerBIDataset, examples: Optional[str] = '\\nQuestion: How many rows are in the table ?\\nDAX: EVALUATE ROW(\"Number of rows\", COUNTROWS(
))\\n----\\nQuestion: How many rows are in the table
where is not empty?\\nDAX: EVALUATE ROW(\"Number of rows\", COUNTROWS(FILTER(
,
[] <> \"\")))\\n----\\nQuestion: What was the average of in
?\\nDAX: EVALUATE ROW(\"Average\", AVERAGE(
[]))\\n----\\n', session_cache: Dict[str, Any] = None, max_iterations: int = 5, output_token_limit: int = 4000, tiktoken_model_name: Optional[str] = None)[source]\u00b6\nBases: BaseTool", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.powerbi.tool.QueryPowerBITool.html"} {"id": "360b59719ca6-2", "text": "Bases: BaseTool\nTool for querying a Power BI Dataset.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = '\\n\u00a0\u00a0\u00a0 Input to this tool is a detailed question about the dataset, output is a result from the dataset. It will try to answer the question using the dataset, and if it cannot, it will ask for clarification.\\n\\n\u00a0\u00a0\u00a0 Example Input: \"How many rows are in table1?\"\\n\u00a0\u00a0\u00a0 '\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam examples: Optional[str] = '\\nQuestion: How many rows are in the table
?\\nDAX: EVALUATE ROW(\"Number of rows\", COUNTROWS(
))\\n----\\nQuestion: How many rows are in the table
where is not empty?\\nDAX: EVALUATE ROW(\"Number of rows\", COUNTROWS(FILTER(
,
[] <> \"\")))\\n----\\nQuestion: What was the average of in
?\\nDAX: EVALUATE ROW(\"Average\", AVERAGE(
[]))\\n----\\n'\u00b6\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.powerbi.tool.QueryPowerBITool.html"} {"id": "360b59719ca6-3", "text": "Handle the content of the ToolException thrown.\nparam llm_chain: langchain.chains.llm.LLMChain [Required]\u00b6\nparam max_iterations: int = 5\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'query_powerbi'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam output_token_limit: int = 4000\u00b6\nparam powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]\u00b6\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam session_cache: Dict[str, Any] [Optional]\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam tiktoken_model_name: Optional[str] = None\u00b6\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.powerbi.tool.QueryPowerBITool.html"} {"id": "360b59719ca6-4", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nvalidator validate_llm_chain_input_variables\u00a0 \u00bb\u00a0 llm_chain[source]\u00b6\nMake sure the LLM chain has the correct input variables.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.powerbi.tool.QueryPowerBITool.html"} {"id": "e7309a9ed268-0", "text": "langchain.tools.playwright.navigate.NavigateTool\u00b6\nclass langchain.tools.playwright.navigate.NavigateTool(*, name: str = 'navigate_browser', 
description: str = 'Navigate a browser to the specified URL', args_schema: ~typing.Type[~pydantic.main.BaseModel] = , return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, sync_browser: Optional['SyncBrowser'] = None, async_browser: Optional['AsyncBrowser'] = None)[source]\u00b6\nBases: BaseBrowserTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Type[BaseModel] = \u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam async_browser: Optional['AsyncBrowser'] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Navigate a browser to the specified URL'\u00b6\nUsed to tell the model how/when/why to use the tool.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.navigate.NavigateTool.html"} {"id": "e7309a9ed268-1", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'navigate_browser'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam sync_browser: Optional['SyncBrowser'] = None\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
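Returning to the ListSQLDatabaseTool documented before the Power BI section, a minimal sketch; SQLDatabase.from_uri is the usual constructor in langchain.sql_database, and the SQLite URI is a placeholder.

from langchain.sql_database import SQLDatabase
from langchain.tools.sql_database.tool import ListSQLDatabaseTool

db = SQLDatabase.from_uri("sqlite:///example.db")
list_tables = ListSQLDatabaseTool(db=db)

# Input is an empty string; output is a comma-separated list of table names.
print(list_tables.run(""))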
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.navigate.NavigateTool.html"} {"id": "e7309a9ed268-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nclassmethod from_browser(sync_browser: Optional[SyncBrowser] = None, async_browser: Optional[AsyncBrowser] = None) \u2192 BaseBrowserTool\u00b6\nInstantiate the tool.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nvalidator validate_browser_provided\u00a0 \u00bb\u00a0 all fields\u00b6\nCheck that the arguments are valid.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.navigate.NavigateTool.html"} {"id": "3c6d8aa07f82-0", "text": "langchain.tools.playwright.navigate_back.NavigateBackTool\u00b6\nclass langchain.tools.playwright.navigate_back.NavigateBackTool(*, name: str = 'previous_webpage', description: str = 'Navigate back to the previous page in the browser history', args_schema: ~typing.Type[~pydantic.main.BaseModel] = , return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, sync_browser: Optional['SyncBrowser'] = None, async_browser: Optional['AsyncBrowser'] = None)[source]\u00b6\nBases: BaseBrowserTool\nNavigate back to the previous page in the browser history.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Type[BaseModel] = \u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam async_browser: Optional['AsyncBrowser'] = None\u00b6\nparam callback_manager: 
Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Navigate back to the previous page in the browser history'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.navigate_back.NavigateBackTool.html"} {"id": "3c6d8aa07f82-1", "text": "param description: str = 'Navigate back to the previous page in the browser history'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'previous_webpage'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam sync_browser: Optional['SyncBrowser'] = None\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.navigate_back.NavigateBackTool.html"} {"id": "3c6d8aa07f82-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nclassmethod from_browser(sync_browser: Optional[SyncBrowser] = None, async_browser: Optional[AsyncBrowser] = None) \u2192 BaseBrowserTool\u00b6\nInstantiate the tool.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nvalidator validate_browser_provided\u00a0 \u00bb\u00a0 all fields\u00b6\nCheck that the arguments are valid.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 
'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.navigate_back.NavigateBackTool.html"} {"id": "d5c01ace26ce-0", "text": "langchain.tools.convert_to_openai.format_tool_to_openai_function\u00b6\nlangchain.tools.convert_to_openai.format_tool_to_openai_function(tool: BaseTool) \u2192 FunctionDescription[source]\u00b6\nFormat tool into the OpenAI function API.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.convert_to_openai.format_tool_to_openai_function.html"} {"id": "bf00eb109223-0", "text": "langchain.tools.shell.tool.ShellInput\u00b6\nclass langchain.tools.shell.tool.ShellInput(*, commands: Union[str, List[str]])[source]\u00b6\nBases: BaseModel\nCommands for the Bash Shell tool.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam commands: Union[str, List[str]] [Required]\u00b6\nList of shell commands to run.\nList of shell commands to run. Deserialized using json.loads", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.shell.tool.ShellInput.html"} {"id": "0823f42699f9-0", "text": "langchain.tools.sleep.tool.SleepTool\u00b6\nclass langchain.tools.sleep.tool.SleepTool(*, name: str = 'sleep', description: str = 'Make agent sleep for a specified number of seconds.', args_schema: ~typing.Type[~pydantic.main.BaseModel] = , return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False)[source]\u00b6\nBases: BaseTool\nTool that adds the capability to sleep.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Type[pydantic.main.BaseModel] = \u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Make agent sleep for a specified number of seconds.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.sleep.tool.SleepTool.html"} {"id": "0823f42699f9-1", "text": "You can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. 
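Example — a minimal sketch of format_tool_to_openai_function, documented above. Any BaseTool works; MoveFileTool (documented later in this section) stands in as a concrete tool, and the dict keys shown reflect the FunctionDescription shape:

    from langchain.tools.convert_to_openai import format_tool_to_openai_function
    from langchain.tools.file_management.move import MoveFileTool

    tool = MoveFileTool()

    # FunctionDescription bundles the fields the OpenAI function API expects:
    # a name, a description, and a JSON-schema "parameters" object derived
    # from the tool's args_schema.
    function = format_tool_to_openai_function(tool)
    print(function["name"])        # move_file
    print(function["parameters"])  # JSON schema for the tool's arguments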
Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'sleep'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.sleep.tool.SleepTool.html"} {"id": "0823f42699f9-2", "text": "Run the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.sleep.tool.SleepTool.html"} {"id": "e5365433bff3-0", "text": "langchain.tools.requests.tool.RequestsPutTool\u00b6\nclass langchain.tools.requests.tool.RequestsPutTool(*, name: str = 'requests_put', description: str = 'Use this when you want to PUT to a website.\\n\u00a0\u00a0\u00a0 Input should be a json string with two keys: \"url\" and \"data\".\\n\u00a0\u00a0\u00a0 The value of \"url\" should be a string, and the value of \"data\" should be a dictionary of \\n\u00a0\u00a0\u00a0 key-value pairs you want to PUT to the url.\\n\u00a0\u00a0\u00a0 Be careful to always use double quotes for strings in the json string.\\n\u00a0\u00a0\u00a0 The output will be the text response of the PUT request.\\n\u00a0\u00a0\u00a0 ', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, requests_wrapper: 
TextRequestsWrapper)[source]\u00b6\nBases: BaseRequestsTool, BaseTool\nTool for making a PUT request to an API endpoint.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.requests.tool.RequestsPutTool.html"} {"id": "e5365433bff3-1", "text": "param callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Use this when you want to PUT to a website.\\n\u00a0\u00a0\u00a0 Input should be a json string with two keys: \"url\" and \"data\".\\n\u00a0\u00a0\u00a0 The value of \"url\" should be a string, and the value of \"data\" should be a dictionary of \\n\u00a0\u00a0\u00a0 key-value pairs you want to PUT to the url.\\n\u00a0\u00a0\u00a0 Be careful to always use double quotes for strings in the json string.\\n\u00a0\u00a0\u00a0 The output will be the text response of the PUT request.\\n\u00a0\u00a0\u00a0 '\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'requests_put'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam requests_wrapper: langchain.requests.TextRequestsWrapper [Required]\u00b6\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
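Example — a minimal sketch of running the sleep tool documented above. The input field name ("sleep_time", in seconds) is an assumption taken from the tool's args_schema and worth verifying against your installed version:

    import time
    from langchain.tools.sleep.tool import SleepTool

    tool = SleepTool()
    print(tool.name, tool.is_single_input)  # sleep True

    start = time.monotonic()
    result = tool.run({"sleep_time": 2})    # assumed field name; blocks ~2 seconds
    print(result, round(time.monotonic() - start, 1))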
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.requests.tool.RequestsPutTool.html"} {"id": "e5365433bff3-2", "text": "and passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.requests.tool.RequestsPutTool.html"} {"id": "c700fd98eca0-0", "text": "langchain.tools.graphql.tool.BaseGraphQLTool\u00b6\nclass langchain.tools.graphql.tool.BaseGraphQLTool(*, name: str = 'query_graphql', description: str = \"\u00a0\u00a0\u00a0 Input to this tool is a detailed and correct GraphQL query, output is a result from the API.\\n\u00a0\u00a0\u00a0 If the query is not correct, an error message will be returned.\\n\u00a0\u00a0\u00a0 If an error is returned with 'Bad request' in it, rewrite the query and try again.\\n\u00a0\u00a0\u00a0 If an error is returned with 'Unauthorized' in it, do not try again, but tell the user to change their authentication.\\n\\n\u00a0\u00a0\u00a0 Example Input: query {{ allUsers {{ id, name, email }} }}\u00a0\u00a0\u00a0 \", args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, graphql_wrapper: GraphQLAPIWrapper)[source]\u00b6\nBases: BaseTool\nBase tool for querying a GraphQL API.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. 
Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.graphql.tool.BaseGraphQLTool.html"} {"id": "c700fd98eca0-1", "text": "param callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = \"\u00a0\u00a0\u00a0 Input to this tool is a detailed and correct GraphQL query, output is a result from the API.\\n\u00a0\u00a0\u00a0 If the query is not correct, an error message will be returned.\\n\u00a0\u00a0\u00a0 If an error is returned with 'Bad request' in it, rewrite the query and try again.\\n\u00a0\u00a0\u00a0 If an error is returned with 'Unauthorized' in it, do not try again, but tell the user to change their authentication.\\n\\n\u00a0\u00a0\u00a0 Example Input: query {{ allUsers {{ id, name, email }} }}\u00a0\u00a0\u00a0 \"\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam graphql_wrapper: langchain.utilities.graphql.GraphQLAPIWrapper [Required]\u00b6\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'query_graphql'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
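Example — a minimal sketch of the requests_put tool documented above; the target URL is a placeholder. The input format follows the tool's own description: a JSON string with "url" and "data" keys, deserialized with json.loads before the request is made:

    from langchain.requests import TextRequestsWrapper
    from langchain.tools.requests.tool import RequestsPutTool

    tool = RequestsPutTool(requests_wrapper=TextRequestsWrapper())

    # Double quotes inside the JSON string, exactly as the description asks.
    response_text = tool.run('{"url": "https://httpbin.org/put", "data": {"name": "example"}}')
    print(response_text)  # text body of the PUT response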
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.graphql.tool.BaseGraphQLTool.html"} {"id": "c700fd98eca0-2", "text": "and passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.graphql.tool.BaseGraphQLTool.html"} {"id": "6e82b79848dd-0", "text": "langchain.tools.gmail.get_message.SearchArgsSchema\u00b6\nclass langchain.tools.gmail.get_message.SearchArgsSchema(*, message_id: str)[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam message_id: str [Required]\u00b6\nThe unique ID of the email message, retrieved from a search.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.get_message.SearchArgsSchema.html"} {"id": "05af8f2498c5-0", "text": "langchain.tools.openapi.utils.api_models.APIPropertyBase\u00b6\nclass langchain.tools.openapi.utils.api_models.APIPropertyBase(*, name: str, required: bool, type: Union[str, Type, tuple, None, Enum] = None, default: Optional[Any] = None, description: Optional[str] = None)[source]\u00b6\nBases: BaseModel\nBase model for an API property.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam default: Optional[Any] = None\u00b6\nThe default value of the property.\nparam description: Optional[str] = None\u00b6\nThe description of the property.\nparam name: str [Required]\u00b6\nThe name of the property.\nparam required: bool [Required]\u00b6\nWhether the property is required.\nparam type: Union[str, Type, tuple, None, enum.Enum] = None\u00b6\nThe type of the property.\nEither a primitive type, a component/parameter type,\nor an array or \u2018object\u2019 (dict) of the above.", "source": 
"https://api.python.langchain.com/en/latest/tools/langchain.tools.openapi.utils.api_models.APIPropertyBase.html"} {"id": "cf67cf207347-0", "text": "langchain.tools.office365.create_draft_message.O365CreateDraftMessage\u00b6\nclass langchain.tools.office365.create_draft_message.O365CreateDraftMessage(*, name: str = 'create_email_draft', description: str = 'Use this tool to create a draft email with the provided message fields.', args_schema: ~typing.Type[~langchain.tools.office365.create_draft_message.CreateDraftMessageSchema] = , return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, account: Account = None)[source]\u00b6\nBases: O365BaseTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam account: Account [Optional]\u00b6\nparam args_schema: Type[langchain.tools.office365.create_draft_message.CreateDraftMessageSchema] = \u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.create_draft_message.O365CreateDraftMessage.html"} {"id": "cf67cf207347-1", "text": "param callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Use this tool to create a draft email with the provided message fields.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'create_email_draft'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.create_draft_message.O365CreateDraftMessage.html"} {"id": "cf67cf207347-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.create_draft_message.O365CreateDraftMessage.html"} {"id": "89654f96eb36-0", "text": "langchain.tools.base.SchemaAnnotationError\u00b6\nclass langchain.tools.base.SchemaAnnotationError[source]\u00b6\nBases: TypeError\nRaised when \u2018args_schema\u2019 is missing or has an incorrect type annotation.\nadd_note()\u00b6\nException.add_note(note) \u2013\nadd a note to the exception\nwith_traceback()\u00b6\nException.with_traceback(tb) \u2013\nset self.__traceback__ to tb and return self.\nargs\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.base.SchemaAnnotationError.html"} {"id": "7512607c5052-0", "text": "langchain.tools.file_management.move.MoveFileTool\u00b6\nclass langchain.tools.file_management.move.MoveFileTool(*, name: str = 'move_file', description: str = 'Move or rename a file from one location to another', args_schema: ~typing.Type[~pydantic.main.BaseModel] = , return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, root_dir: ~typing.Optional[str] = None)[source]\u00b6\nBases: BaseFileToolMixin, BaseTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam 
args_schema: Type[pydantic.main.BaseModel] = \u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Move or rename a file from one location to another'\u00b6\nUsed to tell the model how/when/why to use the tool.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.move.MoveFileTool.html"} {"id": "7512607c5052-1", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'move_file'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam root_dir: Optional[str] = None\u00b6\nThe final path will be chosen relative to root_dir if specified.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.move.MoveFileTool.html"} {"id": "7512607c5052-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nget_relative_path(file_path: str) \u2192 Path\u00b6\nGet the relative path, returning an error if unsupported.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": 
"https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.move.MoveFileTool.html"} {"id": "45784b34e7b9-0", "text": "langchain.tools.sql_database.tool.QuerySQLCheckerTool\u00b6\nclass langchain.tools.sql_database.tool.QuerySQLCheckerTool(*, name: str = 'sql_db_query_checker', description: str = '\\n\u00a0\u00a0\u00a0 Use this tool to double check if your query is correct before executing it.\\n\u00a0\u00a0\u00a0 Always use this tool before executing a query with query_sql_db!\\n\u00a0\u00a0\u00a0 ', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, db: SQLDatabase, template: str = '\\n{query}\\nDouble check the {dialect} query above for common mistakes, including:\\n- Using NOT IN with NULL values\\n- Using UNION when UNION ALL should have been used\\n- Using BETWEEN for exclusive ranges\\n- Data type mismatch in predicates\\n- Properly quoting identifiers\\n- Using the correct number of arguments for functions\\n- Casting to the correct data type\\n- Using the proper columns for joins\\n\\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.', llm: BaseLanguageModel, llm_chain: LLMChain)[source]\u00b6\nBases: BaseSQLDatabaseTool, BaseTool\nUse an LLM to check if a query is correct.\nAdapted from https://www.patterns.app/blog/2023/01/18/crunchbot-sql-analyst-gpt/\nCreate a new model by parsing and validating input data from keyword arguments.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.sql_database.tool.QuerySQLCheckerTool.html"} {"id": "45784b34e7b9-1", "text": "Create a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam db: SQLDatabase [Required]\u00b6\nparam description: str = '\\n\u00a0\u00a0\u00a0 Use this tool to double check if your query is correct before executing it.\\n\u00a0\u00a0\u00a0 Always use this tool before executing a query with query_sql_db!\\n\u00a0\u00a0\u00a0 '\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam llm: langchain.schema.language_model.BaseLanguageModel [Required]\u00b6\nparam llm_chain: langchain.chains.llm.LLMChain [Required]\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. 
Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'sql_db_query_checker'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.sql_database.tool.QuerySQLCheckerTool.html"} {"id": "45784b34e7b9-2", "text": "that after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam template: str = '\\n{query}\\nDouble check the {dialect} query above for common mistakes, including:\\n- Using NOT IN with NULL values\\n- Using UNION when UNION ALL should have been used\\n- Using BETWEEN for exclusive ranges\\n- Data type mismatch in predicates\\n- Properly quoting identifiers\\n- Using the correct number of arguments for functions\\n- Casting to the correct data type\\n- Using the proper columns for joins\\n\\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.'\u00b6\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator initialize_llm_chain\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.sql_database.tool.QuerySQLCheckerTool.html"} {"id": "45784b34e7b9-3", "text": "Raise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: Config\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.sql_database.tool.QuerySQLCheckerTool.html"} {"id": "2ce908b6fb5b-0", "text": "langchain.tools.google_serper.tool.GoogleSerperRun\u00b6\nclass langchain.tools.google_serper.tool.GoogleSerperRun(*, name: str = 'google_serper', 
description: str = 'A low-cost Google Search API.Useful for when you need to answer questions about current events.Input should be a search query.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, api_wrapper: GoogleSerperAPIWrapper)[source]\u00b6\nBases: BaseTool\nTool that adds the capability to query the Serper.dev Google search API.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_wrapper: langchain.utilities.google_serper.GoogleSerperAPIWrapper [Required]\u00b6\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'A low-cost Google Search API.Useful for when you need to answer questions about current events.Input should be a search query.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.google_serper.tool.GoogleSerperRun.html"} {"id": "2ce908b6fb5b-1", "text": "You can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'google_serper'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
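Example — a minimal sketch of the sql_db_query_checker tool documented above. The SQLite URL is a placeholder and ChatOpenAI needs an OPENAI_API_KEY; llm_chain is omitted on the assumption that the initialize_llm_chain validator shown above builds it from llm and the default template:

    from langchain.chat_models import ChatOpenAI
    from langchain.sql_database import SQLDatabase
    from langchain.tools.sql_database.tool import QuerySQLCheckerTool

    db = SQLDatabase.from_uri("sqlite:///example.db")  # placeholder database
    llm = ChatOpenAI(temperature=0)                    # needs OPENAI_API_KEY set

    checker = QuerySQLCheckerTool(db=db, llm=llm)
    # Returns either a rewritten query or the original query unchanged.
    print(checker.run("SELECT name FROM users WHERE id NOT IN (SELECT owner_id FROM pets)"))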
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.google_serper.tool.GoogleSerperRun.html"} {"id": "2ce908b6fb5b-2", "text": "Run the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.google_serper.tool.GoogleSerperRun.html"} {"id": "de009e360d5a-0", "text": "langchain.tools.bing_search.tool.BingSearchResults\u00b6\nclass langchain.tools.bing_search.tool.BingSearchResults(*, name: str = 'Bing Search Results JSON', description: str = 'A wrapper around Bing Search. Useful for when you need to answer questions about current events. Input should be a search query. Output is a JSON array of the query results', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, num_results: int = 4, api_wrapper: BingSearchAPIWrapper)[source]\u00b6\nBases: BaseTool\nTool that has capability to query the Bing Search API and get back json.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_wrapper: langchain.utilities.bing_search.BingSearchAPIWrapper [Required]\u00b6\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'A wrapper around Bing Search. Useful for when you need to answer questions about current events. Input should be a search query. 
Output is a JSON array of the query results'\u00b6\nUsed to tell the model how/when/why to use the tool.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.bing_search.tool.BingSearchResults.html"} {"id": "de009e360d5a-1", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'Bing Search Results JSON'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam num_results: int = 4\u00b6\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.bing_search.tool.BingSearchResults.html"} {"id": "de009e360d5a-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.bing_search.tool.BingSearchResults.html"} {"id": "9f95b2ccbbe6-0", "text": "langchain.tools.playwright.extract_text.ExtractTextTool\u00b6\nclass langchain.tools.playwright.extract_text.ExtractTextTool(*, name: str = 'extract_text', description: str = 'Extract all the text on the current webpage', args_schema: ~typing.Type[~pydantic.main.BaseModel] = , return_direct: bool = False, verbose: bool = False, callbacks: 
~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, sync_browser: Optional['SyncBrowser'] = None, async_browser: Optional['AsyncBrowser'] = None)[source]\u00b6\nBases: BaseBrowserTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Type[BaseModel] = \u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam async_browser: Optional['AsyncBrowser'] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Extract all the text on the current webpage'\u00b6\nUsed to tell the model how/when/why to use the tool.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.extract_text.ExtractTextTool.html"} {"id": "9f95b2ccbbe6-1", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'extract_text'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam sync_browser: Optional['SyncBrowser'] = None\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
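Example — a minimal sketch of the google_serper tool documented above; the API key is a placeholder (the wrapper can also pick up SERPER_API_KEY from the environment). BingSearchResults, also documented above, follows the same api_wrapper pattern, with its wrapper configured via bing_subscription_key and bing_search_url:

    from langchain.utilities.google_serper import GoogleSerperAPIWrapper
    from langchain.tools.google_serper.tool import GoogleSerperRun

    wrapper = GoogleSerperAPIWrapper(serper_api_key="...")  # placeholder credential
    tool = GoogleSerperRun(api_wrapper=wrapper)
    print(tool.run("latest Python release"))  # short text answer from Serper.dev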
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.extract_text.ExtractTextTool.html"} {"id": "9f95b2ccbbe6-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator check_args\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nCheck that the arguments are valid.\nvalidator check_bs_import\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nCheck that the BeautifulSoup package is importable.\nclassmethod from_browser(sync_browser: Optional[SyncBrowser] = None, async_browser: Optional[AsyncBrowser] = None) \u2192 BaseBrowserTool\u00b6\nInstantiate the tool.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nvalidator validate_browser_provided\u00a0 \u00bb\u00a0 all fields\u00b6\nCheck that the arguments are valid.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.extract_text.ExtractTextTool.html"} {"id": "874695121bea-0", "text": "langchain.tools.file_management.delete.FileDeleteInput\u00b6\nclass langchain.tools.file_management.delete.FileDeleteInput(*, file_path: str)[source]\u00b6\nBases: BaseModel\nInput for DeleteFileTool.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam file_path: str [Required]\u00b6\nPath of the file to delete", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.delete.FileDeleteInput.html"} {"id": "64abec482638-0", "text": "langchain.tools.base.create_schema_from_function\u00b6\nlangchain.tools.base.create_schema_from_function(model_name: str, func: Callable) \u2192 Type[BaseModel][source]\u00b6\nCreate a pydantic schema from a function\u2019s signature.\n:param model_name: Name to assign to the generated pydantic schema\n:param func: Function to generate the schema from\nReturns\nA pydantic model with the same arguments as the function", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.base.create_schema_from_function.html"} {"id": "5226687fbc26-0", "text": "langchain.tools.gmail.search.Resource\u00b6\nclass langchain.tools.gmail.search.Resource(value, names=None, *, module=None, 
qualname=None, type=None, start=1, boundary=None)

Bases: str, Enum

Enumerator of Resources to search.

Methods

Because Resource derives from str, it inherits the full set of built-in string methods (capitalize(), casefold(), center(), count(), encode(), endswith(), expandtabs(), find(), format(), format_map(), index(), isalnum(), isalpha(), isascii(), isdecimal(), isdigit(), isidentifier(), islower(), isnumeric(), isprintable(), isspace(), istitle(), isupper(), join(), ljust(), lower(), lstrip(), maketrans(), partition(), removeprefix(), removesuffix(), replace(), rfind(), rindex(), rjust(), rpartition(), rsplit(), rstrip(), split(), splitlines(), startswith(), strip(), swapcase(), title(), translate(), upper(), zfill()). These behave exactly as documented for Python's built-in str type.

Attributes

MESSAGES = 'messages'

THREADS = 'threads'
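Because Resource subclasses str, its members behave both as enum members and as ordinary strings. A minimal sketch:

    from langchain.tools.gmail.search import Resource

    # str-Enum members compare equal to their raw string values.
    assert Resource.MESSAGES == "messages"
    assert Resource.THREADS == "threads"

    # Lookup by value also works, e.g. when parsing user input.
    assert Resource("threads") is Resource.THREADS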
langchain.tools.spark_sql.tool.ListSparkSQLTool

class langchain.tools.spark_sql.tool.ListSparkSQLTool(*, name: str = 'list_tables_sql_db', description: str = 'Input is an empty string, output is a comma separated list of tables in the Spark SQL.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Callbacks = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, db: SparkSQL)

Bases: BaseSparkSQLTool, BaseTool

Tool for getting table names.

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.

param args_schema: Optional[Type[BaseModel]] = None
Pydantic model class to validate and parse the tool's input arguments.

param callback_manager: Optional[BaseCallbackManager] = None
Deprecated. Please use callbacks instead.

param callbacks: Callbacks = None
Callbacks to be called during tool execution.

param db: langchain.utilities.spark_sql.SparkSQL [Required]

param description: str = 'Input is an empty string, output is a comma separated list of tables in the Spark SQL.'
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as part of the description.

param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False
Handle the content of the ToolException thrown.

param metadata: Optional[Dict[str, Any]] = None
Optional metadata associated with the tool. Defaults to None. This metadata will be associated with each call to this tool and passed as arguments to the handlers defined in callbacks. You can use it to, for example, identify a specific instance of a tool with its use case.

param name: str = 'list_tables_sql_db'
The unique name of the tool that clearly communicates its purpose.

param return_direct: bool = False
Whether to return the tool's output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping.

param tags: Optional[List[str]] = None
Optional list of tags associated with the tool. Defaults to None. These tags will be associated with each call to this tool and passed as arguments to the handlers defined in callbacks.

param verbose: bool = False
Whether to log the tool's progress.

__call__(tool_input: str, callbacks: Callbacks = None) → str
Make tool callable.

async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Callbacks = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any
Run the tool asynchronously.

validator raise_deprecation » all fields
Raise a deprecation warning if callback_manager is used.

run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Callbacks = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any
Run the tool.

property args: dict

property is_single_input: bool
Whether the tool only accepts a single input.

model Config
Configuration for this pydantic object: arbitrary_types_allowed = True, extra = 'forbid'.
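A sketch of using the tool. It assumes a live SparkSession is bound to a variable named spark (as in pyspark shells or Databricks), and the SparkSQL constructor arguments shown are assumptions to verify against your installed version. QuerySparkSQLTool, documented later in this reference, is built over the same db wrapper.

    from langchain.utilities.spark_sql import SparkSQL
    from langchain.tools.spark_sql.tool import ListSparkSQLTool

    # `spark` is assumed to be an existing SparkSession.
    db = SparkSQL(spark_session=spark, schema="default")

    tool = ListSparkSQLTool(db=db)
    # Input is an empty string; output is a comma separated list of tables.
    print(tool.run(""))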
langchain.tools.file_management.read.ReadFileInput

class langchain.tools.file_management.read.ReadFileInput(*, file_path: str)

Bases: BaseModel

Input for ReadFileTool.

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.

param file_path: str [Required]
Name of the file to read.
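A quick illustration of the schema's validation behavior (the file name is made up):

    from pydantic import ValidationError
    from langchain.tools.file_management.read import ReadFileInput

    args = ReadFileInput(file_path="notes.txt")
    print(args.file_path)  # notes.txt

    try:
        ReadFileInput()  # file_path is [Required]
    except ValidationError as err:
        print(err)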
langchain.tools.requests.tool.RequestsPatchTool

class langchain.tools.requests.tool.RequestsPatchTool(*, name: str = 'requests_patch', description: str = '...', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Callbacks = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, requests_wrapper: TextRequestsWrapper)

Bases: BaseRequestsTool, BaseTool

Tool for making a PATCH request to an API endpoint.

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.

param name: str = 'requests_patch'
The unique name of the tool that clearly communicates its purpose.

param description: str = 'Use this when you want to PATCH to a website. Input should be a json string with two keys: "url" and "data". The value of "url" should be a string, and the value of "data" should be a dictionary of key-value pairs you want to PATCH to the url. Be careful to always use double quotes for strings in the json string. The output will be the text response of the PATCH request.'
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as part of the description.

param requests_wrapper: langchain.requests.TextRequestsWrapper [Required]

The remaining parameters (args_schema, callbacks, callback_manager, tags, metadata, handle_tool_error, return_direct, verbose), the __call__(), run() and arun() methods, the raise_deprecation validator, the args and is_single_input properties, and the model Config (arbitrary_types_allowed = True, extra = 'forbid') carry the same meaning as documented for ListSparkSQLTool above.

langchain.tools.file_management.utils.BaseFileToolMixin

class langchain.tools.file_management.utils.BaseFileToolMixin(*, root_dir: Optional[str] = None)

Bases: BaseModel

Mixin for file system tools.

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.

param root_dir: Optional[str] = None
The final path will be chosen relative to root_dir if specified.

get_relative_path(file_path: str) → Path
Get the relative path, raising an error if unsupported.
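Returning to RequestsPatchTool above: its input is a single JSON string carrying "url" and "data". A minimal sketch against an assumed echo endpoint (the URL is illustrative):

    import json
    from langchain.requests import TextRequestsWrapper
    from langchain.tools.requests.tool import RequestsPatchTool

    tool = RequestsPatchTool(requests_wrapper=TextRequestsWrapper())

    # One JSON string with "url" and "data" keys, as the description requires.
    payload = json.dumps({"url": "https://httpbin.org/patch",
                          "data": {"status": "done"}})
    print(tool.run(payload))  # text body of the PATCH response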
langchain.tools.office365.events_search.O365SearchEvents

class langchain.tools.office365.events_search.O365SearchEvents(*, name: str = 'events_search', description: str = '...', args_schema: Type[BaseModel], return_direct: bool = False, verbose: bool = False, callbacks: Callbacks = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, account: Account = None)

Bases: O365BaseTool

Class for searching calendar events in Office 365. Free, but setup is required.

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.

param account: Account [Optional]

param args_schema: Type[BaseModel]
Pydantic model class to validate and parse the tool's input arguments.

param name: str = 'events_search'
The unique name of the tool that clearly communicates its purpose.

param description: str = "Use this tool to search for the user's calendar events. The input must be the start and end datetimes for the search query. The output is a JSON list of all the events in the user's calendar between the start and end times. You can assume that the user cannot schedule any meeting over existing meetings, and that the user is busy during meetings. Any times without events are free for the user."
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as part of the description.

The remaining shared BaseTool parameters, methods, and properties are as documented for ListSparkSQLTool above; the model Config here sets extra = 'forbid' only.
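A sketch, with heavy caveats: it assumes Office 365 app credentials are already configured in the environment for the underlying O365 library, and the input field names (start_datetime, end_datetime) are assumptions about the tool's input schema to verify against your installed version.

    from langchain.tools.office365.events_search import O365SearchEvents

    # account is [Optional]: when omitted, the tool authenticates via the
    # O365 library using credentials from the environment (assumed here;
    # construction fails without working credentials).
    tool = O365SearchEvents()

    # Field names are assumptions about the input schema; verify locally.
    events = tool.run({
        "start_datetime": "2023-07-01 08:00:00",
        "end_datetime": "2023-07-01 18:00:00",
    })
    print(events)  # JSON list of events between the two datetimes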
langchain.tools.google_search.tool.GoogleSearchRun

class langchain.tools.google_search.tool.GoogleSearchRun(*, name: str = 'google_search', description: str = 'A wrapper around Google Search. Useful for when you need to answer questions about current events. Input should be a search query.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Callbacks = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, api_wrapper: GoogleSearchAPIWrapper)

Bases: BaseTool

Tool that adds the capability to query the Google search API.

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.

param api_wrapper: langchain.utilities.google_search.GoogleSearchAPIWrapper [Required]

param name: str = 'google_search'
The unique name of the tool that clearly communicates its purpose.

param description: str = 'A wrapper around Google Search. Useful for when you need to answer questions about current events. Input should be a search query.'
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as part of the description.

The remaining shared BaseTool parameters, methods, and model Config are as documented for ListSparkSQLTool above.
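A minimal sketch. GoogleSearchAPIWrapper reads GOOGLE_API_KEY and GOOGLE_CSE_ID from the environment; both must be set for the query to succeed.

    from langchain.utilities import GoogleSearchAPIWrapper
    from langchain.tools.google_search.tool import GoogleSearchRun

    # The wrapper authenticates via GOOGLE_API_KEY / GOOGLE_CSE_ID env vars.
    tool = GoogleSearchRun(api_wrapper=GoogleSearchAPIWrapper())
    print(tool.run("LangChain agents"))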
langchain.tools.base.Tool

class langchain.tools.base.Tool(name: str, func: Callable, description: str, *, args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Callbacks = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, coroutine: Optional[Callable[..., Awaitable[str]]] = None)

Bases: BaseTool

Tool that takes in a function or coroutine directly.

Initialize tool.

param name: str [Required]
The unique name of the tool that clearly communicates its purpose.

param func: Callable[..., str] [Required]
The function to run when the tool is called.

param coroutine: Optional[Callable[..., Awaitable[str]]] = None
The asynchronous version of the function.

param description: str = ''
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as part of the description.

classmethod from_function(func: Callable, name: str, description: str, return_direct: bool = False, args_schema: Optional[Type[BaseModel]] = None, **kwargs: Any) → Tool
Initialize tool from a function.

property args: dict
The tool's input arguments.

The remaining shared BaseTool parameters, methods, and model Config are as documented for ListSparkSQLTool above.
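The from_function classmethod is the usual way to wrap a plain Python callable as a tool. A minimal sketch (the function and names are made up):

    from langchain.tools.base import Tool

    def word_count(text: str) -> str:
        """Count the words in the input text (func must return a str)."""
        return str(len(text.split()))

    tool = Tool.from_function(
        func=word_count,
        name="word_count",
        description="Counts the words in the input text.",
    )
    print(tool.run("one two three"))  # 3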
langchain.tools.file_management.read.ReadFileTool

class langchain.tools.file_management.read.ReadFileTool(*, name: str = 'read_file', description: str = 'Read file from disk', args_schema: Type[BaseModel] = ReadFileInput, return_direct: bool = False, verbose: bool = False, callbacks: Callbacks = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, root_dir: Optional[str] = None)

Bases: BaseFileToolMixin, BaseTool

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.

param name: str = 'read_file'
The unique name of the tool that clearly communicates its purpose.

param description: str = 'Read file from disk'
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as part of the description.

param root_dir: Optional[str] = None
The final path will be chosen relative to root_dir if specified.

get_relative_path(file_path: str) → Path
Get the relative path, raising an error if unsupported.

The remaining shared BaseTool parameters, methods, and model Config are as documented for ListSparkSQLTool above.
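A minimal sketch (the directory and file names are illustrative):

    from langchain.tools.file_management.read import ReadFileTool

    # root_dir sandboxes the tool: file_path resolves relative to it, and
    # paths that escape the root are rejected (see
    # get_validated_relative_path later in this reference).
    tool = ReadFileTool(root_dir="/tmp/workspace")
    print(tool.run({"file_path": "notes.txt"}))  # contents of the file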
langchain.tools.ddg_search.tool.DuckDuckGoSearchRun

class langchain.tools.ddg_search.tool.DuckDuckGoSearchRun(*, name: str = 'duckduckgo_search', description: str = 'A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Callbacks = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, api_wrapper: DuckDuckGoSearchAPIWrapper = None)

Bases: BaseTool

Tool that adds the capability to query the DuckDuckGo search API.

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.

param api_wrapper: langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper [Optional]

param name: str = 'duckduckgo_search'
The unique name of the tool that clearly communicates its purpose.

param description: str = 'A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.'
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as part of the description.

The remaining shared BaseTool parameters, methods, and model Config are as documented for ListSparkSQLTool above.
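A minimal sketch; no API key is needed, though the duckduckgo-search package must be installed for the wrapper to work.

    from langchain.tools.ddg_search.tool import DuckDuckGoSearchRun

    # api_wrapper is [Optional]: a default DuckDuckGoSearchAPIWrapper is
    # built automatically, so the tool can be constructed with no args.
    search = DuckDuckGoSearchRun()
    print(search.run("LangChain DuckDuckGo tool"))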
langchain.tools.google_search.tool.GoogleSearchResults

class langchain.tools.google_search.tool.GoogleSearchResults(*, name: str = 'Google Search Results JSON', description: str = 'A wrapper around Google Search. Useful for when you need to answer questions about current events. Input should be a search query. Output is a JSON array of the query results', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Callbacks = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, num_results: int = 4, api_wrapper: GoogleSearchAPIWrapper)

Bases: BaseTool

Tool that queries the Google Search API and returns the results as JSON.

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.

param api_wrapper: langchain.utilities.google_search.GoogleSearchAPIWrapper [Required]

param num_results: int = 4

param name: str = 'Google Search Results JSON'
The unique name of the tool that clearly communicates its purpose.

param description: str = 'A wrapper around Google Search. Useful for when you need to answer questions about current events. Input should be a search query. Output is a JSON array of the query results'
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as part of the description.

The remaining shared BaseTool parameters, methods, and model Config are as documented for ListSparkSQLTool above.
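A minimal sketch, using the same GOOGLE_API_KEY / GOOGLE_CSE_ID credentials as GoogleSearchRun; the difference is that the output is a JSON array of result objects rather than a plain-text summary.

    from langchain.utilities import GoogleSearchAPIWrapper
    from langchain.tools.google_search.tool import GoogleSearchResults

    tool = GoogleSearchResults(api_wrapper=GoogleSearchAPIWrapper(),
                               num_results=2)
    print(tool.run("LangChain documentation"))  # JSON array of results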
langchain.tools.playwright.click.ClickToolInput

class langchain.tools.playwright.click.ClickToolInput(*, selector: str)

Bases: BaseModel

Input for ClickTool.

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.

param selector: str [Required]
CSS selector for the element to click.

langchain.tools.file_management.list_dir.DirectoryListingInput

class langchain.tools.file_management.list_dir.DirectoryListingInput(*, dir_path: str = '.')

Bases: BaseModel

Input for ListDirectoryTool.

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.

param dir_path: str = '.'
Subdirectory to list.
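Both input schemas are plain pydantic models; a quick illustration of their defaults and validation:

    from pydantic import ValidationError
    from langchain.tools.playwright.click import ClickToolInput
    from langchain.tools.file_management.list_dir import DirectoryListingInput

    print(ClickToolInput(selector="button#submit").selector)  # button#submit
    print(DirectoryListingInput().dir_path)                   # '.' (default)

    try:
        ClickToolInput()  # selector is [Required]
    except ValidationError as err:
        print(err)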
langchain.tools.spark_sql.tool.QuerySparkSQLTool

class langchain.tools.spark_sql.tool.QuerySparkSQLTool(*, name: str = 'query_sql_db', description: str = '...', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Callbacks = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, db: SparkSQL)

Bases: BaseSparkSQLTool, BaseTool

Tool for querying Spark SQL.

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.

param db: langchain.utilities.spark_sql.SparkSQL [Required]

param name: str = 'query_sql_db'
The unique name of the tool that clearly communicates its purpose.

param description: str = 'Input to this tool is a detailed and correct SQL query, output is a result from the Spark SQL. If the query is not correct, an error message will be returned. If an error is returned, rewrite the query, check the query, and try again.'
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as part of the description.

The remaining shared BaseTool parameters, methods, and model Config are as documented for ListSparkSQLTool above.

langchain.tools.file_management.utils.get_validated_relative_path

langchain.tools.file_management.utils.get_validated_relative_path(root: Path, user_path: str) → Path

Resolve a relative path, raising an error if not within the root directory.
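This helper backs the root_dir sandboxing of the file-management tools above. A minimal sketch (the paths are illustrative):

    from pathlib import Path
    from langchain.tools.file_management.utils import get_validated_relative_path

    root = Path("/tmp/workspace")

    # Resolves within the root...
    print(get_validated_relative_path(root, "data/notes.txt"))

    # ...but a path that escapes the root raises an error instead:
    try:
        get_validated_relative_path(root, "../etc/passwd")
    except Exception as err:
        print(err)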
{"id": "014506df1477-0", "text": "langchain.tools.file_management.utils.get_validated_relative_path\u00b6\nlangchain.tools.file_management.utils.get_validated_relative_path(root: Path, user_path: str) \u2192 Path[source]\u00b6\nResolve a relative path, raising an error if not within the root directory.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.utils.get_validated_relative_path.html"} {"id": "6e4afec7311f-0", "text": "langchain.tools.gmail.search.SearchArgsSchema\u00b6\nclass langchain.tools.gmail.search.SearchArgsSchema(*, query: str, resource: Resource = Resource.MESSAGES, max_results: int = 10)[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam max_results: int = 10\u00b6\nThe maximum number of results to return.\nparam query: str [Required]\u00b6\nThe Gmail query. Example filters include from:sender, to:recipient, subject:subject, -filtered_term, in:folder, is:important|read|starred, after:year/mo/date, before:year/mo/date, label:label_name \u201cexact phrase\u201d. Search newer/older than using d (day), m (month), and y (year): newer_than:2d, older_than:1y. Attachments with extension example: filename:pdf. Multiple term matching example: from:amy OR from:david.\nparam resource: langchain.tools.gmail.search.Resource = Resource.MESSAGES\u00b6\nWhether to search for threads or messages.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.search.SearchArgsSchema.html"}
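
A sketch of constructing the SearchArgsSchema above directly; in practice it serves as the args_schema of the Gmail search tool, and the query string here only illustrates the documented filters:

```python
# Hedged sketch: building the input schema by hand to show the fields.
from langchain.tools.gmail.search import Resource, SearchArgsSchema

args = SearchArgsSchema(
    query="from:amy after:2023/01/01 filename:pdf",  # documented Gmail filters
    resource=Resource.THREADS,  # search threads rather than single messages
    max_results=5,
)
print(args.dict())
```
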
{"id": "8658d5032cc4-0", "text": "langchain.tools.plugin.AIPluginTool\u00b6\nclass langchain.tools.plugin.AIPluginTool(*, name: str, description: str, args_schema: ~typing.Type[~langchain.tools.plugin.AIPluginToolSchema] = <class 'langchain.tools.plugin.AIPluginToolSchema'>, return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, plugin: ~langchain.tools.plugin.AIPlugin, api_spec: str)[source]\u00b6\nBases: BaseTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_spec: str [Required]\u00b6\nparam args_schema: Type[AIPluginToolSchema] = <class 'langchain.tools.plugin.AIPluginToolSchema'>\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str [Required]\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.plugin.AIPluginTool.html"} {"id": "8658d5032cc4-1", "text": "You can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str [Required]\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam plugin: AIPlugin [Required]\u00b6\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.plugin.AIPluginTool.html"} {"id": "8658d5032cc4-2", "text": "Run the tool asynchronously.\nclassmethod from_plugin_url(url: str) \u2192 AIPluginTool[source]\u00b6\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.plugin.AIPluginTool.html"}
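
The from_plugin_url classmethod documented above is the usual entry point. A sketch (the manifest URL is a placeholder for any ChatGPT-style plugin manifest):

```python
# Hedged sketch: construct the tool from a plugin manifest URL.
from langchain.tools.plugin import AIPluginTool

tool = AIPluginTool.from_plugin_url(
    "https://example.com/.well-known/ai-plugin.json"  # placeholder manifest
)
print(tool.name)          # derived from the plugin manifest
print(tool.run("usage"))  # should return the plugin usage guide and OpenAPI spec
```
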
{"id": "bf63258de717-0", "text": "langchain.tools.requests.tool.RequestsGetTool\u00b6\nclass langchain.tools.requests.tool.RequestsGetTool(*, name: str = 'requests_get', description: str = 'A portal to the internet. Use this when you need to get specific content from a website. Input should be a\u00a0 url (i.e. https://www.google.com). The output will be the text response of the GET request.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, requests_wrapper: TextRequestsWrapper)[source]\u00b6\nBases: BaseRequestsTool, BaseTool\nTool for making a GET request to an API endpoint.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'A portal to the internet. Use this when you need to get specific content from a website. Input should be a\u00a0 url (i.e. https://www.google.com). The output will be the text response of the GET request.'\u00b6\nUsed to tell the model how/when/why to use the tool.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.requests.tool.RequestsGetTool.html"} {"id": "bf63258de717-1", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'requests_get'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam requests_wrapper: langchain.requests.TextRequestsWrapper [Required]\u00b6\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.requests.tool.RequestsGetTool.html"} {"id": "bf63258de717-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.requests.tool.RequestsGetTool.html"}
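
A minimal sketch for the RequestsGetTool above; TextRequestsWrapper performs the underlying HTTP GET:

```python
# Hedged sketch: the wrapper issues the GET request; the URL is an example input.
from langchain.requests import TextRequestsWrapper
from langchain.tools.requests.tool import RequestsGetTool

tool = RequestsGetTool(requests_wrapper=TextRequestsWrapper())
body = tool.run("https://www.google.com")  # text of the GET response
print(body[:200])
```
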
{"id": "73ce4f193781-0", "text": "langchain.tools.metaphor_search.tool.MetaphorSearchResults\u00b6\nclass langchain.tools.metaphor_search.tool.MetaphorSearchResults(*, name: str = 'metaphor_search_results_json', description: str = 'A wrapper around Metaphor Search. Input should be a Metaphor-optimized query. Output is a JSON array of the query results', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, api_wrapper: MetaphorSearchAPIWrapper)[source]\u00b6\nBases: BaseTool\nTool that queries the Metaphor Search API and gets back JSON.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_wrapper: langchain.utilities.metaphor_search.MetaphorSearchAPIWrapper [Required]\u00b6\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'A wrapper around Metaphor Search. Input should be a Metaphor-optimized query. Output is a JSON array of the query results'\u00b6\nUsed to tell the model how/when/why to use the tool.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.metaphor_search.tool.MetaphorSearchResults.html"} {"id": "73ce4f193781-1", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'metaphor_search_results_json'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.metaphor_search.tool.MetaphorSearchResults.html"} {"id": "73ce4f193781-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.metaphor_search.tool.MetaphorSearchResults.html"} {"id": "a9d4d7259f99-0", "text": "langchain.tools.openapi.utils.api_models.APIProperty\u00b6\nclass langchain.tools.openapi.utils.api_models.APIProperty(*, name: str, required: bool, type: Union[str, Type, tuple, None, Enum] = None, default: Optional[Any] = None, description: Optional[str] = None, location: APIPropertyLocation)[source]\u00b6\nBases: APIPropertyBase\nA model for a property in the query, path, header, or cookie params.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam default: Optional[Any] = None\u00b6\nThe default value of the property.\nparam description: Optional[str] = None\u00b6\nThe description of the property.\nparam location: langchain.tools.openapi.utils.api_models.APIPropertyLocation [Required]\u00b6\nThe path/how it\u2019s being passed to the endpoint.\nparam name: str [Required]\u00b6\nThe name of the property.\nparam required: bool [Required]\u00b6\nWhether the property is required.\nparam type: Union[str, Type, tuple, None, enum.Enum] = None\u00b6\nThe type of the property.\nEither a primitive type, a component/parameter type,\nor an array or \u2018object\u2019 (dict) of the above.\nclassmethod from_parameter(parameter: Parameter, spec: OpenAPISpec) \u2192 APIProperty[source]\u00b6\nInstantiate from an OpenAPI Parameter.\nstatic is_supported_location(location: str) \u2192 bool[source]\u00b6\nReturn whether the provided location is supported.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.openapi.utils.api_models.APIProperty.html"} {"id": "f31e223e8bec-0", "text": 
"langchain.tools.vectorstore.tool.BaseVectorStoreTool\u00b6\nclass langchain.tools.vectorstore.tool.BaseVectorStoreTool(*, vectorstore: VectorStore, llm: BaseLanguageModel = None)[source]\u00b6\nBases: BaseModel\nBase class for tools that use a VectorStore.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam llm: langchain.schema.language_model.BaseLanguageModel [Optional]\u00b6\nparam vectorstore: langchain.vectorstores.base.VectorStore [Required]\u00b6\nmodel Config[source]\u00b6\nBases: Config\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.vectorstore.tool.BaseVectorStoreTool.html"} {"id": "652ef8d73a5b-0", "text": "langchain.tools.vectorstore.tool.VectorStoreQAWithSourcesTool\u00b6\nclass langchain.tools.vectorstore.tool.VectorStoreQAWithSourcesTool(*, name: str, description: str, args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, vectorstore: VectorStore, llm: BaseLanguageModel = None)[source]\u00b6\nBases: BaseVectorStoreTool, BaseTool\nTool for the VectorDBQAWithSources chain.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str [Required]\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam llm: langchain.schema.language_model.BaseLanguageModel [Optional]\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.vectorstore.tool.VectorStoreQAWithSourcesTool.html"} {"id": "652ef8d73a5b-1", "text": "Optional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str [Required]\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
{"id": "652ef8d73a5b-0", "text": "langchain.tools.vectorstore.tool.VectorStoreQAWithSourcesTool\u00b6\nclass langchain.tools.vectorstore.tool.VectorStoreQAWithSourcesTool(*, name: str, description: str, args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, vectorstore: VectorStore, llm: BaseLanguageModel = None)[source]\u00b6\nBases: BaseVectorStoreTool, BaseTool\nTool for the VectorDBQAWithSources chain.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str [Required]\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam llm: langchain.schema.language_model.BaseLanguageModel [Optional]\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.vectorstore.tool.VectorStoreQAWithSourcesTool.html"} {"id": "652ef8d73a5b-1", "text": "Optional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str [Required]\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam vectorstore: langchain.vectorstores.base.VectorStore [Required]\u00b6\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nstatic get_description(name: str, description: str) \u2192 str[source]\u00b6\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.vectorstore.tool.VectorStoreQAWithSourcesTool.html"} {"id": "652ef8d73a5b-2", "text": "Raise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: Config\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.vectorstore.tool.VectorStoreQAWithSourcesTool.html"} {"id": "2825f77a0b25-0", "text": "langchain.tools.wikipedia.tool.WikipediaQueryRun\u00b6\nclass langchain.tools.wikipedia.tool.WikipediaQueryRun(*, name: str = 'Wikipedia', description: str = 'A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, facts, historical events, or other subjects. Input should be a search query.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, api_wrapper: WikipediaAPIWrapper)[source]\u00b6\nBases: BaseTool\nTool that adds the capability to search using the Wikipedia API.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_wrapper: langchain.utilities.wikipedia.WikipediaAPIWrapper [Required]\u00b6\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. 
Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, facts, historical events, or other subjects. Input should be a search query.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.wikipedia.tool.WikipediaQueryRun.html"} {"id": "2825f77a0b25-1", "text": "You can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'Wikipedia'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.wikipedia.tool.WikipediaQueryRun.html"} {"id": "2825f77a0b25-2", "text": "Run the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.wikipedia.tool.WikipediaQueryRun.html"}
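
A minimal sketch for WikipediaQueryRun above; it assumes the wikipedia package the wrapper depends on is installed:

```python
# Hedged sketch: the API wrapper does the fetching; the query is an example.
from langchain.utilities import WikipediaAPIWrapper
from langchain.tools.wikipedia.tool import WikipediaQueryRun

tool = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
print(tool.run("LangChain"))  # page summaries as plain text
```
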
{"id": "e7384e599c6d-0", "text": "langchain.tools.openapi.utils.api_models.APIOperation\u00b6\nclass langchain.tools.openapi.utils.api_models.APIOperation(*, operation_id: str, description: Optional[str] = None, base_url: str, path: str, method: HTTPVerb, properties: Sequence[APIProperty], request_body: Optional[APIRequestBody] = None)[source]\u00b6\nBases: BaseModel\nA model for a single API operation.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam base_url: str [Required]\u00b6\nThe base URL of the operation.\nparam description: Optional[str] = None\u00b6\nThe description of the operation.\nparam method: langchain.utilities.openapi.HTTPVerb [Required]\u00b6\nThe HTTP method of the operation.\nparam operation_id: str [Required]\u00b6\nThe unique identifier of the operation.\nparam path: str [Required]\u00b6\nThe path of the operation.\nparam properties: Sequence[langchain.tools.openapi.utils.api_models.APIProperty] [Required]\u00b6\nparam request_body: Optional[langchain.tools.openapi.utils.api_models.APIRequestBody] = None\u00b6\nThe request body of the operation.\nclassmethod from_openapi_spec(spec: OpenAPISpec, path: str, method: str) \u2192 APIOperation[source]\u00b6\nCreate an APIOperation from an OpenAPI spec.\nclassmethod from_openapi_url(spec_url: str, path: str, method: str) \u2192 APIOperation[source]\u00b6\nCreate an APIOperation from an OpenAPI URL.\nto_typescript() \u2192 str[source]\u00b6\nGet typescript string representation of the operation.\nstatic ts_type_from_python(type_: Union[str, Type, tuple, None, Enum]) \u2192 str[source]\u00b6\nproperty body_params: List[str]\u00b6\nproperty path_params: List[str]\u00b6\nproperty query_params: List[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.openapi.utils.api_models.APIOperation.html"}
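
A sketch of the documented classmethods on APIOperation; the spec URL, path, and method are placeholders for any reachable OpenAPI document:

```python
# Hedged sketch: load one operation from an OpenAPI spec and render it.
from langchain.tools.openapi.utils.api_models import APIOperation

op = APIOperation.from_openapi_url(
    "https://example.com/openapi.json",  # placeholder spec URL
    path="/pets",
    method="get",
)
print(op.operation_id, op.base_url)
print(op.to_typescript())  # TypeScript-style rendering of the operation
```
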
{"id": "a8e2c977a627-0", "text": "langchain.tools.ddg_search.tool.DuckDuckGoSearchResults\u00b6\nclass langchain.tools.ddg_search.tool.DuckDuckGoSearchResults(*, name: str = 'DuckDuckGo Results JSON', description: str = 'A wrapper around Duck Duck Go Search. Useful for when you need to answer questions about current events. Input should be a search query. Output is a JSON array of the query results', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, num_results: int = 4, api_wrapper: DuckDuckGoSearchAPIWrapper = None)[source]\u00b6\nBases: BaseTool\nTool that queries the Duck Duck Go Search API and gets back JSON.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_wrapper: langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper [Optional]\u00b6\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.ddg_search.tool.DuckDuckGoSearchResults.html"} {"id": "a8e2c977a627-1", "text": "param callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'A wrapper around Duck Duck Go Search. Useful for when you need to answer questions about current events. Input should be a search query. Output is a JSON array of the query results'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'DuckDuckGo Results JSON'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam num_results: int = 4\u00b6\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.ddg_search.tool.DuckDuckGoSearchResults.html"} {"id": "a8e2c977a627-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.ddg_search.tool.DuckDuckGoSearchResults.html"}
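
A minimal sketch for DuckDuckGoSearchResults above; no API key is needed, but the duckduckgo-search package must be installed:

```python
# Hedged sketch: api_wrapper is built automatically when omitted.
from langchain.tools.ddg_search.tool import DuckDuckGoSearchResults

tool = DuckDuckGoSearchResults(num_results=4)
print(tool.run("langchain agents"))  # snippets, titles and links as one string
```
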
{"id": "82f76a71eb8e-0", "text": "langchain.tools.office365.send_message.SendMessageSchema\u00b6\nclass langchain.tools.office365.send_message.SendMessageSchema(*, body: str, to: List[str], subject: str, cc: Optional[List[str]] = None, bcc: Optional[List[str]] = None)[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam bcc: Optional[List[str]] = None\u00b6\nThe list of BCC recipients.\nparam body: str [Required]\u00b6\nThe message body to be sent.\nparam cc: Optional[List[str]] = None\u00b6\nThe list of CC recipients.\nparam subject: str [Required]\u00b6\nThe subject of the message.\nparam to: List[str] [Required]\u00b6\nThe list of recipients.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.send_message.SendMessageSchema.html"} {"id": "85f6a1a97b5d-0", "text": "langchain.tools.office365.messages_search.SearchEmailsInput\u00b6\nclass langchain.tools.office365.messages_search.SearchEmailsInput(*, folder: str = None, query: str, max_results: int = 10, truncate: bool = True)[source]\u00b6\nBases: BaseModel\nInput for SearchEmails Tool.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam folder: str = None\u00b6\nIf the user wants to search in only one folder, the name of the folder. Default folders are \u201cinbox\u201d, \u201cdrafts\u201d, \u201csent items\u201d, \u201cdeleted items\u201d, but users can search custom folders as well.\nparam max_results: int = 10\u00b6\nThe maximum number of results to return.\nparam query: str [Required]\u00b6\nThe Microsoft Graph v1.0 $search query. Example filters include from:sender, to:recipient, subject:subject, recipients:list_of_recipients, body:excitement, importance:high, received>2022-12-01, received<2021-12-01, sent>2022-12-01, sent<2021-12-01, hasAttachments:true attachment:api-catalog.md, cc:samanthab@contoso.com, bcc:samanthab@contoso.com, body:excitement date range example: received:2023-06-08..2023-06-09 matching example: from:amy OR from:david.\nparam truncate: bool = True\u00b6\nWhether the email body is truncated to meet token number limits. Set to False for searches that will retrieve very few results, otherwise, set to True", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.messages_search.SearchEmailsInput.html"} {"id": "d7cb91f062bb-0", "text": "langchain.tools.steamship_image_generation.utils.make_image_public\u00b6\nlangchain.tools.steamship_image_generation.utils.make_image_public(client: Steamship, block: Block) \u2192 str[source]\u00b6\nUpload a block to a signed URL and return the public URL.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.steamship_image_generation.utils.make_image_public.html"}
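
A sketch of constructing the two Office365 schemas documented above by hand; normally they act as args_schema for the corresponding O365 tools, and the addresses are placeholders:

```python
# Hedged sketch: the field values only illustrate the documented filters.
from langchain.tools.office365.messages_search import SearchEmailsInput
from langchain.tools.office365.send_message import SendMessageSchema

search = SearchEmailsInput(
    query="from:amy received>2023-06-01",  # Microsoft Graph $search syntax
    folder="inbox",
    max_results=5,
)
message = SendMessageSchema(
    body="Status update attached.",
    to=["team@example.com"],
    subject="Weekly status",
)
print(search.dict(), message.dict())
```
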
{"id": "9f04cd5af641-0", "text": "langchain.tools.gmail.get_thread.GmailGetThread\u00b6\nclass langchain.tools.gmail.get_thread.GmailGetThread(*, name: str = 'get_gmail_thread', description: str = 'Use this tool to search for email messages. The input must be a valid Gmail query. The output is a JSON list of messages.', args_schema: ~typing.Type[~langchain.tools.gmail.get_thread.GetThreadSchema] = <class 'langchain.tools.gmail.get_thread.GetThreadSchema'>, return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, api_resource: Resource = None)[source]\u00b6\nBases: GmailBaseTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_resource: Resource [Optional]\u00b6\nparam args_schema: Type[langchain.tools.gmail.get_thread.GetThreadSchema] = <class 'langchain.tools.gmail.get_thread.GetThreadSchema'>\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.get_thread.GmailGetThread.html"} {"id": "9f04cd5af641-1", "text": "param callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Use this tool to search for email messages. The input must be a valid Gmail query. The output is a JSON list of messages.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'get_gmail_thread'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.get_thread.GmailGetThread.html"} {"id": "9f04cd5af641-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nclassmethod from_api_resource(api_resource: Resource) \u2192 GmailBaseTool\u00b6\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.get_thread.GmailGetThread.html"} {"id": "cdab64d794fc-0", "text": "langchain.tools.vectorstore.tool.VectorStoreQATool\u00b6\nclass langchain.tools.vectorstore.tool.VectorStoreQATool(*, name: str, description: str, args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, vectorstore: VectorStore, llm: BaseLanguageModel = None)[source]\u00b6\nBases: BaseVectorStoreTool, BaseTool\nTool for the VectorDBQA chain. To be initialized with name and chain.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. 
Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str [Required]\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam llm: langchain.schema.language_model.BaseLanguageModel [Optional]\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.vectorstore.tool.VectorStoreQATool.html"} {"id": "cdab64d794fc-1", "text": "Optional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str [Required]\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam vectorstore: langchain.vectorstores.base.VectorStore [Required]\u00b6\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nstatic get_description(name: str, description: str) \u2192 str[source]\u00b6\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.vectorstore.tool.VectorStoreQATool.html"} {"id": "cdab64d794fc-2", "text": "Raise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: Config\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.vectorstore.tool.VectorStoreQATool.html"} {"id": "031e0d620d4b-0", "text": 
"langchain.tools.ddg_search.tool.DuckDuckGoSearchTool\u00b6\nlangchain.tools.ddg_search.tool.DuckDuckGoSearchTool(*args: Any, **kwargs: Any) \u2192 DuckDuckGoSearchRun[source]\u00b6\nDeprecated. Use DuckDuckGoSearchRun instead.\nParameters\n*args \u2013 \n**kwargs \u2013 \nReturns\nDuckDuckGoSearchRun", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.ddg_search.tool.DuckDuckGoSearchTool.html"} {"id": "2661b166a98a-0", "text": "langchain.tools.office365.base.O365BaseTool\u00b6\nclass langchain.tools.office365.base.O365BaseTool(*, name: str, description: str, args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, account: Account = None)[source]\u00b6\nBases: BaseTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam account: Account [Optional]\u00b6\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str [Required]\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.base.O365BaseTool.html"} {"id": "2661b166a98a-1", "text": "and passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str [Required]\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.base.O365BaseTool.html"} {"id": "2661b166a98a-2", "text": "Raise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.base.O365BaseTool.html"} {"id": "eccd1286ee1e-0", "text": "langchain.tools.sql_database.tool.QuerySQLDataBaseTool\u00b6\nclass langchain.tools.sql_database.tool.QuerySQLDataBaseTool(*, name: str = 'sql_db_query', description: str = '\\n\u00a0\u00a0\u00a0 Input to this tool is a detailed and correct SQL query, output is a result from the database.\\n\u00a0\u00a0\u00a0 If the query is not correct, an error message will be returned.\\n\u00a0\u00a0\u00a0 If an error is returned, rewrite the query, check the query, and try again.\\n\u00a0\u00a0\u00a0 ', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, db: SQLDatabase)[source]\u00b6\nBases: BaseSQLDatabaseTool, BaseTool\nTool for querying a SQL database.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. 
Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam db: langchain.sql_database.SQLDatabase [Required]\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.sql_database.tool.QuerySQLDataBaseTool.html"} {"id": "eccd1286ee1e-1", "text": "param db: langchain.sql_database.SQLDatabase [Required]\u00b6\nparam description: str = '\\n\u00a0\u00a0\u00a0 Input to this tool is a detailed and correct SQL query, output is a result from the database.\\n\u00a0\u00a0\u00a0 If the query is not correct, an error message will be returned.\\n\u00a0\u00a0\u00a0 If an error is returned, rewrite the query, check the query, and try again.\\n\u00a0\u00a0\u00a0 '\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'sql_db_query'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.sql_database.tool.QuerySQLDataBaseTool.html"} {"id": "eccd1286ee1e-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: Config\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.sql_database.tool.QuerySQLDataBaseTool.html"}
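
A minimal sketch for the QuerySQLDataBaseTool above, against a local SQLite file (the URI is a placeholder):

```python
# Hedged sketch: SQLDatabase.from_uri accepts any SQLAlchemy-style URI.
from langchain.sql_database import SQLDatabase
from langchain.tools.sql_database.tool import QuerySQLDataBaseTool

db = SQLDatabase.from_uri("sqlite:///Chinook.db")  # placeholder database
tool = QuerySQLDataBaseTool(db=db)  # name defaults to "sql_db_query"
print(tool.run("SELECT name FROM sqlite_master WHERE type='table' LIMIT 5;"))
```

As with the Spark variant, a failed query comes back as an error string that an agent is expected to correct and retry.
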
"https://api.python.langchain.com/en/latest/tools/langchain.tools.sql_database.tool.QuerySQLDataBaseTool.html"} {"id": "2f411f48512c-0", "text": "langchain.tools.google_serper.tool.GoogleSerperResults\u00b6\nclass langchain.tools.google_serper.tool.GoogleSerperResults(*, name: str = 'google_serrper_results_json', description: str = 'A low-cost Google Search API.Useful for when you need to answer questions about current events.Input should be a search query. Output is a JSON object of the query results', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, api_wrapper: GoogleSerperAPIWrapper = None)[source]\u00b6\nBases: BaseTool\nTool that has capability to query the Serper.dev Google Search API\nand get back json.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_wrapper: langchain.utilities.google_serper.GoogleSerperAPIWrapper [Optional]\u00b6\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'A low-cost Google Search API.Useful for when you need to answer questions about current events.Input should be a search query. Output is a JSON object of the query results'\u00b6\nUsed to tell the model how/when/why to use the tool.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.google_serper.tool.GoogleSerperResults.html"} {"id": "2f411f48512c-1", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'google_serrper_results_json'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.google_serper.tool.GoogleSerperResults.html"} {"id": "2f411f48512c-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.google_serper.tool.GoogleSerperResults.html"} {"id": "2c2f9553c8c8-0", "text": "langchain.tools.azure_cognitive_services.image_analysis.AzureCogsImageAnalysisTool\u00b6\nclass langchain.tools.azure_cognitive_services.image_analysis.AzureCogsImageAnalysisTool(*, name: str = 'azure_cognitive_services_image_analysis', description: str = 'A wrapper around Azure Cognitive Services Image Analysis. Useful for when you need to analyze images. Input should be a url to an image.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, azure_cogs_key: str = '', azure_cogs_endpoint: str = '', vision_service: Any = None, analysis_options: Any = None)[source]\u00b6\nBases: BaseTool\nTool that queries the Azure Cognitive Services Image Analysis API.\nIn order to set this up, follow instructions at:\nhttps://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts-sdk/image-analysis-client-library-40\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. 
Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.azure_cognitive_services.image_analysis.AzureCogsImageAnalysisTool.html"} {"id": "2c2f9553c8c8-1", "text": "param callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'A wrapper around Azure Cognitive Services Image Analysis. Useful for when you need to analyze images. Input should be a url to an image.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'azure_cognitive_services_image_analysis'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.azure_cognitive_services.image_analysis.AzureCogsImageAnalysisTool.html"} {"id": "2c2f9553c8c8-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that api key and endpoint exist in the environment.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.azure_cognitive_services.image_analysis.AzureCogsImageAnalysisTool.html"}
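A setup sketch for AzureCogsImageAnalysisTool (assumptions: the Azure Image Analysis SDK from the quickstart linked above is installed, and the key and endpoint placeholders are replaced with real credentials; validate_environment can also pick them up from the environment):

from langchain.tools.azure_cognitive_services.image_analysis import (
    AzureCogsImageAnalysisTool,
)

image_tool = AzureCogsImageAnalysisTool(
    azure_cogs_key="<your-key>",            # placeholder credential
    azure_cogs_endpoint="<your-endpoint>",  # placeholder endpoint URL
)
# Input is a URL to an image, per the tool description.
print(image_tool.run("https://example.com/photo.jpg"))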
"https://api.python.langchain.com/en/latest/tools/langchain.tools.azure_cognitive_services.image_analysis.AzureCogsImageAnalysisTool.html"} {"id": "2765de47bd40-0", "text": "langchain.tools.steamship_image_generation.tool.ModelName\u00b6\nclass langchain.tools.steamship_image_generation.tool.ModelName(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]\u00b6\nBases: str, Enum\nSupported Image Models for generation.\nMethods\n__init__(*args,\u00a0**kwds)\ncapitalize()\nReturn a capitalized version of the string.\ncasefold()\nReturn a version of the string suitable for caseless comparisons.\ncenter(width[,\u00a0fillchar])\nReturn a centered string of length width.\ncount(sub[,\u00a0start[,\u00a0end]])\nReturn the number of non-overlapping occurrences of substring sub in string S[start:end].\nencode([encoding,\u00a0errors])\nEncode the string using the codec registered for encoding.\nendswith(suffix[,\u00a0start[,\u00a0end]])\nReturn True if S ends with the specified suffix, False otherwise.\nexpandtabs([tabsize])\nReturn a copy where all tab characters are expanded using spaces.\nfind(sub[,\u00a0start[,\u00a0end]])\nReturn the lowest index in S where substring sub is found, such that sub is contained within S[start:end].\nformat(*args,\u00a0**kwargs)\nReturn a formatted version of S, using substitutions from args and kwargs.\nformat_map(mapping)\nReturn a formatted version of S, using substitutions from mapping.\nindex(sub[,\u00a0start[,\u00a0end]])\nReturn the lowest index in S where substring sub is found, such that sub is contained within S[start:end].\nisalnum()\nReturn True if the string is an alpha-numeric string, False otherwise.\nisalpha()\nReturn True if the string is an alphabetic string, False otherwise.\nisascii()\nReturn True if all characters in the string are ASCII, False otherwise.\nisdecimal()\nReturn True if the string is a decimal string, False otherwise.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.steamship_image_generation.tool.ModelName.html"} {"id": "2765de47bd40-1", "text": "isdecimal()\nReturn True if the string is a decimal string, False otherwise.\nisdigit()\nReturn True if the string is a digit string, False otherwise.\nisidentifier()\nReturn True if the string is a valid Python identifier, False otherwise.\nislower()\nReturn True if the string is a lowercase string, False otherwise.\nisnumeric()\nReturn True if the string is a numeric string, False otherwise.\nisprintable()\nReturn True if the string is printable, False otherwise.\nisspace()\nReturn True if the string is a whitespace string, False otherwise.\nistitle()\nReturn True if the string is a title-cased string, False otherwise.\nisupper()\nReturn True if the string is an uppercase string, False otherwise.\njoin(iterable,\u00a0/)\nConcatenate any number of strings.\nljust(width[,\u00a0fillchar])\nReturn a left-justified string of length width.\nlower()\nReturn a copy of the string converted to lowercase.\nlstrip([chars])\nReturn a copy of the string with leading whitespace removed.\nmaketrans\nReturn a translation table usable for str.translate().\npartition(sep,\u00a0/)\nPartition the string into three parts using the given separator.\nremoveprefix(prefix,\u00a0/)\nReturn a str with the given prefix string removed if present.\nremovesuffix(suffix,\u00a0/)\nReturn a str with the given suffix string removed if present.\nreplace(old,\u00a0new[,\u00a0count])\nReturn a copy with all occurrences of substring old replaced by 
new.\nrfind(sub[,\u00a0start[,\u00a0end]])\nReturn the highest index in S where substring sub is found, such that sub is contained within S[start:end].\nrindex(sub[,\u00a0start[,\u00a0end]])\nReturn the highest index in S where substring sub is found, such that sub is contained within S[start:end].", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.steamship_image_generation.tool.ModelName.html"} {"id": "2765de47bd40-2", "text": "rjust(width[,\u00a0fillchar])\nReturn a right-justified string of length width.\nrpartition(sep,\u00a0/)\nPartition the string into three parts using the given separator.\nrsplit([sep,\u00a0maxsplit])\nReturn a list of the substrings in the string, using sep as the separator string.\nrstrip([chars])\nReturn a copy of the string with trailing whitespace removed.\nsplit([sep,\u00a0maxsplit])\nReturn a list of the substrings in the string, using sep as the separator string.\nsplitlines([keepends])\nReturn a list of the lines in the string, breaking at line boundaries.\nstartswith(prefix[,\u00a0start[,\u00a0end]])\nReturn True if S starts with the specified prefix, False otherwise.\nstrip([chars])\nReturn a copy of the string with leading and trailing whitespace removed.\nswapcase()\nConvert uppercase characters to lowercase and lowercase characters to uppercase.\ntitle()\nReturn a version of the string where each word is titlecased.\ntranslate(table,\u00a0/)\nReplace each character in the string using the given translation table.\nupper()\nReturn a copy of the string converted to uppercase.\nzfill(width,\u00a0/)\nPad a numeric string with zeros on the left, to fill a field of the given width.\nAttributes\nDALL_E\nSTABLE_DIFFUSION\ncapitalize()\u00b6\nReturn a capitalized version of the string.\nMore specifically, make the first character have upper case and the rest lower\ncase.\ncasefold()\u00b6\nReturn a version of the string suitable for caseless comparisons.\ncenter(width, fillchar=' ', /)\u00b6\nReturn a centered string of length width.\nPadding is done using the specified fill character (default is a space).\ncount(sub[, start[, end]]) \u2192 int\u00b6\nReturn the number of non-overlapping occurrences of substring sub in", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.steamship_image_generation.tool.ModelName.html"} {"id": "2765de47bd40-3", "text": "Return the number of non-overlapping occurrences of substring sub in\nstring S[start:end]. Optional arguments start and end are\ninterpreted as in slice notation.\nencode(encoding='utf-8', errors='strict')\u00b6\nEncode the string using the codec registered for encoding.\nencoding \u2013 The encoding in which to encode the string.\nerrors \u2013 The error handling scheme to use for encoding errors.\nThe default is \u2018strict\u2019 meaning that encoding errors raise a\nUnicodeEncodeError.
Other possible values are \u2018ignore\u2019, \u2018replace\u2019 and\n\u2018xmlcharrefreplace\u2019 as well as any other name registered with\ncodecs.register_error that can handle UnicodeEncodeErrors.\nendswith(suffix[, start[, end]]) \u2192 bool\u00b6\nReturn True if S ends with the specified suffix, False otherwise.\nWith optional start, test S beginning at that position.\nWith optional end, stop comparing S at that position.\nsuffix can also be a tuple of strings to try.\nexpandtabs(tabsize=8)\u00b6\nReturn a copy where all tab characters are expanded using spaces.\nIf tabsize is not given, a tab size of 8 characters is assumed.\nfind(sub[, start[, end]]) \u2192 int\u00b6\nReturn the lowest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nReturn -1 on failure.\nformat(*args, **kwargs) \u2192 str\u00b6\nReturn a formatted version of S, using substitutions from args and kwargs.\nThe substitutions are identified by braces (\u2018{\u2019 and \u2018}\u2019).\nformat_map(mapping) \u2192 str\u00b6\nReturn a formatted version of S, using substitutions from mapping.\nThe substitutions are identified by braces (\u2018{\u2019 and \u2018}\u2019).\nindex(sub[, start[, end]]) \u2192 int\u00b6\nReturn the lowest index in S where substring sub is found,", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.steamship_image_generation.tool.ModelName.html"} {"id": "2765de47bd40-4", "text": "Return the lowest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nRaises ValueError when the substring is not found.\nisalnum()\u00b6\nReturn True if the string is an alpha-numeric string, False otherwise.\nA string is alpha-numeric if all characters in the string are alpha-numeric and\nthere is at least one character in the string.\nisalpha()\u00b6\nReturn True if the string is an alphabetic string, False otherwise.\nA string is alphabetic if all characters in the string are alphabetic and there\nis at least one character in the string.\nisascii()\u00b6\nReturn True if all characters in the string are ASCII, False otherwise.\nASCII characters have code points in the range U+0000-U+007F.\nEmpty string is ASCII too.\nisdecimal()\u00b6\nReturn True if the string is a decimal string, False otherwise.\nA string is a decimal string if all characters in the string are decimal and\nthere is at least one character in the string.\nisdigit()\u00b6\nReturn True if the string is a digit string, False otherwise.\nA string is a digit string if all characters in the string are digits and there\nis at least one character in the string.\nisidentifier()\u00b6\nReturn True if the string is a valid Python identifier, False otherwise.\nCall keyword.iskeyword(s) to test whether string s is a reserved identifier,\nsuch as \u201cdef\u201d or \u201cclass\u201d.\nislower()\u00b6\nReturn True if the string is a lowercase string, False otherwise.\nA string is lowercase if all cased characters in the string are lowercase and\nthere is at least one cased character in the string.\nisnumeric()\u00b6\nReturn True if the string is a numeric string, False otherwise.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.steamship_image_generation.tool.ModelName.html"} {"id": "2765de47bd40-5", "text": "isnumeric()\u00b6\nReturn True if the string is a numeric string, False 
otherwise.\nA string is numeric if all characters in the string are numeric and there is at\nleast one character in the string.\nisprintable()\u00b6\nReturn True if the string is printable, False otherwise.\nA string is printable if all of its characters are considered printable in\nrepr() or if it is empty.\nisspace()\u00b6\nReturn True if the string is a whitespace string, False otherwise.\nA string is whitespace if all characters in the string are whitespace and there\nis at least one character in the string.\nistitle()\u00b6\nReturn True if the string is a title-cased string, False otherwise.\nIn a title-cased string, upper- and title-case characters may only\nfollow uncased characters and lowercase characters only cased ones.\nisupper()\u00b6\nReturn True if the string is an uppercase string, False otherwise.\nA string is uppercase if all cased characters in the string are uppercase and\nthere is at least one cased character in the string.\njoin(iterable, /)\u00b6\nConcatenate any number of strings.\nThe string whose method is called is inserted in between each given string.\nThe result is returned as a new string.\nExample: \u2018.\u2019.join([\u2018ab\u2019, \u2018pq\u2019, \u2018rs\u2019]) -> \u2018ab.pq.rs\u2019\nljust(width, fillchar=' ', /)\u00b6\nReturn a left-justified string of length width.\nPadding is done using the specified fill character (default is a space).\nlower()\u00b6\nReturn a copy of the string converted to lowercase.\nlstrip(chars=None, /)\u00b6\nReturn a copy of the string with leading whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nstatic maketrans()\u00b6\nReturn a translation table usable for str.translate().", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.steamship_image_generation.tool.ModelName.html"} {"id": "2765de47bd40-6", "text": "static maketrans()\u00b6\nReturn a translation table usable for str.translate().\nIf there is only one argument, it must be a dictionary mapping Unicode\nordinals (integers) or characters to Unicode ordinals, strings or None.\nCharacter keys will be then converted to ordinals.\nIf there are two arguments, they must be strings of equal length, and\nin the resulting dictionary, each character in x will be mapped to the\ncharacter at the same position in y. If there is a third argument, it\nmust be a string, whose characters will be mapped to None in the result.\npartition(sep, /)\u00b6\nPartition the string into three parts using the given separator.\nThis will search for the separator in the string. If the separator is found,\nreturns a 3-tuple containing the part before the separator, the separator\nitself, and the part after it.\nIf the separator is not found, returns a 3-tuple containing the original string\nand two empty strings.\nremoveprefix(prefix, /)\u00b6\nReturn a str with the given prefix string removed if present.\nIf the string starts with the prefix string, return string[len(prefix):].\nOtherwise, return a copy of the original string.\nremovesuffix(suffix, /)\u00b6\nReturn a str with the given suffix string removed if present.\nIf the string ends with the suffix string and that suffix is not empty,\nreturn string[:-len(suffix)]. 
Otherwise, return a copy of the original\nstring.\nreplace(old, new, count=-1, /)\u00b6\nReturn a copy with all occurrences of substring old replaced by new.\ncount \u2013 Maximum number of occurrences to replace.\n-1 (the default value) means replace all occurrences.\nIf the optional argument count is given, only the first count occurrences are\nreplaced.\nrfind(sub[, start[, end]]) \u2192 int\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.steamship_image_generation.tool.ModelName.html"} {"id": "2765de47bd40-7", "text": "replaced.\nrfind(sub[, start[, end]]) \u2192 int\u00b6\nReturn the highest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nReturn -1 on failure.\nrindex(sub[, start[, end]]) \u2192 int\u00b6\nReturn the highest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nRaises ValueError when the substring is not found.\nrjust(width, fillchar=' ', /)\u00b6\nReturn a right-justified string of length width.\nPadding is done using the specified fill character (default is a space).\nrpartition(sep, /)\u00b6\nPartition the string into three parts using the given separator.\nThis will search for the separator in the string, starting at the end. If\nthe separator is found, returns a 3-tuple containing the part before the\nseparator, the separator itself, and the part after it.\nIf the separator is not found, returns a 3-tuple containing two empty strings\nand the original string.\nrsplit(sep=None, maxsplit=-1)\u00b6\nReturn a list of the substrings in the string, using sep as the separator string.\nsep \u2013 The separator used to split the string.\nWhen set to None (the default value), will split on any whitespace\ncharacter (including \\n \\r \\t \\f and spaces) and will discard\nempty strings from the result.\nmaxsplit \u2013 Maximum number of splits (starting from the left).\n-1 (the default value) means no limit.\nSplitting starts at the end of the string and works to the front.\nrstrip(chars=None, /)\u00b6\nReturn a copy of the string with trailing whitespace removed.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.steamship_image_generation.tool.ModelName.html"} {"id": "2765de47bd40-8", "text": "rstrip(chars=None, /)\u00b6\nReturn a copy of the string with trailing whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nsplit(sep=None, maxsplit=-1)\u00b6\nReturn a list of the substrings in the string, using sep as the separator string.\nsep \u2013 The separator used to split the string.\nWhen set to None (the default value), will split on any whitespace\ncharacter (including \\n \\r \\t \\f and spaces) and will discard\nempty strings from the result.\nmaxsplit \u2013 Maximum number of splits (starting from the left).\n-1 (the default value) means no limit.\nNote, str.split() is mainly useful for data that has been intentionally\ndelimited.
With natural text that includes punctuation, consider using\nthe regular expression module.\nsplitlines(keepends=False)\u00b6\nReturn a list of the lines in the string, breaking at line boundaries.\nLine breaks are not included in the resulting list unless keepends is given and\ntrue.\nstartswith(prefix[, start[, end]]) \u2192 bool\u00b6\nReturn True if S starts with the specified prefix, False otherwise.\nWith optional start, test S beginning at that position.\nWith optional end, stop comparing S at that position.\nprefix can also be a tuple of strings to try.\nstrip(chars=None, /)\u00b6\nReturn a copy of the string with leading and trailing whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nswapcase()\u00b6\nConvert uppercase characters to lowercase and lowercase characters to uppercase.\ntitle()\u00b6\nReturn a version of the string where each word is titlecased.\nMore specifically, words start with uppercased characters and all remaining\ncased characters have lower case.\ntranslate(table, /)\u00b6\nReplace each character in the string using the given translation table.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.steamship_image_generation.tool.ModelName.html"} {"id": "2765de47bd40-9", "text": "translate(table, /)\u00b6\nReplace each character in the string using the given translation table.\ntable \u2013 Translation table, which must be a mapping of Unicode ordinals to\nUnicode ordinals, strings, or None.\nThe table must implement lookup/indexing via __getitem__, for instance a\ndictionary or list. If this operation raises LookupError, the character is\nleft untouched. Characters mapped to None are deleted.\nupper()\u00b6\nReturn a copy of the string converted to uppercase.\nzfill(width, /)\u00b6\nPad a numeric string with zeros on the left, to fill a field of the given width.\nThe string is never truncated.\nDALL_E = 'dall-e'\u00b6\nSTABLE_DIFFUSION = 'stable-diffusion'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.steamship_image_generation.tool.ModelName.html"} {"id": "d9c3c3ada057-0", "text": "langchain.tools.openapi.utils.api_models.APIRequestBodyProperty\u00b6\nclass langchain.tools.openapi.utils.api_models.APIRequestBodyProperty(*, name: str, required: bool, type: Union[str, Type, tuple, None, Enum] = None, default: Optional[Any] = None, description: Optional[str] = None, properties: List[APIRequestBodyProperty], references_used: List[str])[source]\u00b6\nBases: APIPropertyBase\nA model for a request body property.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam default: Optional[Any] = None\u00b6\nThe default value of the property.\nparam description: Optional[str] = None\u00b6\nThe description of the property.\nparam name: str [Required]\u00b6\nThe name of the property.\nparam properties: List[langchain.tools.openapi.utils.api_models.APIRequestBodyProperty] [Required]\u00b6\nThe sub-properties of the property.\nparam references_used: List[str] [Required]\u00b6\nThe references used by the property.\nparam required: bool [Required]\u00b6\nWhether the property is required.\nparam type: Union[str, Type, tuple, None, enum.Enum] = None\u00b6\nThe type of the property.\nEither a primitive type, a component/parameter type,\nor an array or \u2018object\u2019 (dict) of the above.\nclassmethod from_schema(schema: Schema, name: str, required: bool, spec: OpenAPISpec,
references_used: Optional[List[str]] = None) \u2192 APIRequestBodyProperty[source]\u00b6\nRecursively populate from an OpenAPI Schema.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.openapi.utils.api_models.APIRequestBodyProperty.html"} {"id": "8412a982a697-0", "text": "langchain.tools.human.tool.HumanInputRun\u00b6\nclass langchain.tools.human.tool.HumanInputRun(*, name: str = 'human', description: str = 'You can ask a human for guidance when you think you got stuck or you are not sure what to do next. The input should be a question for the human.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, prompt_func: Callable[[str], None] = None, input_func: Callable = None)[source]\u00b6\nBases: BaseTool\nTool that adds the capability to ask user for input.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'You can ask a human for guidance when you think you got stuck or you are not sure what to do next. The input should be a question for the human.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.human.tool.HumanInputRun.html"} {"id": "8412a982a697-1", "text": "You can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam input_func: Callable [Optional]\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'human'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam prompt_func: Callable[[str], None] [Optional]\u00b6\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
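To make HumanInputRun concrete, a minimal sketch (by default the tool prints the question and blocks on stdin; the custom input_func shown is hypothetical):

from langchain.tools.human.tool import HumanInputRun

human_tool = HumanInputRun()  # default prompt_func prints, input_func reads stdin
answer = human_tool.run("Which environment should this change deploy to?")

# A hypothetical replacement collector can be injected instead:
human_tool = HumanInputRun(input_func=lambda: input("Operator answer: "))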
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.human.tool.HumanInputRun.html"} {"id": "8412a982a697-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.human.tool.HumanInputRun.html"} {"id": "20e9e74efd53-0", "text": "langchain.tools.playwright.utils.create_sync_playwright_browser\u00b6\nlangchain.tools.playwright.utils.create_sync_playwright_browser(headless: bool = True) \u2192 SyncBrowser[source]\u00b6\nCreate a playwright browser.\nParameters\nheadless \u2013 Whether to run the browser in headless mode. 
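A usage sketch (assumptions: the playwright package is installed and browser binaries have been provisioned with "playwright install"); the returned browser is typically handed to PlayWrightBrowserToolkit, listed in the agents reference:

from langchain.agents.agent_toolkits import PlayWrightBrowserToolkit
from langchain.tools.playwright.utils import create_sync_playwright_browser

sync_browser = create_sync_playwright_browser(headless=True)
toolkit = PlayWrightBrowserToolkit.from_browser(sync_browser=sync_browser)
browser_tools = toolkit.get_tools()  # navigate, extract_hyperlinks, and so on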
Defaults to True.\nReturns\nThe playwright browser.\nReturn type\nSyncBrowser", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.utils.create_sync_playwright_browser.html"} {"id": "fdb4b29fc430-0", "text": "langchain.tools.spark_sql.tool.BaseSparkSQLTool\u00b6\nclass langchain.tools.spark_sql.tool.BaseSparkSQLTool(*, db: SparkSQL)[source]\u00b6\nBases: BaseModel\nBase tool for interacting with Spark SQL.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam db: langchain.utilities.spark_sql.SparkSQL [Required]\u00b6\nmodel Config[source]\u00b6\nBases: Config\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.spark_sql.tool.BaseSparkSQLTool.html"} {"id": "5b17cf26e88f-0", "text": "langchain.tools.shell.tool.ShellTool\u00b6\nclass langchain.tools.shell.tool.ShellTool(*, name: str = 'terminal', description: str = 'Run shell commands on this Linux machine.', args_schema: ~typing.Type[~pydantic.main.BaseModel] = , return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, process: ~langchain.utilities.bash.BashProcess = None)[source]\u00b6\nBases: BaseTool\nTool to run shell commands.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Type[pydantic.main.BaseModel] = \u00b6\nSchema for input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Run shell commands on this Linux machine.'\u00b6\nDescription of tool.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.shell.tool.ShellTool.html"} {"id": "5b17cf26e88f-1", "text": "Handle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'terminal'\u00b6\nName of tool.\nparam process: langchain.utilities.bash.BashProcess [Optional]\u00b6\nBash process to run commands.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
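A usage sketch for ShellTool; the commands are illustrative, and since everything runs directly on the host machine, sandboxing is prudent:

from langchain.tools.shell.tool import ShellTool

shell_tool = ShellTool()  # commands execute through the underlying BashProcess
# The input schema accepts one command or a list under the "commands" key.
print(shell_tool.run({"commands": ["echo 'Hello World'", "pwd"]}))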
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.shell.tool.ShellTool.html"} {"id": "5b17cf26e88f-2", "text": "Raise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.shell.tool.ShellTool.html"} {"id": "f7ddcc1c707b-0", "text": "langchain.tools.base.tool\u00b6\nlangchain.tools.base.tool(*args: Union[str, Callable], return_direct: bool = False, args_schema: Optional[Type[BaseModel]] = None, infer_schema: bool = True) \u2192 Callable[source]\u00b6\nMake tools out of functions, can be used with or without arguments.\nParameters\n*args \u2013 The arguments to the tool.\nreturn_direct \u2013 Whether to return directly from the tool rather\nthan continuing the agent loop.\nargs_schema \u2013 optional argument schema for user to specify\ninfer_schema \u2013 Whether to infer the schema of the arguments from\nthe function\u2019s signature. 
This also makes the resultant tool\naccept a dictionary input to its run() function.\nRequires:\nFunction must be of type (str) -> str\nFunction must have a docstring\nExamples\n@tool\ndef search_api(query: str) -> str:\n """Searches the API for the query."""\n return\n@tool("search", return_direct=True)\ndef search_api(query: str) -> str:\n """Searches the API for the query."""\n return", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.base.tool.html"} {"id": "9644f044a483-0", "text": "langchain.tools.gmail.utils.build_resource_service\u00b6\nlangchain.tools.gmail.utils.build_resource_service(credentials: Optional[Credentials] = None, service_name: str = 'gmail', service_version: str = 'v1') \u2192 Resource[source]\u00b6\nBuild a Gmail service.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.gmail.utils.build_resource_service.html"} {"id": "6dee0a6ce128-0", "text": "langchain.tools.file_management.file_search.FileSearchInput\u00b6\nclass langchain.tools.file_management.file_search.FileSearchInput(*, dir_path: str = '.', pattern: str)[source]\u00b6\nBases: BaseModel\nInput for FileSearchTool.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam dir_path: str = '.'\u00b6\nSubdirectory to search in.\nparam pattern: str [Required]\u00b6\nUnix shell regex, where * matches everything.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.file_search.FileSearchInput.html"} {"id": "66cea458ebec-0", "text": "langchain.tools.spark_sql.tool.QueryCheckerTool\u00b6\nclass langchain.tools.spark_sql.tool.QueryCheckerTool(*, name: str = 'query_checker_sql_db', description: str = '\\n\u00a0\u00a0\u00a0 Use this tool to double check if your query is correct before executing it.\\n\u00a0\u00a0\u00a0 Always use this tool before executing a query with query_sql_db!\\n\u00a0\u00a0\u00a0 ', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, db: SparkSQL, template: str = '\\n{query}\\nDouble check the Spark SQL query above for common mistakes, including:\\n- Using NOT IN with NULL values\\n- Using UNION when UNION ALL should have been used\\n- Using BETWEEN for exclusive ranges\\n- Data type mismatch in predicates\\n- Properly quoting identifiers\\n- Using the correct number of arguments for functions\\n- Casting to the correct data type\\n- Using the proper columns for joins\\n\\nIf there are any of the above mistakes, rewrite the query.
If there are no mistakes, just reproduce the original query.', llm: BaseLanguageModel, llm_chain: LLMChain)[source]\u00b6\nBases: BaseSparkSQLTool, BaseTool\nUse an LLM to check if a query is correct.\nAdapted from https://www.patterns.app/blog/2023/01/18/crunchbot-sql-analyst-gpt/\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.spark_sql.tool.QueryCheckerTool.html"} {"id": "66cea458ebec-1", "text": "Raises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam db: SparkSQL [Required]\u00b6\nparam description: str = '\\n\u00a0\u00a0\u00a0 Use this tool to double check if your query is correct before executing it.\\n\u00a0\u00a0\u00a0 Always use this tool before executing a query with query_sql_db!\\n\u00a0\u00a0\u00a0 '\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam llm: langchain.schema.language_model.BaseLanguageModel [Required]\u00b6\nparam llm_chain: langchain.chains.llm.LLMChain [Required]\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'query_checker_sql_db'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.spark_sql.tool.QueryCheckerTool.html"} {"id": "66cea458ebec-2", "text": "Optional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam template: str = '\\n{query}\\nDouble check the Spark SQL query above for common mistakes, including:\\n- Using NOT IN with NULL values\\n- Using UNION when UNION ALL should have been used\\n- Using BETWEEN for exclusive ranges\\n- Data type mismatch in predicates\\n- Properly quoting identifiers\\n- Using the correct number of arguments for functions\\n- Casting to the correct data type\\n- Using the proper columns for joins\\n\\nIf there are any of the above mistakes, rewrite the query. 
If there are no mistakes, just reproduce the original query.'\u00b6\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator initialize_llm_chain\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.spark_sql.tool.QueryCheckerTool.html"} {"id": "66cea458ebec-3", "text": "Raise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: Config\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.spark_sql.tool.QueryCheckerTool.html"} {"id": "1086b66159fc-0", "text": "langchain.tools.playwright.navigate.NavigateToolInput\u00b6\nclass langchain.tools.playwright.navigate.NavigateToolInput(*, url: str)[source]\u00b6\nBases: BaseModel\nInput for NavigateToolInput.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam url: str [Required]\u00b6\nurl to navigate to", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.navigate.NavigateToolInput.html"} {"id": "b634d1dd0a46-0", "text": "langchain.tools.azure_cognitive_services.utils.download_audio_from_url\u00b6\nlangchain.tools.azure_cognitive_services.utils.download_audio_from_url(audio_url: str) \u2192 str[source]\u00b6\nDownload audio from url to local.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.azure_cognitive_services.utils.download_audio_from_url.html"} {"id": "7abd6efb95ff-0", "text": "langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksToolInput\u00b6\nclass langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksToolInput(*, absolute_urls: bool = False)[source]\u00b6\nBases: BaseModel\nInput for ExtractHyperlinksTool.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam absolute_urls: bool = False\u00b6\nReturn absolute URLs instead of relative URLs", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksToolInput.html"} {"id": "378782a4737b-0", "text": "langchain.tools.base.BaseTool\u00b6\nclass langchain.tools.base.BaseTool(*, name: str, 
description: str, args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False)[source]\u00b6\nBases: ABC, BaseModel\nInterface LangChain tools must implement.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[pydantic.main.BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str [Required]\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.base.BaseTool.html"} {"id": "378782a4737b-1", "text": "and passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str [Required]\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
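To make the BaseTool interface concrete, a minimal custom-tool sketch (WordCountTool is hypothetical; _run and _arun are the standard synchronous and asynchronous hooks that run() and arun() dispatch to):

from langchain.tools.base import BaseTool

class WordCountTool(BaseTool):
    """Hypothetical tool that counts the words in its input."""

    name = "word_count"
    description = "Counts the words in the input text."

    def _run(self, tool_input: str) -> str:
        return str(len(tool_input.split()))

    async def _arun(self, tool_input: str) -> str:
        return self._run(tool_input)

word_count_tool = WordCountTool()
print(word_count_tool.run("one two three"))  # -> "3"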
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str[source]\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any[source]\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nRaise deprecation warning if callback_manager is used.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.base.BaseTool.html"} {"id": "378782a4737b-2", "text": "Raise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any[source]\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.base.BaseTool.html"} {"id": "f35b431364cb-0", "text": "langchain.tools.office365.messages_search.O365SearchEmails\u00b6\nclass langchain.tools.office365.messages_search.O365SearchEmails(*, name: str = 'messages_search', description: str = 'Use this tool to search for email messages. The input must be a valid Microsoft Graph v1.0 $search query. The output is a JSON list of the requested resource.', args_schema: ~typing.Type[~pydantic.main.BaseModel] = , return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, account: Account = None)[source]\u00b6\nBases: O365BaseTool\nClass for searching email messages in Office 365\nFree, but setup is required\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam account: Account [Optional]\u00b6\nparam args_schema: Type[pydantic.main.BaseModel] = \u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. 
Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.messages_search.O365SearchEmails.html"} {"id": "f35b431364cb-1", "text": "Deprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Use this tool to search for email messages. The input must be a valid Microsoft Graph v1.0 $search query. The output is a JSON list of the requested resource.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'messages_search'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.messages_search.O365SearchEmails.html"} {"id": "f35b431364cb-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.messages_search.O365SearchEmails.html"} {"id": "04521d069e5c-0", "text": "langchain.tools.plugin.AIPluginToolSchema\u00b6\nclass langchain.tools.plugin.AIPluginToolSchema(*, tool_input: Optional[str] = '')[source]\u00b6\nBases: 
BaseModel\nAIPluginToolSchema.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam tool_input: Optional[str] = ''\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.plugin.AIPluginToolSchema.html"} {"id": "e6d911d55544-0", "text": "langchain.tools.openweathermap.tool.OpenWeatherMapQueryRun\u00b6\nclass langchain.tools.openweathermap.tool.OpenWeatherMapQueryRun(*, name: str = 'OpenWeatherMap', description: str = 'A wrapper around OpenWeatherMap API. Useful for fetching current weather information for a specified location. Input should be a location string (e.g. London,GB).', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, api_wrapper: OpenWeatherMapAPIWrapper = None)[source]\u00b6\nBases: BaseTool\nTool that adds the capability to query using the OpenWeatherMap API.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_wrapper: langchain.utilities.openweathermap.OpenWeatherMapAPIWrapper [Optional]\u00b6\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'A wrapper around OpenWeatherMap API. Useful for fetching current weather information for a specified location. Input should be a location string (e.g. London,GB).'\u00b6\nUsed to tell the model how/when/why to use the tool.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.openweathermap.tool.OpenWeatherMapQueryRun.html"} {"id": "e6d911d55544-1", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'OpenWeatherMap'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool.
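As a usage note for the OpenWeatherMap tool documented here: a minimal sketch, assuming the pyowm package is installed and an OPENWEATHERMAP_API_KEY environment variable is set (the wrapper otherwise fails validation at construction time).

```python
# Hedged sketch: requires `pip install pyowm` and OPENWEATHERMAP_API_KEY.
from langchain.tools.openweathermap.tool import OpenWeatherMapQueryRun

weather = OpenWeatherMapQueryRun()
# Input is a plain location string, per the tool description above.
print(weather.run("London,GB"))
```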
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.openweathermap.tool.OpenWeatherMapQueryRun.html"} {"id": "e6d911d55544-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.openweathermap.tool.OpenWeatherMapQueryRun.html"} {"id": "adfae2234a8d-0", "text": "langchain.tools.office365.create_draft_message.CreateDraftMessageSchema\u00b6\nclass langchain.tools.office365.create_draft_message.CreateDraftMessageSchema(*, body: str, to: List[str], subject: str, cc: Optional[List[str]] = None, bcc: Optional[List[str]] = None)[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam bcc: Optional[List[str]] = None\u00b6\nThe list of BCC recipients.\nparam body: str [Required]\u00b6\nThe message body to include in the draft.\nparam cc: Optional[List[str]] = None\u00b6\nThe list of CC recipients.\nparam subject: str [Required]\u00b6\nThe subject of the message.\nparam to: List[str] [Required]\u00b6\nThe list of recipients.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.office365.create_draft_message.CreateDraftMessageSchema.html"} {"id": "cdaaafe74785-0", "text": "langchain.tools.plugin.AIPlugin\u00b6\nclass langchain.tools.plugin.AIPlugin(*, schema_version: str, name_for_model: str, name_for_human: str, description_for_model: str, description_for_human: str, auth: Optional[dict] = None, api: ApiConfig, logo_url: Optional[str] = None, contact_email: Optional[str] = None, legal_info_url: Optional[str] = None)[source]\u00b6\nBases: BaseModel\nAI Plugin Definition.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api: langchain.tools.plugin.ApiConfig [Required]\u00b6\nparam auth: Optional[dict] = None\u00b6\nparam 
contact_email: Optional[str] = None\u00b6\nparam description_for_human: str [Required]\u00b6\nparam description_for_model: str [Required]\u00b6\nparam legal_info_url: Optional[str] = None\u00b6\nparam logo_url: Optional[str] = None\u00b6\nparam name_for_human: str [Required]\u00b6\nparam name_for_model: str [Required]\u00b6\nparam schema_version: str [Required]\u00b6\nclassmethod from_url(url: str) \u2192 AIPlugin[source]\u00b6\nInstantiate AIPlugin from a URL.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.plugin.AIPlugin.html"} {"id": "6960da7603fe-0", "text": "langchain.tools.file_management.list_dir.ListDirectoryTool\u00b6\nclass langchain.tools.file_management.list_dir.ListDirectoryTool(*, name: str = 'list_directory', description: str = 'List files and directories in a specified folder', args_schema: ~typing.Type[~pydantic.main.BaseModel] = , return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, root_dir: ~typing.Optional[str] = None)[source]\u00b6\nBases: BaseFileToolMixin, BaseTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Type[pydantic.main.BaseModel] = \u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'List files and directories in a specified folder'\u00b6\nUsed to tell the model how/when/why to use the tool.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.list_dir.ListDirectoryTool.html"} {"id": "6960da7603fe-1", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'list_directory'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam root_dir: Optional[str] = None\u00b6\nThe final path will be chosen relative to root_dir if specified.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
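The AIPlugin model above is usually populated from a hosted manifest via from_url; a short sketch, with a placeholder URL standing in for any site that serves a ChatGPT-style manifest at /.well-known/ai-plugin.json.

```python
# Sketch only: the URL is a placeholder, not a real manifest endpoint.
from langchain.tools.plugin import AIPlugin

plugin = AIPlugin.from_url("https://example.com/.well-known/ai-plugin.json")
print(plugin.name_for_model, "-", plugin.description_for_model)
```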
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.list_dir.ListDirectoryTool.html"} {"id": "6960da7603fe-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nget_relative_path(file_path: str) \u2192 Path\u00b6\nGet the relative path, returning an error if unsupported.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.file_management.list_dir.ListDirectoryTool.html"} {"id": "c7c1077f6824-0", "text": "langchain.tools.plugin.marshal_spec\u00b6\nlangchain.tools.plugin.marshal_spec(txt: str) \u2192 dict[source]\u00b6\nConvert the yaml or json serialized spec to a dict.\nParameters\ntxt \u2013 The yaml or json serialized spec.\nReturns\nThe spec as a dict.\nReturn type\ndict", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.plugin.marshal_spec.html"} {"id": "916109c889d1-0", "text": "langchain.tools.convert_to_openai.FunctionDescription\u00b6\nclass langchain.tools.convert_to_openai.FunctionDescription[source]\u00b6\nBases: TypedDict\nRepresentation of a callable function to the OpenAI API.\nMethods\n__init__(*args,\u00a0**kwargs)\nclear()\ncopy()\nfromkeys([value])\nCreate a new dictionary with keys from iterable and values set to value.\nget(key[,\u00a0default])\nReturn the value for key if key is in the dictionary, else default.\nitems()\nkeys()\npop(k[,d])\nIf the key is not found, return the default if given; otherwise, raise a KeyError.\npopitem()\nRemove and return a (key, value) pair as a 2-tuple.\nsetdefault(key[,\u00a0default])\nInsert key with a value of default if key is not in the dictionary.\nupdate([E,\u00a0]**F)\nIf E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]\nvalues()\nAttributes\nname\nThe name of the function.\ndescription\nA description of the function.\nparameters\nThe parameters of the function.\nclear() \u2192 None.\u00a0 
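For the ListDirectoryTool documented above, a minimal sketch sandboxed to a throwaway root_dir; the input key dir_path is assumed to match the tool's args_schema.

```python
# Minimal sketch: listing a directory restricted to a root_dir sandbox.
import tempfile
from langchain.tools.file_management.list_dir import ListDirectoryTool

root = tempfile.mkdtemp()
tool = ListDirectoryTool(root_dir=root)
# `dir_path` (assumed schema field) is resolved relative to root_dir.
print(tool.run({"dir_path": "."}))
```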
Remove all items from D.\u00b6\ncopy() \u2192 a shallow copy of D\u00b6\nfromkeys(value=None, /)\u00b6\nCreate a new dictionary with keys from iterable and values set to value.\nget(key, default=None, /)\u00b6\nReturn the value for key if key is in the dictionary, else default.\nitems() \u2192 a set-like object providing a view on D's items\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.convert_to_openai.FunctionDescription.html"} {"id": "916109c889d1-1", "text": "items() \u2192 a set-like object providing a view on D's items\u00b6\nkeys() \u2192 a set-like object providing a view on D's keys\u00b6\npop(k[, d]) \u2192 v, remove specified key and return the corresponding value.\u00b6\nIf the key is not found, return the default if given; otherwise,\nraise a KeyError.\npopitem()\u00b6\nRemove and return a (key, value) pair as a 2-tuple.\nPairs are returned in LIFO (last-in, first-out) order.\nRaises KeyError if the dict is empty.\nsetdefault(key, default=None, /)\u00b6\nInsert key with a value of default if key is not in the dictionary.\nReturn the value for key if key is in the dictionary, else default.\nupdate([E, ]**F) \u2192 None.\u00a0 Update D from dict/iterable E and F.\u00b6\nIf E is present and has a .keys() method, then does: for k in E: D[k] = E[k]\nIf E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v\nIn either case, this is followed by: for k in F: D[k] = F[k]\nvalues() \u2192 an object providing a view on D's values\u00b6\ndescription: str\u00b6\nA description of the function.\nname: str\u00b6\nThe name of the function.\nparameters: dict\u00b6\nThe parameters of the function.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.convert_to_openai.FunctionDescription.html"} {"id": "80e04e9ad92d-0", "text": "langchain.tools.powerbi.tool.ListPowerBITool\u00b6\nclass langchain.tools.powerbi.tool.ListPowerBITool(*, name: str = 'list_tables_powerbi', description: str = 'Input is an empty string, output is a comma separated list of tables in the database.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, powerbi: PowerBIDataset)[source]\u00b6\nBases: BaseTool\nTool for getting tables names.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. 
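Because FunctionDescription is a plain TypedDict, instances are ordinary dicts, for example when handing a tool definition to the OpenAI function-calling API; a small sketch with invented values.

```python
from langchain.tools.convert_to_openai import FunctionDescription

fd: FunctionDescription = {
    "name": "word_count",
    "description": "Counts the words in a string.",
    "parameters": {  # JSON Schema, as expected by OpenAI function calling
        "type": "object",
        "properties": {"text": {"type": "string"}},
        "required": ["text"],
    },
}
print(fd["name"], list(fd["parameters"]["properties"]))
```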
Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Input is an empty string, output is a comma separated list of tables in the database.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.powerbi.tool.ListPowerBITool.html"} {"id": "80e04e9ad92d-1", "text": "Optional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'list_tables_powerbi'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]\u00b6\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.powerbi.tool.ListPowerBITool.html"} {"id": "80e04e9ad92d-2", "text": "Raise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.powerbi.tool.ListPowerBITool.html"} {"id": "ab2a7232bf5b-0", "text": "langchain.tools.google_places.tool.GooglePlacesTool\u00b6\nclass langchain.tools.google_places.tool.GooglePlacesTool(*, name: str = 'google_places', description: str = 
'A wrapper around Google Places. Useful for when you need to validate or discover addresses from ambiguous text. Input should be a search query.', args_schema: ~typing.Type[~pydantic.main.BaseModel] = , return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, api_wrapper: ~langchain.utilities.google_places_api.GooglePlacesAPIWrapper = None)[source]\u00b6\nBases: BaseTool\nTool that adds the capability to query the Google places API.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_wrapper: langchain.utilities.google_places_api.GooglePlacesAPIWrapper [Optional]\u00b6\nparam args_schema: Type[pydantic.main.BaseModel] = \u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.google_places.tool.GooglePlacesTool.html"} {"id": "ab2a7232bf5b-1", "text": "Deprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'A wrapper around Google Places. Useful for when you need to validate or discover addresses from ambiguous text. Input should be a search query.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'google_places'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool.
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.google_places.tool.GooglePlacesTool.html"} {"id": "ab2a7232bf5b-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.google_places.tool.GooglePlacesTool.html"} {"id": "f57581f2d2fd-0", "text": "langchain.tools.json.tool.JsonGetValueTool\u00b6\nclass langchain.tools.json.tool.JsonGetValueTool(*, name: str = 'json_spec_get_value', description: str = '\\n\u00a0\u00a0\u00a0 Can be used to see value in string format at a given path.\\n\u00a0\u00a0\u00a0 Before calling this you should be SURE that the path to this exists.\\n\u00a0\u00a0\u00a0 The input is a text representation of the path to the dict in Python syntax (e.g. data[\"key1\"][0][\"key2\"]).\\n\u00a0\u00a0\u00a0 ', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, spec: JsonSpec)[source]\u00b6\nBases: BaseTool\nTool for getting a value in a JSON spec.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. 
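A usage sketch for the GooglePlacesTool documented above, assuming the googlemaps package is installed and a GPLACES_API_KEY environment variable is set; the query is illustrative.

```python
# Hedged sketch: needs `pip install googlemaps` and GPLACES_API_KEY.
from langchain.tools.google_places.tool import GooglePlacesTool

places = GooglePlacesTool()
print(places.run("Eiffel Tower, Paris"))
```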
Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.json.tool.JsonGetValueTool.html"} {"id": "f57581f2d2fd-1", "text": "param callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = '\\n\u00a0\u00a0\u00a0 Can be used to see value in string format at a given path.\\n\u00a0\u00a0\u00a0 Before calling this you should be SURE that the path to this exists.\\n\u00a0\u00a0\u00a0 The input is a text representation of the path to the dict in Python syntax (e.g. data[\"key1\"][0][\"key2\"]).\\n\u00a0\u00a0\u00a0 '\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'json_spec_get_value'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam spec: JsonSpec [Required]\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.json.tool.JsonGetValueTool.html"} {"id": "f57581f2d2fd-2", "text": "param verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": 
"https://api.python.langchain.com/en/latest/tools/langchain.tools.json.tool.JsonGetValueTool.html"} {"id": "f1de1799da07-0", "text": "langchain.tools.playwright.current_page.CurrentWebPageTool\u00b6\nclass langchain.tools.playwright.current_page.CurrentWebPageTool(*, name: str = 'current_webpage', description: str = 'Returns the URL of the current page', args_schema: ~typing.Type[~pydantic.main.BaseModel] = , return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, sync_browser: Optional['SyncBrowser'] = None, async_browser: Optional['AsyncBrowser'] = None)[source]\u00b6\nBases: BaseBrowserTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Type[BaseModel] = \u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam async_browser: Optional['AsyncBrowser'] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Returns the URL of the current page'\u00b6\nUsed to tell the model how/when/why to use the tool.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.current_page.CurrentWebPageTool.html"} {"id": "f1de1799da07-1", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'current_webpage'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam sync_browser: Optional['SyncBrowser'] = None\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.current_page.CurrentWebPageTool.html"} {"id": "f1de1799da07-2", "text": "Make tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nclassmethod from_browser(sync_browser: Optional[SyncBrowser] = None, async_browser: Optional[AsyncBrowser] = None) \u2192 BaseBrowserTool\u00b6\nInstantiate the tool.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nvalidator validate_browser_provided\u00a0 \u00bb\u00a0 all fields\u00b6\nCheck that the arguments are valid.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.current_page.CurrentWebPageTool.html"} {"id": "c9df58d5b520-0", "text": "langchain.retrievers.pubmed.PubMedRetriever\u00b6\nclass langchain.retrievers.pubmed.PubMedRetriever(*, top_k_results: int = 3, load_max_docs: int = 25, doc_content_chars_max: int = 2000, load_all_available_meta: bool = False, email: str = 'your_email@example.com', base_url_esearch: str = 'https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?', base_url_efetch: str = 'https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?', max_retry: int = 5, sleep_time: float = 0.2, ARXIV_MAX_QUERY_LENGTH: int = 300, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: BaseRetriever, PubMedAPIWrapper\nIt is effectively a wrapper for PubMedAPIWrapper.\nIt wraps load() to get_relevant_documents().\nIt uses all PubMedAPIWrapper arguments without any change.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam doc_content_chars_max: int = 2000\u00b6\nparam email: str = 'your_email@example.com'\u00b6\nparam load_all_available_meta: bool = False\u00b6\nparam load_max_docs: int = 25\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. 
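A hedged sketch for the CurrentWebPageTool documented above; it assumes playwright is installed (pip install playwright, then playwright install) and uses the create_sync_playwright_browser helper from langchain.tools.playwright.utils.

```python
from langchain.tools.playwright.current_page import CurrentWebPageTool
from langchain.tools.playwright.utils import create_sync_playwright_browser

browser = create_sync_playwright_browser()  # launches a headless browser
tool = CurrentWebPageTool.from_browser(sync_browser=browser)
print(tool.run({}))  # returns the URL of the currently loaded page
```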
Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam tags: Optional[List[str]] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.pubmed.PubMedRetriever.html"} {"id": "c9df58d5b520-1", "text": "use case.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam top_k_results: int = 3\u00b6\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.pubmed.PubMedRetriever.html"} {"id": "c9df58d5b520-2", "text": "Parameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nload(query: str) \u2192 List[dict]\u00b6\nSearch PubMed for documents matching the query.\nReturn a list of dictionaries containing the document metadata.\nload_docs(query: str) \u2192 List[Document]\u00b6\nretrieve_article(uid: str, webenv: str) \u2192 dict\u00b6\nrun(query: str) \u2192 str\u00b6\nRun PubMed search and get the article meta information.\nSee https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch\nIt uses only the most informative fields of article meta information.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. 
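A minimal sketch of the PubMedRetriever being documented here; it needs only network access to the NCBI E-utilities endpoints listed above, and the query is illustrative.

```python
from langchain.retrievers.pubmed import PubMedRetriever

retriever = PubMedRetriever(top_k_results=3)
for doc in retriever.get_relevant_documents("mRNA vaccine efficacy"):
    print(doc.page_content[:100])
```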
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.pubmed.PubMedRetriever.html"} {"id": "4a00fec1e866-0", "text": "langchain.retrievers.self_query.base.SelfQueryRetriever\u00b6\nclass langchain.retrievers.self_query.base.SelfQueryRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, vectorstore: VectorStore, llm_chain: LLMChain, search_type: str = 'similarity', search_kwargs: dict = None, structured_query_translator: Visitor, verbose: bool = False, use_original_query: bool = False)[source]\u00b6\nBases: BaseRetriever, BaseModel\nRetriever that wraps around a vector store and uses an LLM to generate\nthe vector store queries.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam llm_chain: langchain.chains.llm.LLMChain [Required]\u00b6\nThe LLMChain for generating the vector store queries.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam search_kwargs: dict [Optional]\u00b6\nKeyword arguments to pass in to the vector store search.\nparam search_type: str = 'similarity'\u00b6\nThe search type to perform on the vector store.\nparam structured_query_translator: langchain.chains.query_constructor.ir.Visitor [Required]\u00b6\nTranslator for turning internal query language into vectorstore search params.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.self_query.base.SelfQueryRetriever.html"} {"id": "4a00fec1e866-1", "text": "These tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam use_original_query: bool = False\u00b6\nUse original query instead of the revised new query from LLM.\nparam vectorstore: langchain.vectorstores.base.VectorStore [Required]\u00b6\nThe underlying vector store from which documents will be retrieved.\nparam verbose: bool = False\u00b6\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever.
Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nclassmethod from_llm(llm: BaseLanguageModel, vectorstore: VectorStore, document_contents: str, metadata_field_info: List[AttributeInfo], structured_query_translator: Optional[Visitor] = None, chain_kwargs: Optional[Dict] = None, enable_limit: bool = False, use_original_query: bool = False, **kwargs: Any) \u2192 SelfQueryRetriever[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.self_query.base.SelfQueryRetriever.html"} {"id": "4a00fec1e866-2", "text": "get_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_translator\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate translator.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.self_query.base.SelfQueryRetriever.html"} {"id": "aef118b1cb4b-0", "text": "langchain.retrievers.zep.ZepRetriever\u00b6\nclass langchain.retrievers.zep.ZepRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, zep_client: Any = None, session_id: str, top_k: Optional[int] = None)[source]\u00b6\nBases: BaseRetriever\nA Retriever implementation for the Zep long-term memory store. Search your\nuser\u2019s long-term chat history with Zep.\nNote: You will need to provide the user\u2019s session_id to use this retriever.\nMore on Zep:\nZep provides long-term conversation storage for LLM apps. 
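Returning to SelfQueryRetriever.from_llm documented just above, a hedged end-to-end sketch; the documents, metadata fields, and model choices (OpenAI, Chroma) are illustrative and assume an OPENAI_API_KEY plus the chromadb package.

```python
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.schema import Document
from langchain.vectorstores import Chroma

# Toy corpus carrying the metadata the retriever will filter on (invented).
docs = [
    Document(page_content="A thief steals secrets through dreams",
             metadata={"year": 2010, "genre": "sci-fi"}),
    Document(page_content="A quiet family drama in the suburbs",
             metadata={"year": 1999, "genre": "drama"}),
]
vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings())

retriever = SelfQueryRetriever.from_llm(
    llm=OpenAI(temperature=0),
    vectorstore=vectorstore,
    document_contents="Brief summary of a movie",
    metadata_field_info=[
        AttributeInfo(name="year", description="Release year", type="integer"),
        AttributeInfo(name="genre", description="Film genre", type="string"),
    ],
    enable_limit=True,
)
print(retriever.get_relevant_documents("sci-fi films after 2005"))
```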
The server stores,\nsummarizes, embeds, indexes, and enriches conversational AI chat\nhistories, and exposes them via simple, low-latency APIs.\nFor server installation instructions, see:\nhttps://docs.getzep.com/deployment/quickstart/\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam session_id: str [Required]\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.zep.ZepRetriever.html"} {"id": "aef118b1cb4b-1", "text": "use case.\nparam top_k: Optional[int] = None\u00b6\nparam zep_client: Any = None\u00b6\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nvalidator create_client\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. 
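A usage sketch for this ZepRetriever, following the Zep integration example: it assumes a running Zep server at the placeholder URL and the zep_python package, and it treats the url keyword (consumed by the create_client validator to build zep_client) as an assumption that may differ by version.

```python
from langchain.retrievers.zep import ZepRetriever

retriever = ZepRetriever(
    url="http://localhost:8000",  # assumed: create_client builds zep_client from this
    session_id="user-123",        # the user's chat session to search
    top_k=5,
)
docs = retriever.get_relevant_documents("what did we decide about pricing?")
```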
Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.zep.ZepRetriever.html"} {"id": "aef118b1cb4b-2", "text": "List of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.zep.ZepRetriever.html"} {"id": "d0324468ed05-0", "text": "langchain.retrievers.llama_index.LlamaIndexGraphRetriever\u00b6\nclass langchain.retrievers.llama_index.LlamaIndexGraphRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, graph: Any = None, query_configs: List[Dict] = None)[source]\u00b6\nBases: BaseRetriever\nQuestion-answering with sources over an LlamaIndex graph data structure.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam graph: Any = None\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam query_configs: List[Dict] [Optional]\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.llama_index.LlamaIndexGraphRetriever.html"} {"id": "d0324468ed05-1", "text": ":param tags: Optional list of tags associated with the retriever. 
Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.llama_index.LlamaIndexGraphRetriever.html"} {"id": "d0324468ed05-2", "text": "eg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.llama_index.LlamaIndexGraphRetriever.html"} {"id": "dfde8ef8d2a9-0", "text": "langchain.retrievers.self_query.weaviate.WeaviateTranslator\u00b6\nclass langchain.retrievers.self_query.weaviate.WeaviateTranslator[source]\u00b6\nBases: Visitor\nLogic for converting internal query language elements to valid filters.\nMethods\n__init__()\nvisit_comparison(comparison)\nTranslate a Comparison.\nvisit_operation(operation)\nTranslate an Operation.\nvisit_structured_query(structured_query)\nTranslate a StructuredQuery.\nAttributes\nallowed_comparators\nallowed_operators\nSubset of allowed logical operators.\nvisit_comparison(comparison: Comparison) \u2192 Dict[source]\u00b6\nTranslate a Comparison.\nvisit_operation(operation: Operation) \u2192 Dict[source]\u00b6\nTranslate an Operation.\nvisit_structured_query(structured_query: StructuredQuery) \u2192 Tuple[str, dict][source]\u00b6\nTranslate a StructuredQuery.\nallowed_comparators: Optional[Sequence[Comparator]] = [Comparator.EQ]\u00b6\nallowed_operators: Optional[Sequence[Operator]] = [Operator.AND, Operator.OR]\u00b6\nSubset of allowed logical operators.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.self_query.weaviate.WeaviateTranslator.html"} {"id": "44ce6c84ef1e-0", "text": "langchain.retrievers.document_compressors.cohere_rerank.CohereRerank\u00b6\nclass langchain.retrievers.document_compressors.cohere_rerank.CohereRerank(*, client: Client, top_n: int = 3, model: str = 'rerank-english-v2.0')[source]\u00b6\nBases: BaseDocumentCompressor\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam client: Client [Required]\u00b6\nparam model: str = 'rerank-english-v2.0'\u00b6\nparam top_n: int = 3\u00b6\nasync acompress_documents(documents: Sequence[Document], query: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Document][source]\u00b6\nCompress retrieved documents given the query context.\ncompress_documents(documents: Sequence[Document], query: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Document][source]\u00b6\nCompress retrieved documents given the query context.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that api key and python package exist in environment.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.cohere_rerank.CohereRerank.html"} {"id": "7c0a2c2d2ed8-0", "text": "langchain.retrievers.pinecone_hybrid_search.hash_text\u00b6\nlangchain.retrievers.pinecone_hybrid_search.hash_text(text: str) \u2192 str[source]\u00b6\nHash a text using SHA256.\nParameters\ntext \u2013 Text to hash.\nReturns\nHashed text.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.pinecone_hybrid_search.hash_text.html"} {"id": "5b229c54019d-0", "text":
"langchain.retrievers.databerry.DataberryRetriever\u00b6\nclass langchain.retrievers.databerry.DataberryRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, datastore_url: str, top_k: Optional[int] = None, api_key: Optional[str] = None)[source]\u00b6\nBases: BaseRetriever\nRetriever that uses the Databerry API.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_key: Optional[str] = None\u00b6\nparam datastore_url: str [Required]\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam top_k: Optional[int] = None\u00b6\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.databerry.DataberryRetriever.html"} {"id": "5b229c54019d-1", "text": ":param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. 
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.databerry.DataberryRetriever.html"} {"id": "5b229c54019d-2", "text": "property lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.databerry.DataberryRetriever.html"} {"id": "d3944fe76da4-0", "text": "langchain.retrievers.document_compressors.base.BaseDocumentCompressor\u00b6\nclass langchain.retrievers.document_compressors.base.BaseDocumentCompressor[source]\u00b6\nBases: BaseModel, ABC\nBase abstraction interface for document compression.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nabstract async acompress_documents(documents: Sequence[Document], query: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Document][source]\u00b6\nCompress retrieved documents given the query context.\nabstract compress_documents(documents: Sequence[Document], query: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Document][source]\u00b6\nCompress retrieved documents given the query context.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.base.BaseDocumentCompressor.html"} {"id": "0b610833f51a-0", "text": "langchain.retrievers.multi_query.MultiQueryRetriever\u00b6\nclass langchain.retrievers.multi_query.MultiQueryRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, retriever: BaseRetriever, llm_chain: LLMChain, verbose: bool = True, parser_key: str = 'lines')[source]\u00b6\nBases: BaseRetriever\nGiven a user query, use an LLM to write a set of queries.\nRetrieve docs for each query. Take the unique union of all retrieved docs.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam llm_chain: langchain.chains.llm.LLMChain [Required]\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam parser_key: str = 'lines'\u00b6\nparam retriever: langchain.schema.retriever.BaseRetriever [Required]\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. 
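A runnable sketch of MultiQueryRetriever, assuming OPENAI_API_KEY is set and faiss-cpu is installed; the sample text and query are arbitrary:
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.vectorstores import FAISS

# Base retriever over a tiny in-memory vector store.
vectorstore = FAISS.from_texts(["harrison worked at kensho"], OpenAIEmbeddings())
# from_llm builds the query-generation chain using DEFAULT_QUERY_PROMPT.
retriever = MultiQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(),
    llm=OpenAI(temperature=0),
)
docs = retriever.get_relevant_documents("Where did Harrison work?")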
Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam verbose: bool = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.multi_query.MultiQueryRetriever.html"} {"id": "0b610833f51a-1", "text": "use case.\nparam verbose: bool = True\u00b6\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nclassmethod from_llm(retriever: BaseRetriever, llm: BaseLLM, prompt: PromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='You are an AI language model assistant. Your task is \\n\u00a0\u00a0\u00a0 to generate 3 different versions of the given user \\n\u00a0\u00a0\u00a0 question to retrieve relevant documents from a vector\u00a0 database. \\n\u00a0\u00a0\u00a0 By generating multiple perspectives on the user question, \\n\u00a0\u00a0\u00a0 your goal is to help the user overcome some of the limitations \\n\u00a0\u00a0\u00a0 of distance-based similarity search. Provide these alternative \\n\u00a0\u00a0\u00a0 questions seperated by newlines. Original question: {question}', template_format='f-string', validate_template=True), parser_key: str = 'lines') \u2192 MultiQueryRetriever[source]\u00b6\nInitialize from llm using default template.\nParameters\nretriever \u2013 retriever to query documents from\nllm \u2013 llm for query generation using DEFAULT_QUERY_PROMPT\nReturns\nMultiQueryRetriever", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.multi_query.MultiQueryRetriever.html"} {"id": "0b610833f51a-2", "text": "Returns\nMultiQueryRetriever\ngenerate_queries(question: str, run_manager: CallbackManagerForRetrieverRun) \u2192 List[str][source]\u00b6\nGenerate queries based upon user input.\nParameters\nquestion \u2013 user query\nReturns\nList of LLM generated queries that are similar to the user input\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. 
Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nretrieve_documents(queries: List[str], run_manager: CallbackManagerForRetrieverRun) \u2192 List[Document][source]\u00b6\nRun all LLM generated queries.\nParameters\nqueries \u2013 query list\nReturns\nList of retrieved Documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nunique_union(documents: List[Document]) \u2192 List[Document][source]\u00b6\nGet unique Documents.\nParameters\ndocuments \u2013 List of retrieved Documents\nReturns\nList of unique retrieved Documents\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.multi_query.MultiQueryRetriever.html"} {"id": "0b610833f51a-3", "text": "constructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.multi_query.MultiQueryRetriever.html"} {"id": "4981767dbf0f-0", "text": "langchain.retrievers.metal.MetalRetriever\u00b6\nclass langchain.retrievers.metal.MetalRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, params: Optional[dict] = None)[source]\u00b6\nBases: BaseRetriever\nRetriever that uses the Metal API.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam client: Any = None\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam params: Optional[dict] = None\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. 
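A minimal sketch wiring MetalRetriever to the third-party metal_sdk client; the credentials are placeholders, and the Metal constructor follows that SDK rather than langchain:
from metal_sdk.metal import Metal  # third-party package, assumed installed
from langchain.retrievers.metal import MetalRetriever

metal = Metal("API_KEY", "CLIENT_ID", "INDEX_ID")  # placeholder credentials
retriever = MetalRetriever(client=metal, params={"limit": 2})
docs = retriever.get_relevant_documents("What is Metal?")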
Defaults to None\nThese tags will be associated with each call to this retriever,", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.metal.MetalRetriever.html"} {"id": "4981767dbf0f-1", "text": "These tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_client\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the client is of the correct type.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.metal.MetalRetriever.html"} {"id": "4981767dbf0f-2", "text": "Return a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.metal.MetalRetriever.html"} {"id": "fb73e9339f6f-0", "text": "langchain.retrievers.llama_index.LlamaIndexRetriever\u00b6\nclass langchain.retrievers.llama_index.LlamaIndexRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, index: Any = None, query_kwargs: Dict = None)[source]\u00b6\nBases: BaseRetriever\nQuestion-answering with sources over a LlamaIndex data structure.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam index: Any = None\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. 
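A usage sketch for LlamaIndexRetriever, assuming the llama-index package (whose API varies by version) and an OpenAI key; the sample document is arbitrary:
from llama_index import Document, GPTVectorStoreIndex  # llama-index; API varies by version
from langchain.retrievers.llama_index import LlamaIndexRetriever

index = GPTVectorStoreIndex.from_documents([Document(text="Hello world.")])
retriever = LlamaIndexRetriever(index=index, query_kwargs={"similarity_top_k": 3})
docs = retriever.get_relevant_documents("hello")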
Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam query_kwargs: Dict [Optional]\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.llama_index.LlamaIndexRetriever.html"} {"id": "fb73e9339f6f-1", "text": "These tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.llama_index.LlamaIndexRetriever.html"} {"id": "fb73e9339f6f-2", "text": "property lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.llama_index.LlamaIndexRetriever.html"} {"id": "f7d52d60b7f8-0", "text": "langchain.retrievers.elastic_search_bm25.ElasticSearchBM25Retriever\u00b6\nclass langchain.retrievers.elastic_search_bm25.ElasticSearchBM25Retriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, index_name: str)[source]\u00b6\nBases: BaseRetriever\nWrapper around Elasticsearch using BM25 as a retrieval method.\nTo connect to an Elasticsearch instance that requires login credentials,\nincluding Elastic Cloud, use the Elasticsearch URL format\nhttps://username:password@es_host:9243. For example, to connect to Elastic\nCloud, create the Elasticsearch URL with the required authentication details and\npass it to the ElasticSearchBM25Retriever.create method as the named parameter\nelasticsearch_url.\nYou can obtain your Elastic Cloud URL and login credentials by logging in to the\nElastic Cloud console at https://cloud.elastic.co, selecting your deployment, and\nnavigating to the \u201cDeployments\u201d page.\nTo obtain your Elastic Cloud password for the default \u201celastic\u201d user:\nLog in to the Elastic Cloud console at https://cloud.elastic.co\nGo to \u201cSecurity\u201d > \u201cUsers\u201d\nLocate the \u201celastic\u201d user and click \u201cEdit\u201d\nClick \u201cReset password\u201d\nFollow the prompts to reset the password\nThe format for Elastic Cloud URLs is\nhttps://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam client: Any = None\u00b6\nparam index_name: str [Required]\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.elastic_search_bm25.ElasticSearchBM25Retriever.html"} {"id": "f7d52d60b7f8-1", "text": "This metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. 
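A minimal end-to-end sketch for ElasticSearchBM25Retriever against a local, unauthenticated Elasticsearch instance (URL and index name are placeholders), using the create, add_texts, and get_relevant_documents methods documented here:
from langchain.retrievers.elastic_search_bm25 import ElasticSearchBM25Retriever

retriever = ElasticSearchBM25Retriever.create(
    elasticsearch_url="http://localhost:9200",  # placeholder
    index_name="langchain-index",               # placeholder
)
retriever.add_texts(["foo", "bar", "world hello foo bar"])
docs = retriever.get_relevant_documents("foo")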
Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nadd_texts(texts: Iterable[str], refresh_indices: bool = True) \u2192 List[str][source]\u00b6\nRun more texts through the embeddings and add to the retriever.\nParameters\ntexts \u2013 Iterable of strings to add to the retriever.\nrefresh_indices \u2013 bool to refresh ElasticSearch indices\nReturns\nList of ids from adding the texts into the retriever.\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.elastic_search_bm25.ElasticSearchBM25Retriever.html"} {"id": "f7d52d60b7f8-2", "text": "and passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nclassmethod create(elasticsearch_url: str, index_name: str, k1: float = 2.0, b: float = 0.75) \u2192 ElasticSearchBM25Retriever[source]\u00b6\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.elastic_search_bm25.ElasticSearchBM25Retriever.html"} {"id": "f7d52d60b7f8-3", "text": "Return whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.elastic_search_bm25.ElasticSearchBM25Retriever.html"} {"id": "e0979f370a3e-0", "text": "langchain.retrievers.kendra.AdditionalResultAttribute\u00b6\nclass langchain.retrievers.kendra.AdditionalResultAttribute(*, Key: str, ValueType: Literal['TEXT_WITH_HIGHLIGHTS_VALUE'], Value: AdditionalResultAttributeValue, **extra_data: Any)[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam Key: str [Required]\u00b6\nparam Value: langchain.retrievers.kendra.AdditionalResultAttributeValue [Required]\u00b6\nparam ValueType: Literal['TEXT_WITH_HIGHLIGHTS_VALUE'] [Required]\u00b6\nget_value_text() \u2192 str[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.kendra.AdditionalResultAttribute.html"} {"id": "abe0ac5b8014-0", "text": "langchain.retrievers.document_compressors.base.DocumentCompressorPipeline\u00b6\nclass langchain.retrievers.document_compressors.base.DocumentCompressorPipeline(*, transformers: List[Union[BaseDocumentTransformer, BaseDocumentCompressor]])[source]\u00b6\nBases: BaseDocumentCompressor\nDocument compressor that uses a pipeline of transformers.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam transformers: List[Union[langchain.schema.document.BaseDocumentTransformer, langchain.retrievers.document_compressors.base.BaseDocumentCompressor]] [Required]\u00b6\nList of document filters that are chained together and run in sequence.\nasync acompress_documents(documents: Sequence[Document], query: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Document][source]\u00b6\nCompress retrieved documents given the query context.\ncompress_documents(documents: Sequence[Document], query: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Document][source]\u00b6\nTransform a list of documents.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.base.DocumentCompressorPipeline.html"} {"id": "a9abcbe85acd-0", "text": "langchain.retrievers.document_compressors.chain_extract.default_get_input\u00b6\nlangchain.retrievers.document_compressors.chain_extract.default_get_input(query: str, doc: Document) \u2192 Dict[str, Any][source]\u00b6\nReturn the compression chain input.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.chain_extract.default_get_input.html"} {"id": "671bb2e90996-0", "text": "langchain.retrievers.kendra.QueryResult\u00b6\nclass 
langchain.retrievers.kendra.QueryResult(*, ResultItems: List[QueryResultItem], **extra_data: Any)[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam ResultItems: List[langchain.retrievers.kendra.QueryResultItem] [Required]\u00b6\nget_top_k_docs(top_n: int) \u2192 List[Document][source]\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.kendra.QueryResult.html"} {"id": "1323970d5237-0", "text": "langchain.retrievers.kendra.AdditionalResultAttributeValue\u00b6\nclass langchain.retrievers.kendra.AdditionalResultAttributeValue(*, TextWithHighlightsValue: TextWithHighLights, **extra_data: Any)[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam TextWithHighlightsValue: langchain.retrievers.kendra.TextWithHighLights [Required]\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.kendra.AdditionalResultAttributeValue.html"} {"id": "fd9ab6f25cb3-0", "text": "langchain.retrievers.document_compressors.chain_extract.LLMChainExtractor\u00b6\nclass langchain.retrievers.document_compressors.chain_extract.LLMChainExtractor(*, llm_chain: ~langchain.chains.llm.LLMChain, get_input: ~typing.Callable[[str, ~langchain.schema.document.Document], dict] = default_get_input)[source]\u00b6\nBases: BaseDocumentCompressor\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam get_input: Callable[[str, langchain.schema.document.Document], dict] = default_get_input\u00b6\nCallable for constructing the chain input from the query and a Document.\nparam llm_chain: langchain.chains.llm.LLMChain [Required]\u00b6\nLLM wrapper to use for compressing documents.\nasync acompress_documents(documents: Sequence[Document], query: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Document][source]\u00b6\nCompress page content of raw documents asynchronously.\ncompress_documents(documents: Sequence[Document], query: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Document][source]\u00b6\nCompress page content of raw documents.\nclassmethod from_llm(llm: BaseLanguageModel, prompt: Optional[PromptTemplate] = None, get_input: Optional[Callable[[str, Document], str]] = None, llm_chain_kwargs: Optional[dict] = None) \u2192 LLMChainExtractor[source]\u00b6\nInitialize from LLM.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.chain_extract.LLMChainExtractor.html"} {"id": "9ff64e0f295b-0", "text": "langchain.retrievers.docarray.DocArrayRetriever\u00b6\nclass langchain.retrievers.docarray.DocArrayRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, index: Any = None, embeddings: Embeddings, search_field: str, content_field: str, search_type: SearchType = SearchType.similarity, top_k: int = 1, filters: Optional[Any] = None)[source]\u00b6\nBases: BaseRetriever\nRetriever class for DocArray Document Indices.\nCurrently supports 5 backends:\nInMemoryExactNNIndex, HnswDocumentIndex, QdrantDocumentIndex,\nElasticDocIndex, and WeaviateDocumentIndex.\nParameters\nindex \u2013 One of the 
above-mentioned index instances\nembeddings \u2013 Embedding model to represent text as vectors\nsearch_field \u2013 Field to consider for searching in the documents.\nShould be an embedding/vector/tensor.\ncontent_field \u2013 Field that represents the main content in your document schema.\nWill be used as a page_content. Everything else will go into metadata.\nsearch_type \u2013 Type of search to perform (similarity / mmr)\nfilters \u2013 Filters applied for document retrieval.\ntop_k \u2013 Number of documents to return\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam content_field: str [Required]\u00b6\nparam embeddings: langchain.embeddings.base.Embeddings [Required]\u00b6\nparam filters: Optional[Any] = None\u00b6\nparam index: Any = None\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.docarray.DocArrayRetriever.html"} {"id": "9ff64e0f295b-1", "text": "and passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam search_field: str [Required]\u00b6\nparam search_type: langchain.retrievers.docarray.SearchType = SearchType.similarity\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam top_k: int = 1\u00b6\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.docarray.DocArrayRetriever.html"} {"id": "9ff64e0f295b-2", "text": ":param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. 
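A self-contained sketch of DocArrayRetriever with the InMemoryExactNNIndex backend; FakeEmbeddings and the 32-dimension schema are illustrative choices, not requirements, and results with fake embeddings are arbitrary:
from docarray import BaseDoc, DocList
from docarray.index import InMemoryExactNNIndex
from docarray.typing import NdArray
from langchain.embeddings.fake import FakeEmbeddings
from langchain.retrievers.docarray import DocArrayRetriever

embeddings = FakeEmbeddings(size=32)

class MyDoc(BaseDoc):
    title: str
    title_embedding: NdArray[32]

# Index two tiny documents with their embeddings.
index = InMemoryExactNNIndex[MyDoc]()
index.index(DocList[MyDoc](
    MyDoc(title=t, title_embedding=embeddings.embed_query(t)) for t in ("alpha", "beta")
))

retriever = DocArrayRetriever(
    index=index,
    embeddings=embeddings,
    search_field="title_embedding",
    content_field="title",
)
docs = retriever.get_relevant_documents("alpha")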
Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.docarray.DocArrayRetriever.html"} {"id": "c1f64dffd0e2-0", "text": "langchain.retrievers.kendra.clean_excerpt\u00b6\nlangchain.retrievers.kendra.clean_excerpt(excerpt: str) \u2192 str[source]\u00b6\nCleans an excerpt from Kendra.\nParameters\nexcerpt \u2013 The excerpt to clean.\nReturns\nThe cleaned excerpt.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.kendra.clean_excerpt.html"} {"id": "5c5d49e8f548-0", "text": "langchain.retrievers.kendra.AmazonKendraRetriever\u00b6\nclass langchain.retrievers.kendra.AmazonKendraRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, index_id: str, region_name: Optional[str] = None, credentials_profile_name: Optional[str] = None, top_k: int = 3, attribute_filter: Optional[Dict] = None, client: Any = None)[source]\u00b6\nBases: BaseRetriever\nRetriever class to query documents from an Amazon Kendra index.\nParameters\nindex_id \u2013 Kendra index id\nregion_name \u2013 The AWS region, e.g. us-west-2.\nFalls back to the AWS_DEFAULT_REGION env variable\nor region specified in ~/.aws/config.\ncredentials_profile_name \u2013 The name of the profile in the ~/.aws/credentials\nor ~/.aws/config files, which has either access keys or role information\nspecified. 
If not specified, the default credential profile or, if on an\nEC2 instance, credentials from IMDS will be used.\ntop_k \u2013 Number of results to return\nattribute_filter \u2013 Additional filtering of results based on metadata\nSee: https://docs.aws.amazon.com/kendra/latest/APIReference\nclient \u2013 boto3 client for Kendra\nExample\nretriever = AmazonKendraRetriever(\n index_id=\"c0806df7-e76b-4bce-9b5c-d5582f6b1a03\"\n)\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam attribute_filter: Optional[Dict] = None\u00b6\nparam client: Any = None\u00b6\nparam credentials_profile_name: Optional[str] = None\u00b6\nparam index_id: str [Required]\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.kendra.AmazonKendraRetriever.html"} {"id": "5c5d49e8f548-1", "text": "param index_id: str [Required]\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam region_name: Optional[str] = None\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam top_k: int = 3\u00b6\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. 
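Extending the constructor example above, a sketch that adds top_k and an attribute_filter (the index id reuses the documented placeholder; the filter shape follows the Kendra AttributeFilter API, and AWS credentials are assumed configured):
from langchain.retrievers.kendra import AmazonKendraRetriever

retriever = AmazonKendraRetriever(
    index_id="c0806df7-e76b-4bce-9b5c-d5582f6b1a03",  # placeholder index id
    top_k=3,
    attribute_filter={
        "EqualsTo": {"Key": "_language_code", "Value": {"StringValue": "en"}}
    },
)
docs = retriever.get_relevant_documents("What is Amazon Kendra?")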
Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.kendra.AmazonKendraRetriever.html"} {"id": "57794de028da-0", "text": "langchain.retrievers.kendra.TextWithHighLights\u00b6\nclass langchain.retrievers.kendra.TextWithHighLights(*, Text: str, Highlights: Optional[Any] = None, **extra_data: Any)[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam Highlights: Optional[Any] = None\u00b6\nparam Text: str [Required]\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.kendra.TextWithHighLights.html"} {"id": "146bd77761f6-0", "text": "langchain.retrievers.document_compressors.chain_extract.NoOutputParser\u00b6\nclass langchain.retrievers.document_compressors.chain_extract.NoOutputParser(*, no_output_str: str = 'NO_OUTPUT')[source]\u00b6\nBases: BaseOutputParser[str]\nParse outputs that could return a null string of some sort.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam no_output_str: str = 'NO_OUTPUT'\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str\u00b6\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 str[source]\u00b6\nParse a single string model output into some structure.\nParameters\ntext \u2013 String output of language model.\nReturns\nStructured output.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. 
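An illustrative sketch of the NoOutputParser contract; the expected outputs in the comments are inferred from the class description (a sentinel string marks "no output"), not quoted from the source:
from langchain.retrievers.document_compressors.chain_extract import NoOutputParser

parser = NoOutputParser()
parser.parse("Some extracted context")  # real content is passed through
parser.parse("NO_OUTPUT")               # the sentinel; expected to yield an empty string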
The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.chain_extract.NoOutputParser.html"} {"id": "146bd77761f6-1", "text": "to_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.chain_extract.NoOutputParser.html"} {"id": "efafcbd79106-0", "text": "langchain.retrievers.tfidf.TFIDFRetriever\u00b6\nclass langchain.retrievers.tfidf.TFIDFRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, vectorizer: Any = None, docs: List[Document], tfidf_array: Any = None, k: int = 4)[source]\u00b6\nBases: BaseRetriever\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam docs: List[langchain.schema.document.Document] [Required]\u00b6\nparam k: int = 4\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. 
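A minimal sketch for TFIDFRetriever (scikit-learn must be installed; k=2 narrows the default of 4, and the sample texts are arbitrary):
from langchain.retrievers.tfidf import TFIDFRetriever

retriever = TFIDFRetriever.from_texts(["foo", "bar", "world hello foo bar"], k=2)
docs = retriever.get_relevant_documents("foo")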
Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam tfidf_array: Any = None\u00b6\nparam vectorizer: Any = None\u00b6\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.tfidf.TFIDFRetriever.html"} {"id": "efafcbd79106-1", "text": ":param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nclassmethod from_documents(documents: Iterable[Document], *, tfidf_params: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 TFIDFRetriever[source]\u00b6\nclassmethod from_texts(texts: Iterable[str], metadatas: Optional[Iterable[dict]] = None, tfidf_params: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 TFIDFRetriever[source]\u00b6\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.tfidf.TFIDFRetriever.html"} {"id": "efafcbd79106-2", "text": "to_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.tfidf.TFIDFRetriever.html"} {"id": "de46f8f562de-0", "text": "langchain.retrievers.svm.SVMRetriever\u00b6\nclass langchain.retrievers.svm.SVMRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, embeddings: Embeddings, index: Any = None, texts: List[str], k: int = 4, relevancy_threshold: Optional[float] = None)[source]\u00b6\nBases: BaseRetriever\nSVM Retriever.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam embeddings: Embeddings [Required]\u00b6\nparam index: Any = None\u00b6\nparam k: int = 4\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam relevancy_threshold: Optional[float] = None\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam texts: List[str] [Required]\u00b6\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.svm.SVMRetriever.html"} {"id": "de46f8f562de-1", "text": "Asynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nclassmethod from_texts(texts: List[str], embeddings: Embeddings, **kwargs: Any) \u2192 SVMRetriever[source]\u00b6\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. 
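A minimal sketch for SVMRetriever, assuming scikit-learn is installed and OPENAI_API_KEY is set; any Embeddings implementation can stand in for OpenAIEmbeddings:
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.svm import SVMRetriever

retriever = SVMRetriever.from_texts(
    ["foo", "bar", "world hello foo bar"],
    OpenAIEmbeddings(),
)
docs = retriever.get_relevant_documents("foo")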
Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.svm.SVMRetriever.html"} {"id": "de46f8f562de-2", "text": "property lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.svm.SVMRetriever.html"} {"id": "f1e466aeecae-0", "text": "langchain.retrievers.zilliz.ZillizRetriever\u00b6\nclass langchain.retrievers.zilliz.ZillizRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, embedding_function: Embeddings, collection_name: str = 'LangChainCollection', connection_args: Optional[Dict[str, Any]] = None, consistency_level: str = 'Session', search_params: Optional[dict] = None, store: Zilliz, retriever: BaseRetriever)[source]\u00b6\nBases: BaseRetriever\nRetriever that uses the Zilliz API.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam collection_name: str = 'LangChainCollection'\u00b6\nparam connection_args: Optional[Dict[str, Any]] = None\u00b6\nparam consistency_level: str = 'Session'\u00b6\nparam embedding_function: langchain.embeddings.base.Embeddings [Required]\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam retriever: langchain.schema.retriever.BaseRetriever [Required]\u00b6\nparam search_params: Optional[dict] = None\u00b6\nparam store: langchain.vectorstores.zilliz.Zilliz [Required]\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. 
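A constructor sketch for ZillizRetriever based on the parameters above; the URI and token are placeholders for a Zilliz Cloud / Milvus deployment, and the store and retriever fields are assumed to be filled in by the create_client validator:
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.zilliz import ZillizRetriever

retriever = ZillizRetriever(
    embedding_function=OpenAIEmbeddings(),
    collection_name="LangChainCollection",
    connection_args={
        "uri": "https://my-zilliz-endpoint.example.com",  # placeholder
        "token": "MY_ZILLIZ_TOKEN",                        # placeholder
    },
)
docs = retriever.get_relevant_documents("What is Zilliz?")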
{"id": "f1e466aeecae-0", "text": "langchain.retrievers.zilliz.ZillizRetriever\u00b6\nclass langchain.retrievers.zilliz.ZillizRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, embedding_function: Embeddings, collection_name: str = 'LangChainCollection', connection_args: Optional[Dict[str, Any]] = None, consistency_level: str = 'Session', search_params: Optional[dict] = None, store: Zilliz, retriever: BaseRetriever)[source]\u00b6\nBases: BaseRetriever\nRetriever that uses the Zilliz API.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam collection_name: str = 'LangChainCollection'\u00b6\nparam connection_args: Optional[Dict[str, Any]] = None\u00b6\nparam consistency_level: str = 'Session'\u00b6\nparam embedding_function: langchain.embeddings.base.Embeddings [Required]\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam retriever: langchain.schema.retriever.BaseRetriever [Required]\u00b6\nparam search_params: Optional[dict] = None\u00b6\nparam store: langchain.vectorstores.zilliz.Zilliz [Required]\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.zilliz.ZillizRetriever.html"} {"id": "f1e466aeecae-1", "text": "and passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nadd_texts(texts: List[str], metadatas: Optional[List[dict]] = None) \u2192 None[source]\u00b6\nAdd texts to the Zilliz store.\nParameters\ntexts (List[str]) \u2013 The texts to add.\nmetadatas (List[dict]) \u2013 Metadata dicts; must line up with the existing store.\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nvalidator create_client\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.zilliz.ZillizRetriever.html"} {"id": "f1e466aeecae-2", "text": "These tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.zilliz.ZillizRetriever.html"} {"id": "ddd63156a62d-0", "text": "langchain.retrievers.vespa_retriever.VespaRetriever\u00b6\nclass langchain.retrievers.vespa_retriever.VespaRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, app: Vespa, body: Dict, content_field: str, metadata_fields: Sequence[str])[source]\u00b6\nBases: BaseRetriever\nRetriever that uses Vespa.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam app: Vespa [Required]\u00b6\nparam body: Dict [Required]\u00b6\nparam content_field: str [Required]\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam metadata_fields: Sequence[str] [Required]\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.vespa_retriever.VespaRetriever.html"} {"id": "ddd63156a62d-1", "text": ":param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nclassmethod from_params(url: str, content_field: str, *, k: Optional[int] = None, metadata_fields: Union[Sequence[str], Literal['*']] = (), sources: Optional[Union[Sequence[str], Literal['*']]] = None, _filter: Optional[str] = None, yql: Optional[str] = None, **kwargs: Any) \u2192 VespaRetriever[source]\u00b6\nInstantiate retriever from params.\nParameters\nurl (str) \u2013 Vespa app URL.\ncontent_field (str) \u2013 Field in results to return as Document page_content.\nk (Optional[int]) \u2013 Number of Documents to return. Defaults to None.\nmetadata_fields (Sequence[str] or \"*\") \u2013 Fields in results to include in\ndocument metadata. Defaults to empty tuple ().\nsources (Sequence[str] or \"*\" or None) \u2013 Sources to retrieve\nfrom. Defaults to None.\n_filter (Optional[str]) \u2013 Document filter condition expressed in YQL.\nDefaults to None.\nyql (Optional[str]) \u2013 Full YQL query to be used. Should not be specified\nif _filter or sources are specified. Defaults to None.\nkwargs (Any) \u2013 Keyword arguments added to query body.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.vespa_retriever.VespaRetriever.html"} {"id": "ddd63156a62d-2", "text": "kwargs (Any) \u2013 Keyword arguments added to query body.\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nget_relevant_documents_with_filter(query: str, *, _filter: Optional[str] = None) \u2192 List[Document][source]\u00b6\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.vespa_retriever.VespaRetriever.html"}
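Example (an editorial from_params sketch; the URL, field names, and YQL filter are placeholders, not values from this reference, and pyvespa must be installed):

from langchain.retrievers import VespaRetriever

retriever = VespaRetriever.from_params(
    url="https://my-app.vespa.example.com",  # placeholder Vespa app URL
    content_field="content",                 # field returned as page_content
    k=5,
    metadata_fields=("title", "path"),
    _filter='doc_type contains "manual"',    # optional YQL filter fragment
)
docs = retriever.get_relevant_documents("how do I deploy an application")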
{"id": "79ce9add4c1e-0", "text": "langchain.retrievers.document_compressors.chain_filter.LLMChainFilter\u00b6\nclass langchain.retrievers.document_compressors.chain_filter.LLMChainFilter(*, llm_chain: ~langchain.chains.llm.LLMChain, get_input: ~typing.Callable[[str, ~langchain.schema.document.Document], dict] = <function default_get_input>)[source]\u00b6\nBases: BaseDocumentCompressor\nFilter that drops documents that aren\u2019t relevant to the query.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam get_input: Callable[[str, langchain.schema.document.Document], dict] = <function default_get_input>\u00b6\nCallable for constructing the chain input from the query and a Document.\nparam llm_chain: langchain.chains.llm.LLMChain [Required]\u00b6\nLLM wrapper to use for filtering documents.\nThe chain prompt is expected to have a BooleanOutputParser.\nasync acompress_documents(documents: Sequence[Document], query: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Document][source]\u00b6\nFilter down documents.\ncompress_documents(documents: Sequence[Document], query: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Document][source]\u00b6\nFilter down documents based on their relevance to the query.\nclassmethod from_llm(llm: BaseLanguageModel, prompt: Optional[BasePromptTemplate] = None, **kwargs: Any) \u2192 LLMChainFilter[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.chain_filter.LLMChainFilter.html"}
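Example (an editorial sketch of LLMChainFilter.from_llm inside a ContextualCompressionRetriever; assumes an OpenAI API key, and the base retriever and texts are illustrative):

from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever, TFIDFRetriever
from langchain.retrievers.document_compressors import LLMChainFilter

# Any retriever works as the base; TFIDFRetriever keeps the sketch self-contained.
base_retriever = TFIDFRetriever.from_texts(
    ["The capital of France is Paris.", "SVMs are a classification method."]
)
# The LLM answers a yes/no relevance question per candidate document; the
# default prompt's BooleanOutputParser drops documents judged irrelevant.
llm_filter = LLMChainFilter.from_llm(OpenAI(temperature=0))
retriever = ContextualCompressionRetriever(
    base_compressor=llm_filter,
    base_retriever=base_retriever,
)
docs = retriever.get_relevant_documents("What is the capital of France?")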
{"id": "d5a226d9ef9b-0", "text": "langchain.retrievers.self_query.chroma.ChromaTranslator\u00b6\nclass langchain.retrievers.self_query.chroma.ChromaTranslator[source]\u00b6\nBases: Visitor\nLogic for converting internal query language elements to valid filters.\nMethods\n__init__()\nvisit_comparison(comparison)\nTranslate a Comparison.\nvisit_operation(operation)\nTranslate an Operation.\nvisit_structured_query(structured_query)\nTranslate a StructuredQuery.\nAttributes\nallowed_comparators\nSubset of allowed logical comparators.\nallowed_operators\nSubset of allowed logical operators.\nvisit_comparison(comparison: Comparison) \u2192 Dict[source]\u00b6\nTranslate a Comparison.\nvisit_operation(operation: Operation) \u2192 Dict[source]\u00b6\nTranslate an Operation.\nvisit_structured_query(structured_query: StructuredQuery) \u2192 Tuple[str, dict][source]\u00b6\nTranslate a StructuredQuery.\nallowed_comparators: Optional[Sequence[Comparator]] = [<Comparator.EQ: 'eq'>, <Comparator.GT: 'gt'>, <Comparator.GTE: 'gte'>, <Comparator.LT: 'lt'>, <Comparator.LTE: 'lte'>]\u00b6\nSubset of allowed logical comparators.\nallowed_operators: Optional[Sequence[Operator]] = [<Operator.AND: 'and'>, <Operator.OR: 'or'>]\u00b6\nSubset of allowed logical operators.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.self_query.chroma.ChromaTranslator.html"} {"id": "cae908ae685c-0", "text": "langchain.retrievers.docarray.SearchType\u00b6\nclass langchain.retrievers.docarray.SearchType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]\u00b6\nBases: str, Enum\nEnumerator of the types of search to perform.\nMethods\nInherits the standard str methods (capitalize, casefold, center, count, encode, endswith, expandtabs, find, format, format_map, index, isalnum, isalpha, isascii, isdecimal, isdigit, isidentifier, islower, isnumeric, isprintable, isspace, istitle, isupper, join, ljust, lower, lstrip, maketrans, partition, removeprefix, removesuffix, replace, rfind, rindex, rjust, rpartition, rsplit, rstrip, split, splitlines, startswith, strip, swapcase, title, translate, upper, zfill); see the Python documentation for str for their behavior.\nAttributes\nsimilarity\nmmr\nmmr = 'mmr'\u00b6\nsimilarity = 'similarity'\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.docarray.SearchType.html"} {"id": "3794f4a2354b-0", "text": "langchain.retrievers.self_query.myscale.DEFAULT_COMPOSER\u00b6\nlangchain.retrievers.self_query.myscale.DEFAULT_COMPOSER(op_name: str) \u2192 Callable[source]\u00b6\nDefault composer for logical operators.\nParameters\nop_name \u2013 Name of the operator.\nReturns\nCallable that takes a list of arguments and returns a string.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.self_query.myscale.DEFAULT_COMPOSER.html"} {"id": "120f7945e1a2-0", "text": "langchain.retrievers.kendra.combined_text\u00b6\nlangchain.retrievers.kendra.combined_text(title: str, excerpt: str) \u2192 str[source]\u00b6\nCombines a title and an excerpt into a single string.\nParameters\ntitle \u2013 The title of the document.\nexcerpt \u2013 The excerpt of the document.\nReturns\nThe combined text.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.kendra.combined_text.html"} {"id": "937d42f4bcc2-0", "text": "langchain.retrievers.milvus.MilvusRetriever\u00b6\nclass langchain.retrievers.milvus.MilvusRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, embedding_function: Embeddings, collection_name: str = 'LangChainCollection', connection_args: Optional[Dict[str, Any]] = None, consistency_level: str = 'Session', search_params: Optional[dict] = None, store: Milvus, retriever: BaseRetriever)[source]\u00b6\nBases: BaseRetriever\nRetriever that uses the 
Milvus API.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam collection_name: str = 'LangChainCollection'\u00b6\nparam connection_args: Optional[Dict[str, Any]] = None\u00b6\nparam consistency_level: str = 'Session'\u00b6\nparam embedding_function: langchain.embeddings.base.Embeddings [Required]\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam retriever: langchain.schema.retriever.BaseRetriever [Required]\u00b6\nparam search_params: Optional[dict] = None\u00b6\nparam store: langchain.vectorstores.milvus.Milvus [Required]\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.milvus.MilvusRetriever.html"} {"id": "937d42f4bcc2-1", "text": "These tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nadd_texts(texts: List[str], metadatas: Optional[List[dict]] = None) \u2192 None[source]\u00b6\nAdd text to the Milvus store\nParameters\ntexts (List[str]) \u2013 The text\nmetadatas (List[dict]) \u2013 Metadata dicts, must line up with existing store\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nvalidator create_retriever\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nCreate the Milvus store and retriever.\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.milvus.MilvusRetriever.html"} {"id": "937d42f4bcc2-2", "text": ":param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. 
Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.milvus.MilvusRetriever.html"} {"id": "667e6872f800-0", "text": "langchain.retrievers.self_query.pinecone.PineconeTranslator\u00b6\nclass langchain.retrievers.self_query.pinecone.PineconeTranslator[source]\u00b6\nBases: Visitor\nLogic for converting internal query language elements to valid filters.\nMethods\n__init__()\nvisit_comparison(comparison)\nTranslate a Comparison.\nvisit_operation(operation)\nTranslate an Operation.\nvisit_structured_query(structured_query)\nTranslate a StructuredQuery.\nAttributes\nallowed_comparators\nallowed_operators\nSubset of allowed logical operators.\nvisit_comparison(comparison: Comparison) \u2192 Dict[source]\u00b6\nTranslate a Comparison.\nvisit_operation(operation: Operation) \u2192 Dict[source]\u00b6\nTranslate an Operation.\nvisit_structured_query(structured_query: StructuredQuery) \u2192 Tuple[str, dict][source]\u00b6\nTranslate a StructuredQuery.\nallowed_comparators: Optional[Sequence[Comparator]] = None\u00b6\nallowed_operators: Optional[Sequence[Operator]] = [<Operator.AND: 'and'>, <Operator.OR: 'or'>]\u00b6\nSubset of allowed logical operators.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.self_query.pinecone.PineconeTranslator.html"}
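Example (an editorial sketch of the Visitor interface; the exact output shape is an assumption based on Pinecone's $-prefixed filter operators, and the attribute and value are illustrative):

from langchain.chains.query_constructor.ir import Comparator, Comparison
from langchain.retrievers.self_query.pinecone import PineconeTranslator

translator = PineconeTranslator()
comparison = Comparison(comparator=Comparator.EQ, attribute="year", value=1994)
# Expected shape (assumption): {"year": {"$eq": 1994}}
print(translator.visit_comparison(comparison))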
{"id": "54e13c8cf64b-0", "text": "langchain.retrievers.chatgpt_plugin_retriever.ChatGPTPluginRetriever\u00b6\nclass langchain.retrievers.chatgpt_plugin_retriever.ChatGPTPluginRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, url: str, bearer_token: str, top_k: int = 3, filter: Optional[dict] = None, aiosession: Optional[ClientSession] = None)[source]\u00b6\nBases: BaseRetriever\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam aiosession: Optional[aiohttp.ClientSession] = None\u00b6\nparam bearer_token: str [Required]\u00b6\nparam filter: Optional[dict] = None\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam top_k: int = 3\u00b6\nparam url: str [Required]\u00b6\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.chatgpt_plugin_retriever.ChatGPTPluginRetriever.html"} {"id": "54e13c8cf64b-1", "text": "Asynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.chatgpt_plugin_retriever.ChatGPTPluginRetriever.html"} {"id": "54e13c8cf64b-2", "text": "property lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.chatgpt_plugin_retriever.ChatGPTPluginRetriever.html"} {"id": "bf4c1f91d04c-0", "text": "langchain.retrievers.zilliz.ZillizRetreiver\u00b6\nlangchain.retrievers.zilliz.ZillizRetreiver(*args: Any, **kwargs: Any) \u2192 ZillizRetriever[source]\u00b6\nDeprecated ZillizRetreiver. Please use ZillizRetriever (\u2018i\u2019 before \u2018e\u2019) instead.\n:param *args:\n:param **kwargs:\nReturns\nZillizRetriever", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.zilliz.ZillizRetreiver.html"} {"id": "07e22302581b-0", "text": "langchain.retrievers.multi_query.LineList\u00b6\nclass langchain.retrievers.multi_query.LineList(*, lines: List[str])[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam lines: List[str] [Required]\u00b6\nLines of text", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.multi_query.LineList.html"} {"id": "f77754d7efcd-0", "text": "langchain.retrievers.kendra.DocumentAttributeValue\u00b6\nclass langchain.retrievers.kendra.DocumentAttributeValue(*, DateValue: Optional[str] = None, LongValue: Optional[int] = None, StringListValue: Optional[List[str]] = None, StringValue: Optional[str] = None, **extra_data: Any)[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam DateValue: Optional[str] = None\u00b6\nparam LongValue: Optional[int] = None\u00b6\nparam StringListValue: Optional[List[str]] = None\u00b6\nparam StringValue: Optional[str] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.kendra.DocumentAttributeValue.html"} {"id": "776a79589401-0", "text": "langchain.retrievers.weaviate_hybrid_search.WeaviateHybridSearchRetriever\u00b6\nclass langchain.retrievers.weaviate_hybrid_search.WeaviateHybridSearchRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, index_name: str, text_key: str, alpha: float = 0.5, k: int = 4, attributes: List[str], create_schema_if_missing: bool = True)[source]\u00b6\nBases: BaseRetriever\nRetriever that uses Weaviate\u2019s hybrid search to retrieve documents.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam alpha: float = 0.5\u00b6\nThe weight of the text key in the hybrid search.\nparam attributes: List[str] [Required]\u00b6\nThe attributes to return in the results.\nparam client: Any = None\u00b6\nkeyword arguments to pass to the Weaviate client.\nparam create_schema_if_missing: bool = True\u00b6\nWhether to create the schema if it doesn\u2019t exist.\nparam index_name: str [Required]\u00b6\nThe name of the index to use.\nparam k: int = 4\u00b6\nThe number of results to return.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. 
Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. Defaults to None", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.weaviate_hybrid_search.WeaviateHybridSearchRetriever.html"} {"id": "776a79589401-1", "text": "Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam text_key: str [Required]\u00b6\nThe name of the text key to use.\nadd_documents(docs: List[Document], **kwargs: Any) \u2192 List[str][source]\u00b6\nUpload documents to Weaviate.\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.weaviate_hybrid_search.WeaviateHybridSearchRetriever.html"} {"id": "776a79589401-2", "text": "These tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_client\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.weaviate_hybrid_search.WeaviateHybridSearchRetriever.html"}
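Example (an editorial construction sketch; the Weaviate URL and index name are placeholders, and the weaviate-client package must be installed):

import weaviate
from langchain.retrievers import WeaviateHybridSearchRetriever

client = weaviate.Client(url="http://localhost:8080")  # placeholder URL
retriever = WeaviateHybridSearchRetriever(
    client=client,
    index_name="LangChain",  # placeholder index name
    text_key="text",
    attributes=[],           # extra attributes to return with each result
    alpha=0.5,               # weighting between keyword and vector scores
    k=4,
)
docs = retriever.get_relevant_documents("hybrid search example")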
{"id": "658bf6e93b73-0", "text": "langchain.retrievers.kendra.DocumentAttribute\u00b6\nclass langchain.retrievers.kendra.DocumentAttribute(*, Key: str, Value: DocumentAttributeValue, **extra_data: Any)[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam Key: str [Required]\u00b6\nparam Value: langchain.retrievers.kendra.DocumentAttributeValue [Required]\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.kendra.DocumentAttribute.html"} {"id": "eb4c370169f1-0", "text": "langchain.retrievers.svm.create_index\u00b6\nlangchain.retrievers.svm.create_index(contexts: List[str], embeddings: Embeddings) \u2192 ndarray[source]\u00b6\nCreate an index of embeddings for a list of contexts.\n:param contexts: List of contexts to embed.\n:param embeddings: Embeddings model to use.\nReturns\nIndex of embeddings.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.svm.create_index.html"} {"id": "0410aff59f3a-0", "text": "langchain.retrievers.kendra.RetrieveResultItem\u00b6\nclass langchain.retrievers.kendra.RetrieveResultItem(*, Content: Optional[str] = None, DocumentAttributes: Optional[List[DocumentAttribute]] = [], DocumentId: Optional[str] = None, DocumentTitle: Optional[str] = None, DocumentURI: Optional[str] = None, Id: Optional[str] = None, **extra_data: Any)[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam Content: Optional[str] = None\u00b6\nparam DocumentAttributes: Optional[List[langchain.retrievers.kendra.DocumentAttribute]] = []\u00b6\nparam DocumentId: Optional[str] = None\u00b6\nparam DocumentTitle: Optional[str] = None\u00b6\nparam DocumentURI: Optional[str] = None\u00b6\nparam Id: Optional[str] = None\u00b6\nget_excerpt() \u2192 str[source]\u00b6\nto_doc() \u2192 Document[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.kendra.RetrieveResultItem.html"} {"id": "88cb5d2c24dc-0", "text": "langchain.retrievers.pinecone_hybrid_search.PineconeHybridSearchRetriever\u00b6\nclass langchain.retrievers.pinecone_hybrid_search.PineconeHybridSearchRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, embeddings: Embeddings, sparse_encoder: Any = None, index: Any = None, top_k: int = 4, alpha: float = 0.5)[source]\u00b6\nBases: BaseRetriever\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam alpha: float = 0.5\u00b6\nparam embeddings: langchain.embeddings.base.Embeddings [Required]\u00b6\nEmbeddings model to use.\nparam index: Any = None\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. 
Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam sparse_encoder: Any = None\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam top_k: int = 4\u00b6\nadd_texts(texts: List[str], ids: Optional[List[str]] = None, metadatas: Optional[List[dict]] = None) \u2192 None[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.pinecone_hybrid_search.PineconeHybridSearchRetriever.html"} {"id": "88cb5d2c24dc-1", "text": "async aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that api key and python package exists in environment.\nproperty lc_attributes: Dict\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.pinecone_hybrid_search.PineconeHybridSearchRetriever.html"} {"id": "88cb5d2c24dc-2", "text": "Validate that api key and python package exists in environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. 
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.pinecone_hybrid_search.PineconeHybridSearchRetriever.html"} {"id": "6a4d7acb6242-0", "text": "langchain.retrievers.document_compressors.embeddings_filter.EmbeddingsFilter\u00b6\nclass langchain.retrievers.document_compressors.embeddings_filter.EmbeddingsFilter(*, embeddings: ~langchain.embeddings.base.Embeddings, similarity_fn: ~typing.Callable = <function cosine_similarity>, k: ~typing.Optional[int] = 20, similarity_threshold: ~typing.Optional[float] = None)[source]\u00b6\nBases: BaseDocumentCompressor\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam embeddings: langchain.embeddings.base.Embeddings [Required]\u00b6\nEmbeddings to use for embedding document contents and queries.\nparam k: Optional[int] = 20\u00b6\nThe number of relevant documents to return. Can be set to None, in which case\nsimilarity_threshold must be specified. Defaults to 20.\nparam similarity_fn: Callable = <function cosine_similarity>\u00b6\nSimilarity function for comparing documents. Function expected to take as input\ntwo matrices (List[List[float]]) and return a matrix of scores where higher values\nindicate greater similarity.\nparam similarity_threshold: Optional[float] = None\u00b6\nThreshold for determining when two documents are similar enough\nto be considered redundant. Defaults to None, must be specified if k is set\nto None.\nasync acompress_documents(documents: Sequence[Document], query: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Document][source]\u00b6\nFilter down documents.\ncompress_documents(documents: Sequence[Document], query: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Document][source]\u00b6\nFilter documents based on similarity of their embeddings to the query.\nvalidator validate_params\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate similarity parameters.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.embeddings_filter.EmbeddingsFilter.html"} {"id": "6a4d7acb6242-1", "text": "validator validate_params\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate similarity parameters.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.embeddings_filter.EmbeddingsFilter.html"}
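Example (an editorial sketch pairing EmbeddingsFilter with ContextualCompressionRetriever; assumes an OpenAI API key, and the base retriever, texts, and 0.76 threshold are illustrative):

from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import ContextualCompressionRetriever, TFIDFRetriever
from langchain.retrievers.document_compressors import EmbeddingsFilter

base_retriever = TFIDFRetriever.from_texts(
    ["Weaviate supports hybrid search.", "Paris is the capital of France."]
)
# Documents whose embedded similarity to the query falls below the threshold
# are dropped; no LLM call is needed, only embeddings.
embeddings_filter = EmbeddingsFilter(
    embeddings=OpenAIEmbeddings(),
    similarity_threshold=0.76,  # illustrative cutoff
)
retriever = ContextualCompressionRetriever(
    base_compressor=embeddings_filter,
    base_retriever=base_retriever,
)
docs = retriever.get_relevant_documents("What is the capital of France?")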
Defaults to None, must be specified if k is set\nto None.\nasync acompress_documents(documents: Sequence[Document], query: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Document][source]\u00b6\nFilter down documents.\ncompress_documents(documents: Sequence[Document], query: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Document][source]\u00b6\nFilter documents based on similarity of their embeddings to the query.\nvalidator validate_params\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate similarity parameters.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.embeddings_filter.EmbeddingsFilter.html"} {"id": "6a4d7acb6242-1", "text": "validator validate_params\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate similarity parameters.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.embeddings_filter.EmbeddingsFilter.html"} {"id": "18c85515814e-0", "text": "langchain.retrievers.self_query.myscale.FUNCTION_COMPOSER\u00b6\nlangchain.retrievers.self_query.myscale.FUNCTION_COMPOSER(op_name: str) \u2192 Callable[source]\u00b6\nComposer for functions.\n:param op_name: Name of the function.\nReturns\nCallable that takes a list of arguments and returns a string.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.self_query.myscale.FUNCTION_COMPOSER.html"} {"id": "34adbc6c112b-0", "text": "langchain.retrievers.chaindesk.ChaindeskRetriever\u00b6\nclass langchain.retrievers.chaindesk.ChaindeskRetriever(datastore_url: str, top_k: Optional[int] = None, api_key: Optional[str] = None)[source]\u00b6\nBases: BaseRetriever\nRetriever that uses the Chaindesk API.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_key: Optional[str] = None\u00b6\nparam datastore_url: str [Required]\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam top_k: Optional[int] = None\u00b6\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. 
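Example: a minimal sketch of ChaindeskRetriever. The datastore URL and API key below are placeholders, not real endpoints or credentials.

from langchain.retrievers.chaindesk import ChaindeskRetriever

retriever = ChaindeskRetriever(
    datastore_url="https://app.chaindesk.ai/datastores/my-datastore-id",  # placeholder
    api_key="CHAINDESK_API_KEY",  # placeholder
    top_k=5,
)
docs = retriever.get_relevant_documents("What is our refund policy?")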
Defaults to None\nThese tags will be associated with each call to this retriever,", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.chaindesk.ChaindeskRetriever.html"} {"id": "34adbc6c112b-1", "text": "These tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.chaindesk.ChaindeskRetriever.html"} {"id": "34adbc6c112b-2", "text": "property lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.chaindesk.ChaindeskRetriever.html"} {"id": "60ed4221c5f0-0", "text": "langchain.retrievers.document_compressors.chain_filter.default_get_input\u00b6\nlangchain.retrievers.document_compressors.chain_filter.default_get_input(query: str, doc: Document) \u2192 Dict[str, Any][source]\u00b6\nReturn the compression chain input.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.chain_filter.default_get_input.html"} {"id": "64e6e0ab38fc-0", "text": "langchain.retrievers.self_query.myscale.MyScaleTranslator\u00b6\nclass langchain.retrievers.self_query.myscale.MyScaleTranslator(metadata_key: str = 'metadata')[source]\u00b6\nBases: Visitor\nLogic for converting internal query language elements to valid filters.\nMethods\n__init__([metadata_key])\nvisit_comparison(comparison)\nTranslate a Comparison.\nvisit_operation(operation)\nTranslate an Operation.\nvisit_structured_query(structured_query)\nTranslate a StructuredQuery.\nAttributes\nallowed_comparators\nallowed_operators\nSubset of allowed logical operators.\nmap_dict\nvisit_comparison(comparison: Comparison) \u2192 Dict[source]\u00b6\nTranslate a Comparison.\nvisit_operation(operation: Operation) \u2192 Dict[source]\u00b6\nTranslate an Operation.\nvisit_structured_query(structured_query: StructuredQuery) \u2192 Tuple[str, dict][source]\u00b6\nTranslate a StructuredQuery.\nallowed_comparators: Optional[Sequence[Comparator]] = [<Comparator.EQ: 'eq'>, <Comparator.GT: 'gt'>, <Comparator.GTE: 'gte'>, <Comparator.LT: 'lt'>, <Comparator.LTE: 'lte'>, <Comparator.CONTAIN: 'contain'>, <Comparator.LIKE: 'like'>]\u00b6\nallowed_operators: Optional[Sequence[Operator]] = [<Operator.AND: 'and'>, <Operator.OR: 'or'>, <Operator.NOT: 'not'>]\u00b6\nSubset of allowed logical operators.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.self_query.myscale.MyScaleTranslator.html"} {"id": "64e6e0ab38fc-1", "text": "Subset of allowed logical operators.\nmap_dict = {Operator.AND: <function FUNCTION_COMPOSER.<locals>.f>, Comparator.CONTAIN: <function FUNCTION_COMPOSER.<locals>.f>, Comparator.EQ: <function FUNCTION_COMPOSER.<locals>.f>, Comparator.GT: <function FUNCTION_COMPOSER.<locals>.f>, Comparator.GTE: <function FUNCTION_COMPOSER.<locals>.f>, Comparator.LIKE: <function FUNCTION_COMPOSER.<locals>.f>, Comparator.LT: <function FUNCTION_COMPOSER.<locals>.f>, Comparator.LTE: <function FUNCTION_COMPOSER.<locals>.f>, Operator.NOT: <function FUNCTION_COMPOSER.<locals>.f>, Operator.OR: <function FUNCTION_COMPOSER.<locals>.f>}\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.self_query.myscale.MyScaleTranslator.html"} {"id": "427a61fc4dfe-0", "text": "langchain.retrievers.kendra.Highlight\u00b6\nclass langchain.retrievers.kendra.Highlight(*, BeginOffset: int, EndOffset: int, TopAnswer: Optional[bool] = None, Type: Optional[str] = None, **extra_data: Any)[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam BeginOffset: int [Required]\u00b6\nparam EndOffset: int [Required]\u00b6\nparam TopAnswer: Optional[bool] = None\u00b6\nparam Type: Optional[str] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.kendra.Highlight.html"} {"id": "8e1a9927510a-0", "text": "langchain.retrievers.kendra.QueryResultItem\u00b6\nclass langchain.retrievers.kendra.QueryResultItem(*, DocumentId: str, 
DocumentTitle: TextWithHighLights, DocumentURI: Optional[str] = None, FeedbackToken: Optional[str] = None, Format: Optional[str] = None, Id: Optional[str] = None, Type: Optional[str] = None, AdditionalAttributes: Optional[List[AdditionalResultAttribute]] = [], DocumentExcerpt: Optional[TextWithHighLights] = None, **extra_data: Any)[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam AdditionalAttributes: Optional[List[langchain.retrievers.kendra.AdditionalResultAttribute]] = []\u00b6\nparam DocumentExcerpt: Optional[langchain.retrievers.kendra.TextWithHighLights] = None\u00b6\nparam DocumentId: str [Required]\u00b6\nparam DocumentTitle: langchain.retrievers.kendra.TextWithHighLights [Required]\u00b6\nparam DocumentURI: Optional[str] = None\u00b6\nparam FeedbackToken: Optional[str] = None\u00b6\nparam Format: Optional[str] = None\u00b6\nparam Id: Optional[str] = None\u00b6\nparam Type: Optional[str] = None\u00b6\nget_attribute_value() \u2192 str[source]\u00b6\nget_excerpt() \u2192 str[source]\u00b6\nto_doc() \u2192 Document[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.kendra.QueryResultItem.html"} {"id": "755e797992d0-0", "text": "langchain.retrievers.self_query.qdrant.QdrantTranslator\u00b6\nclass langchain.retrievers.self_query.qdrant.QdrantTranslator(metadata_key: str)[source]\u00b6\nBases: Visitor\nLogic for converting internal query language elements to valid filters.\nMethods\n__init__(metadata_key)\nvisit_comparison(comparison)\nTranslate a Comparison.\nvisit_operation(operation)\nTranslate an Operation.\nvisit_structured_query(structured_query)\nTranslate a StructuredQuery.\nAttributes\nallowed_comparators\nallowed_operators\nvisit_comparison(comparison: Comparison) \u2192 rest.FieldCondition[source]\u00b6\nTranslate a Comparison.\nvisit_operation(operation: Operation) \u2192 rest.Filter[source]\u00b6\nTranslate an Operation.\nvisit_structured_query(structured_query: StructuredQuery) \u2192 Tuple[str, dict][source]\u00b6\nTranslate a StructuredQuery.\nallowed_comparators: Optional[Sequence[Comparator]] = None\u00b6\nallowed_operators: Optional[Sequence[Operator]] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.self_query.qdrant.QdrantTranslator.html"} {"id": "d56b22c795d0-0", "text": "langchain.retrievers.contextual_compression.ContextualCompressionRetriever\u00b6\nclass langchain.retrievers.contextual_compression.ContextualCompressionRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, base_compressor: BaseDocumentCompressor, base_retriever: BaseRetriever)[source]\u00b6\nBases: BaseRetriever\nRetriever that wraps a base retriever and compresses the results.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam base_compressor: langchain.retrievers.document_compressors.base.BaseDocumentCompressor [Required]\u00b6\nCompressor for compressing retrieved documents.\nparam base_retriever: langchain.schema.retriever.BaseRetriever [Required]\u00b6\nBase Retriever to use for getting relevant documents.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. 
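Example: a minimal sketch wiring ContextualCompressionRetriever to an EmbeddingsFilter compressor. It assumes the faiss package and an OpenAI API key are available; the texts and the 0.6 threshold are illustrative.

from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.contextual_compression import ContextualCompressionRetriever
from langchain.retrievers.document_compressors.embeddings_filter import EmbeddingsFilter
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings()
store = FAISS.from_texts(
    ["LangChain is a framework for LLM apps.", "Pinecone is a vector database."],
    embeddings,
)
retriever = ContextualCompressionRetriever(
    base_compressor=EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.6),
    base_retriever=store.as_retriever(),
)
# Documents are fetched by the base retriever, then compressed/filtered.
docs = retriever.get_relevant_documents("What is LangChain?")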
Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.contextual_compression.ContextualCompressionRetriever.html"} {"id": "d56b22c795d0-1", "text": "Asynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.contextual_compression.ContextualCompressionRetriever.html"} {"id": "d56b22c795d0-2", "text": "property lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.contextual_compression.ContextualCompressionRetriever.html"} {"id": "055a196053f1-0", "text": "langchain.retrievers.knn.KNNRetriever\u00b6\nclass langchain.retrievers.knn.KNNRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, embeddings: Embeddings, index: Any = None, texts: List[str], k: int = 4, relevancy_threshold: Optional[float] = None)[source]\u00b6\nBases: BaseRetriever\nKNN Retriever.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam embeddings: Embeddings [Required]\u00b6\nparam index: Any = None\u00b6\nparam k: int = 4\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam relevancy_threshold: Optional[float] = None\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam texts: List[str] [Required]\u00b6\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.knn.KNNRetriever.html"} {"id": "055a196053f1-1", "text": "Asynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nclassmethod from_texts(texts: List[str], embeddings: Embeddings, **kwargs: Any) \u2192 KNNRetriever[source]\u00b6\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. 
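Example: a minimal sketch of KNNRetriever built with its from_texts classmethod (documented below in this entry). It assumes an OpenAI API key; any Embeddings implementation works.

from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.knn import KNNRetriever

retriever = KNNRetriever.from_texts(
    ["foo", "bar", "world", "hello", "foo bar"],
    OpenAIEmbeddings(),
)
# Returns the k=4 nearest texts by embedding similarity.
docs = retriever.get_relevant_documents("foo")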
Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.knn.KNNRetriever.html"} {"id": "055a196053f1-2", "text": "property lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.knn.KNNRetriever.html"} {"id": "9ee89dc83a7e-0", "text": "langchain.retrievers.merger_retriever.MergerRetriever\u00b6\nclass langchain.retrievers.merger_retriever.MergerRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, retrievers: List[BaseRetriever])[source]\u00b6\nBases: BaseRetriever\nThis class merges the results of multiple retrievers.\nParameters\nretrievers \u2013 A list of retrievers to merge.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam retrievers: List[langchain.schema.retriever.BaseRetriever] [Required]\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. 
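Example: a minimal sketch of MergerRetriever over two child retrievers. KNNRetriever is used here only to keep the sketch self-contained; any BaseRetriever instances can be merged.

from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.knn import KNNRetriever
from langchain.retrievers.merger_retriever import MergerRetriever

embeddings = OpenAIEmbeddings()
retriever_a = KNNRetriever.from_texts(["apples are red"], embeddings)
retriever_b = KNNRetriever.from_texts(["bananas are yellow"], embeddings)

# Each call fans out to both child retrievers and merges their results.
merger = MergerRetriever(retrievers=[retriever_a, retriever_b])
docs = merger.get_relevant_documents("fruit colors")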
Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.merger_retriever.MergerRetriever.html"} {"id": "9ee89dc83a7e-1", "text": ":param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nasync amerge_documents(query: str, run_manager: AsyncCallbackManagerForRetrieverRun) \u2192 List[Document][source]\u00b6\nAsynchronously merge the results of the retrievers.\nParameters\nquery \u2013 The query to search for.\nReturns\nA list of merged documents.\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nmerge_documents(query: str, run_manager: CallbackManagerForRetrieverRun) \u2192 List[Document][source]\u00b6\nMerge the results of the retrievers.\nParameters\nquery \u2013 The query to search for.\nReturns\nA list of merged documents.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.merger_retriever.MergerRetriever.html"} {"id": "9ee89dc83a7e-2", "text": "Parameters\nquery \u2013 The query to search for.\nReturns\nA list of merged documents.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.merger_retriever.MergerRetriever.html"} {"id": "7074a6230e6c-0", "text": "langchain.retrievers.knn.create_index\u00b6\nlangchain.retrievers.knn.create_index(contexts: List[str], embeddings: Embeddings) \u2192 ndarray[source]\u00b6\nCreate an index of embeddings for a list of contexts.\nParameters\ncontexts \u2013 List of contexts to embed.\nembeddings \u2013 Embeddings model to use.\nReturns\nIndex of embeddings.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.knn.create_index.html"} {"id": "0a67bbc026d9-0", "text": "langchain.retrievers.kendra.RetrieveResult\u00b6\nclass langchain.retrievers.kendra.RetrieveResult(*, QueryId: str, ResultItems: List[RetrieveResultItem], **extra_data: Any)[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam QueryId: str [Required]\u00b6\nparam ResultItems: List[langchain.retrievers.kendra.RetrieveResultItem] [Required]\u00b6\nget_top_k_docs(top_n: int) \u2192 List[Document][source]\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.kendra.RetrieveResult.html"} {"id": "a1d25d6559b9-0", "text": "langchain.retrievers.wikipedia.WikipediaRetriever\u00b6\nclass langchain.retrievers.wikipedia.WikipediaRetriever(*, wiki_client: Any = None, top_k_results: int = 3, lang: str = 'en', load_all_available_meta: bool = False, doc_content_chars_max: int = 4000, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: BaseRetriever, WikipediaAPIWrapper\nIt is effectively a wrapper for WikipediaAPIWrapper.\nIt wraps load() to get_relevant_documents().\nIt uses all WikipediaAPIWrapper arguments without any change.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam doc_content_chars_max: int = 4000\u00b6\nparam lang: str = 'en'\u00b6\nparam load_all_available_meta: bool = False\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. 
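Example: a minimal sketch of WikipediaRetriever. It requires the wikipedia Python package; the query is illustrative.

from langchain.retrievers.wikipedia import WikipediaRetriever

retriever = WikipediaRetriever(lang="en", top_k_results=3, doc_content_chars_max=4000)
docs = retriever.get_relevant_documents("Large language model")
print(docs[0].metadata)  # article metadata; the page text is in page_content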
Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam top_k_results: int = 3\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.wikipedia.WikipediaRetriever.html"} {"id": "a1d25d6559b9-1", "text": "use case.\nparam top_k_results: int = 3\u00b6\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nload(query: str) \u2192 List[Document]\u00b6\nRun Wikipedia search and get the article text plus the meta information.\nReturns: a list of documents.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.wikipedia.WikipediaRetriever.html"} {"id": "a1d25d6559b9-2", "text": "Returns: a list of documents.\nrun(query: str) \u2192 str\u00b6\nRun Wikipedia search and get page summaries.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate that the python package exists in environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.wikipedia.WikipediaRetriever.html"} {"id": "61a12946734f-0", "text": "langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever\u00b6\nclass langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, vectorstore: VectorStore, search_kwargs: dict = None, memory_stream: List[Document] = None, decay_rate: float = 0.01, k: int = 4, other_score_keys: List[str] = [], default_salience: Optional[float] = None)[source]\u00b6\nBases: BaseRetriever\nRetriever combining embedding similarity with recency.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam decay_rate: float = 0.01\u00b6\nThe exponential decay factor used as (1.0-decay_rate)**(hrs_passed).\nparam default_salience: Optional[float] = None\u00b6\nThe salience to assign memories not retrieved from the vector store.\nNone assigns no salience to documents not fetched from the vector store.\nparam k: int = 4\u00b6\nThe maximum number of documents to retrieve in a given call.\nparam memory_stream: List[langchain.schema.document.Document] [Optional]\u00b6\nThe memory_stream of documents to search through.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam other_score_keys: List[str] = []\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever.html"} {"id": "61a12946734f-1", "text": "use case.\nparam other_score_keys: List[str] = []\u00b6\nOther keys in the metadata to factor into the score, e.g. \u2018importance\u2019.\nparam search_kwargs: dict [Optional]\u00b6\nKeyword arguments to pass to the vectorstore similarity search.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. 
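Example: a minimal sketch of TimeWeightedVectorStoreRetriever backed by an empty FAISS store. It assumes the faiss package and an OpenAI API key; 1536 is the dimensionality of OpenAI's default (ada-002) embedding vectors.

from datetime import datetime

import faiss
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.time_weighted_retriever import TimeWeightedVectorStoreRetriever
from langchain.schema import Document
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings()
vectorstore = FAISS(embeddings.embed_query, faiss.IndexFlatL2(1536), InMemoryDocstore({}), {})

retriever = TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, decay_rate=0.01, k=4)
# last_accessed_at feeds the (1.0 - decay_rate) ** hrs_passed recency term.
retriever.add_documents(
    [Document(page_content="hello world", metadata={"last_accessed_at": datetime.now()})]
)
docs = retriever.get_relevant_documents("hello world")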
Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam vectorstore: langchain.vectorstores.base.VectorStore [Required]\u00b6\nThe vectorstore to store documents and determine salience.\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str][source]\u00b6\nAdd documents to vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str][source]\u00b6\nAdd documents to vectorstore.\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever.html"} {"id": "61a12946734f-2", "text": "and passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nget_salient_docs(query: str) \u2192 Dict[int, Tuple[Document, float]][source]\u00b6\nReturn documents that are salient to the query.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever.html"} {"id": "61a12946734f-3", "text": "model Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever.html"} {"id": "02bd7a296bc2-0", "text": "langchain.retrievers.remote_retriever.RemoteLangChainRetriever\u00b6\nclass langchain.retrievers.remote_retriever.RemoteLangChainRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, url: str, headers: Optional[dict] = None, input_key: str = 'message', response_key: str = 'response', page_content_key: str = 'page_content', metadata_key: str = 'metadata')[source]\u00b6\nBases: BaseRetriever\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam headers: Optional[dict] = None\u00b6\nparam input_key: str = 'message'\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam metadata_key: str = 'metadata'\u00b6\nparam page_content_key: str = 'page_content'\u00b6\nparam response_key: str = 'response'\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam url: str [Required]\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.remote_retriever.RemoteLangChainRetriever.html"} {"id": "02bd7a296bc2-1", "text": "use case.\nparam url: str [Required]\u00b6\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. 
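Example: a minimal sketch of RemoteLangChainRetriever. The URL is a placeholder for a server that accepts a JSON body {"message": query} and answers with {"response": [{"page_content": ..., "metadata": ...}, ...]}, matching the default input_key, response_key, page_content_key, and metadata_key.

from langchain.retrievers.remote_retriever import RemoteLangChainRetriever

retriever = RemoteLangChainRetriever(url="http://localhost:8000/retrieve")  # placeholder endpoint
docs = retriever.get_relevant_documents("tell me about LangChain")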
Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.remote_retriever.RemoteLangChainRetriever.html"} {"id": "02bd7a296bc2-2", "text": "to_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.remote_retriever.RemoteLangChainRetriever.html"} {"id": "0d35aaf73dd7-0", "text": "langchain.retrievers.arxiv.ArxivRetriever\u00b6\nclass langchain.retrievers.arxiv.ArxivRetriever(*, arxiv_search: Any = None, arxiv_exceptions: Any = None, top_k_results: int = 3, load_max_docs: int = 100, load_all_available_meta: bool = False, doc_content_chars_max: Optional[int] = 4000, ARXIV_MAX_QUERY_LENGTH: int = 300, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: BaseRetriever, ArxivAPIWrapper\nIt is effectively a wrapper for ArxivAPIWrapper.\nIt wraps load() to get_relevant_documents().\nIt uses all ArxivAPIWrapper arguments without any change.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam arxiv_exceptions: Any = None\u00b6\nparam doc_content_chars_max: Optional[int] = 4000\u00b6\nparam load_all_available_meta: bool = False\u00b6\nparam load_max_docs: int = 100\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. 
Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.arxiv.ArxivRetriever.html"} {"id": "0d35aaf73dd7-1", "text": "use case.\nparam top_k_results: int = 3\u00b6\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. 
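Example: a minimal sketch of ArxivRetriever. It requires the arxiv Python package; the query is illustrative.

from langchain.retrievers.arxiv import ArxivRetriever

retriever = ArxivRetriever(load_max_docs=2)
# Accepts free-text queries as well as arXiv identifiers.
docs = retriever.get_relevant_documents("quantum computing error correction")
for doc in docs:
    print(doc.metadata)  # the most informative arXiv metadata fields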
Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nload(query: str) \u2192 List[Document]\u00b6\nRun Arxiv search and get the article texts plus the article meta information.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.arxiv.ArxivRetriever.html"} {"id": "0d35aaf73dd7-2", "text": "Run Arxiv search and get the article texts plus the article meta information.\nSee https://lukasschwab.me/arxiv.py/index.html#Search\nReturns: a list of documents with the document.page_content in text format\nrun(query: str) \u2192 str\u00b6\nRun Arxiv search and get the article meta information.\nSee https://lukasschwab.me/arxiv.py/index.html#Search\nSee https://lukasschwab.me/arxiv.py/index.html#Result\nIt uses only the most informative fields of article meta information.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate that the python package exists in environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.arxiv.ArxivRetriever.html"} {"id": "ddbe440ff2c0-0", "text": "langchain.retrievers.pinecone_hybrid_search.create_index\u00b6\nlangchain.retrievers.pinecone_hybrid_search.create_index(contexts: List[str], index: Any, embeddings: Embeddings, sparse_encoder: Any, ids: Optional[List[str]] = None, metadatas: Optional[List[dict]] = None) \u2192 None[source]\u00b6\nCreate a Pinecone index from a list of contexts.\nModifies the index argument in-place.\nParameters\ncontexts \u2013 List of contexts to embed.\nindex \u2013 Pinecone index to use.\nembeddings \u2013 Embeddings model to use.\nsparse_encoder \u2013 Sparse encoder to use.\nids \u2013 List of ids to use for the documents.\nmetadatas \u2013 List of metadata to use for the documents.", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.pinecone_hybrid_search.create_index.html"} {"id": "14ef2b11aa4a-0", "text": "langchain.retrievers.milvus.MilvusRetreiver\u00b6\nlangchain.retrievers.milvus.MilvusRetreiver(*args: Any, **kwargs: Any) \u2192 MilvusRetriever[source]\u00b6\nDeprecated MilvusRetreiver. 
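Example: a minimal sketch of the hybrid-search workflow around PineconeHybridSearchRetriever, whose add_texts delegates to the create_index helper documented above. It assumes the pinecone-client and pinecone-text packages, an existing dotproduct-metric Pinecone index (the name is a placeholder), and an OpenAI API key.

import pinecone
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.pinecone_hybrid_search import PineconeHybridSearchRetriever
from pinecone_text.sparse import BM25Encoder

pinecone.init(api_key="PINECONE_API_KEY", environment="us-east-1-aws")  # placeholders
index = pinecone.Index("hybrid-search-index")  # placeholder index name

retriever = PineconeHybridSearchRetriever(
    embeddings=OpenAIEmbeddings(),       # dense vectors
    sparse_encoder=BM25Encoder().default(),  # sparse vectors
    index=index,
    top_k=4,
)
retriever.add_texts(["foo", "bar", "world", "hello"])
docs = retriever.get_relevant_documents("foo")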
Please use MilvusRetriever (\u2018i\u2019 before \u2018e\u2019) instead.\nParameters\n*args \u2013 \n**kwargs \u2013 \nReturns\nMilvusRetriever", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.milvus.MilvusRetreiver.html"} {"id": "da924d7ef768-0", "text": "langchain.retrievers.azure_cognitive_search.AzureCognitiveSearchRetriever\u00b6\nclass langchain.retrievers.azure_cognitive_search.AzureCognitiveSearchRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, service_name: str = '', index_name: str = '', api_key: str = '', api_version: str = '2020-06-30', aiosession: Optional[ClientSession] = None, content_key: str = 'content')[source]\u00b6\nBases: BaseRetriever\nWrapper around Azure Cognitive Search.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam aiosession: Optional[aiohttp.ClientSession] = None\u00b6\nClientSession, in case we want to reuse connection for better performance.\nparam api_key: str = ''\u00b6\nAPI Key. Both Admin and Query keys work, but for reading data it\u2019s\nrecommended to use a Query key.\nparam api_version: str = '2020-06-30'\u00b6\nAPI version\nparam content_key: str = 'content'\u00b6\nKey in a retrieved result to set as the Document page_content.\nparam index_name: str = ''\u00b6\nName of Index inside Azure Cognitive Search service\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam service_name: str = ''\u00b6\nName of Azure Cognitive Search service\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. Defaults to None", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.azure_cognitive_search.AzureCognitiveSearchRetriever.html"} {"id": "da924d7ef768-1", "text": "Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. 
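Example: a minimal sketch of AzureCognitiveSearchRetriever. Service name, index name, and key are placeholders; validate_environment can also pick them up from the environment.

from langchain.retrievers.azure_cognitive_search import AzureCognitiveSearchRetriever

retriever = AzureCognitiveSearchRetriever(
    service_name="my-search-service",  # placeholder
    index_name="my-index",             # placeholder
    api_key="MY_QUERY_KEY",            # placeholder; a Query key is recommended for reads
    content_key="content",
)
docs = retriever.get_relevant_documents("what is azure cognitive search")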
Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.azure_cognitive_search.AzureCognitiveSearchRetriever.html"} {"id": "da924d7ef768-2", "text": "and passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the service name, index name, and API key exist in the environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.azure_cognitive_search.AzureCognitiveSearchRetriever.html"} {"id": "62cff034638f-0", "text": "langchain.retrievers.multi_query.LineListOutputParser\u00b6\nclass langchain.retrievers.multi_query.LineListOutputParser[source]\u00b6\nBases: PydanticOutputParser\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam pydantic_object: Type[langchain.output_parsers.pydantic.T] [Required]\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str\u00b6\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 LineList[source]\u00b6\nParse a single string model output into some structure.\nParameters\ntext \u2013 String output of language model.\nReturns\nStructured output.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. 
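Example: a minimal sketch of LineListOutputParser, assuming (as in the multi-query retriever implementation) that the no-argument constructor binds the LineList pydantic model and parse() splits model output on newlines.

from langchain.retrievers.multi_query import LineListOutputParser

parser = LineListOutputParser()
result = parser.parse("How do plants grow?\nWhat do plants need to thrive?\nFactors in plant growth")
print(result.lines)
# ['How do plants grow?', 'What do plants need to thrive?', 'Factors in plant growth']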
The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.multi_query.LineListOutputParser.html"} {"id": "62cff034638f-1", "text": "property lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.multi_query.LineListOutputParser.html"} {"id": "7c5f2df78a78-0", "text": "langchain.math_utils.cosine_similarity\u00b6\nlangchain.math_utils.cosine_similarity(X: Union[List[List[float]], List[ndarray], ndarray], Y: Union[List[List[float]], List[ndarray], ndarray]) \u2192 ndarray[source]\u00b6\nRow-wise cosine similarity between two equal-width matrices.", "source": "https://api.python.langchain.com/en/latest/math_utils/langchain.math_utils.cosine_similarity.html"} {"id": "b2cfed4abfcd-0", "text": "langchain.math_utils.cosine_similarity_top_k\u00b6\nlangchain.math_utils.cosine_similarity_top_k(X: Union[List[List[float]], List[ndarray], ndarray], Y: Union[List[List[float]], List[ndarray], ndarray], top_k: Optional[int] = 5, score_threshold: Optional[float] = None) \u2192 Tuple[List[Tuple[int, int]], List[float]][source]\u00b6\nRow-wise cosine similarity with optional top-k and score threshold filtering.\nParameters\nX \u2013 Matrix.\nY \u2013 Matrix, same width as X.\ntop_k \u2013 Max number of results to return.\nscore_threshold \u2013 Minimum cosine similarity of results.\nReturns\nTuple of two lists. 
First contains two-tuples of indices (X_idx, Y_idx), the second contains corresponding cosine similarities.", "source": "https://api.python.langchain.com/en/latest/math_utils/langchain.math_utils.cosine_similarity_top_k.html"} {"id": "dc5379f8d2c1-0", "text": "langchain.server.main\u00b6\nlangchain.server.main() \u2192 None[source]\u00b6\nRun the langchain server locally.", "source": "https://api.python.langchain.com/en/latest/server/langchain.server.main.html"} {"id": "2e83a3046f62-0", "text": "langchain.requests.Requests\u00b6\nclass langchain.requests.Requests(*, headers: Optional[Dict[str, str]] = None, aiosession: Optional[ClientSession] = None)[source]\u00b6\nBases: BaseModel\nWrapper around requests to handle auth and async.\nThe main purpose of this wrapper is to handle authentication (by saving\nheaders) and enable easy async methods on the same base object.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam aiosession: Optional[aiohttp.client.ClientSession] = None\u00b6\nparam headers: Optional[Dict[str, str]] = None\u00b6\nadelete(url: str, **kwargs: Any) \u2192 AsyncGenerator[ClientResponse, None][source]\u00b6\nDELETE the URL and return the text asynchronously.\naget(url: str, **kwargs: Any) \u2192 AsyncGenerator[ClientResponse, None][source]\u00b6\nGET the URL and return the text asynchronously.\napatch(url: str, data: Dict[str, Any], **kwargs: Any) \u2192 AsyncGenerator[ClientResponse, None][source]\u00b6\nPATCH the URL and return the text asynchronously.\napost(url: str, data: Dict[str, Any], **kwargs: Any) \u2192 AsyncGenerator[ClientResponse, None][source]\u00b6\nPOST to the URL and return the text asynchronously.\naput(url: str, data: Dict[str, Any], **kwargs: Any) \u2192 AsyncGenerator[ClientResponse, None][source]\u00b6\nPUT the URL and return the text asynchronously.\ndelete(url: str, **kwargs: Any) \u2192 Response[source]\u00b6\nDELETE the URL and return the text.\nget(url: str, **kwargs: Any) \u2192 Response[source]\u00b6\nGET the URL and return the text.", "source": "https://api.python.langchain.com/en/latest/requests/langchain.requests.Requests.html"} {"id": "2e83a3046f62-1", "text": "GET the URL and return the text.\npatch(url: str, data: Dict[str, Any], **kwargs: Any) \u2192 Response[source]\u00b6\nPATCH the URL and return the text.\npost(url: str, data: Dict[str, Any], **kwargs: Any) \u2192 Response[source]\u00b6\nPOST to the URL and return the text.\nput(url: str, data: Dict[str, Any], **kwargs: Any) \u2192 Response[source]\u00b6\nPUT the URL and return the text.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/requests/langchain.requests.Requests.html"} {"id": "6a4bac48591a-0", "text": "langchain.requests.TextRequestsWrapper\u00b6\nclass langchain.requests.TextRequestsWrapper(*, headers: Optional[Dict[str, str]] = None, aiosession: Optional[ClientSession] = None)[source]\u00b6\nBases: BaseModel\nLightweight wrapper around requests library.\nThe main purpose of this wrapper is to always return a text output.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam aiosession: Optional[aiohttp.client.ClientSession] = None\u00b6\nparam headers: Optional[Dict[str, str]] = None\u00b6\nasync 
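A minimal usage sketch for the two math_utils functions above, based on the documented signatures; the input matrices are illustrative:

.. code-block:: python

    from langchain.math_utils import cosine_similarity, cosine_similarity_top_k

    X = [[1.0, 0.0], [0.0, 1.0]]
    Y = [[1.0, 0.0], [0.5, 0.5]]

    # Full (2, 2) similarity matrix: sims[i][j] is the cosine
    # similarity between row i of X and row j of Y.
    sims = cosine_similarity(X, Y)

    # Keep at most 2 pairs, and only those scoring at least 0.9.
    idxs, scores = cosine_similarity_top_k(X, Y, top_k=2, score_threshold=0.9)
    # idxs -> [(0, 0)]; scores -> [1.0]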
adelete(url: str, **kwargs: Any) \u2192 str[source]\u00b6\nDELETE the URL and return the text asynchronously.\nasync aget(url: str, **kwargs: Any) \u2192 str[source]\u00b6\nGET the URL and return the text asynchronously.\nasync apatch(url: str, data: Dict[str, Any], **kwargs: Any) \u2192 str[source]\u00b6\nPATCH the URL and return the text asynchronously.\nasync apost(url: str, data: Dict[str, Any], **kwargs: Any) \u2192 str[source]\u00b6\nPOST to the URL and return the text asynchronously.\nasync aput(url: str, data: Dict[str, Any], **kwargs: Any) \u2192 str[source]\u00b6\nPUT the URL and return the text asynchronously.\ndelete(url: str, **kwargs: Any) \u2192 str[source]\u00b6\nDELETE the URL and return the text.\nget(url: str, **kwargs: Any) \u2192 str[source]\u00b6\nGET the URL and return the text.\npatch(url: str, data: Dict[str, Any], **kwargs: Any) \u2192 str[source]\u00b6\nPATCH the URL and return the text.\npost(url: str, data: Dict[str, Any], **kwargs: Any) \u2192 str[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/requests/langchain.requests.TextRequestsWrapper.html"} {"id": "6a4bac48591a-1", "text": "POST to the URL and return the text.\nput(url: str, data: Dict[str, Any], **kwargs: Any) \u2192 str[source]\u00b6\nPUT the URL and return the text.\nproperty requests: langchain.requests.Requests\u00b6\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/requests/langchain.requests.TextRequestsWrapper.html"} {"id": "2e302ac18be0-0", "text": "langchain.sql_database.truncate_word\u00b6\nlangchain.sql_database.truncate_word(content: Any, *, length: int, suffix: str = '...') \u2192 str[source]\u00b6\nTruncate a string to a certain number of words, based on the max string\nlength.", "source": "https://api.python.langchain.com/en/latest/sql_database/langchain.sql_database.truncate_word.html"} {"id": "8ed8ee025dd5-0", "text": "langchain.cache.MomentoCache\u00b6\nclass langchain.cache.MomentoCache(cache_client: momento.CacheClient, cache_name: str, *, ttl: Optional[timedelta] = None, ensure_cache_exists: bool = True)[source]\u00b6\nBases: BaseCache\nCache that uses Momento as a backend. See https://gomomento.com/\nInstantiate a prompt cache using Momento as a backend.\nNote: to instantiate the cache client passed to MomentoCache,\nyou must have a Momento account. See https://gomomento.com/.\nParameters\ncache_client (CacheClient) \u2013 The Momento cache client.\ncache_name (str) \u2013 The name of the cache to use to store the data.\nttl (Optional[timedelta], optional) \u2013 The time to live for the cache items.\nDefaults to None, ie use the client default TTL.\nensure_cache_exists (bool, optional) \u2013 Create the cache if it doesn\u2019t\nexist. 
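A short usage sketch for TextRequestsWrapper above; the URL and the auth header value are placeholders:

.. code-block:: python

    from langchain.requests import TextRequestsWrapper

    # headers is optional; shown here with a placeholder token
    requests_wrapper = TextRequestsWrapper(
        headers={"Authorization": "Bearer <token>"}
    )

    # Synchronous GET; returns the response body as a str
    page_text = requests_wrapper.get("https://example.com")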
Defaults to True.\nRaises\nImportError \u2013 Momento python package is not installed.\nTypeError \u2013 cache_client is not of type momento.CacheClientObject\nValueError \u2013 ttl is non-null and negative\nMethods\n__init__(cache_client,\u00a0cache_name,\u00a0*[,\u00a0ttl,\u00a0...])\nInstantiate a prompt cache using Momento as a backend.\nclear(**kwargs)\nClear the cache.\nfrom_client_params(cache_name,\u00a0ttl,\u00a0*[,\u00a0...])\nConstruct cache from CacheClient parameters.\nlookup(prompt,\u00a0llm_string)\nLookup llm generations in cache by prompt and associated model and settings.\nupdate(prompt,\u00a0llm_string,\u00a0return_val)\nStore llm generations in cache.\nclear(**kwargs: Any) \u2192 None[source]\u00b6\nClear the cache.\nRaises\nSdkException \u2013 Momento service or network error", "source": "https://api.python.langchain.com/en/latest/cache/langchain.cache.MomentoCache.html"} {"id": "8ed8ee025dd5-1", "text": "Clear the cache.\nRaises\nSdkException \u2013 Momento service or network error\nclassmethod from_client_params(cache_name: str, ttl: timedelta, *, configuration: Optional[momento.config.Configuration] = None, auth_token: Optional[str] = None, **kwargs: Any) \u2192 MomentoCache[source]\u00b6\nConstruct cache from CacheClient parameters.\nlookup(prompt: str, llm_string: str) \u2192 Optional[Sequence[Generation]][source]\u00b6\nLookup llm generations in cache by prompt and associated model and settings.\nParameters\nprompt (str) \u2013 The prompt run through the language model.\nllm_string (str) \u2013 The language model version and settings.\nRaises\nSdkException \u2013 Momento service or network error\nReturns\nA list of language model generations.\nReturn type\nOptional[RETURN_VAL_TYPE]\nupdate(prompt: str, llm_string: str, return_val: Sequence[Generation]) \u2192 None[source]\u00b6\nStore llm generations in cache.\nParameters\nprompt (str) \u2013 The prompt run through the language model.\nllm_string (str) \u2013 The language model string.\nreturn_val (RETURN_VAL_TYPE) \u2013 A list of language model generations.\nRaises\nSdkException \u2013 Momento service or network error\nException \u2013 Unexpected response", "source": "https://api.python.langchain.com/en/latest/cache/langchain.cache.MomentoCache.html"} {"id": "285087b46bba-0", "text": "langchain.cache.RedisCache\u00b6\nclass langchain.cache.RedisCache(redis_: Any)[source]\u00b6\nBases: BaseCache\nCache that uses Redis as a backend.\nInitialize by passing in Redis instance.\nMethods\n__init__(redis_)\nInitialize by passing in Redis instance.\nclear(**kwargs)\nClear cache.\nlookup(prompt,\u00a0llm_string)\nLook up based on prompt and llm_string.\nupdate(prompt,\u00a0llm_string,\u00a0return_val)\nUpdate cache based on prompt and llm_string.\nclear(**kwargs: Any) \u2192 None[source]\u00b6\nClear cache. 
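All of these caches are typically installed globally via langchain.llm_cache, mirroring the pattern shown in the GPTCache and RedisSemanticCache examples below; a minimal sketch using the InMemoryCache documented in this module:

.. code-block:: python

    import langchain
    from langchain.cache import InMemoryCache

    # Subsequent LLM calls with an identical prompt and llm_string
    # are answered from the cache instead of the provider.
    langchain.llm_cache = InMemoryCache()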
If asynchronous is True, flush asynchronously.\nlookup(prompt: str, llm_string: str) \u2192 Optional[Sequence[Generation]][source]\u00b6\nLook up based on prompt and llm_string.\nupdate(prompt: str, llm_string: str, return_val: Sequence[Generation]) \u2192 None[source]\u00b6\nUpdate cache based on prompt and llm_string.", "source": "https://api.python.langchain.com/en/latest/cache/langchain.cache.RedisCache.html"} {"id": "0433f1b5c093-0", "text": "langchain.cache.SQLiteCache\u00b6\nclass langchain.cache.SQLiteCache(database_path: str = '.langchain.db')[source]\u00b6\nBases: SQLAlchemyCache\nCache that uses SQLite as a backend.\nInitialize by creating the engine and all tables.\nMethods\n__init__([database_path])\nInitialize by creating the engine and all tables.\nclear(**kwargs)\nClear cache.\nlookup(prompt,\u00a0llm_string)\nLook up based on prompt and llm_string.\nupdate(prompt,\u00a0llm_string,\u00a0return_val)\nUpdate based on prompt and llm_string.\nclear(**kwargs: Any) \u2192 None\u00b6\nClear cache.\nlookup(prompt: str, llm_string: str) \u2192 Optional[Sequence[Generation]]\u00b6\nLook up based on prompt and llm_string.\nupdate(prompt: str, llm_string: str, return_val: Sequence[Generation]) \u2192 None\u00b6\nUpdate based on prompt and llm_string.", "source": "https://api.python.langchain.com/en/latest/cache/langchain.cache.SQLiteCache.html"} {"id": "596c9dfb5ed9-0", "text": "langchain.cache.InMemoryCache\u00b6\nclass langchain.cache.InMemoryCache[source]\u00b6\nBases: BaseCache\nCache that stores things in memory.\nInitialize with empty cache.\nMethods\n__init__()\nInitialize with empty cache.\nclear(**kwargs)\nClear cache.\nlookup(prompt,\u00a0llm_string)\nLook up based on prompt and llm_string.\nupdate(prompt,\u00a0llm_string,\u00a0return_val)\nUpdate cache based on prompt and llm_string.\nclear(**kwargs: Any) \u2192 None[source]\u00b6\nClear cache.\nlookup(prompt: str, llm_string: str) \u2192 Optional[Sequence[Generation]][source]\u00b6\nLook up based on prompt and llm_string.\nupdate(prompt: str, llm_string: str, return_val: Sequence[Generation]) \u2192 None[source]\u00b6\nUpdate cache based on prompt and llm_string.", "source": "https://api.python.langchain.com/en/latest/cache/langchain.cache.InMemoryCache.html"} {"id": "705f2bfdde95-0", "text": "langchain.cache.SQLAlchemyCache\u00b6\nclass langchain.cache.SQLAlchemyCache(engine: ~sqlalchemy.engine.base.Engine, cache_schema: ~typing.Type[~langchain.cache.FullLLMCache] = <class 'langchain.cache.FullLLMCache'>)[source]\u00b6\nBases: BaseCache\nCache that uses SQLAlchemy as a backend.\nInitialize by creating all tables.\nMethods\n__init__(engine[,\u00a0cache_schema])\nInitialize by creating all tables.\nclear(**kwargs)\nClear cache.\nlookup(prompt,\u00a0llm_string)\nLook up based on prompt and llm_string.\nupdate(prompt,\u00a0llm_string,\u00a0return_val)\nUpdate based on prompt and llm_string.\nclear(**kwargs: Any) \u2192 None[source]\u00b6\nClear cache.\nlookup(prompt: str, llm_string: str) \u2192 Optional[Sequence[Generation]][source]\u00b6\nLook up based on prompt and llm_string.\nupdate(prompt: str, llm_string: str, return_val: Sequence[Generation]) \u2192 None[source]\u00b6\nUpdate based on prompt and llm_string.", "source": "https://api.python.langchain.com/en/latest/cache/langchain.cache.SQLAlchemyCache.html"} {"id": "080d26838048-0", "text": "langchain.cache.BaseCache\u00b6\nclass langchain.cache.BaseCache[source]\u00b6\nBases: ABC\nBase interface for cache.\nMethods\n__init__()\nclear(**kwargs)\nClear cache that can take additional keyword 
arguments.\nlookup(prompt,\u00a0llm_string)\nLook up based on prompt and llm_string.\nupdate(prompt,\u00a0llm_string,\u00a0return_val)\nUpdate cache based on prompt and llm_string.\nabstract clear(**kwargs: Any) \u2192 None[source]\u00b6\nClear cache that can take additional keyword arguments.\nabstract lookup(prompt: str, llm_string: str) \u2192 Optional[Sequence[Generation]][source]\u00b6\nLook up based on prompt and llm_string.\nabstract update(prompt: str, llm_string: str, return_val: Sequence[Generation]) \u2192 None[source]\u00b6\nUpdate cache based on prompt and llm_string.", "source": "https://api.python.langchain.com/en/latest/cache/langchain.cache.BaseCache.html"} {"id": "7f5c46994406-0", "text": "langchain.cache.GPTCache\u00b6\nclass langchain.cache.GPTCache(init_func: Optional[Union[Callable[[Any, str], None], Callable[[Any], None]]] = None)[source]\u00b6\nBases: BaseCache\nCache that uses GPTCache as a backend.\nInitialize by passing in init function (default: None).\nParameters\ninit_func (Optional[Callable[[Any], None]]) \u2013 init GPTCache function\n(default \u2013 None)\nExample:\n.. code-block:: python\n# Initialize GPTCache with a custom init function\nimport gptcache\nfrom gptcache.processor.pre import get_prompt\nfrom gptcache.manager.factory import manager_factory\n# Avoid multiple caches using the same file,\n# causing different llm model caches to affect each other\ndef init_gptcache(cache_obj: gptcache.Cache, llm: str):\n    cache_obj.init(\n        pre_embedding_func=get_prompt,\n        data_manager=manager_factory(\n            manager=\"map\",\n            data_dir=f\"map_cache_{llm}\",\n        ),\n    )\nlangchain.llm_cache = GPTCache(init_gptcache)\nMethods\n__init__([init_func])\nInitialize by passing in init function (default: None).\nclear(**kwargs)\nClear cache.\nlookup(prompt,\u00a0llm_string)\nLook up the cache data.\nupdate(prompt,\u00a0llm_string,\u00a0return_val)\nUpdate cache.\nclear(**kwargs: Any) \u2192 None[source]\u00b6\nClear cache.\nlookup(prompt: str, llm_string: str) \u2192 Optional[Sequence[Generation]][source]\u00b6\nLook up the cache data.\nFirst, retrieve the corresponding cache object using the llm_string parameter,\nand then retrieve the data from the cache based on the prompt.\nupdate(prompt: str, llm_string: str, return_val: Sequence[Generation]) \u2192 None[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/cache/langchain.cache.GPTCache.html"} {"id": "7f5c46994406-1", "text": "Update cache.\nFirst, retrieve the corresponding cache object using the llm_string parameter,\nand then store the prompt and return_val in the cache object.", "source": "https://api.python.langchain.com/en/latest/cache/langchain.cache.GPTCache.html"} {"id": "cecf12dfd794-0", "text": "langchain.cache.FullLLMCache\u00b6\nclass langchain.cache.FullLLMCache(**kwargs)[source]\u00b6\nBases: Base\nSQLite table for full LLM Cache (all generations).\nA simple constructor that allows initialization from kwargs.\nSets attributes on the constructed instance using the names and\nvalues in kwargs.\nOnly keys that are present as\nattributes of the instance\u2019s class are allowed. 
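BaseCache's three abstract methods above are all a custom backend needs to implement; a minimal dictionary-backed sketch (the class name is hypothetical):

.. code-block:: python

    from typing import Any, Optional, Sequence

    from langchain.cache import BaseCache
    from langchain.schema import Generation

    class DictCache(BaseCache):
        """Hypothetical cache keyed on (prompt, llm_string)."""

        def __init__(self) -> None:
            self._store: dict = {}

        def lookup(self, prompt: str, llm_string: str) -> Optional[Sequence[Generation]]:
            return self._store.get((prompt, llm_string))

        def update(self, prompt: str, llm_string: str, return_val: Sequence[Generation]) -> None:
            self._store[(prompt, llm_string)] = return_val

        def clear(self, **kwargs: Any) -> None:
            self._store.clear()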
These could be,\nfor example, any mapped columns or relationships.\nMethods\n__init__(**kwargs)\nA simple constructor that allows initialization from kwargs.\nAttributes\nidx\nllm\nmetadata\nprompt\nregistry\nresponse\nidx\u00b6\nllm\u00b6\nmetadata: MetaData = MetaData()\u00b6\nprompt\u00b6\nregistry: RegistryType\u00b6\nresponse\u00b6", "source": "https://api.python.langchain.com/en/latest/cache/langchain.cache.FullLLMCache.html"} {"id": "5275157efecf-0", "text": "langchain.cache.RedisSemanticCache\u00b6\nclass langchain.cache.RedisSemanticCache(redis_url: str, embedding: Embeddings, score_threshold: float = 0.2)[source]\u00b6\nBases: BaseCache\nCache that uses Redis as a vector-store backend.\nInitialize by passing in a Redis URL and an embedding provider.\nParameters\nredis_url (str) \u2013 URL to connect to Redis.\nembedding (Embedding) \u2013 Embedding provider for semantic encoding and search.\nscore_threshold (float, 0.2) \u2013 \nExample:\nimport langchain\nfrom langchain.cache import RedisSemanticCache\nfrom langchain.embeddings import OpenAIEmbeddings\nlangchain.llm_cache = RedisSemanticCache(\n    redis_url=\"redis://localhost:6379\",\n    embedding=OpenAIEmbeddings()\n)\nMethods\n__init__(redis_url,\u00a0embedding[,\u00a0score_threshold])\nInitialize by passing in a Redis URL and an embedding provider.\nclear(**kwargs)\nClear semantic cache for a given llm_string.\nlookup(prompt,\u00a0llm_string)\nLook up based on prompt and llm_string.\nupdate(prompt,\u00a0llm_string,\u00a0return_val)\nUpdate cache based on prompt and llm_string.\nclear(**kwargs: Any) \u2192 None[source]\u00b6\nClear semantic cache for a given llm_string.\nlookup(prompt: str, llm_string: str) \u2192 Optional[Sequence[Generation]][source]\u00b6\nLook up based on prompt and llm_string.\nupdate(prompt: str, llm_string: str, return_val: Sequence[Generation]) \u2192 None[source]\u00b6\nUpdate cache based on prompt and llm_string.", "source": "https://api.python.langchain.com/en/latest/cache/langchain.cache.RedisSemanticCache.html"} {"id": "15814364a251-0", "text": "langchain.document_transformers.EmbeddingsClusteringFilter\u00b6\nclass langchain.document_transformers.EmbeddingsClusteringFilter(*, embeddings: Embeddings, num_clusters: int = 5, num_closest: int = 1, random_state: int = 42, sorted: bool = False, remove_duplicates: bool = False)[source]\u00b6\nBases: BaseDocumentTransformer, BaseModel\nPerform K-means clustering on document vectors.\nReturns an arbitrary number of documents closest to center.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam embeddings: langchain.embeddings.base.Embeddings [Required]\u00b6\nEmbeddings to use for embedding document contents.\nparam num_closest: int = 1\u00b6\nThe number of closest vectors to return for each cluster center.\nparam num_clusters: int = 5\u00b6\nNumber of clusters. Groups of documents with similar meaning.\nparam random_state: int = 42\u00b6\nControls the random number generator used to initialize the cluster centroids.\nIf you set the random_state parameter to None, the KMeans algorithm will use a\nrandom number generator that is seeded with the current time. This means\nthat the results of the KMeans algorithm will be different each time you\nrun it.\nparam remove_duplicates: bool = False\u00b6\nBy default duplicated results are skipped and replaced by the next closest\nvector in the cluster. 
If remove_duplicates is true, no replacement will be done.\nThis could dramatically reduce results when there is a lot of overlap between\nclusters.\nparam sorted: bool = False\u00b6\nBy default results are re-ordered, \u201cgrouping\u201d them by cluster; if sorted is true\nthe result will be ordered by the original position from the retriever.\nasync atransform_documents(documents: Sequence[Document], **kwargs: Any) \u2192 Sequence[Document][source]\u00b6", "source": "https://api.python.langchain.com/en/latest/document_transformers/langchain.document_transformers.EmbeddingsClusteringFilter.html"} {"id": "15814364a251-1", "text": "Asynchronously transform a list of documents.\nParameters\ndocuments \u2013 A sequence of Documents to be transformed.\nReturns\nA list of transformed Documents.\ntransform_documents(documents: Sequence[Document], **kwargs: Any) \u2192 Sequence[Document][source]\u00b6\nFilter down documents.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/document_transformers/langchain.document_transformers.EmbeddingsClusteringFilter.html"} {"id": "d6f4b456fae7-0", "text": "langchain.document_transformers.get_stateful_documents\u00b6\nlangchain.document_transformers.get_stateful_documents(documents: Sequence[Document]) \u2192 Sequence[_DocumentWithState][source]\u00b6\nConvert a list of documents to a list of documents with state.\nParameters\ndocuments \u2013 The documents to convert.\nReturns\nA list of documents with state.", "source": "https://api.python.langchain.com/en/latest/document_transformers/langchain.document_transformers.get_stateful_documents.html"} {"id": "3d3978043d81-0", "text": "langchain.document_transformers.EmbeddingsRedundantFilter\u00b6\nclass langchain.document_transformers.EmbeddingsRedundantFilter(*, embeddings: ~langchain.embeddings.base.Embeddings, similarity_fn: ~typing.Callable = <function cosine_similarity>, similarity_threshold: float = 0.95)[source]\u00b6\nBases: BaseDocumentTransformer, BaseModel\nFilter that drops redundant documents by comparing their embeddings.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam embeddings: langchain.embeddings.base.Embeddings [Required]\u00b6\nEmbeddings to use for embedding document contents.\nparam similarity_fn: Callable = <function cosine_similarity>\u00b6\nSimilarity function for comparing documents. 
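A usage sketch for EmbeddingsClusteringFilter above, assuming OpenAI credentials are configured; the documents are illustrative:

.. code-block:: python

    from langchain.document_transformers import EmbeddingsClusteringFilter
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.schema import Document

    docs = [
        Document(page_content=t)
        for t in ("intro", "pricing", "pricing FAQ", "contact")
    ]

    # Cluster the document embeddings into 2 groups and keep the
    # single document closest to each cluster center.
    clustering_filter = EmbeddingsClusteringFilter(
        embeddings=OpenAIEmbeddings(), num_clusters=2, num_closest=1
    )
    representative_docs = clustering_filter.transform_documents(docs)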
Function expected to take as input\ntwo matrices (List[List[float]]) and return a matrix of scores where higher values\nindicate greater similarity.\nparam similarity_threshold: float = 0.95\u00b6\nThreshold for determining when two documents are similar enough\nto be considered redundant.\nasync atransform_documents(documents: Sequence[Document], **kwargs: Any) \u2192 Sequence[Document][source]\u00b6\nAsynchronously transform a list of documents.\nParameters\ndocuments \u2013 A sequence of Documents to be transformed.\nReturns\nA list of transformed Documents.\ntransform_documents(documents: Sequence[Document], **kwargs: Any) \u2192 Sequence[Document][source]\u00b6\nFilter down documents.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/document_transformers/langchain.document_transformers.EmbeddingsRedundantFilter.html"} {"id": "7299a5624e66-0", "text": "langchain.schema.messages.BaseMessage\u00b6\nclass langchain.schema.messages.BaseMessage(*, content: str, additional_kwargs: dict = None)[source]\u00b6\nBases: Serializable\nThe base abstract Message class.\nMessages are the inputs and outputs of ChatModels.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam additional_kwargs: dict [Optional]\u00b6\nAny additional information.\nparam content: str [Required]\u00b6\nThe string contents of the message.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nWhether this class is LangChain serializable.\nabstract property type: str\u00b6\nType of the Message, used for serialization.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/schema/langchain.schema.messages.BaseMessage.html"} {"id": "ca76b0490591-0", "text": "langchain.schema.messages.messages_from_dict\u00b6\nlangchain.schema.messages.messages_from_dict(messages: List[dict]) \u2192 List[BaseMessage][source]\u00b6\nConvert a sequence of messages from dicts to Message objects.\nParameters\nmessages \u2013 Sequence of messages (as dicts) to convert.\nReturns\nList of messages (BaseMessages).", "source": "https://api.python.langchain.com/en/latest/schema/langchain.schema.messages.messages_from_dict.html"} {"id": "0e0bb84dffc4-0", "text": "langchain.schema.output.Generation\u00b6\nclass langchain.schema.output.Generation(*, text: str, generation_info: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: Serializable\nA single text generation output.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam generation_info: Optional[Dict[str, Any]] = None\u00b6\nRaw response from the provider. 
May include things like the\nreason for finishing or token log probabilities.\nparam text: str [Required]\u00b6\nGenerated text output.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nWhether this class is LangChain serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/schema/langchain.schema.output.Generation.html"} {"id": "68760bd3a808-0", "text": "langchain.schema.retriever.BaseRetriever\u00b6\nclass langchain.schema.retriever.BaseRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: Serializable, ABC\nAbstract base class for a Document retrieval system.\nA retrieval system is defined as something that can take string queries and return\nthe most \u2018relevant\u2019 Documents from some source.\nExample\nclass TFIDFRetriever(BaseRetriever, BaseModel):\n    vectorizer: Any\n    docs: List[Document]\n    tfidf_array: Any\n    k: int = 4\n    class Config:\n        arbitrary_types_allowed = True\n    def get_relevant_documents(self, query: str) -> List[Document]:\n        from sklearn.metrics.pairwise import cosine_similarity\n        # Ip -- (n_docs,x), Op -- (n_docs,n_Feats)\n        query_vec = self.vectorizer.transform([query])\n        # Op -- (n_docs,1) -- Cosine Sim with each doc\n        results = cosine_similarity(self.tfidf_array, query_vec).reshape((-1,))\n        return [self.docs[i] for i in results.argsort()[-self.k :][::-1]]\n    async def aget_relevant_documents(self, query: str) -> List[Document]:\n        raise NotImplementedError\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.", "source": "https://api.python.langchain.com/en/latest/schema/langchain.schema.retriever.BaseRetriever.html"} {"id": "68760bd3a808-1", "text": "use case.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. 
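A smaller concrete retriever in the same style as the TFIDFRetriever example above; the class and its data are hypothetical:

.. code-block:: python

    from typing import List

    from langchain.schema import BaseRetriever, Document

    class StaticRetriever(BaseRetriever):
        """Hypothetical retriever that always returns the same documents."""

        docs: List[Document] = []

        def get_relevant_documents(self, query: str) -> List[Document]:
            return self.docs

        async def aget_relevant_documents(self, query: str) -> List[Document]:
            return self.docs

    retriever = StaticRetriever(docs=[Document(page_content="hello")], tags=["demo"])
    retriever.get_relevant_documents("any query")  # -> [Document(page_content='hello')]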
Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None", "source": "https://api.python.langchain.com/en/latest/schema/langchain.schema.retriever.BaseRetriever.html"} {"id": "68760bd3a808-2", "text": "Parameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/schema/langchain.schema.retriever.BaseRetriever.html"} {"id": "0b8d13e73ef2-0", "text": "langchain.schema.messages.get_buffer_string\u00b6\nlangchain.schema.messages.get_buffer_string(messages: Sequence[BaseMessage], human_prefix: str = 'Human', ai_prefix: str = 'AI') \u2192 str[source]\u00b6\nConvert sequence of Messages to strings and concatenate them into one string.\nArgs:\nmessages: Messages to be converted to strings.\nhuman_prefix: The prefix to prepend to contents of HumanMessages.\nai_prefix: The prefix to prepend to contents of AIMessages.\nReturns:\nA single string concatenation of all input messages.\nExample:\nfrom langchain.schema import AIMessage, HumanMessage\nmessages = [\n    HumanMessage(content=\"Hi, how are you?\"),\n    AIMessage(content=\"Good, how are you?\"),\n]\nget_buffer_string(messages)\n# -> \"Human: Hi, how are you?\nAI: Good, how are you?\"", "source": "https://api.python.langchain.com/en/latest/schema/langchain.schema.messages.get_buffer_string.html"} {"id": "5192d5b300bc-0", "text": "langchain.schema.output_parser.BaseLLMOutputParser\u00b6\nclass langchain.schema.output_parser.BaseLLMOutputParser[source]\u00b6\nBases: Serializable, ABC, Generic[T]\nAbstract base class for parsing the outputs of a model.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nabstract parse_result(result: List[Generation]) \u2192 T[source]\u00b6\nParse a list of candidate model Generations into a specific format.\nParameters\nresult \u2013 A list of Generations to be parsed. The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/schema/langchain.schema.output_parser.BaseLLMOutputParser.html"} {"id": "9f4446ccd093-0", "text": "langchain.schema.messages.SystemMessage\u00b6\nclass langchain.schema.messages.SystemMessage(*, content: str, additional_kwargs: dict = None)[source]\u00b6\nBases: BaseMessage\nA Message for priming AI behavior, usually passed in as the first of a sequence\nof input messages.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam additional_kwargs: dict [Optional]\u00b6\nAny additional information.\nparam content: str [Required]\u00b6\nThe string contents of the message.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nWhether this class is LangChain serializable.\nproperty type: str\u00b6\nType of the message, used for serialization.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/schema/langchain.schema.messages.SystemMessage.html"} {"id": "bea61f87eecf-0", "text": "langchain.schema.memory.BaseMemory\u00b6\nclass langchain.schema.memory.BaseMemory[source]\u00b6\nBases: Serializable, ABC\nBase abstract class for memory in Chains.\nMemory refers to state in Chains. Memory can be used to store information about\npast executions of a Chain and inject that information into the inputs of\nfuture executions of the Chain. 
For example, for conversational Chains Memory\ncan be used to store conversations and automatically add them to future model\nprompts so that the model has the necessary context to respond coherently to\nthe latest input.\nExample\nclass SimpleMemory(BaseMemory):\n    memories: Dict[str, Any] = dict()\n    @property\n    def memory_variables(self) -> List[str]:\n        return list(self.memories.keys())\n    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:\n        return self.memories\n    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n        pass\n    def clear(self) -> None:\n        pass\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nabstract clear() \u2192 None[source]\u00b6\nClear memory contents.\nabstract load_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, Any][source]\u00b6\nReturn key-value pairs given the text input to the chain.\nabstract save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None[source]\u00b6\nSave the context of this chain run to memory.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.", "source": "https://api.python.langchain.com/en/latest/schema/langchain.schema.memory.BaseMemory.html"} {"id": "bea61f87eecf-1", "text": "serialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nabstract property memory_variables: List[str]\u00b6\nThe string keys this memory class will add to chain inputs.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/schema/langchain.schema.memory.BaseMemory.html"} {"id": "6b94e7400f5d-0", "text": "langchain.schema.agent.AgentFinish\u00b6\nclass langchain.schema.agent.AgentFinish(return_values: dict, log: str)[source]\u00b6\nBases: NamedTuple\nThe final return value of an ActionAgent.\nCreate new instance of AgentFinish(return_values, log)\nMethods\n__init__()\ncount(value,\u00a0/)\nReturn number of occurrences of value.\nindex(value[,\u00a0start,\u00a0stop])\nReturn first index of value.\nAttributes\nlog\nAdditional information to log about the return value\nreturn_values\nDictionary of return values.\ncount(value, /)\u00b6\nReturn number of occurrences of value.\nindex(value, start=0, stop=9223372036854775807, /)\u00b6\nReturn first index of value.\nRaises ValueError if the value is not present.\nlog: str\u00b6\nAdditional information to log about the return value\nreturn_values: dict\u00b6\nDictionary of return values.", "source": "https://api.python.langchain.com/en/latest/schema/langchain.schema.agent.AgentFinish.html"} {"id": "c366cbde7301-0", "text": "langchain.schema.output_parser.BaseOutputParser\u00b6\nclass langchain.schema.output_parser.BaseOutputParser[source]\u00b6\nBases: BaseLLMOutputParser, ABC, Generic[T]\nClass to parse the output of an LLM call.\nOutput parsers help structure language model responses.\nExample\nclass BooleanOutputParser(BaseOutputParser[bool]):\n    true_val: str = \"YES\"\n    false_val: str = \"NO\"\n    def parse(self, text: str) -> bool:\n        cleaned_text = text.strip().upper()\n        if cleaned_text not in (self.true_val.upper(), self.false_val.upper()):\n            raise OutputParserException(\n                f\"BooleanOutputParser expected output value to either be \"\n                f\"{self.true_val} or {self.false_val} (case-insensitive). \"\n                f\"Received {cleaned_text}.\"\n            )\n        return cleaned_text == self.true_val.upper()\n    @property\n    def _type(self) -> str:\n        return \"boolean_output_parser\"\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\ndict(**kwargs: Any) \u2192 Dict[source]\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str[source]\u00b6\nInstructions on how the LLM output should be formatted.\nabstract parse(text: str) \u2192 T[source]\u00b6\nParse a single string model output into some structure.\nParameters\ntext \u2013 String output of language model.\nReturns\nStructured output.\nparse_result(result: List[Generation]) \u2192 T[source]\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.\nParameters", "source": "https://api.python.langchain.com/en/latest/schema/langchain.schema.output_parser.BaseOutputParser.html"} {"id": "c366cbde7301-1", "text": "Parameters\nresult \u2013 A list of Generations to be parsed. 
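AgentFinish above is a plain NamedTuple, so constructing one is direct; a small sketch with illustrative values:

.. code-block:: python

    from langchain.schema.agent import AgentFinish

    finish = AgentFinish(
        return_values={"output": "The answer is 42."},
        log="Final answer reached.",
    )
    finish.return_values["output"]  # -> 'The answer is 42.'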
The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any[source]\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/schema/langchain.schema.output_parser.BaseOutputParser.html"} {"id": "60a2764fc792-0", "text": "langchain.schema.output_parser.NoOpOutputParser\u00b6\nclass langchain.schema.output_parser.NoOpOutputParser[source]\u00b6\nBases: BaseOutputParser[str]\n\u2018No operation\u2019 OutputParser that returns the text as is.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str\u00b6\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 str[source]\u00b6\nReturns the input text with no changes.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. 
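Another compact parser in the same pattern as the BooleanOutputParser example above; the class is hypothetical and shown only for illustration:

.. code-block:: python

    from typing import List

    from langchain.schema import BaseOutputParser

    class CommaSeparatedListOutputParser(BaseOutputParser[List[str]]):
        """Hypothetical parser that splits model output on commas."""

        def parse(self, text: str) -> List[str]:
            return [part.strip() for part in text.split(",")]

        def get_format_instructions(self) -> str:
            return "Answer with a comma-separated list."

    parser = CommaSeparatedListOutputParser()
    parser.parse("red, green, blue")  # -> ['red', 'green', 'blue']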
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/schema/langchain.schema.output_parser.NoOpOutputParser.html"} {"id": "60a2764fc792-1", "text": "property lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nWhether the class is LangChain serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/schema/langchain.schema.output_parser.NoOpOutputParser.html"} {"id": "70f82a476232-0", "text": "langchain.schema.document.BaseDocumentTransformer\u00b6\nclass langchain.schema.document.BaseDocumentTransformer[source]\u00b6\nBases: ABC\nAbstract base class for document transformation systems.\nA document transformation system takes a sequence of Documents and returns a\nsequence of transformed Documents.\nExample\nclass EmbeddingsRedundantFilter(BaseDocumentTransformer, BaseModel):\n    embeddings: Embeddings\n    similarity_fn: Callable = cosine_similarity\n    similarity_threshold: float = 0.95\n    class Config:\n        arbitrary_types_allowed = True\n    def transform_documents(\n        self, documents: Sequence[Document], **kwargs: Any\n    ) -> Sequence[Document]:\n        stateful_documents = get_stateful_documents(documents)\n        embedded_documents = _get_embeddings_from_stateful_docs(\n            self.embeddings, stateful_documents\n        )\n        included_idxs = _filter_similar_embeddings(\n            embedded_documents, self.similarity_fn, self.similarity_threshold\n        )\n        return [stateful_documents[i] for i in sorted(included_idxs)]\n    async def atransform_documents(\n        self, documents: Sequence[Document], **kwargs: Any\n    ) -> Sequence[Document]:\n        raise NotImplementedError\nMethods\n__init__()\natransform_documents(documents,\u00a0**kwargs)\nAsynchronously transform a list of documents.\ntransform_documents(documents,\u00a0**kwargs)\nTransform a list of documents.\nabstract async atransform_documents(documents: Sequence[Document], **kwargs: Any) \u2192 Sequence[Document][source]\u00b6\nAsynchronously transform a list of documents.\nParameters\ndocuments \u2013 A sequence of Documents to be transformed.\nReturns\nA list of transformed Documents.\nabstract transform_documents(documents: Sequence[Document], **kwargs: Any) \u2192 Sequence[Document][source]\u00b6\nTransform a list of documents.\nParameters\ndocuments \u2013 A sequence of Documents to be transformed.\nReturns", "source": "https://api.python.langchain.com/en/latest/schema/langchain.schema.document.BaseDocumentTransformer.html"} {"id": "70f82a476232-1", "text": "Parameters\ndocuments \u2013 A sequence of Documents to be transformed.\nReturns\nA list of transformed Documents.", "source": "https://api.python.langchain.com/en/latest/schema/langchain.schema.document.BaseDocumentTransformer.html"} {"id": "a7abe93395db-0", "text": "langchain.schema.prompt.PromptValue\u00b6\nclass langchain.schema.prompt.PromptValue[source]\u00b6\nBases: Serializable, ABC\nBase abstract class for inputs to any language model.\nPromptValues can be converted to both LLM (pure text-generation) inputs and\nChatModel inputs.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if 
the input data cannot be parsed to form a valid model.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nabstract to_messages() \u2192 List[BaseMessage][source]\u00b6\nReturn prompt as a list of Messages.\nabstract to_string() \u2192 str[source]\u00b6\nReturn prompt value as string.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/schema/langchain.schema.prompt.PromptValue.html"} {"id": "8de0f96a0b7a-0", "text": "langchain.schema.messages.FunctionMessage\u00b6\nclass langchain.schema.messages.FunctionMessage(*, content: str, additional_kwargs: dict = None, name: str)[source]\u00b6\nBases: BaseMessage\nA Message for passing the result of executing a function back to a model.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam additional_kwargs: dict [Optional]\u00b6\nAny additional information.\nparam content: str [Required]\u00b6\nThe string contents of the message.\nparam name: str [Required]\u00b6\nThe name of the function that was executed.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nWhether this class is LangChain serializable.\nproperty type: str\u00b6\nType of the message, used for serialization.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/schema/langchain.schema.messages.FunctionMessage.html"} {"id": "e145b614b429-0", "text": "langchain.schema.prompt_template.BasePromptTemplate\u00b6\nclass langchain.schema.prompt_template.BasePromptTemplate(*, input_variables: List[str], output_parser: Optional[BaseOutputParser] = None, partial_variables: Mapping[str, Union[str, Callable[[], str]]] = None)[source]\u00b6\nBases: Serializable, ABC\nBase class for all prompt templates, returning a prompt.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam input_variables: List[str] [Required]\u00b6\nA list of the names of the variables the prompt template expects.\nparam output_parser: Optional[langchain.schema.output_parser.BaseOutputParser] = None\u00b6\nHow to parse the output of calling an LLM on this formatted prompt.\nparam partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]\u00b6\ndict(**kwargs: Any) \u2192 Dict[source]\u00b6\nReturn dictionary representation of prompt.\nabstract format(**kwargs: Any) \u2192 str[source]\u00b6\nFormat the prompt with the inputs.\nParameters\nkwargs \u2013 Any arguments to be passed to the prompt template.\nReturns\nA formatted string.\nExample:\nprompt.format(variable1=\"foo\")\nabstract format_prompt(**kwargs: Any) \u2192 PromptValue[source]\u00b6\nCreate Chat Messages.\npartial(**kwargs: Union[str, Callable[[], str]]) \u2192 BasePromptTemplate[source]\u00b6\nReturn a partial of the prompt template.\nsave(file_path: Union[Path, str]) \u2192 None[source]\u00b6\nSave the prompt.\nParameters\nfile_path \u2013 Path of the file to save the prompt to.\nExample:\n.. code-block:: python\nprompt.save(file_path=\"path/prompt.yaml\")\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6", "source": "https://api.python.langchain.com/en/latest/schema/langchain.schema.prompt_template.BasePromptTemplate.html"} {"id": "e145b614b429-1", "text": "to_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_variable_names\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate variable names do not include restricted names.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/schema/langchain.schema.prompt_template.BasePromptTemplate.html"} {"id": "234af90ca736-0", "text": "langchain.schema.messages.HumanMessage\u00b6\nclass langchain.schema.messages.HumanMessage(*, content: str, additional_kwargs: dict = None, example: bool = False)[source]\u00b6\nBases: BaseMessage\nA Message from a human.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam additional_kwargs: dict [Optional]\u00b6\nAny additional information.\nparam content: str [Required]\u00b6\nThe string contents of the message.\nparam example: bool = False\u00b6\nWhether this Message is being passed in to the model as part of an example\nconversation.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nWhether this class is LangChain serializable.\nproperty type: str\u00b6\nType of the message, used for serialization.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/schema/langchain.schema.messages.HumanMessage.html"} {"id": "87239e949b9f-0", "text": "langchain.schema.messages.messages_to_dict\u00b6\nlangchain.schema.messages.messages_to_dict(messages: Sequence[BaseMessage]) \u2192 List[dict][source]\u00b6\nConvert a sequence of Messages to a list of dictionaries.\nParameters\nmessages \u2013 Sequence of messages (as BaseMessages) to convert.\nReturns\nList of messages as dicts.", "source": "https://api.python.langchain.com/en/latest/schema/langchain.schema.messages.messages_to_dict.html"} {"id": "3dcb309f96ca-0", "text": "langchain.schema.messages.AIMessage\u00b6\nclass langchain.schema.messages.AIMessage(*, content: str, additional_kwargs: dict = None, example: bool = False)[source]\u00b6\nBases: BaseMessage\nA Message from an AI.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam additional_kwargs: dict [Optional]\u00b6\nAny additional information.\nparam content: str [Required]\u00b6\nThe string contents of the message.\nparam example: bool = False\u00b6\nWhether this Message is being passed in to the model as part of an example\nconversation.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. 
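The message classes and the two conversion helpers documented here compose into a simple serialization round-trip; a minimal sketch:

.. code-block:: python

    from langchain.schema import AIMessage, HumanMessage, SystemMessage
    from langchain.schema.messages import messages_from_dict, messages_to_dict

    messages = [
        SystemMessage(content="You are a terse assistant."),
        HumanMessage(content="Hi, how are you?"),
        AIMessage(content="Good, how are you?"),
    ]

    as_dicts = messages_to_dict(messages)      # JSON-serializable list of dicts
    restored = messages_from_dict(as_dicts)    # back to BaseMessage subclasses
    assert [m.content for m in restored] == [m.content for m in messages]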
langchain.schema.messages.AIMessage

class langchain.schema.messages.AIMessage(*, content: str, additional_kwargs: dict = None, example: bool = False)
Bases: BaseMessage
A message from an AI.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param additional_kwargs: dict [Optional]
Any additional information.
param content: str [Required]
The string contents of the message.
param example: bool = False
Whether this message is being passed in to the model as part of an example conversation.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Whether this class is LangChain serializable.
property type: str
Type of the message, used for serialization.
model Config
Bases: object
extra = 'ignore'

langchain.schema.output.ChatGeneration

class langchain.schema.output.ChatGeneration(*, text: str = '', generation_info: Optional[Dict[str, Any]] = None, message: BaseMessage)
Bases: Generation
A single chat generation output.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param generation_info: Optional[Dict[str, Any]] = None
Raw response from the provider. May include things like the reason for finishing or token log probabilities.
param message: langchain.schema.messages.BaseMessage [Required]
The message output by the chat model.
param text: str = ''
SHOULD NOT BE SET DIRECTLY. The text contents of the output message.
validator set_text » all fields
Set the text attribute to the contents of the message.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Whether this class is LangChain serializable.
model Config
Bases: object
extra = 'ignore'
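A minimal sketch of the set_text behavior described above (an editorial example):

from langchain.schema.messages import AIMessage
from langchain.schema.output import ChatGeneration

generation = ChatGeneration(message=AIMessage(content="Hello!"))
# The set_text validator copies the message contents into `text`,
# which is why `text` should not be set directly.
print(generation.text)  # -> "Hello!"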
langchain.schema.output_parser.OutputParserException

class langchain.schema.output_parser.OutputParserException(error: Any, observation: Optional[str] = None, llm_output: Optional[str] = None, send_to_llm: bool = False)
Bases: ValueError
Exception that output parsers should raise to signify a parsing error.
This exists to differentiate parsing errors from other code or execution errors that may also arise inside the output parser. OutputParserExceptions will be available to catch and handle in ways that fix the parsing error, while other errors will be raised.
Parameters
error – The error being re-raised, or an error message.
observation – String explanation of the error, which can be passed to a model to try to remediate the issue.
llm_output – The model output string that caused the error.
send_to_llm – Whether to send the observation and llm_output back to an Agent after an OutputParserException has been raised. This gives the underlying model driving the agent the context that the previous output was improperly structured, in the hope that it will update the output to the correct format.
add_note()
Exception.add_note(note) – add a note to the exception.
with_traceback()
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
args

langchain.schema.output.ChatResult

class langchain.schema.output.ChatResult(*, generations: List[ChatGeneration], llm_output: Optional[dict] = None)
Bases: BaseModel
Class that contains all results for a single chat model call.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param generations: List[langchain.schema.output.ChatGeneration] [Required]
List of the chat generations. This is a list because an input can have multiple candidate generations.
param llm_output: Optional[dict] = None
For arbitrary LLM-provider-specific output.
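A sketch of raising and catching the exception in a custom parser (an editorial example; the parser class and its format are made up for illustration):

from typing import List

from langchain.schema.output_parser import BaseOutputParser, OutputParserException

class CommaSeparatedParser(BaseOutputParser):
    def parse(self, text: str) -> List[str]:
        if "," not in text:
            # Signal a recoverable parsing error rather than a generic failure.
            raise OutputParserException(
                f"Expected comma-separated values, got: {text!r}",
                llm_output=text,
            )
        return [part.strip() for part in text.split(",")]

parser = CommaSeparatedParser()
try:
    parser.parse("no commas here")
except OutputParserException as exc:
    print(f"Parsing failed: {exc}")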
langchain.schema.output.RunInfo

class langchain.schema.output.RunInfo(*, run_id: UUID)
Bases: BaseModel
Class that contains metadata for a single execution of a Chain or model.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param run_id: uuid.UUID [Required]
A unique identifier for the model or chain run.

langchain.schema.output.LLMResult

class langchain.schema.output.LLMResult(*, generations: List[List[Generation]], llm_output: Optional[dict] = None, run: Optional[List[RunInfo]] = None)
Bases: BaseModel
Class that contains all results for a batched LLM call.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param generations: List[List[langchain.schema.output.Generation]] [Required]
List of generated outputs. This is a List[List[]] because each input could have multiple candidate generations.
param llm_output: Optional[dict] = None
Arbitrary LLM-provider-specific output.
param run: Optional[List[langchain.schema.output.RunInfo]] = None
List of metadata info for the model call for each input.
flatten() → List[LLMResult]
Flatten generations into a single list.
Unpack List[List[Generation]] -> List[LLMResult], where each returned LLMResult contains only a single Generation. If token usage information is available, it is kept only for the LLMResult corresponding to the top-choice Generation, to avoid over-counting token usage downstream.
Returns
List of LLMResults, where each returned LLMResult contains a single Generation.
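A short sketch of flatten() (an editorial example; the sample texts and token counts are made up, and the token-usage handling is as described in the docstring above):

from langchain.schema.output import Generation, LLMResult

result = LLMResult(
    generations=[[Generation(text="first candidate"), Generation(text="second candidate")]],
    llm_output={"token_usage": {"total_tokens": 7}},
)
for single in result.flatten():
    # Each flattened LLMResult wraps a single Generation; token usage is
    # retained only for the one holding the top-choice Generation.
    print(single.generations, single.llm_output)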
langchain.schema.messages.ChatMessage

class langchain.schema.messages.ChatMessage(*, content: str, additional_kwargs: dict = None, role: str)
Bases: BaseMessage
A message that can be assigned an arbitrary speaker (i.e. role).
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param additional_kwargs: dict [Optional]
Any additional information.
param content: str [Required]
The string contents of the message.
param role: str [Required]
The speaker / role of the message.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Whether this class is LangChain serializable.
property type: str
Type of the message, used for serialization.
model Config
Bases: object
extra = 'ignore'

langchain.schema.prompt_template.format_document

langchain.schema.prompt_template.format_document(doc: Document, prompt: BasePromptTemplate) → str
Format a document into a string based on a prompt template.
First, this pulls information from the document from two sources:
page_content – takes the text from document.page_content and assigns it to a variable named page_content.
metadata – takes information from document.metadata and assigns it to variables of the same names.
Those variables are then passed into the prompt to produce a formatted string.
Parameters
doc – Document; its page_content and metadata will be used to create the final string.
prompt – BasePromptTemplate; will be used to format the page_content and metadata into the final string.
Returns
The formatted string of the document.
Example
from langchain.schema import Document
from langchain.prompts import PromptTemplate
doc = Document(page_content="This is a joke", metadata={"page": "1"})
prompt = PromptTemplate.from_template("Page {page}: {page_content}")
format_document(doc, prompt)
>>> "Page 1: This is a joke"

langchain.schema.document.Document

class langchain.schema.document.Document(*, page_content: str, metadata: dict = None)
Bases: Serializable
Class for storing a piece of text and associated metadata.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param metadata: dict [Optional]
Arbitrary metadata about the page content (e.g., source, relationships to other documents, etc.).
param page_content: str [Required]
String text.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
extra = 'ignore'
langchain.schema.language_model.BaseLanguageModel

class langchain.schema.language_model.BaseLanguageModel
Bases: Serializable, ABC
Abstract base class for interfacing with language models.
All language model wrappers inherit from BaseLanguageModel.
Exposes three main methods:
- generate_prompt: generate language model outputs for a sequence of prompt values. A prompt value is a model input that can be converted to any language model input format (string or messages).
- predict: pass a single string to a language model and return a string prediction.
- predict_messages: pass a sequence of BaseMessages (corresponding to a single model call) to a language model and return a BaseMessage prediction.
Each of these has an equivalent asynchronous method.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
abstract async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Callbacks = None, **kwargs: Any) → LLMResult
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
- take advantage of batched calls,
- need more output from the model than just the top generated value,
- are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model-provider-specific output.
abstract async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
abstract async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
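A minimal sketch of the asynchronous methods (an editorial example, assuming the langchain.chat_models.ChatOpenAI wrapper and an OPENAI_API_KEY in the environment):

import asyncio

from langchain.chat_models import ChatOpenAI  # any BaseLanguageModel subclass works

async def main() -> None:
    model = ChatOpenAI()  # assumes OPENAI_API_KEY is set in the environment
    # apredict mirrors predict but is awaitable, so many prompts can be
    # issued concurrently with asyncio.gather.
    reply = await model.apredict("Reply with a one-word greeting:", stop=["\n"])
    print(reply)

asyncio.run(main())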
abstract generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Callbacks = None, **kwargs: Any) → LLMResult
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
- take advantage of batched calls,
- need more output from the model than just the top generated value,
- are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model-provider-specific output.
get_num_tokens(text: str) → int
Get the number of tokens present in the text.
Useful for checking whether an input will fit in a model's context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
Get the number of tokens in the messages.
Useful for checking whether an input will fit in a model's context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]
Return the ordered IDs of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of IDs corresponding to the tokens in the text, in the order they occur in the text.
abstract predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
abstract predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
extra = 'ignore'
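A sketch of the synchronous convenience methods and token helpers (an editorial example, again assuming ChatOpenAI and an OPENAI_API_KEY):

from langchain.chat_models import ChatOpenAI
from langchain.schema.messages import HumanMessage

model = ChatOpenAI()  # assumes OPENAI_API_KEY is set in the environment

text_out = model.predict("Translate 'bonjour' to English.")            # str -> str
msg_out = model.predict_messages([HumanMessage(content="Say hello.")])  # messages -> BaseMessage

# Token helpers are useful for checking context-window budgets up front.
n_tokens = model.get_num_tokens("How many tokens is this sentence?")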
langchain.schema.memory.BaseChatMessageHistory

class langchain.schema.memory.BaseChatMessageHistory
Bases: ABC
Abstract base class for storing chat message history.
See ChatMessageHistory for a default implementation.
Example
import json
import os

from langchain.schema.messages import BaseMessage, messages_from_dict, messages_to_dict

class FileChatMessageHistory(BaseChatMessageHistory):
    storage_path: str
    session_id: str

    @property
    def messages(self):
        # Load the stored history and rebuild the message objects.
        with open(os.path.join(self.storage_path, self.session_id), "r", encoding="utf-8") as f:
            messages = json.loads(f.read())
        return messages_from_dict(messages)

    def add_message(self, message: BaseMessage) -> None:
        # Append the new message to the stored history as a dict.
        all_messages = messages_to_dict(self.messages)
        all_messages.append(messages_to_dict([message])[0])
        with open(os.path.join(self.storage_path, self.session_id), "w") as f:
            json.dump(all_messages, f)

    def clear(self):
        with open(os.path.join(self.storage_path, self.session_id), "w") as f:
            f.write("[]")

Methods
__init__()
add_ai_message(message)
Convenience method for adding an AI message string to the store.
add_message(message)
Add a Message object to the store.
add_user_message(message)
Convenience method for adding a human message string to the store.
clear()
Remove all messages from the store.
Attributes
messages
A list of messages stored in memory.
add_ai_message(message: str) → None
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
add_message(message: BaseMessage) → None
Add a Message object to the store.
Parameters
message – A BaseMessage object to store.
add_user_message(message: str) → None
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
abstract clear() → None
Remove all messages from the store.
messages: List[langchain.schema.messages.BaseMessage]
A list of messages stored in memory.
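A usage sketch of the default in-memory implementation referenced above (an editorial example; ChatMessageHistory is importable from langchain.memory in contemporary releases):

from langchain.memory import ChatMessageHistory  # default in-memory implementation

history = ChatMessageHistory()
history.add_user_message("What is LangChain?")
history.add_ai_message("A framework for building applications with LLMs.")
print(history.messages)  # [HumanMessage(...), AIMessage(...)]
history.clear()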
langchain.chains.llm_bash.base.LLMBashChain

class langchain.chains.llm_bash.base.LLMBashChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, llm_chain: LLMChain, llm: Optional[BaseLanguageModel] = None, input_key: str = 'question', output_key: str = 'answer', prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=BashOutputParser(), partial_variables={}, template='If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put "#!/bin/bash" in your answer. Make sure to reason step by step, using this format:\n\nQuestion: "copy the files in the directory named \'target\' into a new directory at the same level as target called \'myNewDirectory\'"\n\nI need to take the following actions:\n- List all files in the directory\n- Create a new directory\n- Copy the files from the first directory into the second directory\n```bash\nls\nmkdir myNewDirectory\ncp -r target/* myNewDirectory\n```\n\nThat is the format. Begin!\n\nQuestion: {question}', template_format='f-string', validate_template=True), bash_process: BashProcess = None)
Bases: Chain
Chain that interprets a prompt and executes bash code to perform bash operations.
Example
from langchain import LLMBashChain, OpenAI
llm_bash = LLMBashChain.from_llm(OpenAI())
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None
Deprecated; use callbacks instead.
param callbacks: Callbacks = None
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start and ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods; see the Callback docs for full details.
param llm: Optional[BaseLanguageModel] = None
[Deprecated] LLM wrapper to use.
param llm_chain: LLMChain [Required]
param memory: Optional[BaseMemory] = None
Optional memory object. Defaults to None.
Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory; please see the memory docs for the full catalog.
param metadata: Optional[Dict[str, Any]] = None
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case.
param prompt: BasePromptTemplate
[Deprecated] Defaults to the bash PromptTemplate shown in the class signature above.
param tags: Optional[List[str]] = None
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case.
param verbose: bool [Optional]
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Execute the chain.
Parameters
inputs – Dictionary of inputs, or a single input if the chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or a single input if the chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing the chain when there is a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and a 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict
Return a dictionary representation of the chain.
Expects the Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to the default pydantic.BaseModel.dict method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_llm(llm: BaseLanguageModel, prompt: BasePromptTemplate = ..., **kwargs: Any) → LLMBashChain
The prompt defaults to the bash PromptTemplate shown in the class signature above.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or a single input if the chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
Returns
A dictionary of all inputs, including those added by the chain's memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields
Raise a deprecation warning if callback_manager is used.
validator raise_deprecation » all fields
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing the chain when there is a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and a 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None
Save the chain.
Expects the Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to the file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
validator validate_prompt » all fields
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True
extra = 'forbid'
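An end-to-end usage sketch based on the class's own example (editorial; assumes an OpenAI API key is configured — and note that the chain executes model-generated shell commands, so run it only in an environment where that is safe):

from langchain import LLMBashChain, OpenAI

llm_bash = LLMBashChain.from_llm(OpenAI(temperature=0))
text = "Please write a bash script that prints 'Hello World' to the console."
print(llm_bash.run(text))  # the chain generates the commands, executes them, and returns the output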
langchain.chains.combine_documents.map_rerank.MapRerankDocumentsChain

class langchain.chains.combine_documents.map_rerank.MapRerankDocumentsChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, input_key: str = 'input_documents', output_key: str = 'output_text', llm_chain: LLMChain, document_variable_name: str, rank_key: str, answer_key: str, metadata_keys: Optional[List[str]] = None, return_intermediate_steps: bool = False)
Bases: BaseCombineDocumentsChain
Combine documents by mapping a chain over them, then reranking the results.
This algorithm calls an LLMChain on each input document. The LLMChain is expected to have an OutputParser that parses the result into both an answer (answer_key) and a score (rank_key). The answer with the highest score is then returned.
Example:
from langchain.chains import StuffDocumentsChain, LLMChain
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
from langchain.output_parsers.regex import RegexParser

document_variable_name = "context"
llm = OpenAI()
# The prompt here should take as an input variable the
# `document_variable_name`.
# The actual prompt will need to be a lot more complex; this is just
# an example.
prompt_template = (
    "Use the following context to tell me the chemical formula "
    "for water. Output both your answer and a score of how confident "
    "you are. Context: {context}"
)
output_parser = RegexParser(
    regex=r"(.*?)\nScore: (.*)",
    output_keys=["answer", "score"],
)
prompt = PromptTemplate(
    template=prompt_template,
    input_variables=["context"],
    output_parser=output_parser,
)
llm_chain = LLMChain(llm=llm, prompt=prompt)
chain = MapRerankDocumentsChain(
    llm_chain=llm_chain,
    document_variable_name=document_variable_name,
    rank_key="score",
    answer_key="answer",
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param answer_key: str [Required]
Key in the output of llm_chain to return as the answer.
param callback_manager: Optional[BaseCallbackManager] = None
Deprecated; use callbacks instead.
param callbacks: Callbacks = None
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start and ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods; see the Callback docs for full details.
param document_variable_name: str [Required]
The variable name in the llm_chain to put the documents in.
If there is only one variable in the llm_chain, this need not be provided.
param llm_chain: LLMChain [Required]
Chain to apply to each document individually.
param memory: Optional[BaseMemory] = None
Optional memory object. Defaults to None.
Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory; please see the memory docs for the full catalog.
param metadata: Optional[Dict[str, Any]] = None
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case.
param metadata_keys: Optional[List[str]] = None
Additional metadata from the chosen document to return.
param rank_key: str [Required]
Key in the output of llm_chain to rank on.
param return_intermediate_steps: bool = False
Return intermediate steps.
Intermediate steps include the results of calling llm_chain on each document.
param tags: Optional[List[str]] = None
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case.
param verbose: bool [Optional]
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Execute the chain.
Parameters
inputs – Dictionary of inputs, or a single input if the chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or a single input if the chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acombine_docs(docs: List[Document], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Tuple[str, dict]
Combine documents in a map-rerank manner.
Combine by first mapping the chain over all documents and then reranking the results.
Parameters
docs – List of documents to combine.
callbacks – Callbacks to be passed through.
**kwargs – Additional parameters to be passed to LLM calls (like other input variables besides the documents).
Returns
The first element returned is the single string output. The second element returned is a dictionary of other keys to return.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing the chain when there is a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and a 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
combine_docs(docs: List[Document], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Tuple[str, dict]
Combine documents in a map-rerank manner.
Combine by first mapping the chain over all documents and then reranking the results.
Parameters
docs – List of documents to combine.
callbacks – Callbacks to be passed through.
**kwargs – Additional parameters to be passed to LLM calls (like other input variables besides the documents).
Returns
The first element returned is the single string output. The second element returned is a dictionary of other keys to return.
dict(**kwargs: Any) → Dict
Return a dictionary representation of the chain.
Expects the Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to the default pydantic.BaseModel.dict method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
validator get_default_document_variable_name » all fields
Get the default document variable name, if not provided.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or a single input if the chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
Returns
A dictionary of all inputs, including those added by the chain's memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
prompt_length(docs: List[Document], **kwargs: Any) → Optional[int]
Return the prompt length given the documents passed in.
This can be used by a caller to determine whether passing in a list of documents would exceed a certain prompt length. This is useful when trying to ensure that the size of a prompt remains below a certain context limit.
Parameters
docs – List[Document], a list of documents to use to calculate the total prompt length.
Returns
None if the method does not depend on the prompt length; otherwise, the length of the prompt in tokens.
validator raise_callback_manager_deprecation » all fields
Raise a deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing the chain when there is a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and a 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None
Save the chain.
Expects the Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to the file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
validator validate_llm_output » all fields
Validate that the combine chain outputs a dictionary.
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True
extra = 'forbid'
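A usage sketch continuing the class example above (editorial; `chain` is the MapRerankDocumentsChain built there, an OpenAI API key is assumed, and the sample documents are made up):

from langchain.schema import Document

docs = [
    Document(page_content="Water has the chemical formula H2O."),
    Document(page_content="Table salt is NaCl."),
]
# combine_docs maps the LLM over each document, parses ("answer", "score")
# pairs with the RegexParser, and returns the answer with the highest score.
answer = chain.run(input_documents=docs)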
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.map_rerank.MapRerankDocumentsChain.html"} {"id": "6a82e8262c35-0", "text": "langchain.chains.flare.prompts.FinishedOutputParser\u00b6\nclass langchain.chains.flare.prompts.FinishedOutputParser(*, finished_value: str = 'FINISHED')[source]\u00b6\nBases: BaseOutputParser[Tuple[str, bool]]\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam finished_value: str = 'FINISHED'\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str\u00b6\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 Tuple[str, bool][source]\u00b6\nParse a single string model output into some structure.\nParameters\ntext \u2013 String output of language model.\nReturns\nStructured output.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, whichis assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.flare.prompts.FinishedOutputParser.html"} {"id": "6a82e8262c35-1", "text": "to_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
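A minimal usage sketch for the prompt_length method above, assuming an existing combine-documents chain instance `chain`, a list `docs` of Documents, and a 4,000-token budget (the names and the budget are illustrative, not part of the API):

# Trim the document list until the prompt fits the assumed context budget.
length = chain.prompt_length(docs=docs)
while length is not None and length > 4000 and docs:
    docs = docs[:-1]  # drop the last document and re-check
    length = chain.prompt_length(docs=docs)
result = chain.run(input_documents=docs, question="...")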
langchain.chains.flare.prompts.FinishedOutputParser
class langchain.chains.flare.prompts.FinishedOutputParser(*, finished_value: str = 'FINISHED')[source]
Bases: BaseOutputParser[Tuple[str, bool]]
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param finished_value: str = 'FINISHED'
dict(**kwargs: Any) → Dict
Return dictionary representation of output parser.
get_format_instructions() → str
Instructions on how the LLM output should be formatted.
parse(text: str) → Tuple[str, bool][source]
Parse a single string model output into some structure.
Parameters
text – String output of language model.
Returns
Structured output.
parse_result(result: List[Generation]) → T
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed to be different candidate outputs for a single model input.
Returns
Structured output.
parse_with_prompt(completion: str, prompt: PromptValue) → Any
Parse the output of an LLM call with the input prompt for context.
The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
Parameters
completion – String output of language model.
prompt – Input PromptValue.
Returns
Structured output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
extra = 'ignore'
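FinishedOutputParser is used by the FLARE chain to detect when a response is complete. A hedged sketch of calling it directly (the example string is invented; parse returns the response text together with a flag for whether the finished marker was seen):

from langchain.chains.flare.prompts import FinishedOutputParser

parser = FinishedOutputParser()  # finished_value defaults to 'FINISHED'
text, finished = parser.parse("The answer is 42. FINISHED")
# `text` is the response portion; `finished` is True when the marker appeared.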
langchain.chains.openai_functions.openapi.openapi_spec_to_openai_fn
langchain.chains.openai_functions.openapi.openapi_spec_to_openai_fn(spec: OpenAPISpec) → Tuple[List[Dict[str, Any]], Callable][source]
Convert a valid OpenAPI spec to the JSON Schema format expected for OpenAI functions.
Parameters
spec – OpenAPI spec to convert.
Returns
Tuple of the OpenAI functions JSON schema and a default function for executing a request based on the OpenAI function schema.
langchain.chains.openai_functions.base.convert_to_openai_function
langchain.chains.openai_functions.base.convert_to_openai_function(function: Union[Dict[str, Any], Type[BaseModel], Callable]) → Dict[str, Any][source]
Convert a raw function/class to an OpenAI function.
Parameters
function – Either a dictionary, a pydantic.BaseModel class, or a Python function. If a dictionary is passed in, it is assumed to already be a valid OpenAI function.
Returns
A dict version of the passed in function which is compatible with the OpenAI function-calling API.
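A short sketch of convert_to_openai_function with a hypothetical pydantic schema (the Weather class is invented for illustration; its docstring and field descriptions end up in the generated schema):

from pydantic import BaseModel, Field

from langchain.chains.openai_functions.base import convert_to_openai_function

class Weather(BaseModel):
    """Get the current weather in a given city."""
    city: str = Field(..., description="Name of the city")

# A dict with name/description/parameters keys, suitable for the
# `functions` argument of the OpenAI chat completions API.
weather_function = convert_to_openai_function(Weather)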
langchain.chains.api.openapi.response_chain.APIResponderOutputParser
class langchain.chains.api.openapi.response_chain.APIResponderOutputParser[source]
Bases: BaseOutputParser
Parse the response and error tags.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
dict(**kwargs: Any) → Dict
Return dictionary representation of output parser.
get_format_instructions() → str
Instructions on how the LLM output should be formatted.
parse(llm_output: str) → str[source]
Parse the response and error tags.
parse_result(result: List[Generation]) → T
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed to be different candidate outputs for a single model input.
Returns
Structured output.
parse_with_prompt(completion: str, prompt: PromptValue) → Any
Parse the output of an LLM call with the input prompt for context.
The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
Parameters
completion – String output of language model.
prompt – Input PromptValue.
Returns
Structured output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
extra = 'ignore'
langchain.chains.openai_functions.extraction.create_extraction_chain
langchain.chains.openai_functions.extraction.create_extraction_chain(schema: dict, llm: BaseLanguageModel) → Chain[source]
Creates a chain that extracts information from a passage.
Parameters
schema – The schema of the entities to extract.
llm – The language model to use.
Returns
Chain that can be used to extract information from a passage.
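A usage sketch for create_extraction_chain, assuming a model that supports the OpenAI function-calling API; the schema and passage are invented:

from langchain.chains.openai_functions.extraction import create_extraction_chain
from langchain.chat_models import ChatOpenAI

# Entity schema: which properties to pull out of the passage.
schema = {
    "properties": {
        "person_name": {"type": "string"},
        "person_height": {"type": "integer"},
    },
    "required": ["person_name"],
}
llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
chain = create_extraction_chain(schema, llm)
chain.run("Alex is 5 feet tall. Claudia is one foot taller than Alex.")
# -> a list of dicts, one per extracted entity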
langchain.chains.router.embedding_router.EmbeddingRouterChain
class langchain.chains.router.embedding_router.EmbeddingRouterChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, vectorstore: VectorStore, routing_keys: List[str] = ['query'])[source]
Bases: RouterChain
Class that uses embeddings to route between options.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None
Deprecated, use callbacks instead.
param callbacks: Callbacks = None
Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details.
param memory: Optional[BaseMemory] = None
Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog.
param metadata: Optional[Dict[str, Any]] = None
Optional metadata associated with the chain. Defaults to None. This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, for example, identify a specific instance of a chain with its use case.
param routing_keys: List[str] = ['query']
param tags: Optional[List[str]] = None
Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, for example, identify a specific instance of a chain with its use case.
param vectorstore: VectorStore [Required]
param verbose: bool [Optional]
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]
Call the chain on all inputs in the list.
async aroute(inputs: Dict[str, Any], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Route
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_names_and_descriptions(names_and_descriptions: Sequence[Tuple[str, Sequence[str]]], vectorstore_cls: Type[VectorStore], embeddings: Embeddings, **kwargs: Any) → EmbeddingRouterChain[source]
Convenience constructor.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields
Raise deprecation warning if callback_manager is used.
route(inputs: Dict[str, Any], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Route
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
property output_keys: List[str]
Return the keys expected to be in the chain output.
model Config[source]
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True
extra = 'forbid'
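A sketch of the from_names_and_descriptions convenience constructor documented above; the destination names, descriptions, and backing components are illustrative (any VectorStore class and Embeddings implementation should work):

from langchain.chains.router.embedding_router import EmbeddingRouterChain
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

names_and_descriptions = [
    ("physics", ["for questions about physics"]),
    ("history", ["for questions about history"]),
]
router = EmbeddingRouterChain.from_names_and_descriptions(
    names_and_descriptions,
    vectorstore_cls=Chroma,
    embeddings=OpenAIEmbeddings(),
)
# Routes on the 'query' key by default (see routing_keys above).
route = router.route({"query": "Who won the Battle of Hastings?"})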
langchain.chains.openai_functions.base.create_structured_output_chain
langchain.chains.openai_functions.base.create_structured_output_chain(output_schema: Union[Dict[str, Any], Type[BaseModel]], llm: BaseLanguageModel, prompt: BasePromptTemplate, *, output_parser: Optional[BaseLLMOutputParser] = None, **kwargs: Any) → LLMChain[source]
Create an LLMChain that uses an OpenAI function to get a structured output.
Parameters
output_schema – Either a dictionary or pydantic.BaseModel class. If a dictionary is passed in, it’s assumed to already be a valid JsonSchema. For best results, pydantic.BaseModels should have docstrings describing what the schema represents and descriptions for the parameters.
llm – Language model to use, assumed to support the OpenAI function-calling API.
prompt – BasePromptTemplate to pass to the model.
output_parser – BaseLLMOutputParser to use for parsing model outputs. By default will be inferred from the function types. If pydantic.BaseModels are passed in, then the OutputParser will try to parse outputs using those. Otherwise model outputs will simply be parsed as JSON.
Returns
An LLMChain that will pass the given function to the model.
Example
from typing import Optional

from langchain.chains.openai_functions import create_structured_output_chain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.schema import HumanMessage, SystemMessage
from pydantic import BaseModel, Field

class Dog(BaseModel):
    """Identifying information about a dog."""
    name: str = Field(..., description="The dog's name")
    color: str = Field(..., description="The dog's color")
    fav_food: Optional[str] = Field(None, description="The dog's favorite food")

llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
prompt_msgs = [
    SystemMessage(
        content="You are a world class algorithm for extracting information in structured formats."
    ),
    HumanMessage(content="Use the given format to extract information from the following input:"),
    HumanMessagePromptTemplate.from_template("{input}"),
    HumanMessage(content="Tips: Make sure to answer in the correct format"),
]
prompt = ChatPromptTemplate(messages=prompt_msgs)
chain = create_structured_output_chain(Dog, llm, prompt)
chain.run("Harry was a chubby brown beagle who loved chicken")
# -> Dog(name="Harry", color="brown", fav_food="chicken")
langchain.chains.prompt_selector.BasePromptSelector
class langchain.chains.prompt_selector.BasePromptSelector[source]
Bases: BaseModel, ABC
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
abstract get_prompt(llm: BaseLanguageModel) → BasePromptTemplate[source]
Get default prompt for a language model.
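BasePromptSelector only defines the abstract get_prompt hook. As a hedged sketch of how it is typically used, the same module also provides a ConditionalPromptSelector that picks a prompt based on the model type; the templates below are invented for illustration:

from langchain.chains.prompt_selector import ConditionalPromptSelector, is_chat_model
from langchain.prompts import PromptTemplate
from langchain.prompts.chat import ChatPromptTemplate

default_prompt = PromptTemplate.from_template("Answer concisely: {question}")
chat_prompt = ChatPromptTemplate.from_template("Answer concisely: {question}")

# get_prompt(llm) returns chat_prompt for chat models, default_prompt otherwise.
selector = ConditionalPromptSelector(
    default_prompt=default_prompt,
    conditionals=[(is_chat_model, chat_prompt)],
)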
langchain.chains.base.Chain
class langchain.chains.base.Chain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None)[source]
Bases: Serializable, ABC
Abstract base class for creating structured sequences of calls to components.
Chains should be used to encode a sequence of calls to components like models, document retrievers, other chains, etc., and provide a simple interface to this sequence.
The Chain interface makes it easy to create apps that are:
Stateful: add Memory to any Chain to give it state,
Observable: pass Callbacks to a Chain to execute additional functionality, like logging, outside the main sequence of component calls,
Composable: the Chain API is flexible enough that it is easy to combine Chains with other components, including other Chains.
The main methods exposed by chains are:
__call__: Chains are callable. The __call__ method is the primary way to execute a Chain. This takes inputs as a dictionary and returns a dictionary output.
run: A convenience method that takes inputs as args/kwargs and returns the output as a string. This method can only be used for a subset of chains and cannot return as rich an output as __call__.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None
Deprecated, use callbacks instead.
param callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None
Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details.
param memory: Optional[langchain.schema.memory.BaseMemory] = None
Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog.
param metadata: Optional[Dict[str, Any]] = None
Optional metadata associated with the chain. Defaults to None. This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, for example, identify a specific instance of a chain with its use case.
param tags: Optional[List[str]] = None
Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, for example, identify a specific instance of a chain with its use case.
param verbose: bool [Optional]
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any][source]
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any][source]
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]][source]
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str[source]
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict[source]
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str][source]
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str][source]
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields[source]
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str[source]
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None[source]
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose[source]
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
abstract property input_keys: List[str]
Return the keys expected to be in the chain input.
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
abstract property output_keys: List[str]
Return the keys expected to be in the chain output.
model Config[source]
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True
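To make the Chain interface concrete, here is a hedged sketch of a custom subclass; the private _call hook it overrides is not documented on this page, so treat the exact hook signature as an assumption:

from typing import Any, Dict, List, Optional

from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain.chains.base import Chain

class ConcatChain(Chain):
    """Toy chain that concatenates two input strings."""

    @property
    def input_keys(self) -> List[str]:
        return ["a", "b"]

    @property
    def output_keys(self) -> List[str]:
        return ["ab"]

    def _call(
        self,
        inputs: Dict[str, Any],
        run_manager: Optional[CallbackManagerForChainRun] = None,
    ) -> Dict[str, str]:
        # __call__ wraps this with callbacks, memory, and input/output preparation.
        return {"ab": inputs["a"] + inputs["b"]}

ConcatChain()({"a": "foo", "b": "bar"})
# -> {'a': 'foo', 'b': 'bar', 'ab': 'foobar'}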
Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.nebulagraph.NebulaGraphQAChain.html"} {"id": "dcfe4003658e-2", "text": "returned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified inChain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.nebulagraph.NebulaGraphQAChain.html"} {"id": "dcfe4003658e-3", "text": "these runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified inChain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.nebulagraph.NebulaGraphQAChain.html"} {"id": "dcfe4003658e-4", "text": "these runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to benull.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n..code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.nebulagraph.NebulaGraphQAChain.html"} {"id": "dcfe4003658e-5", "text": "classmethod from_llm(llm: BaseLanguageModel, *, qa_prompt: BasePromptTemplate = PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template=\"You are an assistant that helps to form nice and human understandable answers.\\nThe information part contains the provided information that you must use to construct an answer.\\nThe provided information is authorative, you must never doubt it or try to use your internal knowledge to correct it.\\nMake the answer sound as a response to the question. Do not mention that you based the result on the given information.\\nIf the provided information is empty, say that you don't know the answer.\\nInformation:\\n{context}\\n\\nQuestion: {question}\\nHelpful Answer:\", template_format='f-string', validate_template=True), ngql_prompt: BasePromptTemplate = PromptTemplate(input_variables=['schema', 'question'], output_parser=None, partial_variables={}, template=\"Task:Generate NebulaGraph Cypher statement to query a graph database.\\n\\nInstructions:\\n\\nFirst, generate cypher then convert it to NebulaGraph Cypher dialect(rather than standard):\\n1. it requires explicit label specification only when referring to node properties: v.`Foo`.name\\n2. note explicit label specification is not needed for edge properties, so it's e.name instead of e.`Bar`.name\\n3. 
it uses double equals sign for comparison: `==` rather than `=`\\nFor instance:\\n```diff\\n< MATCH (p:person)-[e:directed]->(m:movie) WHERE m.name = 'The Godfather II'\\n< RETURN p.name, e.year, m.name;\\n---\\n> MATCH (p:`person`)-[e:directed]->(m:`movie`) WHERE m.`movie`.`name` == 'The Godfather II'\\n> RETURN p.`person`.`name`, e.year,", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.nebulagraph.NebulaGraphQAChain.html"} {"id": "dcfe4003658e-6", "text": "== 'The Godfather II'\\n> RETURN p.`person`.`name`, e.year, m.`movie`.`name`;\\n```\\n\\nUse only the provided relationship types and properties in the schema.\\nDo not use any other relationship types or properties that are not provided.\\nSchema:\\n{schema}\\nNote: Do not include any explanations or apologies in your responses.\\nDo not respond to any questions that might ask anything else than for you to construct a Cypher statement.\\nDo not include any text except the generated Cypher statement.\\n\\nThe question is:\\n{question}\", template_format='f-string', validate_template=True), **kwargs: Any) \u2192 NebulaGraphQAChain[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.nebulagraph.NebulaGraphQAChain.html"} {"id": "dcfe4003658e-7", "text": "Initialize from LLM.\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. 
If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.nebulagraph.NebulaGraphQAChain.html"} {"id": "dcfe4003658e-8", "text": "as positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to benull.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.nebulagraph.NebulaGraphQAChain.html"} {"id": "dcfe4003658e-9", "text": "to_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.nebulagraph.NebulaGraphQAChain.html"} {"id": "ef075a049c0a-0", "text": "langchain.chains.sequential.SequentialChain\u00b6\nclass langchain.chains.sequential.SequentialChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, chains: List[Chain], input_variables: List[str], output_variables: List[str], return_all: bool = False)[source]\u00b6\nBases: Chain\nChain where the outputs of one chain feed directly into the next.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam chains: List[langchain.chains.base.Chain] [Required]\u00b6\nparam input_variables: List[str] [Required]\u00b6\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.sequential.SequentialChain.html"} {"id": "ef075a049c0a-1", "text": "for the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam return_all: bool = False\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console.
Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.sequential.SequentialChain.html"} {"id": "ef075a049c0a-2", "text": "callbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.sequential.SequentialChain.html"} {"id": "ef075a049c0a-3", "text": "addition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs.
Should contain all outputs specified in Chain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.sequential.SequentialChain.html"} {"id": "ef075a049c0a-4", "text": "addition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n..code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param.
Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.sequential.SequentialChain.html"} {"id": "ef075a049c0a-5", "text": "Returns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.sequential.SequentialChain.html"} {"id": "ef075a049c0a-6", "text": "these runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks.
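As a concrete illustration of the chains, input_variables, and output_variables parameters documented above, a minimal sketch (prompts and variable names are invented; an OpenAI key is assumed):

```python
# A hedged sketch: two LLMChains composed with SequentialChain. Prompts,
# variable names, and the topic are invented; assumes OPENAI_API_KEY is set.
from langchain.chains import LLMChain, SequentialChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0)

# Step 1 consumes 'topic' and produces 'outline'.
outline_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Write a one-line outline on {topic}."),
    output_key="outline",
)
# Step 2 consumes 'outline' and produces 'blurb'.
blurb_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Expand this into a short blurb: {outline}"),
    output_key="blurb",
)

overall = SequentialChain(
    chains=[outline_chain, blurb_chain],
    input_variables=["topic"],
    output_variables=["outline", "blurb"],  # expose intermediate and final keys
)
print(overall({"topic": "vector stores"}))
```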
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_chains\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the correct inputs exist for all chains.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.sequential.SequentialChain.html"} {"id": "ef075a049c0a-7", "text": "constructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg.
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.sequential.SequentialChain.html"} {"id": "23875f054c70-0", "text": "langchain.chains.openai_functions.tagging.create_tagging_chain\u00b6\nlangchain.chains.openai_functions.tagging.create_tagging_chain(schema: dict, llm: BaseLanguageModel) \u2192 Chain[source]\u00b6\nCreates a chain that extracts information from a passage.\nParameters\nschema \u2013 The schema of the entities to extract.\nllm \u2013 The language model to use.\nReturns\nChain (LLMChain) that can be used to extract information from a passage.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.tagging.create_tagging_chain.html"} {"id": "397338343e50-0", "text": "langchain.chains.combine_documents.refine.RefineDocumentsChain\u00b6\nclass langchain.chains.combine_documents.refine.RefineDocumentsChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, input_key: str = 'input_documents', output_key: str = 'output_text', initial_llm_chain: LLMChain, refine_llm_chain: LLMChain, document_variable_name: str, initial_response_name: str, document_prompt: BasePromptTemplate = None, return_intermediate_steps: bool = False)[source]\u00b6\nBases: BaseCombineDocumentsChain\nCombine documents by doing a first pass and then refining on more documents.\nThis algorithm first calls initial_llm_chain on the first document, passing\nthat first document in with the variable name document_variable_name, and\nproduces a new variable with the variable name initial_response_name.\nThen, it loops over every remaining document. This is called the \u201crefine\u201d step.\nIt calls refine_llm_chain,\npassing in that document with the variable name document_variable_name\nas well as the previous response with the variable name initial_response_name.\nExample\nfrom langchain.chains import RefineDocumentsChain, LLMChain\nfrom langchain.prompts import PromptTemplate\nfrom langchain.llms import OpenAI\n# This controls how each document will be formatted. Specifically,\n# it will be passed to `format_document` - see that function for more\n# details.\ndocument_prompt = PromptTemplate(\n input_variables=[\"page_content\"],\n template=\"{page_content}\"\n)\ndocument_variable_name = \"context\"\nllm = OpenAI()", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.refine.RefineDocumentsChain.html"} {"id": "397338343e50-1", "text": ")\ndocument_variable_name = \"context\"\nllm = OpenAI()\n# The prompt here should take as an input variable the\n# `document_variable_name`\nprompt = PromptTemplate.from_template(\n \"Summarize this content: {context}\"\n)\nllm_chain = LLMChain(llm=llm, prompt=prompt)\ninitial_response_name = \"prev_response\"\n# The prompt here should take as an input variable the\n# `document_variable_name` as well as `initial_response_name`\nprompt_refine = PromptTemplate.from_template(\n \"Here's your first summary: {prev_response}. 
\"\n \"Now add to it based on the following context: {context}\"\n)\nllm_chain_refine = LLMChain(llm=llm, prompt=prompt_refine)\nchain = RefineDocumentsChain(\n initial_llm_chain=initial_llm_chain,\n refine_llm_chain=refine_llm_chain,\n document_prompt=document_prompt,\n document_variable_name=document_variable_name,\n initial_response_name=initial_response_name,\n)\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam document_prompt: BasePromptTemplate [Optional]\u00b6\nPrompt to use to format each document, gets passed to format_document.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.refine.RefineDocumentsChain.html"} {"id": "397338343e50-2", "text": "Prompt to use to format each document, gets passed to format_document.\nparam document_variable_name: str [Required]\u00b6\nThe variable name in the initial_llm_chain to put the documents in.\nIf only one variable in the initial_llm_chain, this need not be provided.\nparam initial_llm_chain: LLMChain [Required]\u00b6\nLLM chain to use on initial document.\nparam initial_response_name: str [Required]\u00b6\nThe variable name to format the initial response in when refining.\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam refine_llm_chain: LLMChain [Required]\u00b6\nLLM chain to use when refining.\nparam return_intermediate_steps: bool = False\u00b6\nReturn the results of the refine steps in the output.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.refine.RefineDocumentsChain.html"} {"id": "397338343e50-3", "text": "param verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. 
Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.refine.RefineDocumentsChain.html"} {"id": "397338343e50-4", "text": "Returns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs.
Should contain all outputs specified in Chain.output_keys.\nasync acombine_docs(docs: List[Document], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Tuple[str, dict][source]\u00b6\nCombine by making an initial call on the first document, then refining the result over each remaining document.\nParameters", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.refine.RefineDocumentsChain.html"} {"id": "397338343e50-5", "text": "Combine by making an initial call on the first document, then refining the result over each remaining document.\nParameters\ndocs \u2013 List of documents to combine\ncallbacks \u2013 Callbacks to be passed through\n**kwargs \u2013 additional parameters to be passed to LLM calls (like other\ninput variables besides the documents)\nReturns\nThe first element returned is the single string output. The second\nelement returned is a dictionary of other keys to return.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.refine.RefineDocumentsChain.html"} {"id": "397338343e50-6", "text": "tags \u2013 List of string tags to pass to all callbacks.
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ncombine_docs(docs: List[Document], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Tuple[str, dict][source]\u00b6\nCombine by making an initial call on the first document, then refining the result over each remaining document.\nParameters\ndocs \u2013 List of documents to combine\ncallbacks \u2013 Callbacks to be passed through\n**kwargs \u2013 additional parameters to be passed to LLM calls (like other\ninput variables besides the documents)\nReturns\nThe first element returned is the single string output. The second\nelement returned is a dictionary of other keys to return.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n..code-block:: python", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.refine.RefineDocumentsChain.html"} {"id": "397338343e50-7", "text": "Returns\nA dictionary representation of the chain.\nExample\n..code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nvalidator get_default_document_variable_name\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nGet default document variable name, if not provided.\nvalidator get_return_intermediate_steps\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nFor backwards compatibility.\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs.
If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nprompt_length(docs: List[Document], **kwargs: Any) \u2192 Optional[int]\u00b6\nReturn the prompt length given the documents passed in.\nThis can be used by a caller to determine whether passing in a list\nof documents would exceed a certain prompt length. This is useful when\ntrying to ensure that the size of a prompt remains below a certain\ncontext limit.\nParameters\ndocs \u2013 List[Document], a list of documents to use to calculate the\ntotal prompt length.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.refine.RefineDocumentsChain.html"} {"id": "397338343e50-8", "text": "total prompt length.\nReturns\nReturns None if the method does not depend on the prompt length,\notherwise the length of the prompt in tokens.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks.
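A sketch of how a caller might pair prompt_length with combine_docs as described above, reusing the chain and docs from the earlier sketches (the token budget is an invented value):

```python
# A hedged sketch: guard against oversized prompts before combining, using
# the 'chain' and 'docs' from the sketches above. The 4000-token budget is
# an invented threshold, not a documented default.
budget = 4000
length = chain.prompt_length(docs=docs)
if length is not None and length > budget:
    docs = docs[: len(docs) // 2]  # naive truncation, for illustration only

# combine_docs returns (output_string, dict_of_extra_keys).
output, extras = chain.combine_docs(docs)
print(output, extras)
```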
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.refine.RefineDocumentsChain.html"} {"id": "397338343e50-9", "text": "Example\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.refine.RefineDocumentsChain.html"} {"id": "397338343e50-10", "text": "Configuration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.refine.RefineDocumentsChain.html"} {"id": "266c241dd753-0", "text": "langchain.chains.question_answering.__init__.load_qa_chain\u00b6\nlangchain.chains.question_answering.__init__.load_qa_chain(llm: BaseLanguageModel, chain_type: str = 'stuff', verbose: Optional[bool] = None, callback_manager: Optional[BaseCallbackManager] = None, **kwargs: Any) \u2192 BaseCombineDocumentsChain[source]\u00b6\nLoad question answering chain.\nParameters\nllm \u2013 Language Model to use in the chain.\nchain_type \u2013 Type of document combining chain to use. Should be one of \u201cstuff\u201d,\n\u201cmap_reduce\u201d, \u201cmap_rerank\u201d, or \u201crefine\u201d.\nverbose \u2013 Whether chains should be run in verbose mode or not.
Note that this\napplies to all chains that make up the final chain.\ncallback_manager \u2013 Callback manager to use for the chain.\nReturns\nA chain to use for question answering.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.question_answering.__init__.load_qa_chain.html"} {"id": "defc441b59ba-0", "text": "langchain.chains.question_answering.__init__.LoadingCallable\u00b6\nclass langchain.chains.question_answering.__init__.LoadingCallable(*args, **kwargs)[source]\u00b6\nBases: Protocol\nInterface for loading the combine documents chain.\nMethods\n__init__(*args,\u00a0**kwargs)\n__call__(llm: BaseLanguageModel, **kwargs: Any) \u2192 BaseCombineDocumentsChain[source]\u00b6\nCallable to load the combine documents chain.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.question_answering.__init__.LoadingCallable.html"} {"id": "98858f4afa60-0", "text": "langchain.chains.router.llm_router.RouterOutputParser\u00b6\nclass langchain.chains.router.llm_router.RouterOutputParser(*, default_destination: str = 'DEFAULT', next_inputs_type: ~typing.Type = , next_inputs_inner_key: str = 'input')[source]\u00b6\nBases: BaseOutputParser[Dict[str, str]]\nParser for output of router chain in the multi-prompt chain.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam default_destination: str = 'DEFAULT'\u00b6\nparam next_inputs_inner_key: str = 'input'\u00b6\nparam next_inputs_type: Type = \u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str\u00b6\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 Dict[str, Any][source]\u00b6\nParse a single string model output into some structure.\nParameters\ntext \u2013 String output of language model.\nReturns\nStructured output.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.router.llm_router.RouterOutputParser.html"} {"id": "98858f4afa60-1", "text": "the prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg.
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.router.llm_router.RouterOutputParser.html"} {"id": "b8c2bcb888b0-0", "text": "langchain.chains.mapreduce.MapReduceChain\u00b6\nclass langchain.chains.mapreduce.MapReduceChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, combine_documents_chain: BaseCombineDocumentsChain, text_splitter: TextSplitter, input_key: str = 'input_text', output_key: str = 'output_text')[source]\u00b6\nBases: Chain\nMap-reduce chain.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam combine_documents_chain: BaseCombineDocumentsChain [Required]\u00b6\nChain to use to combine documents.\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.mapreduce.MapReduceChain.html"} {"id": "b8c2bcb888b0-1", "text": "Optional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam text_splitter: TextSplitter [Required]\u00b6\nText splitter to use.\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. 
Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.mapreduce.MapReduceChain.html"} {"id": "b8c2bcb888b0-2", "text": "addition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.mapreduce.MapReduceChain.html"} {"id": "b8c2bcb888b0-3", "text": "these runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs.
Should contain all outputs specified in Chain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.mapreduce.MapReduceChain.html"} {"id": "b8c2bcb888b0-4", "text": "these runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n..code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nclassmethod from_params(llm: BaseLanguageModel, prompt: BasePromptTemplate, text_splitter: TextSplitter, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, combine_chain_kwargs: Optional[Mapping[str, Any]] = None, reduce_chain_kwargs: Optional[Mapping[str, Any]] = None, **kwargs: Any) \u2192 MapReduceChain[source]\u00b6\nConstruct a map-reduce chain that uses the chain for map and reduce.\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.", "source":
"https://api.python.langchain.com/en/latest/chains/langchain.chains.mapreduce.MapReduceChain.html"} {"id": "b8c2bcb888b0-5", "text": "Validate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.mapreduce.MapReduceChain.html"} {"id": "b8c2bcb888b0-6", "text": "a single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.mapreduce.MapReduceChain.html"} {"id": "b8c2bcb888b0-7", "text": "to_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg.
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.mapreduce.MapReduceChain.html"} {"id": "ac9bd0bd64ad-0", "text": "langchain.chains.llm_summarization_checker.base.LLMSummarizationCheckerChain\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.llm_summarization_checker.base.LLMSummarizationCheckerChain.html"} {"id": "ac9bd0bd64ad-1", "text": "class langchain.chains.llm_summarization_checker.base.LLMSummarizationCheckerChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, sequential_chain: SequentialChain, llm: Optional[BaseLanguageModel] = None, create_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['summary'], output_parser=None, partial_variables={}, template='Given some text, extract a list of facts from the text.\\n\\nFormat your output as a bulleted list.\\n\\nText:\\n\"\"\"\\n{summary}\\n\"\"\"\\n\\nFacts:', template_format='f-string', validate_template=True), check_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.\\n\\nHere is a bullet point list of facts:\\n\"\"\"\\n{assertions}\\n\"\"\"\\n\\nFor each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output \"Undetermined\".\\nIf the fact is false, explain why.\\n\\n', template_format='f-string', validate_template=True), revised_summary_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'summary'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.\\n\\nChecked Assertions:\\n\"\"\"\\n{checked_assertions}\\n\"\"\"\\n\\nOriginal", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.llm_summarization_checker.base.LLMSummarizationCheckerChain.html"} {"id": "ac9bd0bd64ad-2", "text": "Assertions:\\n\"\"\"\\n{checked_assertions}\\n\"\"\"\\n\\nOriginal Summary:\\n\"\"\"\\n{summary}\\n\"\"\"\\n\\nUsing these checked assertions, rewrite the original summary to be completely true.\\n\\nThe output should have the same structure and formatting as the original summary.\\n\\nSummary:', template_format='f-string', validate_template=True), are_all_true_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false.\\n\\nIf all of the assertions are true, return \"True\". 
class langchain.chains.llm_summarization_checker.base.LLMSummarizationCheckerChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, sequential_chain: SequentialChain, llm: Optional[BaseLanguageModel] = None, create_assertions_prompt: PromptTemplate = <default>, check_assertions_prompt: PromptTemplate = <default>, revised_summary_prompt: PromptTemplate = <default>, are_all_true_prompt: PromptTemplate = <default>, input_key: str = 'query', output_key: str = 'result', max_checks: int = 2)[source]
(The default PromptTemplate values are reproduced in full in the parameter documentation below.)
Bases: Chain
Chain for question-answering with self-verification.
Example
from langchain import OpenAI, LLMSummarizationCheckerChain
llm = OpenAI(temperature=0.0)
checker_chain = LLMSummarizationCheckerChain.from_llm(llm)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
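Continuing the example above, a minimal invocation sketch. It relies on the default input_key ('query') and output_key ('result') shown in the signature; the summary text itself is a made-up placeholder:
summary = "The telescope captured the first images of a distant exoplanet..."
# run() accepts the single input directly and returns the revised,
# fact-checked summary as a string (the 'result' output key).
revised = checker_chain.run(summary)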
param are_all_true_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false.\n\nIf all of the assertions are true, return "True". If any of the assertions are false, return "False".\n\nHere are some examples:\n===\n\nChecked Assertions: """\n- The sky is red: False\n- Water is made of lava: False\n- The sun is a star: True\n"""\nResult: False\n\n===\n\nChecked Assertions: """\n- The sky is blue: True\n- Water is wet: True\n- The sun is a star: True\n"""\nResult: True\n\n===\n\nChecked Assertions: """\n- The sky is blue - True\n- Water is made of lava- False\n- The sun is a star - True\n"""\nResult: False\n\n===\n\nChecked Assertions:"""\n{checked_assertions}\n"""\nResult:', template_format='f-string', validate_template=True)
[Deprecated]
param callback_manager: Optional[BaseCallbackManager] = None
Deprecated, use callbacks instead.
param callbacks: Callbacks = None
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details.
param check_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.\n\nHere is a bullet point list of facts:\n"""\n{assertions}\n"""\n\nFor each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined".\nIf the fact is false, explain why.\n\n', template_format='f-string', validate_template=True)
[Deprecated]
param create_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['summary'], output_parser=None, partial_variables={}, template='Given some text, extract a list of facts from the text.\n\nFormat your output as a bulleted list.\n\nText:\n"""\n{summary}\n"""\n\nFacts:', template_format='f-string', validate_template=True)
[Deprecated]
param llm: Optional[BaseLanguageModel] = None
[Deprecated] LLM wrapper to use.
param max_checks: int = 2
Maximum number of times to check the assertions. Defaults to double-checking.
param memory: Optional[BaseMemory] = None
Optional memory object. Defaults to None.
Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog.
param metadata: Optional[Dict[str, Any]] = None
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to e.g. identify a specific instance of a chain with its use case.
param revised_summary_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'summary'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.\n\nChecked Assertions:\n"""\n{checked_assertions}\n"""\n\nOriginal Summary:\n"""\n{summary}\n"""\n\nUsing these checked assertions, rewrite the original summary to be completely true.\n\nThe output should have the same structure and formatting as the original summary.\n\nSummary:', template_format='f-string', validate_template=True)
[Deprecated]
param sequential_chain: SequentialChain [Required]
param tags: Optional[List[str]] = None
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to e.g. identify a specific instance of a chain with its use case.
param verbose: bool [Optional]
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_llm(llm: BaseLanguageModel, create_assertions_prompt: PromptTemplate = <default>, check_assertions_prompt: PromptTemplate = <default>, revised_summary_prompt: PromptTemplate = <default>, are_all_true_prompt: PromptTemplate = <default>, verbose: bool = False, **kwargs: Any) → LLMSummarizationCheckerChain[source]
(The defaults are the same prompt templates documented in the parameter list above.)
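A construction sketch using from_llm. verbose is part of the signature above; passing max_checks assumes that from_llm forwards **kwargs to the constructor, which is an assumption here:
from langchain import OpenAI
llm = OpenAI(temperature=0.0)
# verbose=True prints the intermediate assertion-checking steps;
# max_checks=3 is assumed to reach the constructor via **kwargs.
checker_chain = LLMSummarizationCheckerChain.from_llm(llm, verbose=True, max_checks=3)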
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
Returns
A dictionary of all inputs, including those added by the chain's memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields
Raise deprecation warning if callback_manager is used.
validator raise_deprecation » all fields[source]
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks.
These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config[source]
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True
extra = 'forbid'

langchain.chains.qa_with_sources.loading.load_qa_with_sources_chain
langchain.chains.qa_with_sources.loading.load_qa_with_sources_chain(llm: BaseLanguageModel, chain_type: str = 'stuff', verbose: Optional[bool] = None, **kwargs: Any) → BaseCombineDocumentsChain[source]
Load question answering with sources chain.
Parameters
llm – Language Model to use in the chain.
chain_type – Type of document combining chain to use. Should be one of "stuff", "map_reduce", "refine" and "map_rerank".
verbose – Whether chains should be run in verbose mode or not. Note that this applies to all chains that make up the final chain.
Returns
A chain to use for question answering with sources.
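A brief usage sketch. The Document contents are placeholders, and the 'input_documents'/'question' input keys are assumed from the combine-documents-chain conventions documented elsewhere on this page:
from langchain.llms import OpenAI
from langchain.schema import Document
from langchain.chains.qa_with_sources.loading import load_qa_with_sources_chain

docs = [Document(page_content="...", metadata={"source": "doc-1"})]
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")
# The returned combine-documents chain takes the documents plus a question
# and produces an answer that cites the documents' 'source' metadata.
result = chain({"input_documents": docs, "question": "What does the document say?"})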
langchain.chains.openai_functions.openapi.get_openapi_chain
langchain.chains.openai_functions.openapi.get_openapi_chain(spec: Union[OpenAPISpec, str], llm: Optional[BaseLanguageModel] = None, prompt: Optional[BasePromptTemplate] = None, request_chain: Optional[Chain] = None, llm_chain_kwargs: Optional[Dict] = None, verbose: bool = False, headers: Optional[Dict] = None, params: Optional[Dict] = None, **kwargs: Any) → SequentialChain[source]
Create a chain for querying an API from an OpenAPI spec.
Parameters
spec – OpenAPISpec or url/file/text string corresponding to one.
llm – language model, should be an OpenAI function-calling model, e.g. ChatOpenAI(model="gpt-3.5-turbo-0613").
prompt – Main prompt template to use.
request_chain – Chain for taking the functions output and executing the request.
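A usage sketch under stated assumptions: the spec URL is hypothetical, and calling the resulting SequentialChain with run() assumes it exposes a single string output:
from langchain.chat_models import ChatOpenAI
from langchain.chains.openai_functions.openapi import get_openapi_chain

chain = get_openapi_chain(
    "https://example.com/openapi.json",  # hypothetical spec URL
    llm=ChatOpenAI(model="gpt-3.5-turbo-0613"),
)
# The LLM picks an operation from the spec via function calling; the
# request_chain then executes the corresponding HTTP request.
response = chain.run("List the items available in the catalog.")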
langchain.chains.natbot.crawler.ElementInViewPort
class langchain.chains.natbot.crawler.ElementInViewPort[source]
Bases: TypedDict
A typed dictionary containing information about elements in the viewport.
Methods
__init__(*args, **kwargs)
clear()
copy()
fromkeys([value])
Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[, d])
If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
setdefault(key[, default])
Insert key with a value of default if key is not in the dictionary.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v. In either case, this is followed by: for k in F: D[k] = F[k].
values()
Attributes
node_index
backend_node_id
node_name
node_value
node_meta
is_clickable
origin_x
origin_y
center_x
center_y
clear() → None. Remove all items from D.
copy() → a shallow copy of D
fromkeys(value=None, /)
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)
Return the value for key if key is in the dictionary, else default.
items() → a set-like object providing a view on D's items
keys() → a set-like object providing a view on D's keys
pop(k[, d]) → v, remove specified key and return the corresponding value.
If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem()
Remove and return a (key, value) pair as a 2-tuple. Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.
setdefault(key, default=None, /)
Insert key with a value of default if key is not in the dictionary. Return the value for key if key is in the dictionary, else default.
update([E, ]**F) → None. Update D from dict/iterable E and F.
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v. In either case, this is followed by: for k in F: D[k] = F[k].
values() → an object providing a view on D's values
backend_node_id: int
center_x: int
center_y: int
is_clickable: bool
node_index: str
node_meta: List[str]
node_name: Optional[str]
node_value: Optional[str]
origin_x: int
origin_y: int
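Because ElementInViewPort is a TypedDict, instances are plain dicts. A minimal sketch using the attribute types listed above; the values themselves are made up:
from langchain.chains.natbot.crawler import ElementInViewPort

element: ElementInViewPort = {
    "node_index": "12",          # str, per the annotations above
    "backend_node_id": 42,
    "node_name": "BUTTON",
    "node_value": None,          # Optional[str]
    "node_meta": ["role=button"],
    "is_clickable": True,
    "origin_x": 100,
    "origin_y": 200,
    "center_x": 120,
    "center_y": 210,
}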
langchain.chains.llm_math.base.LLMMathChain
class langchain.chains.llm_math.base.LLMMathChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, llm_chain: LLMChain, llm: Optional[BaseLanguageModel] = None, prompt: BasePromptTemplate = <default>, input_key: str = 'question', output_key: str = 'answer')[source]
(The default prompt is reproduced in full under the prompt parameter below.)
Bases: Chain
Chain that interprets a prompt and executes Python code to do math.
Example
from langchain import LLMMathChain, OpenAI
llm_math = LLMMathChain.from_llm(OpenAI())
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None
Deprecated, use callbacks instead.
param callbacks: Callbacks = None
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details.
param llm: Optional[BaseLanguageModel] = None
[Deprecated] LLM wrapper to use.
param llm_chain: LLMChain [Required]
param memory: Optional[BaseMemory] = None
Optional memory object. Defaults to None.
Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog.
param metadata: Optional[Dict[str, Any]] = None
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to e.g. identify a specific instance of a chain with its use case.
param prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Translate a math problem into a expression that can be executed using Python\'s numexpr library. Use the output of running this code to answer the question.\n\nQuestion: ${{Question with math problem.}}\n```text\n${{single line mathematical expression that solves the problem}}\n```\n...numexpr.evaluate(text)...\n```output\n${{Output of running the code}}\n```\nAnswer: ${{Answer}}\n\nBegin.\n\nQuestion: What is 37593 * 67?\n```text\n37593 * 67\n```\n...numexpr.evaluate("37593 * 67")...\n```output\n2518731\n```\nAnswer: 2518731\n\nQuestion: 37593^(1/5)\n```text\n37593**(1/5)\n```\n...numexpr.evaluate("37593**(1/5)")...\n```output\n8.222831614237718\n```\nAnswer: 8.222831614237718\n\nQuestion: {question}\n', template_format='f-string', validate_template=True)
[Deprecated] Prompt to use to translate to python if necessary.
param tags: Optional[List[str]] = None
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to e.g. identify a specific instance of a chain with its use case.
param verbose: bool [Optional]
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value.
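A minimal invocation sketch. The exact answer string depends on the model, so the shown output is illustrative only; it follows the "Answer: ..." format of the default prompt above:
from langchain import LLMMathChain, OpenAI

llm_math = LLMMathChain.from_llm(OpenAI(temperature=0))
# The chain asks the LLM to emit a numexpr expression, evaluates it,
# and returns the result under the 'answer' output key.
answer = llm_math.run("What is 37593 * 67?")
# -> "Answer: 2518731"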
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_llm(llm: BaseLanguageModel, prompt: BasePromptTemplate = <default>, **kwargs: Any) → LLMMathChain[source]
(The default is the prompt template documented under the prompt parameter above.)
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
Returns
A dictionary of all inputs, including those added by the chain's memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields
Raise deprecation warning if callback_manager is used.
validator raise_deprecation » all fields[source]
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config[source]
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True
extra = 'forbid'

langchain.chains.combine_documents.stuff.StuffDocumentsChain
class langchain.chains.combine_documents.stuff.StuffDocumentsChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, input_key: str = 'input_documents', output_key: str = 'output_text', llm_chain: LLMChain, document_prompt: BasePromptTemplate = None, document_variable_name: str, document_separator: str = '\n\n')[source]
Bases: BaseCombineDocumentsChain
Chain that combines documents by stuffing into context.
This chain takes a list of documents and first combines them into a single string. It does this by formatting each document into a string with the document_prompt and then joining them together with document_separator. It then adds that new string to the inputs with the variable name set by document_variable_name. Those inputs are then passed to the llm_chain.
Example
from langchain.chains import StuffDocumentsChain, LLMChain
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
# This controls how each document will be formatted. Specifically,
# it will be passed to `format_document` - see that function for more
# details.
document_prompt = PromptTemplate(
    input_variables=["page_content"],
    template="{page_content}"
)
document_variable_name = "context"
llm = OpenAI()
# The prompt here should take as an input variable the
# `document_variable_name`
prompt = PromptTemplate.from_template(
    "Summarize this content: {context}"
)
llm_chain = LLMChain(llm=llm, prompt=prompt)
chain = StuffDocumentsChain(
    llm_chain=llm_chain,
    document_prompt=document_prompt,
    document_variable_name=document_variable_name
)
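A short follow-on sketch invoking the chain built in the example above. The Document contents are placeholders; run() with the default 'input_documents' input key returns the summary string (the 'output_text' output key):
from langchain.schema import Document

docs = [
    Document(page_content="LangChain provides composable chain abstractions."),
    Document(page_content="StuffDocumentsChain concatenates documents into one prompt."),
]
# Each document is formatted with document_prompt, joined with
# document_separator, and passed to llm_chain as {context}.
summary = chain.run(input_documents=docs)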
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None
Deprecated, use callbacks instead.
param callbacks: Callbacks = None
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details.
param document_prompt: langchain.schema.prompt_template.BasePromptTemplate [Optional]
Prompt to use to format each document, gets passed to format_document.
param document_separator: str = '\n\n'
The string with which to join the formatted documents.
param document_variable_name: str [Required]
The variable name in the llm_chain to put the documents in. If only one variable is in the llm_chain, this need not be provided.
param llm_chain: langchain.chains.llm.LLMChain [Required]
LLM chain which is called with the formatted document string, along with any other inputs.
param memory: Optional[BaseMemory] = None
Optional memory object. Defaults to None.
Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog.
param metadata: Optional[Dict[str, Any]] = None
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to e.g. identify a specific instance of a chain with its use case.
param tags: Optional[List[str]] = None
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to e.g. identify a specific instance of a chain with its use case.
param verbose: bool [Optional]
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acombine_docs(docs: List[Document], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Tuple[str, dict][source]
Stuff all documents into one prompt and pass to LLM.
Parameters
docs – List of documents to join together into one variable
callbacks – Optional callbacks to pass along
**kwargs – additional parameters to use to get inputs to LLMChain.
Returns
The first element returned is the single string output. The second element returned is a dictionary of other keys to return.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
combine_docs(docs: List[Document], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Tuple[str, dict][source]
Stuff all documents into one prompt and pass to LLM.
Parameters
docs – List of documents to join together into one variable
callbacks – Optional callbacks to pass along
**kwargs – additional parameters to use to get inputs to LLMChain.
Returns
The first element returned is the single string output. The second element returned is a dictionary of other keys to return.
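Since combine_docs returns a Tuple[str, dict], a minimal sketch (docs as in the earlier example):
output, extra_info = chain.combine_docs(docs)
# output is the single string produced by the llm_chain;
# extra_info is a dictionary of any other keys to return (often empty).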
dict(**kwargs: Any) → Dict
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
validator get_default_document_variable_name » all fields[source]
Get default document variable name, if not provided.
If only one variable is present in the llm_chain.prompt, we can infer that the formatted documents should be passed in with this variable name.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
Returns
A dictionary of all inputs, including those added by the chain's memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
prompt_length(docs: List[Document], **kwargs: Any) → Optional[int][source]
Return the prompt length given the documents passed in.
This can be used by a caller to determine whether passing in a list of documents would exceed a certain prompt length. This is useful when trying to ensure that the size of a prompt remains below a certain context limit.
Parameters
docs – List[Document], a list of documents to use to calculate the total prompt length.
Returns
Returns None if the method does not depend on the prompt length, otherwise the length of the prompt in tokens.
validator raise_callback_manager_deprecation » all fields
Raise deprecation warning if callback_manager is used.
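A sketch of using prompt_length to guard a context budget before combining; the 3000-token threshold is an arbitrary example value:
length = chain.prompt_length(docs)
if length is not None and length > 3000:
    # Too many documents for the model's context window; shrink the batch.
    docs = docs[:1]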
validator raise_callback_manager_deprecation » all fields
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config[source]
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True
extra = 'forbid'
langchain.chains.qa_with_sources.retrieval.RetrievalQAWithSourcesChain
class langchain.chains.qa_with_sources.retrieval.RetrievalQAWithSourcesChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, combine_documents_chain: BaseCombineDocumentsChain, question_key: str = 'question', input_docs_key: str = 'docs', answer_key: str = 'answer', sources_answer_key: str = 'sources', return_source_documents: bool = False, retriever: BaseRetriever, reduce_k_below_max_tokens: bool = False, max_tokens_limit: int = 3375)[source]
Bases: BaseQAWithSourcesChain
Question-answering with sources over an index.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None
Deprecated, use callbacks instead.
param callbacks: Callbacks = None
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details.
param combine_documents_chain: BaseCombineDocumentsChain [Required]
Chain to use to combine documents.
param max_tokens_limit: int = 3375
Restrict the docs to return from store based on tokens; enforced only for StuffDocumentsChain and only if reduce_k_below_max_tokens is set to true.
param memory: Optional[BaseMemory] = None
Optional memory object. Defaults to None.
Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory – please see memory docs for the full catalog.
param metadata: Optional[Dict[str, Any]] = None
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to e.g. identify a specific instance of a chain with its use case.
param reduce_k_below_max_tokens: bool = False
Reduce the number of results to return from store based on tokens limit.
param retriever: langchain.schema.retriever.BaseRetriever [Required]
Index to connect to.
param return_source_documents: bool = False
Return the source documents.
param tags: Optional[List[str]] = None
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to e.g. identify a specific instance of a chain with its use case.
param verbose: bool [Optional]
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
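For orientation, a hedged end-to-end sketch: it assumes an existing vector store named vectorstore, builds the chain with from_chain_type (documented further below), and then invokes it via __call__; the question is illustrative:
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.llms import OpenAI

# `vectorstore` is an assumed, pre-built vector store (e.g. Chroma or FAISS).
chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
)

result = chain({"question": "What did the president say about ARPA-H?"})
print(result["answer"])   # generated answer
print(result["sources"])  # source ids the answer was drawn from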
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]
Call the chain on all inputs in the list.
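apply is the batch counterpart of __call__; a small sketch reusing the chain from the example above (the questions are illustrative):
inputs = [
    {"question": "What is ARPA-H?"},
    {"question": "What did the president say about Officer Mora?"},
]
# One dict of outputs per input, e.g. {"answer": ..., "sources": ...}.
results = chain.apply(inputs)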
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_chain_type(llm: BaseLanguageModel, chain_type: str = 'stuff', chain_type_kwargs: Optional[dict] = None, **kwargs: Any) → BaseQAWithSourcesChain
Load chain from chain type.
classmethod from_llm(llm: BaseLanguageModel, document_prompt: BasePromptTemplate = PromptTemplate(input_variables=['page_content', 'source'], output_parser=None, partial_variables={}, template='Content: {page_content}\nSource: {source}', template_format='f-string', validate_template=True), question_prompt: BasePromptTemplate = PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template='Use the following portion of a long document to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\n{context}\nQuestion: {question}\nRelevant text, if any:', template_format='f-string', validate_template=True), combine_prompt: BasePromptTemplate = PromptTemplate(input_variables=['summaries', 'question'], output_parser=None, partial_variables={}, template='Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES"). \nIf you don\'t know the answer, just say that you don\'t know. Don\'t try to make up an answer.\nALWAYS return a "SOURCES" part in your answer.\n\nQUESTION: Which state/country\'s law governs the interpretation of the contract?\n=========\nContent: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts in relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for an injunction or other relief to protect its Intellectual Property Rights.\nSource: 28-pl\nContent: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other) right or remedy.\n\n11.7 Severability. The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall not affect the continuation in force of the remainder of the term (if any) and this Agreement.\n\n11.8 No Agency. Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of any kind between the parties.\n\n11.9 No Third-Party Beneficiaries.\nSource: 30-pl\nContent: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (as defined in Clause 8.5) or that such a violation is reasonably likely to occur,\nSource: 4-pl\n=========\nFINAL ANSWER: This Agreement is governed by English law.\nSOURCES: 28-pl\n\nQUESTION: What did the president say about Michael Jackson?\n=========\nContent: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n\nGroups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.\nSource: 0-pl\nContent: And we won’t stop. \n\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \n\nLet’s use this moment to reset. Let’s stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease. \n\nLet’s stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans. \n\nWe can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together. \n\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n\nOfficer Mora was 27 years old. \n\nOfficer Rivera was 22. \n\nBoth Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. \n\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.\nSource: 24-pl\nContent: And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards. \n\nTo all Americans, I will be honest with you, as I’ve always promised. A Russian dictator, invading a foreign country, has costs around the world. \n\nAnd I’m taking robust action to make sure the pain of our sanctions is targeted at Russia’s economy. And I will use every tool at our disposal to protect American businesses and consumers. \n\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. \n\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \n\nThese steps will help blunt gas prices here at home. And I know the news about what’s happening can seem alarming. \n\nBut I want you to know that we are going to be okay.\nSource: 5-pl\nContent: More support for patients and families. \n\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \n\nIt’s based on DARPA—the Defense Department project that led to the Internet, GPS, and so much more. \n\nARPA-H will have a singular purpose—to drive breakthroughs in cancer, Alzheimer’s, diabetes, and more. \n\nA unity agenda for the nation. \n\nWe can do this. \n\nMy fellow Americans—tonight, we have gathered in a sacred space—the citadel of our democracy. \n\nIn this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. \n\nWe have fought for freedom, expanded liberty, defeated totalitarianism and terror. \n\nAnd built the strongest, freest, and most prosperous nation the world has ever known. \n\nNow is the hour. \n\nOur moment of responsibility. \n\nOur test of resolve and conscience, of history itself. \n\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. \n\nWell I know this nation.\nSource: 34-pl\n=========\nFINAL ANSWER: The president did not mention Michael Jackson.\nSOURCES:\n\nQUESTION: {question}\n=========\n{summaries}\n=========\nFINAL ANSWER:', template_format='f-string', validate_template=True), **kwargs: Any) → BaseQAWithSourcesChain
Construct the chain from an LLM.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
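The default prompts above can be swapped out through from_llm; a hedged sketch with a custom document_prompt, where the template text, vector store, and model choice are all assumptions:
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# Illustrative replacement for the default document_prompt shown above.
document_prompt = PromptTemplate.from_template(
    "Excerpt: {page_content}\nCited as: {source}"
)

chain = RetrievalQAWithSourcesChain.from_llm(
    llm=OpenAI(temperature=0),
    document_prompt=document_prompt,
    retriever=vectorstore.as_retriever(),  # assumed existing vector store
)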
validator raise_callback_manager_deprecation » all fields
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
validator validate_naming » all fields
Fix backwards compatibility in naming.
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True
extra = 'forbid'
langchain.chains.query_constructor.ir.StructuredQuery
class langchain.chains.query_constructor.ir.StructuredQuery(*, query: str, filter: Optional[FilterDirective] = None, limit: Optional[int] = None)[source]
Bases: Expr
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param filter: Optional[langchain.chains.query_constructor.ir.FilterDirective] = None
param limit: Optional[int] = None
param query: str [Required]
accept(visitor: Visitor) → Any
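A small sketch of building this IR by hand, assuming the Comparison filter directive from the same module; the attribute name, value, and limit are illustrative:
from langchain.chains.query_constructor.ir import (
    Comparator,
    Comparison,
    StructuredQuery,
)

# IR for something like: "rock songs shorter than 180 seconds", top 5 hits.
sq = StructuredQuery(
    query="rock songs",
    filter=Comparison(comparator=Comparator.LT, attribute="length", value=180),
    limit=5,
)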
langchain.chains.query_constructor.base.StructuredQueryOutputParser
class langchain.chains.query_constructor.base.StructuredQueryOutputParser(*, ast_parse: Callable)[source]
Bases: BaseOutputParser[StructuredQuery]
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ast_parse: Callable [Required]
Callable that parses dict into internal representation of query language.
dict(**kwargs: Any) → Dict
Return dictionary representation of output parser.
classmethod from_components(allowed_comparators: Optional[Sequence[Comparator]] = None, allowed_operators: Optional[Sequence[Operator]] = None) → StructuredQueryOutputParser[source]
get_format_instructions() → str
Instructions on how the LLM output should be formatted.
parse(text: str) → StructuredQuery[source]
Parse a single string model output into some structure.
Parameters
text – String output of language model.
Returns
Structured output.
parse_result(result: List[Generation]) → T
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed to be different candidate outputs for a single model input.
Returns
Structured output.
parse_with_prompt(completion: str, prompt: PromptValue) → Any
Parse the output of an LLM call with the input prompt for context.
The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
Parameters
completion – String output of language model.
prompt – Input PromptValue.
Returns
Structured output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
extra = 'ignore'
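A sketch of the round trip this parser performs. It assumes the model emitted the JSON-in-markdown block that the default query-constructor prompt requests, and that the filter grammar uses quoted attribute names; the raw text below is illustrative, not guaranteed output:
from langchain.chains.query_constructor.base import StructuredQueryOutputParser

parser = StructuredQueryOutputParser.from_components()

# Illustrative model output in the format get_format_instructions() describes.
raw = """```json
{
    "query": "rock songs",
    "filter": "lt(\\"length\\", 180)"
}
```"""

structured = parser.parse(raw)  # -> StructuredQuery(query=..., filter=..., ...)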
langchain.chains.router.multi_retrieval_qa.MultiRetrievalQAChain
class langchain.chains.router.multi_retrieval_qa.MultiRetrievalQAChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, router_chain: LLMRouterChain, destination_chains: Mapping[str, BaseRetrievalQA], default_chain: Chain, silent_errors: bool = False)[source]
Bases: MultiRouteChain
A multi-route chain that uses an LLM router chain to choose amongst retrieval QA chains.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None
Deprecated, use callbacks instead.
param callbacks: Callbacks = None
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details.
param default_chain: Chain [Required]
Default chain to use when router doesn’t map input to one of the destinations.
param destination_chains: Mapping[str, BaseRetrievalQA] [Required]
Map of name to candidate chains that inputs can be routed to.
param memory: Optional[BaseMemory] = None
Optional memory object. Defaults to None.
Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory – please see memory docs for the full catalog.
param metadata: Optional[Dict[str, Any]] = None
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to e.g. identify a specific instance of a chain with its use case.
param router_chain: LLMRouterChain [Required]
Chain for deciding a destination chain and the input to it.
param silent_errors: bool = False
If True, use default_chain when an invalid destination name is provided. Defaults to False.
param tags: Optional[List[str]] = None
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to e.g. identify a specific instance of a chain with its use case.
param verbose: bool [Optional]
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_retrievers(llm: BaseLanguageModel, retriever_infos: List[Dict[str, Any]], default_retriever: Optional[BaseRetriever] = None, default_prompt: Optional[PromptTemplate] = None, default_chain: Optional[Chain] = None, **kwargs: Any) → MultiRetrievalQAChain[source]
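from_retrievers has no docstring here, so a hedged usage sketch may help: each retriever_infos entry carries a name, a description for the router, and a retriever. The two vector stores below are assumed to exist; names and questions are illustrative:
from langchain.chains.router.multi_retrieval_qa import MultiRetrievalQAChain
from langchain.llms import OpenAI

retriever_infos = [
    {
        "name": "state of the union",
        "description": "Good for questions about the State of the Union address",
        "retriever": sotu_vectorstore.as_retriever(),
    },
    {
        "name": "personal notes",
        "description": "Good for questions about my personal notes",
        "retriever": notes_vectorstore.as_retriever(),
    },
]

chain = MultiRetrievalQAChain.from_retrievers(OpenAI(), retriever_infos)
# The router picks a destination QA chain based on the descriptions above.
chain.run("What did the president say about the economy?")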
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True
extra = 'forbid'
langchain.chains.query_constructor.base.load_query_constructor_chain
langchain.chains.query_constructor.base.load_query_constructor_chain(llm: BaseLanguageModel, document_contents: str, attribute_info: List[AttributeInfo], examples: Optional[List] = None, allowed_comparators: Optional[Sequence[Comparator]] = None, allowed_operators: Optional[Sequence[Operator]] = None, enable_limit: bool = False, **kwargs: Any) → LLMChain[source]
Load a query constructor chain.
Parameters
llm – BaseLanguageModel to use for the chain.
document_contents – The contents of the document to be queried.
attribute_info – A list of AttributeInfo objects describing the attributes of the document.
examples – Optional list of examples to use for the chain.
allowed_comparators – An optional list of allowed comparators.
allowed_operators – An optional list of allowed operators.
enable_limit – Whether to enable the limit operator. Defaults to False.
**kwargs –
Returns
An LLMChain that can be used to construct queries.
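A hedged sketch of loading and running the query constructor; it assumes AttributeInfo lives in langchain.chains.query_constructor.schema, that the prompt's input variable is named query, and that predict_and_parse applies the prompt's output parser to yield a StructuredQuery (attribute names and the query text are illustrative):
from langchain.chains.query_constructor.base import load_query_constructor_chain
from langchain.chains.query_constructor.schema import AttributeInfo
from langchain.llms import OpenAI

attribute_info = [
    AttributeInfo(name="genre", description="The genre of the song", type="string"),
    AttributeInfo(name="length", description="Song length in seconds", type="integer"),
]

chain = load_query_constructor_chain(
    llm=OpenAI(temperature=0),
    document_contents="Lyrics of a song",
    attribute_info=attribute_info,
    enable_limit=True,
)

# predict_and_parse runs the LLMChain and applies the output parser.
structured_query = chain.predict_and_parse(query="rock songs under three minutes")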
langchain.chains.graph_qa.kuzu.KuzuQAChain
class langchain.chains.graph_qa.kuzu.KuzuQAChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, graph: KuzuGraph, cypher_generation_chain: LLMChain, qa_chain: LLMChain, input_key: str = 'query', output_key: str = 'result')[source]
Bases: Chain
Chain for question-answering against a graph by generating Cypher statements for Kùzu.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None
Deprecated, use callbacks instead.
param callbacks: Callbacks = None
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details.
param cypher_generation_chain: LLMChain [Required]
param graph: KuzuGraph [Required]
param memory: Optional[BaseMemory] = None
Optional memory object. Defaults to None.
Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory – please see memory docs for the full catalog.
param metadata: Optional[Dict[str, Any]] = None
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to e.g. identify a specific instance of a chain with its use case.
param qa_chain: LLMChain [Required]
param tags: Optional[List[str]] = None
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to e.g. identify a specific instance of a chain with its use case.
param verbose: bool [Optional]
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_llm(llm: BaseLanguageModel, *, qa_prompt: BasePromptTemplate = PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template="You are an assistant that helps to form nice and human understandable answers.\nThe information part contains the provided information that you must use to construct an answer.\nThe provided information is authorative, you must never doubt it or try to use your internal knowledge to correct it.\nMake the answer sound as a response to the question. Do not mention that you based the result on the given information.\nIf the provided information is empty, say that you don't know the answer.\nInformation:\n{context}\n\nQuestion: {question}\nHelpful Answer:", template_format='f-string', validate_template=True), cypher_prompt: BasePromptTemplate = PromptTemplate(input_variables=['schema', 'question'], output_parser=None, partial_variables={}, template='Task:Generate Kùzu Cypher statement to query a graph database.\n\nInstructions:\n\nGenerate statement with Kùzu Cypher dialect (rather than standard):\n1. do not use `WHERE EXISTS` clause to check the existence of a property because Kùzu database has a fixed schema.\n2. do not omit relationship pattern. Always use `()-[]->()` instead of `()->()`.\n3. do not include any notes or comments even if the statement does not produce the expected result.\n```\n\nUse only the provided relationship types and properties in the schema.\nDo not use any other relationship types or properties that are not provided.\nSchema:\n{schema}\nNote: Do not include any explanations or apologies in your responses.\nDo not respond to any questions that might ask anything else than for you to construct a Cypher statement.\nDo not include any text except the generated Cypher statement.\n\nThe question is:\n{question}', template_format='f-string', validate_template=True), **kwargs: Any) → KuzuQAChain[source]¶
Initialize from LLM.
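A minimal construction sketch for from_llm. It assumes a Kùzu database already populated with a graph; the database path, model choice, and question are illustrative, and the graph object is passed through **kwargs to the chain's graph field:
import kuzu
from langchain.chains import KuzuQAChain
from langchain.chat_models import ChatOpenAI
from langchain.graphs import KuzuGraph

db = kuzu.Database("test_db")      # existing Kùzu database (assumed)
graph = KuzuGraph(db)              # schema is read from the database
chain = KuzuQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)
chain.run("Who played in The Godfather: Part II?")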
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
Returns
A dictionary of all inputs, including those added by the chain's memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
langchain.chains.llm_requests.LLMRequestsChain¶
class langchain.chains.llm_requests.LLMRequestsChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, llm_chain: LLMChain, requests_wrapper: TextRequestsWrapper = None, text_length: int = 8000, requests_key: str = 'requests_result', input_key: str = 'url', output_key: str = 'output')[source]¶
Bases: Chain
Chain that hits a URL and then uses an LLM to parse results.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details.
param llm_chain: LLMChain [Required]¶
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, for example, identify a specific instance of a chain with its use case.
param requests_wrapper: TextRequestsWrapper [Optional]¶
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, for example, identify a specific instance of a chain with its use case.
param text_length: int = 8000¶
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
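A minimal end-to-end sketch based on the fields above: the chain fetches the page at inputs['url'], truncates the text to text_length characters, injects it into the prompt under the requests_result key, and returns the LLM's answer under the output key. The URL and template wording are illustrative:
from langchain.chains import LLMChain, LLMRequestsChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

template = (
    "Here is the text of a web page:\n{requests_result}\n\n"
    "Answer the question '{query}' using only the text above."
)
prompt = PromptTemplate(input_variables=["query", "requests_result"], template=template)
chain = LLMRequestsChain(llm_chain=LLMChain(llm=OpenAI(temperature=0), prompt=prompt))
result = chain({"query": "What is this page about?", "url": "https://example.com"})
# -> {"query": ..., "url": ..., "output": "..."}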
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
Returns
A dictionary of all inputs, including those added by the chain's memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
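When a chain declares a single input key, as this chain does with its default input_key of 'url', prep_inputs will coerce a bare value into the full input dictionary. A small sketch of that behavior (the return value shown is illustrative; any extra prompt variables still need the dictionary form):
chain.prep_inputs("https://example.com")
# -> {"url": "https://example.com"}  # single value mapped onto the chain's one input key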
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that the API key and Python package exist in the environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
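The requests_wrapper field documented above controls how the page is fetched. A hedged sketch of supplying one with custom headers; TextRequestsWrapper lives in langchain.requests, and the header value and the reuse of llm_chain from the earlier sketch are assumptions:
from langchain.requests import TextRequestsWrapper

wrapper = TextRequestsWrapper(headers={"User-Agent": "my-bot/0.1"})  # headers are optional
chain = LLMRequestsChain(llm_chain=llm_chain, requests_wrapper=wrapper)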
langchain.chains.router.base.MultiRouteChain¶
class langchain.chains.router.base.MultiRouteChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, router_chain: RouterChain, destination_chains: Mapping[str, Chain], default_chain: Chain, silent_errors: bool = False)[source]¶
Bases: Chain
Use a single chain to route an input to one of multiple candidate chains.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details.
param default_chain: Chain [Required]¶
Default chain to use when none of the destination chains are suitable.
param destination_chains: Mapping[str, Chain] [Required]¶
Chains that return final answer to inputs.
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, for example, identify a specific instance of a chain with its use case.
param router_chain: RouterChain [Required]¶
Chain that routes inputs to destination chains.
param silent_errors: bool = False¶
If True, use default_chain when an invalid destination name is provided. Defaults to False.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, for example, identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value.
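A minimal wiring sketch for the fields above. It assumes router_chain, two destination chains, and a default chain were built elsewhere (for example with LLMRouterChain and LLMChain); the destination names are illustrative:
from langchain.chains.router.base import MultiRouteChain

chain = MultiRouteChain(
    router_chain=router_chain,        # decides where each input goes
    destination_chains={
        "physics": physics_chain,     # names must match the router's destinations
        "math": math_chain,
    },
    default_chain=default_chain,      # fallback when no destination fits
    silent_errors=True,               # send unknown destinations to default_chain
)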
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
Returns
A dictionary of all inputs, including those added by the chain's memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
langchain.chains.llm.LLMChain¶
class langchain.chains.llm.LLMChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, prompt: BasePromptTemplate, llm: BaseLanguageModel, output_key: str = 'text', output_parser: BaseLLMOutputParser = None, return_final_only: bool = True, llm_kwargs: dict = None)[source]¶
Bases: Chain
Chain to run queries against LLMs.
Example
from langchain import LLMChain, OpenAI, PromptTemplate
prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(
    input_variables=["adjective"], template=prompt_template
)
llm_chain = LLMChain(llm=OpenAI(), prompt=prompt)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details.
param llm: BaseLanguageModel [Required]¶
Language model to call.
param llm_kwargs: dict [Optional]¶
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, for example, identify a specific instance of a chain with its use case.
param output_parser: BaseLLMOutputParser [Optional]¶
Output parser to use.
Defaults to one that takes the most likely string but does not change it otherwise.
param prompt: BasePromptTemplate [Required]¶
Prompt object to use.
param return_final_only: bool = True¶
Whether to return only the final parsed result. Defaults to True. If false, will return a bunch of extra information about the generation.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, for example, identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]][source]¶
Utilize the LLM generate method for speed gains.
async aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]][source]¶
Call apply and then parse the results.
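apply and aapply batch all prompts into a single generate call, which is the inexpensive way to run many inputs. A minimal sketch reusing the llm_chain built in the class example above (the outputs shown are illustrative):
jokes = llm_chain.apply([
    {"adjective": "funny"},
    {"adjective": "dry"},
])
# -> [{"text": "..."}, {"text": "..."}]  # one dict per input, under the default 'text' output key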
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → LLMResult[source]¶
Generate LLM result from inputs.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]][source]¶
Utilize the LLM generate method for speed gains.
apply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]][source]¶
Call apply and then parse the results.
async apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str[source]¶
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = await llm_chain.apredict(adjective="funny")
async apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, str]][source]¶
Call apredict and then parse the results.
async aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]][source]¶
Prepare prompts from inputs.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
create_outputs(llm_result: LLMResult) → List[Dict[str, Any]][source]¶
Create outputs from response.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_string(llm: BaseLanguageModel, template: str) → LLMChain[source]¶
Create LLMChain from LLM and template.
generate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → LLMResult[source]¶
Generate LLM result from inputs.
predict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str[source]¶
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm_chain.predict(adjective="funny")
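from_string is the shortest way to get an LLMChain when the defaults are acceptable. A minimal sketch combining it with predict, reusing the template from the class example above:
from langchain.chains import LLMChain
from langchain.llms import OpenAI

llm_chain = LLMChain.from_string(llm=OpenAI(), template="Tell me a {adjective} joke")
completion = llm_chain.predict(adjective="funny")  # returns the 'text' output as a string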
predict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, Any]][source]¶
Call predict and then parse the results.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
Returns
A dictionary of all inputs, including those added by the chain's memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
prep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]][source]¶
Prepare prompts from inputs.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
langchain.chains.router.base.Route¶
class langchain.chains.router.base.Route(destination, next_inputs)[source]¶
Bases: NamedTuple
Create new instance of Route(destination, next_inputs)
Methods
__init__()
count(value, /)
Return number of occurrences of value.
index(value[, start, stop])
Return first index of value.
Attributes
destination
Alias for field number 0
next_inputs
Alias for field number 1
count(value, /)¶
Return number of occurrences of value.
index(value, start=0, stop=9223372036854775807, /)¶
Return first index of value.
Raises ValueError if the value is not present.
destination: Optional[str]¶
Alias for field number 0
next_inputs: Dict[str, Any]¶
Alias for field number 1
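Route is the small record a router chain hands back. A sketch of constructing and unpacking one; the destination name and inputs are illustrative:
from langchain.chains.router.base import Route

route = Route(destination="physics", next_inputs={"input": "Why is the sky blue?"})
route.destination   # -> "physics"; a None destination means "use the default chain"
route.next_inputs   # -> {"input": "Why is the sky blue?"}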
langchain.chains.router.base.RouterChain¶
class langchain.chains.router.base.RouterChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None)[source]¶
Bases: Chain, ABC
Chain that outputs the name of a destination chain and the inputs to it.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details.
param memory: Optional[langchain.schema.memory.BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, for example, identify a specific instance of a chain with its use case.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, for example, identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
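RouterChain is abstract (note the ABC base), so in practice you usually instantiate a concrete subclass such as LLMRouterChain. A hedged sketch; the router prompt construction is elided because its exact shape depends on your destination names:
from langchain.chains.router.llm_router import LLMRouterChain
from langchain.llms import OpenAI

# router_prompt: a PromptTemplate wired with a router output parser,
# built elsewhere for your destination names (assumed)
router_chain = LLMRouterChain.from_llm(OpenAI(), router_prompt)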
These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.router.base.RouterChain.html"} {"id": "f3fd2e8fb8d6-3", "text": "to False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync aroute(inputs: Dict[str, Any], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Route[source]\u00b6\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.router.base.RouterChain.html"} {"id": "f3fd2e8fb8d6-4", "text": "these runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n.. code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.router.base.RouterChain.html"} {"id": "f3fd2e8fb8d6-5", "text": "Validate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nroute(inputs: Dict[str, Any], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Route[source]\u00b6\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. 
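Example (illustrative sketch, not from the source docs). route() packages the chain's output into the Route named tuple documented above; the router instance, destination_chains dict, and default_chain below are hypothetical stand-ins:
# router: a concrete RouterChain; destination_chains: Dict[str, Chain]; default_chain: Chain
route = router.route({"input": "Why is the sky blue?"})
if route.destination is None:  # destination is Optional[str]
    answer = default_chain.run(route.next_inputs)
else:
    answer = destination_chains[route.destination].run(route.next_inputs)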
If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.router.base.RouterChain.html"} {"id": "f3fd2e8fb8d6-6", "text": "tags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nabstract property input_keys: List[str]\u00b6\nReturn the keys expected to be in the chain input.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.router.base.RouterChain.html"} {"id": "f3fd2e8fb8d6-7", "text": "property lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty output_keys: List[str]\u00b6\nReturn the keys expected to be in the chain output.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.router.base.RouterChain.html"} {"id": "edc40e127d45-0", "text": "langchain.chains.api.openapi.response_chain.APIResponderChain\u00b6\nclass langchain.chains.api.openapi.response_chain.APIResponderChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, prompt: BasePromptTemplate, llm: BaseLanguageModel, output_key: str = 'text', output_parser: BaseLLMOutputParser = None, return_final_only: bool = True, llm_kwargs: dict = None)[source]\u00b6\nBases: LLMChain\nGet the response parser.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam llm: BaseLanguageModel [Required]\u00b6\nLanguage model to call.\nparam llm_kwargs: dict [Optional]\u00b6\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.response_chain.APIResponderChain.html"} {"id": "edc40e127d45-1", "text": "There are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam output_key: str = 'text'\u00b6\nparam output_parser: BaseLLMOutputParser [Optional]\u00b6\nOutput parser to use.\nDefaults to one that takes the most likely string but does not change it\notherwise.\nparam prompt: BasePromptTemplate [Required]\u00b6\nPrompt object to use.\nparam return_final_only: bool = True\u00b6\nWhether to return only the final parsed result. Defaults to True.\nIf false, will return a bunch of extra information about the generation.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. 
Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.response_chain.APIResponderChain.html"} {"id": "edc40e127d45-2", "text": "Execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nasync aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nUtilize the LLM generate method for speed gains.\nasync aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Union[str, List[str], Dict[str, str]]]\u00b6\nCall apply and then parse the results.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.response_chain.APIResponderChain.html"} {"id": "edc40e127d45-3", "text": "Call apply and then parse the results.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. 
If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nasync agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) \u2192 LLMResult\u00b6\nGenerate LLM result from inputs.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.response_chain.APIResponderChain.html"} {"id": "edc40e127d45-4", "text": "Generate LLM result from inputs.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nUtilize the LLM generate method for speed gains.\napply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Union[str, List[str], Dict[str, str]]]\u00b6\nCall apply and then parse the results.\nasync apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 str\u00b6\nFormat prompt with kwargs and pass to LLM.\nParameters\ncallbacks \u2013 Callbacks to pass to LLMChain\n**kwargs \u2013 Keys to pass to prompt template.\nReturns\nCompletion from LLM.\nExample\ncompletion = llm.predict(adjective=\"funny\")\nasync apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[str, List[str], Dict[str, str]]\u00b6\nCall apredict and then parse the results.\nasync aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) \u2192 Tuple[List[PromptValue], Optional[List[str]]]\u00b6\nPrepare prompts from inputs.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.response_chain.APIResponderChain.html"} {"id": "edc40e127d45-5", "text": "Convenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. 
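Example (illustrative sketch). __call__ and its async counterpart acall accept the documented return_only_outputs and include_run_info flags; the chain instance and its 'question' key are hypothetical:
outputs = chain(
    {"question": "..."},
    return_only_outputs=True,  # drop input keys from the returned dict
    include_run_info=True,     # also attach run information to the result
)
# async variant:
# outputs = await chain.acall({"question": "..."}, return_only_outputs=True)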
If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.response_chain.APIResponderChain.html"} {"id": "edc40e127d45-6", "text": "# -> \"The temperature in Boise is...\"\ncreate_outputs(llm_result: LLMResult) \u2192 List[Dict[str, Any]]\u00b6\nCreate outputs from response.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n.. code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nclassmethod from_llm(llm: BaseLanguageModel, verbose: bool = True, **kwargs: Any) \u2192 LLMChain[source]\u00b6\nGet the response parser.\nclassmethod from_string(llm: BaseLanguageModel, template: str) \u2192 LLMChain\u00b6\nCreate LLMChain from LLM and template.\ngenerate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) \u2192 LLMResult\u00b6\nGenerate LLM result from inputs.\npredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 str\u00b6\nFormat prompt with kwargs and pass to LLM.\nParameters\ncallbacks \u2013 Callbacks to pass to LLMChain\n**kwargs \u2013 Keys to pass to prompt template.\nReturns\nCompletion from LLM.\nExample\ncompletion = llm.predict(adjective=\"funny\")\npredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[str, List[str], Dict[str, Any]]\u00b6\nCall predict and then parse the results.\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.response_chain.APIResponderChain.html"} {"id": "edc40e127d45-7", "text": "Validate 
and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) \u2192 Tuple[List[PromptValue], Optional[List[str]]]\u00b6\nPrepare prompts from inputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.response_chain.APIResponderChain.html"} {"id": "edc40e127d45-8", "text": "info along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.response_chain.APIResponderChain.html"} {"id": "edc40e127d45-9", "text": "validator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.response_chain.APIResponderChain.html"} {"id": "6f3f830e1f1f-0", "text": "langchain.chains.retrieval_qa.base.RetrievalQA\u00b6\nclass langchain.chains.retrieval_qa.base.RetrievalQA(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, combine_documents_chain: BaseCombineDocumentsChain, input_key: str = 'query', output_key: str = 'result', return_source_documents: bool = False, retriever: BaseRetriever)[source]\u00b6\nBases: BaseRetrievalQA\nChain for question-answering against an index.\nExample\nfrom langchain.llms import OpenAI\nfrom langchain.chains import RetrievalQA\nfrom langchain.vectorstores import FAISS\nfrom langchain.vectorstores.base import VectorStoreRetriever\nretriever = VectorStoreRetriever(vectorstore=FAISS(...))\nretrievalQA = RetrievalQA.from_llm(llm=OpenAI(), retriever=retriever)\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam combine_documents_chain: BaseCombineDocumentsChain [Required]\u00b6\nChain to use to combine the documents.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.RetrievalQA.html"} {"id": "6f3f830e1f1f-1", "text": "Chain to use to combine the documents.\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam retriever: BaseRetriever [Required]\u00b6\nparam return_source_documents: bool = False\u00b6\nReturn the source documents.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. 
Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.RetrievalQA.html"} {"id": "6f3f830e1f1f-2", "text": "Execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.RetrievalQA.html"} {"id": "6f3f830e1f1f-3", "text": "response. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.RetrievalQA.html"} {"id": "6f3f830e1f1f-4", "text": "a single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n.. code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.RetrievalQA.html"} {"id": "6f3f830e1f1f-5", "text": "# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nclassmethod from_chain_type(llm: BaseLanguageModel, chain_type: str = 'stuff', chain_type_kwargs: Optional[dict] = None, **kwargs: Any) \u2192 BaseRetrievalQA\u00b6\nLoad chain from chain type.\nclassmethod from_llm(llm: BaseLanguageModel, prompt: Optional[PromptTemplate] = None, **kwargs: Any) \u2192 BaseRetrievalQA\u00b6\nInitialize from LLM.\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.RetrievalQA.html"} {"id": "6f3f830e1f1f-6", "text": "Convenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. 
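Example (illustrative sketch). A common way to build the chain is the documented from_chain_type classmethod; the FAISS index and the text it holds are assumptions here, and extra keyword arguments such as retriever and return_source_documents are passed through to the chain:
from langchain.llms import OpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA

vectorstore = FAISS.from_texts(["LangChain is a framework for LLM apps."], OpenAIEmbeddings())
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="stuff",  # stuff all retrieved documents into a single prompt
    retriever=vectorstore.as_retriever(),
    return_source_documents=True,  # documented field on RetrievalQA
)
result = qa({"query": "What is LangChain?"})  # input_key defaults to 'query'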
If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.RetrievalQA.html"} {"id": "6f3f830e1f1f-7", "text": "save(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\nallow_population_by_field_name = True\u00b6\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.RetrievalQA.html"} {"id": "bde353376a9c-0", "text": "langchain.chains.sql_database.base.SQLDatabaseChain\u00b6\nclass langchain.chains.sql_database.base.SQLDatabaseChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, llm_chain: LLMChain, llm: Optional[BaseLanguageModel] = None, database: SQLDatabase, prompt: Optional[BasePromptTemplate] = None, top_k: int = 5, input_key: str = 'query', output_key: str = 'result', return_intermediate_steps: bool = False, return_direct: bool = False, use_query_checker: bool = False, query_checker_prompt: Optional[BasePromptTemplate] = None)[source]\u00b6\nBases: Chain\nChain for interacting with SQL Database.\nExample\nfrom langchain import SQLDatabaseChain, OpenAI, SQLDatabase\ndb = SQLDatabase(...)\ndb_chain = SQLDatabaseChain.from_llm(OpenAI(), db)\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam database: SQLDatabase [Required]\u00b6\nSQL Database to connect to.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.sql_database.base.SQLDatabaseChain.html"} {"id": "bde353376a9c-1", "text": "param database: SQLDatabase [Required]\u00b6\nSQL Database to connect to.\nparam llm: Optional[BaseLanguageModel] = None\u00b6\n[Deprecated] LLM wrapper to use.\nparam llm_chain: LLMChain [Required]\u00b6\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. 
Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam prompt: Optional[BasePromptTemplate] = None\u00b6\n[Deprecated] Prompt to use to translate natural language to SQL.\nparam query_checker_prompt: Optional[BasePromptTemplate] = None\u00b6\nThe prompt template that should be used by the query checker\nparam return_direct: bool = False\u00b6\nWhether or not to return the result of querying the SQL table directly.\nparam return_intermediate_steps: bool = False\u00b6\nWhether or not to return the intermediate steps along with the final answer.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam top_k: int = 5\u00b6\nNumber of results to return from the query", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.sql_database.base.SQLDatabaseChain.html"} {"id": "bde353376a9c-2", "text": "param top_k: int = 5\u00b6\nNumber of results to return from the query\nparam use_query_checker: bool = False\u00b6\nWhether or not the query checker tool should be used to attempt\nto fix the initial SQL from the LLM.\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.sql_database.base.SQLDatabaseChain.html"} {"id": "bde353376a9c-3", "text": "to False.\nReturns\nA dict of named outputs. 
Should contain all outputs specified in Chain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.sql_database.base.SQLDatabaseChain.html"} {"id": "bde353376a9c-4", "text": "Call the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.sql_database.base.SQLDatabaseChain.html"} {"id": "bde353376a9c-5", "text": "# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n.. code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nclassmethod from_llm(llm: BaseLanguageModel, db: SQLDatabase, prompt: Optional[BasePromptTemplate] = None, **kwargs: Any) \u2192 SQLDatabaseChain[source]\u00b6\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.sql_database.base.SQLDatabaseChain.html"} {"id": "bde353376a9c-6", "text": "inputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. 
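Example (illustrative sketch). The documented fields can be set through from_llm; the SQLite URI is hypothetical, and keyword arguments beyond the prompt are presumably forwarded to the chain's constructor:
from langchain import OpenAI, SQLDatabase
from langchain.chains.sql_database.base import SQLDatabaseChain

db = SQLDatabase.from_uri("sqlite:///example.db")  # hypothetical database
db_chain = SQLDatabaseChain.from_llm(
    OpenAI(temperature=0),
    db,
    top_k=3,                         # documented: number of results to return
    use_query_checker=True,          # documented: try to fix the initial SQL
    return_intermediate_steps=True,  # documented: return intermediate steps too
)
result = db_chain("How many employees are there?")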
If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.sql_database.base.SQLDatabaseChain.html"} {"id": "bde353376a9c-7", "text": "Example\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
langchain.chains.query_constructor.schema.AttributeInfo¶
class langchain.chains.query_constructor.schema.AttributeInfo(*, name: str, description: str, type: str)[source]¶
Bases: BaseModel
Information about a data source attribute.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param description: str [Required]¶
param name: str [Required]¶
param type: str [Required]¶
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
frozen = True¶
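AttributeInfo is typically used to describe vector-store metadata fields to a query-constructor (self-query) chain. A minimal sketch with hypothetical field names:
from langchain.chains.query_constructor.schema import AttributeInfo

# Hypothetical metadata schema for a movie vector store; the descriptions
# tell the query-constructor chain how to build filters over each field.
metadata_field_info = [
    AttributeInfo(name="genre", description="The genre of the movie", type="string"),
    AttributeInfo(name="year", description="The year the movie was released", type="integer"),
]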
langchain.chains.sql_database.base.SQLDatabaseSequentialChain¶
class langchain.chains.sql_database.base.SQLDatabaseSequentialChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, decider_chain: LLMChain, sql_chain: SQLDatabaseChain, input_key: str = 'query', output_key: str = 'result', return_intermediate_steps: bool = False)[source]¶
Bases: Chain
Chain for querying a SQL database, implemented as a sequential chain.
The chain is as follows:
1. Based on the query, determine which tables to use.
2. Based on those tables, call the normal SQL database chain.
This is useful in cases where the number of tables in the database is large.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param decider_chain: LLMChain [Required]¶
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
param return_intermediate_steps: bool = False¶
param sql_chain: SQLDatabaseChain [Required]¶
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain's
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain's
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method
can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, …}
classmethod from_llm(llm: BaseLanguageModel, database: SQLDatabase, query_prompt: BasePromptTemplate = PromptTemplate(input_variables=['input', 'table_info', 'dialect', 'top_k'], output_parser=None, partial_variables={}, template='Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Unless the user specifies in his question a specific number of examples he wishes to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.\n\nNever query for all the columns from a specific table, only ask for the few relevant columns given the question.\n\nPay attention to use only the column names that you can see in the schema description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.\n\nUse the following format:\n\nQuestion: Question here\nSQLQuery: SQL Query to run\nSQLResult: Result of the SQLQuery\nAnswer: Final answer here\n\nOnly use the following tables:\n{table_info}\n\nQuestion: {input}', template_format='f-string', validate_template=True), decider_prompt: BasePromptTemplate = PromptTemplate(input_variables=['query', 'table_names'], output_parser=CommaSeparatedListOutputParser(), partial_variables={}, template='Given the below input question and list of potential tables, output a comma separated list of the table names that may be necessary to answer this question.\n\nQuestion: {query}\n\nTable Names: {table_names}\n\nRelevant Table Names:', template_format='f-string', validate_template=True), **kwargs: Any) → SQLDatabaseSequentialChain[source]¶
Load the necessary chains.
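A usage sketch for the sequential variant, reusing the same hypothetical database objects as in the SQLDatabaseChain example above; note that this from_llm takes the database via the database parameter rather than db:
from langchain.chains.sql_database.base import SQLDatabaseSequentialChain
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase

# Hypothetical database with many tables; the decider step first narrows
# down which tables are relevant before the SQL chain sees their schemas.
db = SQLDatabase.from_uri("sqlite:///warehouse.db")
chain = SQLDatabaseSequentialChain.from_llm(OpenAI(temperature=0), database=db)
chain.run("How many orders were placed in March?")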
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain's
memory.
Returns
A dictionary of all inputs, including those added by the chain's memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method
can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
langchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain¶
class langchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, combine_docs_chain: BaseCombineDocumentsChain, question_generator: LLMChain, output_key: str = 'answer', rephrase_question: bool = True, return_source_documents: bool = False, return_generated_question: bool = False, get_chat_history: Optional[Callable[[Union[Tuple[str, str], BaseMessage]], str]] = None)[source]¶
Bases: Chain
Chain for chatting with an index.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param combine_docs_chain: BaseCombineDocumentsChain [Required]¶
The chain used to combine any retrieved documents.
param get_chat_history: Optional[Callable[[CHAT_TURN_TYPE], str]] = None¶
An optional function to get a string of the chat history.
If None is provided, will use a default; a custom formatter is sketched
after the parameter list below.
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
param output_key: str = 'answer'¶
The output key to return the final answer of this chain in.
param question_generator: LLMChain [Required]¶
The chain used to generate a new question for the sake of retrieval.
This chain will take in the current question (with variable question)
and any chat history (with variable chat_history) and will produce
a new standalone question to be used later on.
param rephrase_question: bool = True¶
Whether or not to pass the newly generated question to the combine_docs_chain.
If True, will pass the newly generated question along.
If False, will only use the newly generated question for retrieval and pass the
original question along to the combine_docs_chain.
param return_generated_question: bool = False¶
Return the generated question as part of the final result.
param return_source_documents: bool = False¶
Return the retrieved source documents as part of the final result.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
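For illustration, a hedged sketch of a custom get_chat_history formatter. The tuple-based history format and the concrete ConversationalRetrievalChain subclass shown in the comment are assumptions for the example, not part of this base class's documented API:
from typing import List, Tuple

# Hypothetical formatter: receives the raw chat history (assumed here to be
# (human, ai) string tuples) and must return a single string, which feeds
# the question_generator prompt's chat_history variable.
def format_chat_history(chat_history: List[Tuple[str, str]]) -> str:
    return "\n".join(f"Human: {human}\nAssistant: {ai}" for human, ai in chat_history)

# Supplied at construction time, e.g. on a concrete subclass:
# chain = ConversationalRetrievalChain.from_llm(
#     llm, retriever=retriever, get_chat_history=format_chat_history
# )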
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain's
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain's
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method
can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks.
These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, …}
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain's
memory.
Returns
A dictionary of all inputs, including those added by the chain's memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method
can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None[source]¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property input_keys: List[str]¶
Input keys.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
allow_population_by_field_name = True¶
arbitrary_types_allowed = True¶
extra = 'forbid'¶
langchain.chains.summarize.__init__.load_summarize_chain¶
langchain.chains.summarize.__init__.load_summarize_chain(llm: BaseLanguageModel, chain_type: str = 'stuff', verbose: Optional[bool] = None, **kwargs: Any) → BaseCombineDocumentsChain[source]¶
Load summarizing chain.
Parameters
llm – Language Model to use in the chain.
chain_type – Type of document combining chain to use. Should be one of "stuff",
"map_reduce", or "refine".
verbose – Whether chains should be run in verbose mode or not. Note that this
applies to all chains that make up the final chain.
Returns
A chain to use for summarizing.
langchain.chains.summarize.__init__.LoadingCallable¶
class langchain.chains.summarize.__init__.LoadingCallable(*args, **kwargs)[source]¶
Bases: Protocol
Interface for loading the combine documents chain.
Methods
__init__(*args, **kwargs)
__call__(llm: BaseLanguageModel, **kwargs: Any) → BaseCombineDocumentsChain[source]¶
Callable to load the combine documents chain.
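A short usage sketch for load_summarize_chain; the documents are inlined here to keep the example self-contained, whereas real use would typically load and split them from a source:
from langchain.chains.summarize import load_summarize_chain
from langchain.docstore.document import Document
from langchain.llms import OpenAI

docs = [
    Document(page_content="LangChain provides chains for summarization."),
    Document(page_content="The map_reduce variant summarizes chunks, then combines the summaries."),
]

# chain_type must be one of "stuff", "map_reduce", or "refine".
chain = load_summarize_chain(OpenAI(temperature=0), chain_type="map_reduce")
summary = chain.run(docs)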
langchain.chains.query_constructor.ir.Comparison¶
class langchain.chains.query_constructor.ir.Comparison(*, comparator: Comparator, attribute: str, value: Any = None)[source]¶
Bases: FilterDirective
A comparison to a value.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param attribute: str [Required]¶
param comparator: langchain.chains.query_constructor.ir.Comparator [Required]¶
param value: Any = None¶
accept(visitor: Visitor) → Any¶
langchain.chains.pal.base.PALChain¶
class langchain.chains.pal.base.PALChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, llm_chain: LLMChain, llm: Optional[BaseLanguageModel] = None, prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?\n\n# solution in Python:\n\n\ndef solution():\n    """Olivia has $23. She bought five bagels for $3 each. How much money does she have left?"""\n    money_initial = 23\n    bagels = 5\n    bagel_cost = 3\n    money_spent = bagels * bagel_cost\n    money_left = money_initial - money_spent\n    result = money_left\n    return result\n\n\n\n\n\nQ: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?\n\n# solution in Python:\n\n\ndef solution():\n    """Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?"""\n    golf_balls_initial = 58\n    golf_balls_lost_tuesday = 23\n    golf_balls_lost_wednesday = 2\n    golf_balls_left = golf_balls_initial - golf_balls_lost_tuesday - golf_balls_lost_wednesday\n    result = golf_balls_left\n    return result\n\n\n\n\n\nQ: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?\n\n# solution in Python:\n\n\ndef solution():\n    """There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?"""\n    computers_initial = 9\n    computers_per_day = 5\n    num_days = 4  # 4 days between monday and thursday\n    computers_added = computers_per_day * num_days\n    computers_total = computers_initial + computers_added\n    result = computers_total\n    return result\n\n\n\n\n\nQ: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?\n\n# solution in Python:\n\n\ndef solution():\n    """Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?"""\n    toys_initial = 5\n    mom_toys = 2\n    dad_toys = 2\n    total_received = mom_toys + dad_toys\n    total_toys = toys_initial + total_received\n    result = total_toys\n    return result\n\n\n\n\n\nQ: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?\n\n# solution in Python:\n\n\ndef solution():\n    """Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?"""\n    jason_lollipops_initial = 20\n    jason_lollipops_after = 12\n    denny_lollipops = jason_lollipops_initial - jason_lollipops_after\n    result = denny_lollipops\n    return result\n\n\n\n\n\nQ: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?\n\n# solution in Python:\n\n\ndef solution():\n    """Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?"""\n    leah_chocolates = 32\n    sister_chocolates = 42\n    total_chocolates = leah_chocolates + sister_chocolates\n    chocolates_eaten = 35\n    chocolates_left = total_chocolates - chocolates_eaten\n    result = chocolates_left\n    return result\n\n\n\n\n\nQ: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?\n\n# solution in Python:\n\n\ndef solution():\n    """If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?"""\n    cars_initial = 3\n    cars_arrived = 2\n    total_cars = cars_initial + cars_arrived\n    result = total_cars\n    return result\n\n\n\n\n\nQ: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?\n\n# solution in Python:\n\n\ndef solution():\n    """There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?"""\n    trees_initial = 15\n    trees_after = 21\n    trees_added = trees_after - trees_initial\n    result = trees_added\n    return result\n\n\n\n\n\nQ: {question}\n\n# solution in Python:\n\n\n', template_format='f-string', validate_template=True), stop: str = '\n\n', get_answer_expr: str = 'print(solution())', python_globals: Optional[Dict[str, Any]] = None, python_locals: Optional[Dict[str, Any]] = None, output_key: str = 'result', return_intermediate_steps: bool = False)[source]¶
Bases: Chain
Implements Program-Aided Language Models.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param get_answer_expr: str = 'print(solution())'¶
param llm: Optional[BaseLanguageModel] = None¶
[Deprecated]
param llm_chain: LLMChain [Required]¶
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
param prompt: BasePromptTemplate¶
[Deprecated] Defaults to the few-shot math prompt shown in the class signature above.
param python_globals: Optional[Dict[str, Any]] = None¶
param python_locals: Optional[Dict[str, Any]] = None¶
param return_intermediate_steps: bool = False¶
param stop: str = '\n\n'¶
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain's
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain's
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method
can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks.
These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, …}
classmethod from_colored_object_prompt(llm: BaseLanguageModel, **kwargs: Any) → PALChain[source]¶
Load PAL from the colored-object prompt.
classmethod from_math_prompt(llm: BaseLanguageModel, **kwargs: Any) → PALChain[source]¶
Load PAL from the math prompt.
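A usage sketch for the math variant, assuming the OpenAI LLM wrapper. Note that PALChain executes model-generated Python via get_answer_expr, so it should only be run on trusted inputs:
from langchain.chains.pal.base import PALChain
from langchain.llms import OpenAI

llm = OpenAI(temperature=0, max_tokens=512)
pal_chain = PALChain.from_math_prompt(llm, verbose=True)

# The chain prompts the model to emit a solution() function, executes it,
# and returns the printed result (see get_answer_expr above).
pal_chain.run(
    "Olivia has $23. She bought five bagels for $3 each. "
    "How much money does she have left?"
)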
If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.pal.base.PALChain.html"} {"id": "42564d7cd578-15", "text": "sole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to benull.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.pal.base.PALChain.html"} {"id": "42564d7cd578-16", "text": "serialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.pal.base.PALChain.html"} {"id": "e36c4c94fe1c-0", "text": "langchain.chains.constitutional_ai.base.ConstitutionalChain\u00b6\nclass langchain.chains.constitutional_ai.base.ConstitutionalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, chain: LLMChain, constitutional_principles: List[ConstitutionalPrinciple], critique_chain: LLMChain, revision_chain: LLMChain, return_intermediate_steps: bool = False)[source]\u00b6\nBases: Chain\nChain for applying constitutional principles.\nExample\nfrom langchain.llms import OpenAI\nfrom langchain.prompts import PromptTemplate\nfrom langchain.chains import LLMChain, ConstitutionalChain\nfrom langchain.chains.constitutional_ai.models import ConstitutionalPrinciple\nllm = OpenAI()\nqa_prompt = PromptTemplate(\n template=\"Q: {question} A:\",\n input_variables=[\"question\"],\n)\nqa_chain = LLMChain(llm=llm, prompt=qa_prompt)\nconstitutional_chain = ConstitutionalChain.from_llm(\n llm=llm,\n chain=qa_chain,\n constitutional_principles=[\n ConstitutionalPrinciple(\n critique_request=\"Tell if this answer is good.\",\n revision_request=\"Give a better answer.\",\n )\n ],\n)\nconstitutional_chain.run(question=\"What is the meaning of life?\")\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.base.ConstitutionalChain.html"} {"id": "e36c4c94fe1c-1", "text": "Deprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam chain: langchain.chains.llm.LLMChain [Required]\u00b6\nparam constitutional_principles: List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple] [Required]\u00b6\nparam critique_chain: langchain.chains.llm.LLMChain [Required]\u00b6\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain.
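To complement the class docstring example above, a sketch of loading one of the bundled principles by name via get_principles (documented further below). It reuses the llm and qa_chain from that example; "illegal" is assumed here to be one of the bundled principle names:

.. code-block:: python

from langchain.chains import ConstitutionalChain

principles = ConstitutionalChain.get_principles(["illegal"])
constitutional_chain = ConstitutionalChain.from_llm(
    llm=llm,
    chain=qa_chain,
    constitutional_principles=principles,
    verbose=True,
)
# The critique chain flags the initial answer, the revision chain rewrites it.
constitutional_chain.run(question="How can I steal kittens?")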
Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam return_intermediate_steps: bool = False\u00b6\nparam revision_chain: langchain.chains.llm.LLMChain [Required]\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.base.ConstitutionalChain.html"} {"id": "e36c4c94fe1c-2", "text": "You can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified inChain.output_keys.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.base.ConstitutionalChain.html"} {"id": "e36c4c94fe1c-3", "text": "Returns\nA dict of named outputs. Should contain all outputs specified inChain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. 
If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified inChain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.base.ConstitutionalChain.html"} {"id": "e36c4c94fe1c-4", "text": "Call the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.base.ConstitutionalChain.html"} {"id": "e36c4c94fe1c-5", "text": "# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to benull.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n..code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.base.ConstitutionalChain.html"} {"id": "e36c4c94fe1c-6", "text": "classmethod from_llm(llm: BaseLanguageModel, chain: LLMChain, critique_prompt: BasePromptTemplate = FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request'], output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the model\u2019s precedings response is harmful. Also point out potential harmfulness in the human\u2019s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question about which house to burn down instead it should told the human that arson is illegal and dangerous. Critique Needed.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isn\u2019t strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, it\u2019s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don\u2019t be afraid to ask.', 'critique_request': 'Identify specific ways in which the model\u2019s precedings response is harmful. Also point out potential harmfulness in the human\u2019s questions and responses. 
Examples of harm include but are not limited to racism, sexism,", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.base.ConstitutionalChain.html"} {"id": "e36c4c94fe1c-7", "text": "the human\u2019s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the model\u2019s response could be interpreted as saying that it isn\u2019t strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent. Critique Needed.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. It\u2019s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don\u2019t be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600\u2019s. It\u2019s explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun\u2019s gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been more confident about that. However, I\u2019m not sure that the precession", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.base.ConstitutionalChain.html"} {"id": "e36c4c94fe1c-8", "text": "so I should have been more confident about that. However, I\u2019m not sure that the precession measurement was actually made in the 1600\u2019s, but was probably made at least 100 years ago. I\u2019m also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, I\u2019m not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although I\u2019m pretty sure it decays more quickly than Newton\u2019s law, and the Chern-Simons theorem is probably just wrong. Critique Needed.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. 
The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun\u2019s gravitational field that is smaller and decays more quickly than Newton\u2019s law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}, {'input_prompt': \"Rewrite the following sentence in the style and substance of Yoda: 'It is illegal and unethical to steal kittens. If you are looking to adopt a kitten, please contact your local animal shelter or pet store.'\", 'output_from_model': 'Steal kittens, illegal and unethical it is, hmm. A kitten you seek to adopt? Local animal shelter or pet store, contact you must.', 'critique_request': \"Only if applicable, identify specific ways in which the model's response is not in the style of Master Yoda.\", 'critique': \"The provided sentence appears to capture the essence of Master Yoda's unique speaking style quite", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.base.ConstitutionalChain.html"} {"id": "e36c4c94fe1c-9", "text": "\"The provided sentence appears to capture the essence of Master Yoda's unique speaking style quite well, including the key characteristics of his speech. There are no apparent discrepancies between this sentence and Yoda's typical speaking style. No critique needed.\", 'revision': 'No revisions needed.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\\n\\nModel: {output_from_model}\\n\\nCritique Request: {critique_request}\\n\\nCritique: {critique}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\\nModel: {output_from_model}\\n\\nCritique Request: {critique_request}\\n\\nCritique:', example_separator='\\n === \\n', prefix=\"Below is a conversation between a human and an AI model. If there is no material critique of the model output, append to the end of the Critique: 'No critique needed.' If there is material critique of the model output, append to the end of the Critique: 'Critique needed.'\", template_format='f-string', validate_template=True), revision_prompt: BasePromptTemplate = FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique', 'revision_request'], output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the model\u2019s precedings response is harmful. Also point out potential harmfulness in the human\u2019s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique':", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.base.ConstitutionalChain.html"} {"id": "e36c4c94fe1c-10", "text": "Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question about which house to burn down instead it should told the human that arson is illegal and dangerous. 
Critique Needed.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isn\u2019t strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, it\u2019s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don\u2019t be afraid to ask.', 'critique_request': 'Identify specific ways in which the model\u2019s precedings response is harmful. Also point out potential harmfulness in the human\u2019s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the model\u2019s response could be interpreted as saying that it isn\u2019t strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent. Critique Needed.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.',", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.base.ConstitutionalChain.html"} {"id": "e36c4c94fe1c-11", "text": "model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. It\u2019s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don\u2019t be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600\u2019s. It\u2019s explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun\u2019s gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been more confident about that. 
However, I\u2019m not sure that the precession measurement was actually made in the 1600\u2019s, but was probably made at least 100 years ago. I\u2019m also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, I\u2019m not sure about the 1/r^5 scaling so I should rewrite that to make", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.base.ConstitutionalChain.html"} {"id": "e36c4c94fe1c-12", "text": "I\u2019m not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although I\u2019m pretty sure it decays more quickly than Newton\u2019s law, and the Chern-Simons theorem is probably just wrong. Critique Needed.', 'revision_request': 'Please rewrite the model response. In particular, respond in a way that asserts less confidence on possibly false claims, and more confidence on likely true claims. Remember that your knowledge comes solely from your training data, and you\u2019re unstable to access other sources of information except from the human directly. If you think your degree of confidence is already appropriate, then do not make any changes.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun\u2019s gravitational field that is smaller and decays more quickly than Newton\u2019s law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}, {'input_prompt': \"Rewrite the following sentence in the style and substance of Yoda: 'It is illegal and unethical to steal kittens. If you are looking to adopt a kitten, please contact your local animal shelter or pet store.'\", 'output_from_model': 'Steal kittens, illegal and unethical it is, hmm. A kitten you seek to adopt? Local animal shelter or pet store, contact you must.', 'critique_request': \"Only if applicable, identify specific ways in which the model's response is not in the style of Master Yoda.\", 'critique': \"The provided sentence appears to capture the essence of Master Yoda's unique speaking style quite", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.base.ConstitutionalChain.html"} {"id": "e36c4c94fe1c-13", "text": "\"The provided sentence appears to capture the essence of Master Yoda's unique speaking style quite well, including the key characteristics of his speech. There are no apparent discrepancies between this sentence and Yoda's typical speaking style. 
No critique needed.\", 'revision_request': 'Please rewrite the model response to more closely mimic the style of Master Yoda.', 'revision': 'No revisions needed.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\\n\\nModel: {output_from_model}\\n\\nCritique Request: {critique_request}\\n\\nCritique: {critique}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\\n\\nModel: {output_from_model}\\n\\nCritique Request: {critique_request}\\n\\nCritique: {critique}\\n\\nIf the critique does not identify anything worth changing, ignore the Revision Request and do not make any revisions. Instead, return \"No revisions needed\".\\n\\nIf the critique does identify something worth changing, please revise the model response based on the Revision Request.\\n\\nRevision Request: {revision_request}\\n\\nRevision:', example_separator='\\n === \\n', prefix='Below is a conversation between a human and an AI model.', template_format='f-string', validate_template=True), **kwargs: Any) \u2192 ConstitutionalChain[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.base.ConstitutionalChain.html"} {"id": "e36c4c94fe1c-14", "text": "Create a chain from an LLM.\nclassmethod get_principles(names: Optional[List[str]] = None) \u2192 List[ConstitutionalPrinciple][source]\u00b6\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. 
If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.base.ConstitutionalChain.html"} {"id": "e36c4c94fe1c-15", "text": "has more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to benull.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.base.ConstitutionalChain.html"} {"id": "e36c4c94fe1c-16", "text": "Example\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty input_keys: List[str]\u00b6\nDefines the input keys.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty output_keys: List[str]\u00b6\nDefines the output keys.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.base.ConstitutionalChain.html"} {"id": "aff0e8d0b369-0", "text": "langchain.chains.llm_bash.prompt.BashOutputParser\u00b6\nclass langchain.chains.llm_bash.prompt.BashOutputParser[source]\u00b6\nBases: BaseOutputParser\nParser for bash output.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nstatic get_code_blocks(t: str) \u2192 List[str][source]\u00b6\nGet multiple code blocks from the LLM result.\nget_format_instructions() \u2192 str\u00b6\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 List[str][source]\u00b6\nParse a single string model output into some structure.\nParameters\ntext \u2013 String output of language model.\nReturns\nStructured output.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, whichis assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.llm_bash.prompt.BashOutputParser.html"} {"id": "aff0e8d0b369-1", "text": "to_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.llm_bash.prompt.BashOutputParser.html"} {"id": "2652b0b0e5c5-0", "text": "langchain.chains.hyde.base.HypotheticalDocumentEmbedder\u00b6\nclass langchain.chains.hyde.base.HypotheticalDocumentEmbedder(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, base_embeddings: Embeddings, llm_chain: LLMChain)[source]\u00b6\nBases: Chain, Embeddings\nGenerate hypothetical document for query, and then embed that.\nBased on https://arxiv.org/abs/2212.10496\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam base_embeddings: Embeddings [Required]\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam llm_chain: LLMChain [Required]\u00b6\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html"} {"id": "2652b0b0e5c5-1", "text": "There are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. 
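A minimal sketch for the BashOutputParser documented above (the model output shown is illustrative; in the reference implementation, parse expects a fenced ```bash block and returns its commands line by line):

.. code-block:: python

from langchain.chains.llm_bash.prompt import BashOutputParser

parser = BashOutputParser()
llm_output = """Sure, here is a script:

```bash
mkdir -p demo
ls demo
```"""
# get_code_blocks pulls the fenced block(s); parse splits them into commands.
parser.parse(llm_output)
# -> ['mkdir -p demo', 'ls demo']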
Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html"} {"id": "2652b0b0e5c5-2", "text": "callbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified inChain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html"} {"id": "2652b0b0e5c5-3", "text": "addition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. 
Should contain all outputs specified in Chain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html"} {"id": "2652b0b0e5c5-4", "text": "addition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ncombine_embeddings(embeddings: List[List[float]]) \u2192 List[float][source]\u00b6\nCombine embeddings into final embeddings.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n.. code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nembed_documents(texts: List[str]) \u2192 List[List[float]][source]\u00b6\nCall the base embeddings.\nembed_query(text: str) \u2192 List[float][source]\u00b6\nGenerate a hypothetical document and embed it.\nclassmethod from_llm(llm: BaseLanguageModel, base_embeddings: Embeddings, prompt_key: str, **kwargs: Any) \u2192 HypotheticalDocumentEmbedder[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html"} {"id": 
"2652b0b0e5c5-5", "text": "Load and use LLMChain for a specific prompt key.\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html"} {"id": "2652b0b0e5c5-6", "text": "The other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to benull.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html"} {"id": "2652b0b0e5c5-7", "text": "Set the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty input_keys: List[str]\u00b6\nInput keys for Hyde\u2019s LLM chain.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty output_keys: List[str]\u00b6\nOutput keys for Hyde\u2019s LLM chain.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html"} {"id": "f039a68c242b-0", "text": "langchain.chains.prompt_selector.is_llm\u00b6\nlangchain.chains.prompt_selector.is_llm(llm: BaseLanguageModel) \u2192 bool[source]\u00b6\nCheck if the language model is a LLM.\nParameters\nllm \u2013 Language model to check.\nReturns\nTrue if the language model is a BaseLLM model, False otherwise.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.prompt_selector.is_llm.html"} {"id": "3b7b5b4d940d-0", "text": "langchain.chains.qa_generation.base.QAGenerationChain\u00b6\nclass langchain.chains.qa_generation.base.QAGenerationChain(*, memory: ~typing.Optional[~langchain.schema.memory.BaseMemory] = None, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, verbose: bool = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, llm_chain: ~langchain.chains.llm.LLMChain, text_splitter: ~langchain.text_splitter.TextSplitter = , input_key: str = 'text', output_key: str = 'questions', k: ~typing.Optional[int] = None)[source]\u00b6\nBases: Chain\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam input_key: str = 'text'\u00b6\nparam k: Optional[int] = None\u00b6\nparam llm_chain: LLMChain [Required]\u00b6\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.qa_generation.base.QAGenerationChain.html"} {"id": "3b7b5b4d940d-1", "text": "Optional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. 
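Two short sketches for the entries documented above. First, HypotheticalDocumentEmbedder wired up via from_llm; this assumes OpenAI credentials, and "web_search" is assumed to be one of the prompt keys shipped with the chain:

.. code-block:: python

from langchain.llms import OpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.chains import HypotheticalDocumentEmbedder

llm = OpenAI()
base_embeddings = OpenAIEmbeddings()
hyde = HypotheticalDocumentEmbedder.from_llm(
    llm, base_embeddings, prompt_key="web_search"
)
# embed_query has the LLM write a hypothetical answer document, embeds it
# with the base embeddings, and merges the vectors via combine_embeddings.
vector = hyde.embed_query("What items does McDonald's make?")

Second, is_llm, which per its docstring simply reports whether the model is a BaseLLM (completion-style) rather than, say, a chat model:

.. code-block:: python

from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.chains.prompt_selector import is_llm

is_llm(OpenAI())      # True: OpenAI is a BaseLLM
is_llm(ChatOpenAI())  # False: chat models are not BaseLLM subclasses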
Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam output_key: str = 'questions'\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam text_splitter: TextSplitter = \u00b6\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.qa_generation.base.QAGenerationChain.html"} {"id": "3b7b5b4d940d-2", "text": "only one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified inChain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.qa_generation.base.QAGenerationChain.html"} {"id": "3b7b5b4d940d-3", "text": "chain will be returned. 
Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.qa_generation.base.QAGenerationChain.html"} {"id": "3b7b5b4d940d-4", "text": "sole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n.. code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nclassmethod from_llm(llm: BaseLanguageModel, prompt: Optional[BasePromptTemplate] = None, **kwargs: Any) \u2192 QAGenerationChain[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.qa_generation.base.QAGenerationChain.html"}
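Example
An illustrative sketch, not taken from the library docs: it assumes langchain.chat_models.ChatOpenAI is available, an OPENAI_API_KEY is set in the environment, and the default prompt and text splitter are used.
from langchain.chains import QAGenerationChain
from langchain.chat_models import ChatOpenAI

# from_llm selects a prompt suited to the model (chat vs. completion).
chain = QAGenerationChain.from_llm(ChatOpenAI(temperature=0))
# The input text is chunked by the chain's text_splitter, and one
# question/answer pair is generated per chunk under the 'questions' key.
qa_pairs = chain.run("LangChain is a framework for developing applications powered by language models.")
# -> e.g. [{'question': '...', 'answer': '...'}]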
{"id": "3b7b5b4d940d-5", "text": "prep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.qa_generation.base.QAGenerationChain.html"} {"id": "3b7b5b4d940d-6", "text": "as positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.qa_generation.base.QAGenerationChain.html"} {"id": "3b7b5b4d940d-7", "text": "to_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty input_keys: List[str]\u00b6\nReturn the keys expected to be in the chain input.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty output_keys: List[str]\u00b6\nReturn the keys expected to be in the chain output.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.qa_generation.base.QAGenerationChain.html"} {"id": "420ae86d4df7-0", "text": "langchain.chains.conversation.base.ConversationChain\u00b6\nclass langchain.chains.conversation.base.ConversationChain(*, memory: BaseMemory = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, prompt: BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\\n\\nCurrent conversation:\\n{history}\\nHuman: {input}\\nAI:', template_format='f-string', validate_template=True), llm: BaseLanguageModel, output_key: str = 'response', output_parser: BaseLLMOutputParser = None, return_final_only: bool = True, llm_kwargs: dict = None, input_key: str = 'input')[source]\u00b6\nBases: LLMChain\nChain to have a conversation and load context from memory.\nExample\nfrom langchain import ConversationChain, OpenAI\nconversation = ConversationChain(llm=OpenAI())
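# A fuller sketch (illustrative, not from the source): the same chain with an
# explicit memory object and two turns; assumes an OPENAI_API_KEY is set.
from langchain.memory import ConversationBufferMemory
conversation = ConversationChain(
    llm=OpenAI(),
    memory=ConversationBufferMemory(),
)
conversation.predict(input="Hi, my name is Ada.")
# The second turn sees the first via the {history} slot in the default prompt.
conversation.predict(input="What is my name?")
# -> e.g. "Your name is Ada."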
Create a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.conversation.base.ConversationChain.html"} {"id": "420ae86d4df7-1", "text": "starting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam llm: BaseLanguageModel [Required]\u00b6\nLanguage model to call.\nparam llm_kwargs: dict [Optional]\u00b6\nparam memory: langchain.schema.memory.BaseMemory [Optional]\u00b6\nDefault memory store.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam output_parser: BaseLLMOutputParser [Optional]\u00b6\nOutput parser to use.\nDefaults to one that takes the most likely string but does not change it\notherwise.\nparam prompt: langchain.schema.prompt_template.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\\n\\nCurrent conversation:\\n{history}\\nHuman: {input}\\nAI:', template_format='f-string', validate_template=True)\u00b6\nDefault conversation prompt to use.\nparam return_final_only: bool = True\u00b6\nWhether to return only the final parsed result. Defaults to True.\nIf False, will return extra information about the generation.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.conversation.base.ConversationChain.html"} {"id": "420ae86d4df7-2", "text": "and passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. 
Should contain all outputs specified inChain.output_keys.\nasync aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nUtilize the LLM generate method for speed gains.\nasync aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Union[str, List[str], Dict[str, str]]]\u00b6\nCall apply and then parse the results.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.conversation.base.ConversationChain.html"} {"id": "420ae86d4df7-4", "text": "these runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. 
Should contain all outputs specified in Chain.output_keys.\nasync agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) \u2192 LLMResult\u00b6\nGenerate LLM result from inputs.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nUtilize the LLM generate method for speed gains.\napply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Union[str, List[str], Dict[str, str]]]\u00b6\nCall apply and then parse the results.\nasync apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 str\u00b6\nFormat prompt with kwargs and pass to LLM.\nParameters\ncallbacks \u2013 Callbacks to pass to LLMChain\n**kwargs \u2013 Keys to pass to prompt template.\nReturns\nCompletion from LLM.\nExample\ncompletion = await chain.apredict(adjective=\"funny\")\nasync apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[str, List[str], Dict[str, str]]\u00b6\nCall apredict and then parse the results.\nasync aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) \u2192 Tuple[List[PromptValue], Optional[List[str]]]\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.conversation.base.ConversationChain.html"} {"id": "420ae86d4df7-5", "text": "Prepare prompts from inputs.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.conversation.base.ConversationChain.html"} {"id": "420ae86d4df7-6", "text": "# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ncreate_outputs(llm_result: LLMResult) \u2192 List[Dict[str, Any]]\u00b6\nCreate outputs from response.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n.. code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nclassmethod from_string(llm: BaseLanguageModel, template: str) \u2192 LLMChain\u00b6\nCreate LLMChain from LLM and template.\ngenerate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) \u2192 LLMResult\u00b6\nGenerate LLM result from inputs.\npredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 str\u00b6\nFormat prompt with kwargs and pass to LLM.\nParameters\ncallbacks \u2013 Callbacks to pass to LLMChain\n**kwargs \u2013 Keys to pass to prompt template.\nReturns\nCompletion from LLM.\nExample\ncompletion = chain.predict(adjective=\"funny\")\npredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[str, List[str], Dict[str, Any]]\u00b6\nCall predict and then parse the results.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.conversation.base.ConversationChain.html"} {"id": "420ae86d4df7-7", "text": "Call predict and then parse the results.\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. 
If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) \u2192 Tuple[List[PromptValue], Optional[List[str]]]\u00b6\nPrepare prompts from inputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.conversation.base.ConversationChain.html"} {"id": "420ae86d4df7-8", "text": "has more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.conversation.base.ConversationChain.html"} {"id": "420ae86d4df7-9", "text": "Example\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_prompt_input_variables\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that prompt input variables are consistent.\nproperty input_keys: List[str]\u00b6\nUse this since some prompt vars come from history.\nproperty 
lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.conversation.base.ConversationChain.html"} {"id": "90aa786325ad-0", "text": "langchain.chains.query_constructor.ir.Expr\u00b6\nclass langchain.chains.query_constructor.ir.Expr[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\naccept(visitor: Visitor) \u2192 Any[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Expr.html"} {"id": "47f3d127064d-0", "text": "langchain.chains.openai_functions.openapi.SimpleRequestChain\u00b6\nclass langchain.chains.openai_functions.openapi.SimpleRequestChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, request_method: Callable, output_key: str = 'response', input_key: str = 'function')[source]\u00b6\nBases: Chain\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam input_key: str = 'function'\u00b6\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. 
Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.openapi.SimpleRequestChain.html"} {"id": "47f3d127064d-1", "text": "and passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam output_key: str = 'response'\u00b6\nparam request_method: Callable [Required]\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.openapi.SimpleRequestChain.html"} {"id": "47f3d127064d-2", "text": "tags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified inChain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. 
If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.openapi.SimpleRequestChain.html"} {"id": "47f3d127064d-3", "text": "to False.\nReturns\nA dict of named outputs. Should contain all outputs specified inChain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.openapi.SimpleRequestChain.html"} {"id": "47f3d127064d-4", "text": "directly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to benull.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n..code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.openapi.SimpleRequestChain.html"} {"id": "47f3d127064d-5", "text": "Parameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. 
If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.openapi.SimpleRequestChain.html"} {"id": "47f3d127064d-6", "text": "directly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to benull.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty input_keys: List[str]\u00b6\nReturn the keys expected to be in the chain input.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.openapi.SimpleRequestChain.html"} {"id": "47f3d127064d-7", "text": "property lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty output_keys: List[str]\u00b6\nReturn the keys expected to be in the chain output.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.openapi.SimpleRequestChain.html"} {"id": "7fba8c0b07c5-0", "text": "langchain.chains.graph_qa.cypher.GraphCypherQAChain\u00b6\nclass langchain.chains.graph_qa.cypher.GraphCypherQAChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, graph: Neo4jGraph, cypher_generation_chain: LLMChain, qa_chain: LLMChain, input_key: str = 'query', output_key: str = 'result', top_k: int = 10, return_intermediate_steps: bool = False, return_direct: bool = False)[source]\u00b6\nBases: Chain\nChain for question-answering against a graph by generating Cypher statements.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam cypher_generation_chain: LLMChain [Required]\u00b6\nparam graph: Neo4jGraph [Required]\u00b6\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.cypher.GraphCypherQAChain.html"} {"id": "7fba8c0b07c5-1", "text": "and at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam qa_chain: LLMChain [Required]\u00b6\nparam return_direct: bool = False\u00b6\nWhether or not to return the result of querying the graph directly.\nparam return_intermediate_steps: bool = False\u00b6\nWhether or not to return the intermediate steps along with the final answer.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. 
Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam top_k: int = 10\u00b6\nNumber of results to return from the query\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.cypher.GraphCypherQAChain.html"} {"id": "7fba8c0b07c5-2", "text": "Execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified inChain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.cypher.GraphCypherQAChain.html"} {"id": "7fba8c0b07c5-3", "text": "response. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified inChain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.cypher.GraphCypherQAChain.html"} {"id": "7fba8c0b07c5-4", "text": "a single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to benull.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n..code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.cypher.GraphCypherQAChain.html"} {"id": "7fba8c0b07c5-5", "text": "# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nclassmethod from_llm(llm: BaseLanguageModel, *, qa_prompt: BasePromptTemplate = PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template=\"You are an assistant that helps to form nice and human understandable answers.\\nThe information part contains the provided information that you must use to construct an answer.\\nThe provided information is authorative, you must never doubt it or try to use your internal knowledge to correct it.\\nMake the answer sound as a response to the question. 
Do not mention that you based the result on the given information.\\nIf the provided information is empty, say that you don't know the answer.\\nInformation:\\n{context}\\n\\nQuestion: {question}\\nHelpful Answer:\", template_format='f-string', validate_template=True), cypher_prompt: BasePromptTemplate = PromptTemplate(input_variables=['schema', 'question'], output_parser=None, partial_variables={}, template='Task:Generate Cypher statement to query a graph database.\\nInstructions:\\nUse only the provided relationship types and properties in the schema.\\nDo not use any other relationship types or properties that are not provided.\\nSchema:\\n{schema}\\nNote: Do not include any explanations or apologies in your responses.\\nDo not respond to any questions that might ask anything else than for you to construct a Cypher statement.\\nDo not include any text except the generated Cypher statement.\\n\\nThe question is:\\n{question}', template_format='f-string', validate_template=True), **kwargs: Any) \u2192 GraphCypherQAChain[source]\u00b6\nInitialize from LLM.
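Example
An illustrative sketch, not taken from the library docs: it assumes a reachable Neo4j instance (the url and credentials below are placeholders), langchain.graphs.Neo4jGraph, and an OPENAI_API_KEY in the environment.
from langchain.chains import GraphCypherQAChain
from langchain.chat_models import ChatOpenAI
from langchain.graphs import Neo4jGraph

graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="...")
chain = GraphCypherQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph)
# cypher_generation_chain drafts a Cypher query from the graph schema,
# the query is executed against the graph (top_k rows are kept), and
# qa_chain phrases the returned rows as a natural-language answer.
chain.run("Who played in Top Gun?")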
prep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.cypher.GraphCypherQAChain.html"} {"id": "7fba8c0b07c5-6", "text": "Parameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.cypher.GraphCypherQAChain.html"} {"id": "7fba8c0b07c5-7", "text": "sole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.cypher.GraphCypherQAChain.html"} {"id": "7fba8c0b07c5-8", "text": "serialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{"id": "2b062f11cb4c-0", "text": "langchain.chains.combine_documents.map_reduce.MapReduceDocumentsChain\u00b6\nclass langchain.chains.combine_documents.map_reduce.MapReduceDocumentsChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, input_key: str = 'input_documents', output_key: str = 'output_text', llm_chain: LLMChain, reduce_documents_chain: BaseCombineDocumentsChain, document_variable_name: str, return_intermediate_steps: bool = False)[source]\u00b6\nBases: BaseCombineDocumentsChain\nCombining documents by mapping a chain over them, then combining results.\nWe first call llm_chain on each document individually, passing in the\npage_content and any other kwargs. This is the map step.\nWe then process the results of that map step in a reduce step. This should\nlikely be a ReduceDocumentsChain.\nExample\nfrom langchain.chains import (\n StuffDocumentsChain,\n LLMChain,\n ReduceDocumentsChain,\n MapReduceDocumentsChain,\n)\nfrom langchain.prompts import PromptTemplate\nfrom langchain.llms import OpenAI\n# This controls how each document will be formatted. Specifically,\n# it will be passed to `format_document` - see that function for more\n# details.\ndocument_prompt = PromptTemplate(\n input_variables=[\"page_content\"],\n template=\"{page_content}\"\n)\ndocument_variable_name = \"context\"\nllm = OpenAI()\n# The prompt here should take as an input variable the\n# `document_variable_name`\nprompt = PromptTemplate.from_template(\n \"Summarize this content: {context}\"\n)", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.map_reduce.MapReduceDocumentsChain.html"} {"id": "2b062f11cb4c-1", "text": "\"Summarize this content: {context}\"\n)\nllm_chain = LLMChain(llm=llm, prompt=prompt)\n# We now define how to combine these summaries\nreduce_prompt = PromptTemplate.from_template(\n \"Combine these summaries: {context}\"\n)\nreduce_llm_chain = LLMChain(llm=llm, prompt=reduce_prompt)\ncombine_documents_chain = StuffDocumentsChain(\n llm_chain=reduce_llm_chain,\n document_prompt=document_prompt,\n document_variable_name=document_variable_name\n)\nreduce_documents_chain = ReduceDocumentsChain(\n combine_documents_chain=combine_documents_chain,\n)\nchain = MapReduceDocumentsChain(\n llm_chain=llm_chain,\n reduce_documents_chain=reduce_documents_chain,\n)\n# If we wanted to, we could also pass in collapse_documents_chain\n# which is specifically aimed at collapsing documents BEFORE\n# the final call.\nprompt = PromptTemplate.from_template(\n \"Collapse this content: {context}\"\n)\nllm_chain = LLMChain(llm=llm, prompt=prompt)\ncollapse_documents_chain = StuffDocumentsChain(\n llm_chain=llm_chain,\n document_prompt=document_prompt,\n document_variable_name=document_variable_name\n)\nreduce_documents_chain = ReduceDocumentsChain(\n combine_documents_chain=combine_documents_chain,\n collapse_documents_chain=collapse_documents_chain,\n)\nchain = 
MapReduceDocumentsChain(\n llm_chain=llm_chain,\n reduce_documents_chain=reduce_documents_chain,\n)\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.map_reduce.MapReduceDocumentsChain.html"} {"id": "2b062f11cb4c-2", "text": "Deprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam document_variable_name: str [Required]\u00b6\nThe variable name in the llm_chain to put the documents in.\nIf only one variable in the llm_chain, this need not be provided.\nparam llm_chain: LLMChain [Required]\u00b6\nChain to apply to each document individually.\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to e.g. identify a specific instance of a chain with its use case.\nparam reduce_documents_chain: BaseCombineDocumentsChain [Required]\u00b6\nChain to use to reduce the results of applying llm_chain to each doc.\nThis is typically either a ReduceDocumentsChain or a StuffDocumentsChain.\nparam return_intermediate_steps: bool = False\u00b6\nReturn the results of the map steps in the output.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.map_reduce.MapReduceDocumentsChain.html"} {"id": "2b062f11cb4c-3", "text": "Optional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to e.g. identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. 
Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.map_reduce.MapReduceDocumentsChain.html"} {"id": "2b062f11cb4c-4", "text": "to False.\nReturns\nA dict of named outputs. Should contain all outputs specified inChain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. 
Should contain all outputs specified in Chain.output_keys.\nasync acombine_docs(docs: List[Document], token_max: Optional[int] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Tuple[str, dict][source]\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.map_reduce.MapReduceDocumentsChain.html"} {"id": "2b062f11cb4c-5", "text": "Combine documents in a map reduce manner.\nCombine by mapping the first chain over all documents, then reducing the results.\nThis reducing can be done recursively if needed (if there are many documents).\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.map_reduce.MapReduceDocumentsChain.html"} {"id": "2b062f11cb4c-6", "text": "directly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ncombine_docs(docs: List[Document], token_max: Optional[int] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Tuple[str, dict][source]\u00b6\nCombine documents in a map reduce manner.\nCombine by mapping the first chain over all documents, then reducing the results.\nThis reducing can be done recursively if needed (if there are many documents).\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nvalidator get_default_document_variable_name\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nGet default document variable name, if not provided.\nvalidator get_reduce_chain\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nFor backwards compatibility.\nvalidator get_return_intermediate_steps\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nFor backwards compatibility.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.map_reduce.MapReduceDocumentsChain.html"} {"id": "2b062f11cb4c-7", "text": "validator get_return_intermediate_steps\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nFor backwards compatibility.\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. 
If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nprompt_length(docs: List[Document], **kwargs: Any) \u2192 Optional[int]\u00b6\nReturn the prompt length given the documents passed in.\nThis can be used by a caller to determine whether passing in a list\nof documents would exceed a certain prompt length. This is useful when\ntrying to ensure that the size of a prompt remains below a certain\ncontext limit.\nParameters\ndocs \u2013 List[Document], a list of documents to use to calculate the\ntotal prompt length.\nReturns\nReturns None if the method does not depend on the prompt length,\notherwise the length of the prompt in tokens.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.map_reduce.MapReduceDocumentsChain.html"} {"id": "2b062f11cb4c-8", "text": "Raise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.map_reduce.MapReduceDocumentsChain.html"} {"id": "2b062f11cb4c-9", "text": "# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty collapse_document_chain: langchain.chains.combine_documents.base.BaseCombineDocumentsChain\u00b6\nKept for backward compatibility.\nproperty combine_document_chain: langchain.chains.combine_documents.base.BaseCombineDocumentsChain\u00b6\nKept for backward compatibility.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\ne.g. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\ne.g. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.map_reduce.MapReduceDocumentsChain.html"}
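A short illustrative sketch of invoking the chain assembled in the docstring example above (reuses that example's chain; the documents are made up):

from langchain.schema import Document

docs = [
    Document(page_content="January sales rose 10%."),
    Document(page_content="February sales fell 3%."),
]
# input_key defaults to 'input_documents' and output_key to 'output_text',
# so run() returns the reduced summary as a single string.
summary = chain.run(input_documents=docs)
# combine_docs exposes the same map-reduce step directly, returning the
# combined text plus a dict of extra outputs (e.g. intermediate steps):
# summary, extra = chain.combine_docs(docs)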
{"id": "f542aef4b9c5-0", "text": "langchain.chains.openai_functions.utils.get_llm_kwargs\u00b6\nlangchain.chains.openai_functions.utils.get_llm_kwargs(function: dict) \u2192 dict[source]\u00b6\nReturns the kwargs for the LLMChain constructor.\nParameters\nfunction \u2013 The function to use.\nReturns\nThe kwargs for the LLMChain constructor.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.utils.get_llm_kwargs.html"} {"id": "54b48a4fc49c-0", "text": "langchain.chains.query_constructor.ir.Operator\u00b6\nclass langchain.chains.query_constructor.ir.Operator(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]\u00b6\nBases: str, Enum\nEnumerator of the operations.\nMethods\n__init__(*args,\u00a0**kwds)\ncapitalize()\nReturn a capitalized version of the string.\ncasefold()\nReturn a version of the string suitable for caseless comparisons.\ncenter(width[,\u00a0fillchar])\nReturn a centered string of length width.\ncount(sub[,\u00a0start[,\u00a0end]])\nReturn the number of non-overlapping occurrences of substring sub in string S[start:end].\nencode([encoding,\u00a0errors])\nEncode the string using the codec registered for encoding.\nendswith(suffix[,\u00a0start[,\u00a0end]])\nReturn True if S ends with the specified suffix, False otherwise.\nexpandtabs([tabsize])\nReturn a copy where all tab characters are expanded using spaces.\nfind(sub[,\u00a0start[,\u00a0end]])\nReturn the lowest index in S where substring sub is found, such that sub is contained within S[start:end].\nformat(*args,\u00a0**kwargs)\nReturn a formatted version of S, using substitutions from args and kwargs.\nformat_map(mapping)\nReturn a formatted version of S, using substitutions from mapping.\nindex(sub[,\u00a0start[,\u00a0end]])\nReturn the lowest index in S where substring sub is found, such that sub is contained within S[start:end].\nisalnum()\nReturn True if the string is an alpha-numeric string, False otherwise.\nisalpha()\nReturn True if the string is an alphabetic string, False otherwise.\nisascii()\nReturn True if all characters in the string are ASCII, False otherwise.\nisdecimal()\nReturn True if the string is a decimal string, False otherwise.\nisdigit()", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Operator.html"} {"id": "54b48a4fc49c-1", "text": "Return True if the string is a decimal string, False otherwise.\nisdigit()\nReturn True if the string is a digit string, False otherwise.\nisidentifier()\nReturn True if the string is a valid Python identifier, False otherwise.\nislower()\nReturn True if the string is a lowercase string, False otherwise.\nisnumeric()\nReturn True if the string is a numeric string, False otherwise.\nisprintable()\nReturn True if the string is printable, False otherwise.\nisspace()\nReturn True if the string is a whitespace string, False otherwise.\nistitle()\nReturn True if the string is a title-cased string, False otherwise.\nisupper()\nReturn True if the string is an uppercase string, False 
otherwise.\njoin(iterable,\u00a0/)\nConcatenate any number of strings.\nljust(width[,\u00a0fillchar])\nReturn a left-justified string of length width.\nlower()\nReturn a copy of the string converted to lowercase.\nlstrip([chars])\nReturn a copy of the string with leading whitespace removed.\nmaketrans\nReturn a translation table usable for str.translate().\npartition(sep,\u00a0/)\nPartition the string into three parts using the given separator.\nremoveprefix(prefix,\u00a0/)\nReturn a str with the given prefix string removed if present.\nremovesuffix(suffix,\u00a0/)\nReturn a str with the given suffix string removed if present.\nreplace(old,\u00a0new[,\u00a0count])\nReturn a copy with all occurrences of substring old replaced by new.\nrfind(sub[,\u00a0start[,\u00a0end]])\nReturn the highest index in S where substring sub is found, such that sub is contained within S[start:end].\nrindex(sub[,\u00a0start[,\u00a0end]])\nReturn the highest index in S where substring sub is found, such that sub is contained within S[start:end].", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Operator.html"} {"id": "54b48a4fc49c-2", "text": "rjust(width[,\u00a0fillchar])\nReturn a right-justified string of length width.\nrpartition(sep,\u00a0/)\nPartition the string into three parts using the given separator.\nrsplit([sep,\u00a0maxsplit])\nReturn a list of the substrings in the string, using sep as the separator string.\nrstrip([chars])\nReturn a copy of the string with trailing whitespace removed.\nsplit([sep,\u00a0maxsplit])\nReturn a list of the substrings in the string, using sep as the separator string.\nsplitlines([keepends])\nReturn a list of the lines in the string, breaking at line boundaries.\nstartswith(prefix[,\u00a0start[,\u00a0end]])\nReturn True if S starts with the specified prefix, False otherwise.\nstrip([chars])\nReturn a copy of the string with leading and trailing whitespace removed.\nswapcase()\nConvert uppercase characters to lowercase and lowercase characters to uppercase.\ntitle()\nReturn a version of the string where each word is titlecased.\ntranslate(table,\u00a0/)\nReplace each character in the string using the given translation table.\nupper()\nReturn a copy of the string converted to uppercase.\nzfill(width,\u00a0/)\nPad a numeric string with zeros on the left, to fill a field of the given width.\nAttributes\nAND\nOR\nNOT\ncapitalize()\u00b6\nReturn a capitalized version of the string.\nMore specifically, make the first character have upper case and the rest lower\ncase.\ncasefold()\u00b6\nReturn a version of the string suitable for caseless comparisons.\ncenter(width, fillchar=' ', /)\u00b6\nReturn a centered string of length width.\nPadding is done using the specified fill character (default is a space).\ncount(sub[, start[, end]]) \u2192 int\u00b6\nReturn the number of non-overlapping occurrences of substring sub in", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Operator.html"} {"id": "54b48a4fc49c-3", "text": "Return the number of non-overlapping occurrences of substring sub in\nstring S[start:end]. 
Optional arguments start and end are\ninterpreted as in slice notation.\nencode(encoding='utf-8', errors='strict')\u00b6\nEncode the string using the codec registered for encoding.\nencoding \u2013 The encoding in which to encode the string.\nerrors \u2013 The error handling scheme to use for encoding errors.\nThe default is \u2018strict\u2019 meaning that encoding errors raise a\nUnicodeEncodeError. Other possible values are \u2018ignore\u2019, \u2018replace\u2019 and\n\u2018xmlcharrefreplace\u2019 as well as any other name registered with\ncodecs.register_error that can handle UnicodeEncodeErrors.\nendswith(suffix[, start[, end]]) \u2192 bool\u00b6\nReturn True if S ends with the specified suffix, False otherwise.\nWith optional start, test S beginning at that position.\nWith optional end, stop comparing S at that position.\nsuffix can also be a tuple of strings to try.\nexpandtabs(tabsize=8)\u00b6\nReturn a copy where all tab characters are expanded using spaces.\nIf tabsize is not given, a tab size of 8 characters is assumed.\nfind(sub[, start[, end]]) \u2192 int\u00b6\nReturn the lowest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nReturn -1 on failure.\nformat(*args, **kwargs) \u2192 str\u00b6\nReturn a formatted version of S, using substitutions from args and kwargs.\nThe substitutions are identified by braces (\u2018{\u2019 and \u2018}\u2019).\nformat_map(mapping) \u2192 str\u00b6\nReturn a formatted version of S, using substitutions from mapping.\nThe substitutions are identified by braces (\u2018{\u2019 and \u2018}\u2019).\nindex(sub[, start[, end]]) \u2192 int\u00b6\nReturn the lowest index in S where substring sub is found,", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Operator.html"} {"id": "54b48a4fc49c-4", "text": "Return the lowest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. 
Optional\narguments start and end are interpreted as in slice notation.\nRaises ValueError when the substring is not found.\nisalnum()\u00b6\nReturn True if the string is an alpha-numeric string, False otherwise.\nA string is alpha-numeric if all characters in the string are alpha-numeric and\nthere is at least one character in the string.\nisalpha()\u00b6\nReturn True if the string is an alphabetic string, False otherwise.\nA string is alphabetic if all characters in the string are alphabetic and there\nis at least one character in the string.\nisascii()\u00b6\nReturn True if all characters in the string are ASCII, False otherwise.\nASCII characters have code points in the range U+0000-U+007F.\nEmpty string is ASCII too.\nisdecimal()\u00b6\nReturn True if the string is a decimal string, False otherwise.\nA string is a decimal string if all characters in the string are decimal and\nthere is at least one character in the string.\nisdigit()\u00b6\nReturn True if the string is a digit string, False otherwise.\nA string is a digit string if all characters in the string are digits and there\nis at least one character in the string.\nisidentifier()\u00b6\nReturn True if the string is a valid Python identifier, False otherwise.\nCall keyword.iskeyword(s) to test whether string s is a reserved identifier,\nsuch as \u201cdef\u201d or \u201cclass\u201d.\nislower()\u00b6\nReturn True if the string is a lowercase string, False otherwise.\nA string is lowercase if all cased characters in the string are lowercase and\nthere is at least one cased character in the string.\nisnumeric()\u00b6\nReturn True if the string is a numeric string, False otherwise.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Operator.html"} {"id": "54b48a4fc49c-5", "text": "isnumeric()\u00b6\nReturn True if the string is a numeric string, False otherwise.\nA string is numeric if all characters in the string are numeric and there is at\nleast one character in the string.\nisprintable()\u00b6\nReturn True if the string is printable, False otherwise.\nA string is printable if all of its characters are considered printable in\nrepr() or if it is empty.\nisspace()\u00b6\nReturn True if the string is a whitespace string, False otherwise.\nA string is whitespace if all characters in the string are whitespace and there\nis at least one character in the string.\nistitle()\u00b6\nReturn True if the string is a title-cased string, False otherwise.\nIn a title-cased string, upper- and title-case characters may only\nfollow uncased characters and lowercase characters only cased ones.\nisupper()\u00b6\nReturn True if the string is an uppercase string, False otherwise.\nA string is uppercase if all cased characters in the string are uppercase and\nthere is at least one cased character in the string.\njoin(iterable, /)\u00b6\nConcatenate any number of strings.\nThe string whose method is called is inserted in between each given string.\nThe result is returned as a new string.\nExample: \u2018.\u2019.join([\u2018ab\u2019, \u2018pq\u2019, \u2018rs\u2019]) -> \u2018ab.pq.rs\u2019\nljust(width, fillchar=' ', /)\u00b6\nReturn a left-justified string of length width.\nPadding is done using the specified fill character (default is a space).\nlower()\u00b6\nReturn a copy of the string converted to lowercase.\nlstrip(chars=None, /)\u00b6\nReturn a copy of the string with leading whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nstatic maketrans()\u00b6\nReturn a 
translation table usable for str.translate().", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Operator.html"} {"id": "54b48a4fc49c-6", "text": "static maketrans()\u00b6\nReturn a translation table usable for str.translate().\nIf there is only one argument, it must be a dictionary mapping Unicode\nordinals (integers) or characters to Unicode ordinals, strings or None.\nCharacter keys will then be converted to ordinals.\nIf there are two arguments, they must be strings of equal length, and\nin the resulting dictionary, each character in x will be mapped to the\ncharacter at the same position in y. If there is a third argument, it\nmust be a string, whose characters will be mapped to None in the result.\npartition(sep, /)\u00b6\nPartition the string into three parts using the given separator.\nThis will search for the separator in the string. If the separator is found,\nreturns a 3-tuple containing the part before the separator, the separator\nitself, and the part after it.\nIf the separator is not found, returns a 3-tuple containing the original string\nand two empty strings.\nremoveprefix(prefix, /)\u00b6\nReturn a str with the given prefix string removed if present.\nIf the string starts with the prefix string, return string[len(prefix):].\nOtherwise, return a copy of the original string.\nremovesuffix(suffix, /)\u00b6\nReturn a str with the given suffix string removed if present.\nIf the string ends with the suffix string and that suffix is not empty,\nreturn string[:-len(suffix)]. Otherwise, return a copy of the original\nstring.\nreplace(old, new, count=-1, /)\u00b6\nReturn a copy with all occurrences of substring old replaced by new.\ncount \u2013 Maximum number of occurrences to replace.\n-1 (the default value) means replace all occurrences.\nIf the optional argument count is given, only the first count occurrences are\nreplaced.\nrfind(sub[, start[, end]]) \u2192 int\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Operator.html"} {"id": "54b48a4fc49c-7", "text": "replaced.\nrfind(sub[, start[, end]]) \u2192 int\u00b6\nReturn the highest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nReturn -1 on failure.\nrindex(sub[, start[, end]]) \u2192 int\u00b6\nReturn the highest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nRaises ValueError when the substring is not found.\nrjust(width, fillchar=' ', /)\u00b6\nReturn a right-justified string of length width.\nPadding is done using the specified fill character (default is a space).\nrpartition(sep, /)\u00b6\nPartition the string into three parts using the given separator.\nThis will search for the separator in the string, starting at the end. 
If\nthe separator is found, returns a 3-tuple containing the part before the\nseparator, the separator itself, and the part after it.\nIf the separator is not found, returns a 3-tuple containing two empty strings\nand the original string.\nrsplit(sep=None, maxsplit=-1)\u00b6\nReturn a list of the substrings in the string, using sep as the separator string.\nsep \u2013 The separator used to split the string.\nWhen set to None (the default value), will split on any whitespace\ncharacter (including \\n \\r \\t \\f and spaces) and will discard\nempty strings from the result.\nmaxsplit \u2013 Maximum number of splits (starting from the left).\n-1 (the default value) means no limit.\nSplitting starts at the end of the string and works to the front.\nrstrip(chars=None, /)\u00b6\nReturn a copy of the string with trailing whitespace removed.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Operator.html"} {"id": "54b48a4fc49c-8", "text": "rstrip(chars=None, /)\u00b6\nReturn a copy of the string with trailing whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nsplit(sep=None, maxsplit=-1)\u00b6\nReturn a list of the substrings in the string, using sep as the separator string.\nsep \u2013 The separator used to split the string.\nWhen set to None (the default value), will split on any whitespace\ncharacter (including \\n \\r \\t \\f and spaces) and will discard\nempty strings from the result.\nmaxsplit \u2013 Maximum number of splits (starting from the left).\n-1 (the default value) means no limit.\nNote, str.split() is mainly useful for data that has been intentionally\ndelimited. With natural text that includes punctuation, consider using\nthe regular expression module.\nsplitlines(keepends=False)\u00b6\nReturn a list of the lines in the string, breaking at line boundaries.\nLine breaks are not included in the resulting list unless keepends is given and\ntrue.\nstartswith(prefix[, start[, end]]) \u2192 bool\u00b6\nReturn True if S starts with the specified prefix, False otherwise.\nWith optional start, test S beginning at that position.\nWith optional end, stop comparing S at that position.\nprefix can also be a tuple of strings to try.\nstrip(chars=None, /)\u00b6\nReturn a copy of the string with leading and trailing whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nswapcase()\u00b6\nConvert uppercase characters to lowercase and lowercase characters to uppercase.\ntitle()\u00b6\nReturn a version of the string where each word is titlecased.\nMore specifically, words start with uppercased characters and all remaining\ncased characters have lower case.\ntranslate(table, /)\u00b6\nReplace each character in the string using the given translation table.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Operator.html"} {"id": "54b48a4fc49c-9", "text": "translate(table, /)\u00b6\nReplace each character in the string using the given translation table.\ntable \u2013 Translation table, which must be a mapping of Unicode ordinals to\nUnicode ordinals, strings, or None.\nThe table must implement lookup/indexing via __getitem__, for instance a\ndictionary or list. If this operation raises LookupError, the character is\nleft untouched. Characters mapped to None are deleted.\nupper()\u00b6\nReturn a copy of the string converted to uppercase.\nzfill(width, /)\u00b6\nPad a numeric string with zeros on the left, to fill a field of the given width.\nThe string is never truncated.\nAND = 'and'\u00b6\nNOT = 'not'\u00b6\nOR = 'or'\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Operator.html"}
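Because Operator derives from both str and Enum, its members compare equal to their lowercase string values. A small sketch of how it is typically combined with the sibling classes Operation, Comparison and Comparator from the same module (the attribute names are illustrative):

from langchain.chains.query_constructor.ir import (
    Comparator,
    Comparison,
    Operation,
    Operator,
)

# Structured filter: genre == "scifi" AND year > 1990
op = Operation(
    operator=Operator.AND,
    arguments=[
        Comparison(comparator=Comparator.EQ, attribute="genre", value="scifi"),
        Comparison(comparator=Comparator.GT, attribute="year", value=1990),
    ],
)
assert Operator.AND == "and"  # str-valued enum member behaves like a plain string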
{"id": "18f970138276-0", "text": "langchain.chains.openai_functions.citation_fuzzy_match.create_citation_fuzzy_match_chain\u00b6\nlangchain.chains.openai_functions.citation_fuzzy_match.create_citation_fuzzy_match_chain(llm: BaseLanguageModel) \u2192 LLMChain[source]\u00b6\nCreate a citation fuzzy match chain.\nParameters\nllm \u2013 Language model to use for the chain.\nReturns\nChain (LLMChain) that can be used to answer questions with citations.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.citation_fuzzy_match.create_citation_fuzzy_match_chain.html"} {"id": "c1db7e170aa6-0", "text": "langchain.chains.prompt_selector.ConditionalPromptSelector\u00b6\nclass langchain.chains.prompt_selector.ConditionalPromptSelector(*, default_prompt: BasePromptTemplate, conditionals: List[Tuple[Callable[[BaseLanguageModel], bool], BasePromptTemplate]] = None)[source]\u00b6\nBases: BasePromptSelector\nPrompt collection that goes through conditionals.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam conditionals: List[Tuple[Callable[[langchain.schema.language_model.BaseLanguageModel], bool], langchain.schema.prompt_template.BasePromptTemplate]] [Optional]\u00b6\nparam default_prompt: langchain.schema.prompt_template.BasePromptTemplate [Required]\u00b6\nget_prompt(llm: BaseLanguageModel) \u2192 BasePromptTemplate[source]\u00b6\nGet default prompt for a language model.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.prompt_selector.ConditionalPromptSelector.html"}
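A minimal sketch of ConditionalPromptSelector: choose a chat-style prompt when the model is a chat model, otherwise fall back to the default (is_chat_model is a helper from the same module; the prompt texts are illustrative):

from langchain.chains.prompt_selector import ConditionalPromptSelector, is_chat_model
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.prompts.chat import ChatPromptTemplate

default_prompt = PromptTemplate.from_template("Answer concisely: {question}")
chat_prompt = ChatPromptTemplate.from_template("Answer concisely: {question}")

selector = ConditionalPromptSelector(
    default_prompt=default_prompt,
    conditionals=[(is_chat_model, chat_prompt)],
)
# Conditionals are checked in order; the first predicate that returns True wins.
prompt = selector.get_prompt(ChatOpenAI())  # -> chat_prompt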
{"id": "c4c7705ad084-0", "text": "langchain.chains.loading.load_chain_from_config\u00b6\nlangchain.chains.loading.load_chain_from_config(config: dict, **kwargs: Any) \u2192 Chain[source]\u00b6\nLoad chain from Config Dict.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.loading.load_chain_from_config.html"} {"id": "78d3d5524dab-0", "text": "langchain.chains.combine_documents.reduce.AsyncCombineDocsProtocol\u00b6\nclass langchain.chains.combine_documents.reduce.AsyncCombineDocsProtocol(*args, **kwargs)[source]\u00b6\nBases: Protocol\nInterface for the combine_docs method.\nMethods\n__init__(*args,\u00a0**kwargs)\nasync __call__(docs: List[Document], **kwargs: Any) \u2192 str[source]\u00b6\nAsync interface for the combine_docs method.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.reduce.AsyncCombineDocsProtocol.html"} {"id": "f084e4994cc3-0", "text": "langchain.chains.openai_functions.qa_with_structure.create_qa_with_structure_chain\u00b6\nlangchain.chains.openai_functions.qa_with_structure.create_qa_with_structure_chain(llm: BaseLanguageModel, schema: Union[dict, Type[BaseModel]], output_parser: str = 'base', prompt: Optional[Union[PromptTemplate, ChatPromptTemplate]] = None) \u2192 LLMChain[source]\u00b6\nCreate a question answering chain that returns an answer with sources.\nParameters\nllm \u2013 Language model to use for the chain.\nschema \u2013 Pydantic schema to use for the output.\noutput_parser \u2013 Output parser to use. Should be one of pydantic or base.\nDefaults to base.\nprompt \u2013 Optional prompt to use for the chain.\nReturns:", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.qa_with_structure.create_qa_with_structure_chain.html"} {"id": "c34cd3c74166-0", "text": "langchain.chains.graph_qa.sparql.GraphSparqlQAChain\u00b6\nclass langchain.chains.graph_qa.sparql.GraphSparqlQAChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, graph: RdfGraph, sparql_generation_select_chain: LLMChain, sparql_generation_update_chain: LLMChain, sparql_intent_chain: LLMChain, qa_chain: LLMChain, input_key: str = 'query', output_key: str = 'result')[source]\u00b6\nBases: Chain\nChain for question-answering against an RDF or OWL graph by generating\nSPARQL statements.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam graph: RdfGraph [Required]\u00b6\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.sparql.GraphSparqlQAChain.html"} {"id": "c34cd3c74166-1", "text": "them along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to e.g. identify a specific instance of a chain with its use case.\nparam qa_chain: LLMChain [Required]\u00b6\nparam sparql_generation_select_chain: LLMChain [Required]\u00b6\nparam sparql_generation_update_chain: LLMChain [Required]\u00b6\nparam sparql_intent_chain: LLMChain [Required]\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to e.g. identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. 
Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.sparql.GraphSparqlQAChain.html"} {"id": "c34cd3c74166-2", "text": "Chain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.sparql.GraphSparqlQAChain.html"} {"id": "c34cd3c74166-3", "text": "chain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. 
Should contain all outputs specified in Chain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.sparql.GraphSparqlQAChain.html"} {"id": "c34cd3c74166-4", "text": "sole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.sparql.GraphSparqlQAChain.html"} {"id": "c34cd3c74166-5", "text": "classmethod from_llm(llm: BaseLanguageModel, *, qa_prompt: BasePromptTemplate = PromptTemplate(input_variables=['context', 'prompt'], output_parser=None, partial_variables={}, template=\"Task: Generate a natural language response from the results of a SPARQL query.\\nYou are an assistant that creates well-written and human understandable answers.\\nThe information part contains the information provided, which you can use to construct an answer.\\nThe information provided is authoritative, you must never doubt it or 
try to use your internal knowledge to correct it.\\nMake your response sound like the information is coming from an AI assistant, but don't add any information.\\nInformation:\\n{context}\\n\\nQuestion: {prompt}\\nHelpful Answer:\", template_format='f-string', validate_template=True), sparql_select_prompt: BasePromptTemplate = PromptTemplate(input_variables=['schema', 'prompt'], output_parser=None, partial_variables={}, template='Task: Generate a SPARQL SELECT statement for querying a graph database.\\nFor instance, to find all email addresses of John Doe, the following query in backticks would be suitable:\\n```\\nPREFIX foaf: <http://xmlns.com/foaf/0.1/>\\nSELECT ?email\\nWHERE {{\\n\u00a0\u00a0\u00a0 ?person foaf:name \"John Doe\" .\\n\u00a0\u00a0\u00a0 ?person foaf:mbox ?email .\\n}}\\n```\\nInstructions:\\nUse only the node types and properties provided in the schema.\\nDo not use any node types and properties that are not explicitly provided.\\nInclude all necessary prefixes.\\nSchema:\\n{schema}\\nNote: Be as concise as possible.\\nDo not include any explanations or apologies in your responses.\\nDo not respond to any questions that ask for anything else than for you to construct a SPARQL query.\\nDo not include any text except the SPARQL query generated.\\n\\nThe question is:\\n{prompt}',", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.sparql.GraphSparqlQAChain.html"} {"id": "c34cd3c74166-6", "text": "any text except the SPARQL query generated.\\n\\nThe question is:\\n{prompt}', template_format='f-string', validate_template=True), sparql_update_prompt: BasePromptTemplate = PromptTemplate(input_variables=['schema', 'prompt'], output_parser=None, partial_variables={}, template='Task: Generate a SPARQL UPDATE statement for updating a graph database.\\nFor instance, to add \\'jane.doe@foo.bar\\' as a new email address for Jane Doe, the following query in backticks would be suitable:\\n```\\nPREFIX foaf: <http://xmlns.com/foaf/0.1/>\\nINSERT {{\\n\u00a0\u00a0\u00a0 ?person foaf:mbox <mailto:jane.doe@foo.bar> .\\n}}\\nWHERE {{\\n\u00a0\u00a0\u00a0 ?person foaf:name \"Jane Doe\" .\\n}}\\n```\\nInstructions:\\nMake the query as short as possible and avoid adding unnecessary triples.\\nUse only the node types and properties provided in the schema.\\nDo not use any node types and properties that are not explicitly provided.\\nInclude all necessary prefixes.\\nSchema:\\n{schema}\\nNote: Be as concise as possible.\\nDo not include any explanations or apologies in your responses.\\nDo not respond to any questions that ask for anything else than for you to construct a SPARQL query.\\nReturn only the generated SPARQL query, nothing else.\\n\\nThe information to be inserted is:\\n{prompt}', template_format='f-string', validate_template=True), sparql_intent_prompt: BasePromptTemplate = PromptTemplate(input_variables=['prompt'], output_parser=None, partial_variables={}, template=\"Task: Identify the intent of a prompt and return the appropriate SPARQL query type.\\nYou are an assistant that distinguishes different types of prompts and returns the corresponding SPARQL query types.\\nConsider only the following query types:\\n* SELECT: this query type corresponds to", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.sparql.GraphSparqlQAChain.html"} {"id": "c34cd3c74166-7", "text": "query types.\\nConsider only the following query types:\\n* SELECT: this query type corresponds to questions\\n* UPDATE: this query type corresponds to all requests for deleting, inserting, or changing triples\\nNote: Be as 
concise as possible.\\nDo not include any explanations or apologies in your responses.\\nDo not respond to any questions that ask for anything else than for you to identify a SPARQL query type.\\nDo not include any unnecessary whitespaces or any text except the query type, i.e., either return 'SELECT' or 'UPDATE'.\\n\\nThe prompt is:\\n{prompt}\\nHelpful Answer:\", template_format='f-string', validate_template=True), **kwargs: Any) \u2192 GraphSparqlQAChain[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.sparql.GraphSparqlQAChain.html"} {"id": "c34cd3c74166-8", "text": "Initialize from LLM.\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.sparql.GraphSparqlQAChain.html"} {"id": "c34cd3c74166-9", "text": "as positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to benull.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.sparql.GraphSparqlQAChain.html"} {"id": "c34cd3c74166-10", "text": "to_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty input_keys: List[str]\u00b6\nReturn the keys expected to be in the chain input.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty output_keys: List[str]\u00b6\nReturn the keys expected to be in the chain output.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.sparql.GraphSparqlQAChain.html"} {"id": "f2c58bfe93c9-0", "text": "langchain.chains.router.multi_prompt.MultiPromptChain\u00b6\nclass langchain.chains.router.multi_prompt.MultiPromptChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, router_chain: RouterChain, destination_chains: Mapping[str, LLMChain], default_chain: LLMChain, silent_errors: bool = False)[source]\u00b6\nBases: MultiRouteChain\nA multi-route chain that uses an LLM router chain to choose amongst prompts.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam default_chain: LLMChain [Required]\u00b6\nDefault chain to use when router doesn\u2019t map input to one of the destinations.\nparam destination_chains: Mapping[str, LLMChain] [Required]\u00b6\nMap of name to candidate chains that inputs can be routed to.\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.router.multi_prompt.MultiPromptChain.html"} {"id": "f2c58bfe93c9-1", "text": "and at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam router_chain: RouterChain [Required]\u00b6\nChain for deciding a destination chain and the input to it.\nparam silent_errors: bool = False\u00b6\nIf True, use default_chain when an invalid destination name is provided.\nDefaults to False.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. 
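As a quick orientation to the entry above, here is a minimal usage sketch for GraphSparqlQAChain.from_llm. It is not from the original page: the RDF source URL follows the public FOAF card used in LangChain's SPARQL notebook, and the model choice is an illustrative assumption (rdflib must be installed for RdfGraph).

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import GraphSparqlQAChain
from langchain.graphs import RdfGraph

# Load an RDF graph; any URL or local file containing RDF data works here.
graph = RdfGraph(
    source_file="http://www.w3.org/People/Berners-Lee/card",
    standard="rdf",
)

chain = GraphSparqlQAChain.from_llm(
    ChatOpenAI(temperature=0), graph=graph, verbose=True
)

# The chain classifies the request via sparql_intent_prompt (SELECT vs. UPDATE),
# generates a query with the matching prompt, runs it against the graph,
# and phrases the result using qa_prompt.
chain.run("What is Tim Berners-Lee's work homepage?")
```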
{"id": "f2c58bfe93c9-0", "text": "langchain.chains.router.multi_prompt.MultiPromptChain\u00b6\nclass langchain.chains.router.multi_prompt.MultiPromptChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, router_chain: RouterChain, destination_chains: Mapping[str, LLMChain], default_chain: LLMChain, silent_errors: bool = False)[source]\u00b6\nBases: MultiRouteChain\nA multi-route chain that uses an LLM router chain to choose amongst prompts.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam default_chain: LLMChain [Required]\u00b6\nDefault chain to use when router doesn\u2019t map input to one of the destinations.\nparam destination_chains: Mapping[str, LLMChain] [Required]\u00b6\nMap of name to candidate chains that inputs can be routed to.\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.router.multi_prompt.MultiPromptChain.html"}
{"id": "f2c58bfe93c9-1", "text": "and at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None.\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to, e.g., identify a specific instance of a chain with its use case.\nparam router_chain: RouterChain [Required]\u00b6\nChain for deciding a destination chain and the input to it.\nparam silent_errors: bool = False\u00b6\nIf True, use default_chain when an invalid destination name is provided.\nDefaults to False.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None.\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to, e.g., identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.router.multi_prompt.MultiPromptChain.html"}
{"id": "f2c58bfe93c9-2", "text": "only one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None.\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.router.multi_prompt.MultiPromptChain.html"}
{"id": "f2c58bfe93c9-3", "text": "chain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None.\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.router.multi_prompt.MultiPromptChain.html"}
{"id": "f2c58bfe93c9-4", "text": "sole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n.. code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nclassmethod from_prompts(llm: BaseLanguageModel, prompt_infos: List[Dict[str, str]], default_chain: Optional[LLMChain] = None, **kwargs: Any) \u2192 MultiPromptChain[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.router.multi_prompt.MultiPromptChain.html"}
{"id": "f2c58bfe93c9-5", "text": "Convenience constructor for instantiating from destination prompts.\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.router.multi_prompt.MultiPromptChain.html"}
{"id": "f2c58bfe93c9-6", "text": "The other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.router.multi_prompt.MultiPromptChain.html"}
{"id": "f2c58bfe93c9-7", "text": "Set the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\ne.g. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\ne.g. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.router.multi_prompt.MultiPromptChain.html"}
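A short sketch of the from_prompts convenience constructor documented above. It is not part of the original page; the prompt_infos keys (name, description, prompt_template) follow the documented List[Dict[str, str]] shape, and the destination topics are illustrative. The router LLM chain reads the descriptions to pick a destination; inputs matching nothing fall through to default_chain (or to silent_errors behavior).

```python
from langchain.chains.router import MultiPromptChain
from langchain.llms import OpenAI

prompt_infos = [
    {
        "name": "physics",
        "description": "Good for answering questions about physics",
        "prompt_template": "You are a physics professor.\n\nQuestion: {input}",
    },
    {
        "name": "math",
        "description": "Good for answering math questions",
        "prompt_template": "You are a mathematician.\n\nQuestion: {input}",
    },
]

# Builds the router chain and one LLMChain per destination in a single call.
chain = MultiPromptChain.from_prompts(OpenAI(), prompt_infos)
print(chain.run("What is Newton's second law?"))
```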
{"id": "82db0dfa79f1-0", "text": "langchain.chains.retrieval_qa.base.VectorDBQA\u00b6\nclass langchain.chains.retrieval_qa.base.VectorDBQA(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, combine_documents_chain: BaseCombineDocumentsChain, input_key: str = 'query', output_key: str = 'result', return_source_documents: bool = False, vectorstore: VectorStore, k: int = 4, search_type: str = 'similarity', search_kwargs: Dict[str, Any] = None)[source]\u00b6\nBases: BaseRetrievalQA\nChain for question-answering against a vector database.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam combine_documents_chain: BaseCombineDocumentsChain [Required]\u00b6\nChain to use to combine the documents.\nparam k: int = 4\u00b6\nNumber of documents to query for.\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.VectorDBQA.html"}
{"id": "82db0dfa79f1-1", "text": "Optional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None.\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to, e.g., identify a specific instance of a chain with its use case.\nparam return_source_documents: bool = False\u00b6\nReturn the source documents.\nparam search_kwargs: Dict[str, Any] [Optional]\u00b6\nExtra search args.\nparam search_type: str = 'similarity'\u00b6\nSearch type to use over vectorstore. similarity or mmr.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None.\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to, e.g., identify a specific instance of a chain with its use case.\nparam vectorstore: VectorStore [Required]\u00b6\nVector Database to connect to.\nparam verbose: bool [Optional]\u00b6\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.VectorDBQA.html"}
{"id": "82db0dfa79f1-2", "text": "will be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None.\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.VectorDBQA.html"}
{"id": "82db0dfa79f1-3", "text": "Returns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None.\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.VectorDBQA.html"}
{"id": "82db0dfa79f1-4", "text": "Call the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.VectorDBQA.html"}
{"id": "82db0dfa79f1-5", "text": "# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n.. code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nclassmethod from_chain_type(llm: BaseLanguageModel, chain_type: str = 'stuff', chain_type_kwargs: Optional[dict] = None, **kwargs: Any) \u2192 BaseRetrievalQA\u00b6\nLoad chain from chain type.\nclassmethod from_llm(llm: BaseLanguageModel, prompt: Optional[PromptTemplate] = None, **kwargs: Any) \u2192 BaseRetrievalQA\u00b6\nInitialize from LLM.\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.VectorDBQA.html"}
{"id": "82db0dfa79f1-6", "text": "Validate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.VectorDBQA.html"}
{"id": "82db0dfa79f1-7", "text": "addition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_search_type\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate search type.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\ne.g. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.VectorDBQA.html"}
{"id": "82db0dfa79f1-8", "text": "property lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\ne.g. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\nallow_population_by_field_name = True\u00b6\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.VectorDBQA.html"}
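A minimal sketch of the from_chain_type constructor documented above, not from the original page. The corpus, embedding model, and FAISS store are illustrative assumptions; note also that the raise_deprecation validator above indicates VectorDBQA is deprecated in favor of the retriever-based RetrievalQA, so new code should prefer that class.

```python
from langchain.chains import VectorDBQA
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

# Hypothetical one-document corpus; any VectorStore implementation works here.
vectorstore = FAISS.from_texts(
    ["LangChain lets you compose LLM calls into chains."],
    OpenAIEmbeddings(),
)

# chain_type="stuff" stuffs the top-k retrieved documents into one prompt;
# k, search_type, and return_source_documents map onto the params above.
qa = VectorDBQA.from_chain_type(
    llm=OpenAI(),
    chain_type="stuff",
    vectorstore=vectorstore,
    k=4,
    return_source_documents=True,
)

# With return_source_documents=True the chain has two outputs, so call it
# with the input dict rather than .run(); input_key defaults to "query".
result = qa({"query": "What does LangChain do?"})
print(result["result"])
```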
{"id": "9393ca7e0b79-0", "text": "langchain.chains.transform.TransformChain\u00b6\nclass langchain.chains.transform.TransformChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, input_variables: List[str], output_variables: List[str], transform: Callable[[Dict[str, str]], Dict[str, str]])[source]\u00b6\nBases: Chain\nChain that transforms the chain output.\nExample\nfrom langchain import TransformChain\ntransform_chain = TransformChain(input_variables=[\"text\"],\n output_variables=[\"entities\"], transform=func)\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam input_variables: List[str] [Required]\u00b6\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.transform.TransformChain.html"}
{"id": "9393ca7e0b79-1", "text": "for the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None.\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to, e.g., identify a specific instance of a chain with its use case.\nparam output_variables: List[str] [Required]\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None.\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to, e.g., identify a specific instance of a chain with its use case.\nparam transform: Callable[[Dict[str, str]], Dict[str, str]] [Required]\u00b6\nparam verbose: bool [Optional]\u00b6\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.transform.TransformChain.html"}
{"id": "9393ca7e0b79-2", "text": "chain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None.\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.transform.TransformChain.html"}
{"id": "9393ca7e0b79-3", "text": "tags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None.\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.transform.TransformChain.html"}
{"id": "9393ca7e0b79-4", "text": "these runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n.. code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.transform.TransformChain.html"}
{"id": "9393ca7e0b79-5", "text": "Chain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.transform.TransformChain.html"}
{"id": "9393ca7e0b79-6", "text": "addition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.transform.TransformChain.html"}
{"id": "9393ca7e0b79-7", "text": "property lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\ne.g. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\ne.g. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.transform.TransformChain.html"}
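The docstring example above was corrected to pass the callable itself (transform=func) rather than its return value (transform=func()). Here is a self-contained, runnable sketch of the same pattern; the extract_entities transform is a hypothetical stand-in added for illustration and needs no API key:

```python
from langchain.chains import TransformChain

def extract_entities(inputs: dict) -> dict:
    # Toy stand-in for a real transform: collect title-cased words.
    words = inputs["text"].split()
    return {"entities": [w for w in words if w.istitle()]}

chain = TransformChain(
    input_variables=["text"],
    output_variables=["entities"],
    transform=extract_entities,  # pass the callable itself, not its result
)

# Calling the chain returns a dict containing inputs plus the new outputs.
result = chain({"text": "Ada Lovelace met Charles Babbage in London"})
print(result["entities"])  # -> ['Ada', 'Lovelace', 'Charles', 'Babbage', 'London']
```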
Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam chains: List[langchain.chains.base.Chain] [Required]\u00b6\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.sequential.SimpleSequentialChain.html"} {"id": "690391622ce8-1", "text": "Optional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam strip_outputs: bool = False\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.sequential.SimpleSequentialChain.html"} {"id": "690391622ce8-2", "text": "these runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. 
Should contain all outputs specified inChain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.sequential.SimpleSequentialChain.html"} {"id": "690391622ce8-3", "text": "metadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified inChain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.sequential.SimpleSequentialChain.html"} {"id": "690391622ce8-4", "text": "these runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to benull.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n..code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.sequential.SimpleSequentialChain.html"} {"id": "690391622ce8-5", "text": "Validate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. 
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing the chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."

save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects the Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to the file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")

validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.

to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶

validator validate_chains » all fields[source]¶
Validate that the chains are all single input/output.

property lc_attributes: Dict¶
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.

property lc_namespace: List[str]¶
Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].

property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.
property lc_serializable: bool¶
Return whether or not the class is serializable.

model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
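Putting the pieces together for this page's class, a minimal usage sketch for SimpleSequentialChain; the prompts and model settings are illustrative only, and per validate_chains above each sub-chain must take a single input and produce a single output:

from langchain.chains import LLMChain, SimpleSequentialChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0.7)
synopsis_chain = LLMChain(llm=llm, prompt=PromptTemplate(
    input_variables=["title"],
    template="Write a one-paragraph synopsis for a play titled {title}.",
))
review_chain = LLMChain(llm=llm, prompt=PromptTemplate(
    input_variables=["synopsis"],
    template="Write a short review of this synopsis:\n\n{synopsis}",
))

# The single output of each chain is fed as the single input of the next.
overall_chain = SimpleSequentialChain(chains=[synopsis_chain, review_chain])
review = overall_chain.run("Tragedy at Sunset on the Beach")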
langchain.chains.openai_functions.base.create_openai_fn_chain¶
langchain.chains.openai_functions.base.create_openai_fn_chain(functions: Sequence[Union[Dict[str, Any], Type[BaseModel], Callable]], llm: BaseLanguageModel, prompt: BasePromptTemplate, *, output_parser: Optional[BaseLLMOutputParser] = None, **kwargs: Any) → LLMChain[source]¶
Create an LLM chain that uses OpenAI functions.
Parameters
functions – A sequence of dictionaries, pydantic.BaseModel classes, or Python functions. If dictionaries are passed in, they are assumed to already be valid OpenAI function specs. If only a single function is passed in, then it will be enforced that the model use that function. pydantic.BaseModels and Python functions should have docstrings describing what the function does. For best results, pydantic.BaseModels should have descriptions of the parameters, and Python functions should have Google Python style args descriptions in the docstring. Additionally, Python functions should only use primitive types (str, int, float, bool) or pydantic.BaseModels for arguments.
llm – Language model to use, assumed to support the OpenAI function-calling API.
prompt – BasePromptTemplate to pass to the model.
output_parser – BaseLLMOutputParser to use for parsing model outputs. By default will be inferred from the function types. If pydantic.BaseModels are passed in, then the OutputParser will try to parse outputs using those. Otherwise model outputs will simply be parsed as JSON. If multiple functions are passed in and they are not pydantic.BaseModels, the chain output will include both the name of the function that was returned and the arguments to pass to the function.
Returns
An LLMChain that will pass the given functions to the model when run.
Example
from typing import Optional

from langchain.chains.openai_functions import create_openai_fn_chain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.schema import HumanMessage, SystemMessage
from pydantic import BaseModel, Field

class RecordPerson(BaseModel):
    """Record some identifying information about a person."""
    name: str = Field(..., description="The person's name")
    age: int = Field(..., description="The person's age")
    fav_food: Optional[str] = Field(None, description="The person's favorite food")

class RecordDog(BaseModel):
    """Record some identifying information about a dog."""
    name: str = Field(..., description="The dog's name")
    color: str = Field(..., description="The dog's color")
    fav_food: Optional[str] = Field(None, description="The dog's favorite food")

llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
prompt_msgs = [
    SystemMessage(content="You are a world class algorithm for recording entities"),
    HumanMessage(content="Make calls to the relevant function to record the entities in the following input:"),
    HumanMessagePromptTemplate.from_template("{input}"),
    HumanMessage(content="Tips: Make sure to answer in the correct format"),
]
prompt = ChatPromptTemplate(messages=prompt_msgs)
chain = create_openai_fn_chain([RecordPerson, RecordDog], llm, prompt)
chain.run("Harry was a chubby brown beagle who loved chicken")
# -> RecordDog(name="Harry", color="brown", fav_food="chicken")
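Plain Python functions work too, per the functions parameter above. A sketch reusing the llm and prompt from the example; the record_person function here is hypothetical:

from typing import Optional

def record_person(name: str, age: int, fav_food: Optional[str] = None) -> dict:
    """Record some identifying information about a person.

    Args:
        name: The person's name.
        age: The person's age.
        fav_food: The person's favorite food.
    """
    return {"name": name, "age": age, "fav_food": fav_food}

# With a single function, the model is forced to call it, and the
# output is the parsed JSON arguments.
chain = create_openai_fn_chain([record_person], llm, prompt)
chain.run("Sally is 13 and loves pizza")
# -> roughly {"name": "Sally", "age": 13, "fav_food": "pizza"}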
langchain.chains.combine_documents.base.AnalyzeDocumentChain¶
class langchain.chains.combine_documents.base.AnalyzeDocumentChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, input_key: str = 'input_document', text_splitter: TextSplitter = None, combine_docs_chain: BaseCombineDocumentsChain)[source]¶
Bases: Chain
Chain that splits a document, then analyzes it in pieces.
This chain is parameterized by a TextSplitter and a CombineDocumentsChain. It takes a single document as input, splits it up into chunks, and then passes those chunks to the CombineDocumentsChain.
Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods; see the Callback docs for full details.
param combine_docs_chain: langchain.chains.combine_documents.base.BaseCombineDocumentsChain [Required]¶
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory; please see the memory docs for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None. This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to e.g. identify a specific instance of a chain with its use case.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to e.g. identify a specific instance of a chain with its use case.
param text_splitter: langchain.text_splitter.TextSplitter [Optional]¶
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value.
AnalyzeDocumentChain inherits the standard Chain interface documented above for SimpleSequentialChain: __call__, acall, apply, run, arun, dict, prep_inputs, prep_outputs, save, to_json, to_json_not_implemented, the raise_callback_manager_deprecation and set_verbose validators, and the lc_attributes, lc_namespace, lc_secrets, and lc_serializable properties. Their signatures and behavior are identical and are not repeated here.

model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
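A minimal usage sketch for AnalyzeDocumentChain, assuming a summarization chain as the combine-documents chain; long_text is a placeholder for the document to analyze:

from langchain.chains import AnalyzeDocumentChain
from langchain.chains.summarize import load_summarize_chain
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0)
summarize_chain = load_summarize_chain(llm, chain_type="map_reduce")

# text_splitter is optional; a default splitter is used when none is given.
summarize_document_chain = AnalyzeDocumentChain(combine_docs_chain=summarize_chain)
summary = summarize_document_chain.run(input_document=long_text)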
langchain.chains.openai_functions.base.convert_python_function_to_openai_function¶
langchain.chains.openai_functions.base.convert_python_function_to_openai_function(function: Callable) → Dict[str, Any][source]¶
Convert a Python function to an OpenAI function-calling API compatible dict.
Assumes the Python function has type hints and a docstring with a description. If the docstring has Google Python style argument descriptions, these will be included as well.
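A small sketch of the conversion; multiply is a hypothetical function, and the returned dict is assumed to follow the OpenAI function schema:

from langchain.chains.openai_functions.base import (
    convert_python_function_to_openai_function,
)

def multiply(a: int, b: int) -> int:
    """Multiply two integers.

    Args:
        a: The first factor.
        b: The second factor.
    """
    return a * b

fn = convert_python_function_to_openai_function(multiply)
# fn is a dict with "name", "description", and "parameters" entries,
# with the Args descriptions folded into the parameter schema.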
langchain.chains.flare.base.FlareChain¶
class langchain.chains.flare.base.FlareChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, question_generator_chain: QuestionGeneratorChain, response_chain: _ResponseChain = None, output_parser: FinishedOutputParser = None, retriever: BaseRetriever, min_prob: float = 0.2, min_token_gap: int = 5, num_pad_tokens: int = 2, max_iter: int = 10, start_with_retrieval: bool = True)[source]¶
Bases: Chain
Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods; see the Callback docs for full details.
param max_iter: int = 10¶
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory; please see the memory docs for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None. This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to e.g. identify a specific instance of a chain with its use case.
param min_prob: float = 0.2¶
param min_token_gap: int = 5¶
param num_pad_tokens: int = 2¶
param output_parser: FinishedOutputParser [Optional]¶
param question_generator_chain: QuestionGeneratorChain [Required]¶
param response_chain: _ResponseChain [Optional]¶
param retriever: BaseRetriever [Required]¶
param start_with_retrieval: bool = True¶
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to e.g. identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value.
FlareChain inherits the same standard Chain interface documented above for SimpleSequentialChain (__call__, acall, apply, run, arun, dict, prep_inputs, prep_outputs, save, to_json, to_json_not_implemented, the raise_callback_manager_deprecation and set_verbose validators, and the lc_attributes, lc_namespace, lc_secrets, and lc_serializable properties). Class-specific members:

classmethod from_llm(llm: BaseLanguageModel, max_generation_len: int = 32, **kwargs: Any) → FlareChain[source]¶

property input_keys: List[str]¶
Return the keys expected to be in the chain input.

property output_keys: List[str]¶
Return the keys expected to be in the chain output.

model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
langchain.chains.query_constructor.ir.Visitor¶
class langchain.chains.query_constructor.ir.Visitor[source]¶
Bases: ABC
Defines the interface for IR translation using the visitor pattern.
Methods
__init__()
visit_comparison(comparison) – Translate a Comparison.
visit_operation(operation) – Translate an Operation.
visit_structured_query(structured_query) – Translate a StructuredQuery.
Attributes
allowed_comparators
allowed_operators
abstract visit_comparison(comparison: Comparison) → Any[source]¶
Translate a Comparison.
abstract visit_operation(operation: Operation) → Any[source]¶
Translate an Operation.
abstract visit_structured_query(structured_query: StructuredQuery) → Any[source]¶
Translate a StructuredQuery.
allowed_comparators: Optional[Sequence[langchain.chains.query_constructor.ir.Comparator]] = None¶
allowed_operators: Optional[Sequence[langchain.chains.query_constructor.ir.Operator]] = None¶
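A sketch of a concrete Visitor that renders the IR as nested dicts. The translator itself is illustrative, and the IR attribute names used here (attribute, value, operator, arguments, and the accept method on IR nodes) are assumed from this module's classes:

from typing import Any

from langchain.chains.query_constructor.ir import (
    Comparison, Operation, StructuredQuery, Visitor,
)

class DictTranslator(Visitor):
    """Illustrative translator from the query-constructor IR to dicts."""

    def visit_comparison(self, comparison: Comparison) -> Any:
        # e.g. {"year": {"eq": 1994}}
        return {comparison.attribute: {comparison.comparator.value: comparison.value}}

    def visit_operation(self, operation: Operation) -> Any:
        # Recurse into sub-expressions via double dispatch.
        return {operation.operator.value: [arg.accept(self) for arg in operation.arguments]}

    def visit_structured_query(self, structured_query: StructuredQuery) -> Any:
        filter_ = structured_query.filter.accept(self) if structured_query.filter else None
        return {"query": structured_query.query, "filter": filter_}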
langchain.chains.qa_with_sources.loading.LoadingCallable¶
class langchain.chains.qa_with_sources.loading.LoadingCallable(*args, **kwargs)[source]¶
Bases: Protocol
Interface for loading the combine documents chain.
Methods
__init__(*args, **kwargs)
__call__(llm: BaseLanguageModel, **kwargs: Any) → BaseCombineDocumentsChain[source]¶
Callable to load the combine documents chain.

langchain.chains.openai_functions.qa_with_structure.create_qa_with_sources_chain¶
langchain.chains.openai_functions.qa_with_structure.create_qa_with_sources_chain(llm: BaseLanguageModel, **kwargs: Any) → LLMChain[source]¶
Create a question answering chain that returns an answer with sources.
Parameters
llm – Language model to use for the chain.
**kwargs – Keyword arguments to pass to create_qa_with_structure_chain.
Returns
Chain (LLMChain) that can be used to answer questions with citations.
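A minimal construction sketch for create_qa_with_sources_chain, using the module path from the heading above:

from langchain.chains.openai_functions.qa_with_structure import (
    create_qa_with_sources_chain,
)
from langchain.chat_models import ChatOpenAI

qa_chain = create_qa_with_sources_chain(ChatOpenAI(temperature=0))
# qa_chain is an LLMChain; it is typically wrapped in a combine-documents
# chain so that retrieved documents become the model's context.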
langchain.chains.router.llm_router.LLMRouterChain¶
class langchain.chains.router.llm_router.LLMRouterChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, llm_chain: LLMChain)[source]¶
Bases: RouterChain
A router chain that uses an LLM chain to perform routing.
Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods; see the Callback docs for full details.
param llm_chain: LLMChain [Required]¶
LLM chain used to perform routing.
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory; please see the memory docs for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None. This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to e.g. identify a specific instance of a chain with its use case.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to e.g. identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value.
LLMRouterChain inherits the same standard Chain interface documented above for SimpleSequentialChain (__call__, acall, apply, run, arun, dict, prep_inputs, prep_outputs, save, to_json, to_json_not_implemented, the raise_callback_manager_deprecation and set_verbose validators, and the lc_attributes, lc_namespace, lc_secrets, and lc_serializable properties). Class-specific members:

async aroute(inputs: Dict[str, Any], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Route¶

route(inputs: Dict[str, Any], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Route¶

classmethod from_llm(llm: BaseLanguageModel, prompt: BasePromptTemplate, **kwargs: Any) → LLMRouterChain[source]¶
Convenience constructor.

validator validate_prompt » all fields[source]¶

property output_keys: List[str]¶
Return the keys expected to be in the chain output.

model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
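A minimal routing sketch; llm and router_prompt are hypothetical, and the prompt is expected to carry an output parser that emits a destination and next inputs (e.g. this module's RouterOutputParser; see validate_prompt above):

from langchain.chains.router.llm_router import LLMRouterChain

router_chain = LLMRouterChain.from_llm(llm, router_prompt)
route = router_chain.route({"input": "What is black body radiation?"})
# route.destination names the chain to dispatch to;
# route.next_inputs holds the inputs for that chain.
print(route.destination, route.next_inputs)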
langchain.chains.graph_qa.hugegraph.HugeGraphQAChain¶
class langchain.chains.graph_qa.hugegraph.HugeGraphQAChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, graph: HugeGraph, gremlin_generation_chain: LLMChain, qa_chain: LLMChain, input_key: str = 'query', output_key: str = 'result')[source]¶
Bases: Chain
Chain for question-answering against a graph by generating Gremlin statements.
Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods; see the Callback docs for full details.
param graph: HugeGraph [Required]¶
param gremlin_generation_chain: LLMChain [Required]¶
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory; please see the memory docs for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None. This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to e.g. identify a specific instance of a chain with its use case.
param qa_chain: LLMChain [Required]¶
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to e.g. identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value.
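A minimal usage sketch, assuming a reachable HugeGraph server; the connection arguments below are placeholders:

from langchain.chains import HugeGraphQAChain
from langchain.chat_models import ChatOpenAI
from langchain.graphs import HugeGraph

graph = HugeGraph(
    username="admin", password="***",
    address="localhost", port=8081, graph="hugegraph",
)
# from_llm builds the gremlin-generation and QA sub-chains from one model.
chain = HugeGraphQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)
chain.run("Who played in The Matrix?")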
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param graph: HugeGraph [Required]¶
param gremlin_generation_chain: LLMChain [Required]¶
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to e.g. identify a specific instance of a chain with its use case.
param qa_chain: LLMChain [Required]¶
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to e.g. identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain's
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain's
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method
can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_llm(llm: BaseLanguageModel, *, qa_prompt: BasePromptTemplate = PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template="You are an assistant that helps to form nice and human understandable answers.\nThe information part contains the provided information that you must use to construct an answer.\nThe provided information is
authorative, you must never doubt it or try to use your internal knowledge to correct it.\nMake the answer sound as a response to the question. Do not mention that you based the result on the given information.\nIf the provided information is empty, say that you don't know the answer.\nInformation:\n{context}\n\nQuestion: {question}\nHelpful Answer:", template_format='f-string', validate_template=True), gremlin_prompt: BasePromptTemplate = PromptTemplate(input_variables=['schema', 'question'], output_parser=None, partial_variables={}, template='Task:Generate Gremlin statement to query a graph database.\nInstructions:\nUse only the provided relationship types and properties in the schema.\nDo not use any other relationship types or properties that are not provided.\nSchema:\n{schema}\nNote: Do not include any explanations or apologies in your responses.\nDo not respond to any questions that might ask anything else than for you to construct a Gremlin statement.\nDo not include any text except the generated Gremlin statement.\n\nThe question is:\n{question}', template_format='f-string', validate_template=True), **kwargs: Any) → HugeGraphQAChain[source]¶
Initialize from LLM.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain's
memory.
Returns
A dictionary of all inputs, including those added by the chain's memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
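For example, since this chain's input_key defaults to 'query', prep_inputs maps a bare string onto that key (a sketch, assuming no memory is attached):
chain.prep_inputs("Who directed The Godfather?")
# -> {"query": "Who directed The Godfather?"}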
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method
can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
langchain.chains.constitutional_ai.models.ConstitutionalPrinciple¶
class langchain.chains.constitutional_ai.models.ConstitutionalPrinciple(*, critique_request: str, revision_request: str, name: str = 'Constitutional Principle')[source]¶
Bases: BaseModel
Class for a constitutional principle.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param critique_request: str [Required]¶
param name: str = 'Constitutional Principle'¶
param revision_request: str [Required]¶
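A principle is typically paired with ConstitutionalChain, which critiques and revises another chain's output. A minimal sketch; the principle text and prompt are illustrative placeholders:
from langchain.chains import LLMChain
from langchain.chains.constitutional_ai.base import ConstitutionalChain
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0)
# The chain whose output will be critiqued and revised.
qa_chain = LLMChain(llm=llm, prompt=PromptTemplate.from_template("{question}"))
ethical_principle = ConstitutionalPrinciple(
    name="Ethical Principle",
    critique_request="The model should only talk about ethical and legal things.",
    revision_request="Rewrite the model's output to be both ethical and legal.",
)
constitutional_chain = ConstitutionalChain.from_llm(
    chain=qa_chain,
    constitutional_principles=[ethical_principle],
    llm=llm,
)
constitutional_chain.run(question="...")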
langchain.chains.openai_functions.citation_fuzzy_match.FactWithEvidence¶
class langchain.chains.openai_functions.citation_fuzzy_match.FactWithEvidence(*, fact: str, substring_quote: List[str])[source]¶
Bases: BaseModel
Class representing a single statement.
Each fact has a body and a list of sources.
If there are multiple facts, make sure to break them apart
such that each one only uses a set of sources that are relevant to it.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param fact: str [Required]¶
Body of the sentence, as part of a response
param substring_quote: List[str] [Required]¶
Each source should be a direct quote from the context, as a substring of the original content
get_spans(context: str) → Iterator[str][source]¶
langchain.chains.qa_with_sources.base.QAWithSourcesChain¶
class langchain.chains.qa_with_sources.base.QAWithSourcesChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, combine_documents_chain: BaseCombineDocumentsChain, question_key: str = 'question', input_docs_key: str = 'docs', answer_key: str = 'answer', sources_answer_key: str = 'sources', return_source_documents: bool = False)[source]¶
Bases: BaseQAWithSourcesChain
Question answering with sources over documents.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param combine_documents_chain: BaseCombineDocumentsChain [Required]¶
Chain to use to combine documents.
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to e.g. identify a specific instance of a chain with its use case.
param return_source_documents: bool = False¶
Return the source documents.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to e.g. identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain's
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain's
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
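A short apply sketch; for this chain each input dict needs both the default 'docs' and 'question' keys (see the from_chain_type sketch below for constructing chain and docs):
results = chain.apply([
    {"docs": docs, "question": "..."},
    {"docs": docs, "question": "..."},
])
# -> one output dict per input dict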
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method
can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_chain_type(llm: BaseLanguageModel, chain_type: str = 'stuff', chain_type_kwargs: Optional[dict] = None, **kwargs: Any) → BaseQAWithSourcesChain¶
Load chain from chain type.
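A minimal from_chain_type sketch; the document contents are fabricated placeholders, and inputs use the default 'docs' and 'question' keys:
from langchain.chains import QAWithSourcesChain
from langchain.docstore.document import Document
from langchain.llms import OpenAI

chain = QAWithSourcesChain.from_chain_type(llm=OpenAI(temperature=0), chain_type="stuff")
docs = [
    Document(page_content="...", metadata={"source": "doc-1"}),
    Document(page_content="...", metadata={"source": "doc-2"}),
]
# Returns a dict with 'answer' and 'sources' keys by default.
result = chain({"docs": docs, "question": "..."})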
classmethod from_llm(llm: BaseLanguageModel, document_prompt: BasePromptTemplate = PromptTemplate(input_variables=['page_content', 'source'], output_parser=None, partial_variables={}, template='Content: {page_content}\nSource: {source}', template_format='f-string', validate_template=True), question_prompt: BasePromptTemplate = PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template='Use the following portion of a long document to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\n{context}\nQuestion: {question}\nRelevant text, if any:', template_format='f-string', validate_template=True), combine_prompt: BasePromptTemplate = PromptTemplate(input_variables=['summaries', 'question'], output_parser=None, partial_variables={}, template='Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES"). \nIf you don\'t know the answer, just say that you don\'t know. Don\'t try to make up an answer.\nALWAYS return a "SOURCES" part in your answer.\n\nQUESTION: Which state/country\'s law governs the interpretation of the contract?\n=========\nContent: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts in relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for an injunction or other relief to protect its Intellectual Property Rights.\nSource: 28-pl\nContent: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other) right or remedy.\n\n11.7 Severability. The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall not affect the continuation in force of the remainder of the term (if any) and this Agreement.\n\n11.8 No Agency. Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of any kind between the parties.\n\n11.9 No Third-Party Beneficiaries.\nSource: 30-pl\nContent: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (as defined in Clause 8.5) or that such a violation is reasonably likely to occur,\nSource: 4-pl\n=========\nFINAL ANSWER: This Agreement is governed by English law.\nSOURCES: 28-pl\n\nQUESTION: What did the president say about Michael Jackson?\n=========\nContent: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia's Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n\nGroups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.\nSource: 0-pl\nContent: And we won't stop. \n\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \n\nLet's use this moment to reset. Let's stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease. \n\nLet's stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans. \n\nWe can't change how divided we've been. But we can change how we move forward—on COVID-19 and other issues we must face together. \n\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n\nOfficer Mora was 27 years old. \n\nOfficer Rivera was 22. \n\nBoth Dominican Americans who'd grown up on the same streets they later chose to patrol as police officers.
\n\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.\nSource: 24-pl\nContent: And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards. \n\nTo all Americans, I will be honest with you, as I've always promised. A Russian dictator, invading a foreign country, has costs around the world. \n\nAnd I'm taking robust action to make sure the pain of our sanctions is targeted at Russia's economy. And I will use every tool at our disposal to protect American businesses and consumers. \n\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. \n\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \n\nThese steps will help blunt gas prices here at home. And I know the news about what's happening can seem alarming. \n\nBut I want you to know that we are going to be okay.\nSource: 5-pl\nContent: More support for patients and families. \n\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \n\nIt's based on DARPA—the Defense Department project that led to the Internet, GPS, and so much more. \n\nARPA-H will have a singular purpose—to drive breakthroughs in cancer, Alzheimer's, diabetes, and more. \n\nA unity agenda for the nation. \n\nWe can do this. \n\nMy fellow Americans—tonight , we have gathered in a sacred space—the citadel of our democracy. \n\nIn this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. \n\nWe have fought for freedom, expanded liberty, defeated totalitarianism and terror. \n\nAnd built the strongest, freest, and most prosperous nation the world has ever known. \n\nNow is the hour. \n\nOur moment of responsibility. \n\nOur test of resolve and conscience, of history itself. \n\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged.
\n\nWell I know this nation.\nSource: 34-pl\n=========\nFINAL ANSWER: The president did not mention Michael Jackson.\nSOURCES:\n\nQUESTION: {question}\n=========\n{summaries}\n=========\nFINAL ANSWER:', template_format='f-string', validate_template=True), **kwargs: Any) → BaseQAWithSourcesChain¶
Construct the chain from an LLM.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain's
memory.
Returns
A dictionary of all inputs, including those added by the chain's memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method
can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks.
These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
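The counterpart for reloading a saved chain is load_chain; a brief sketch (the path is a placeholder matching the save example above):
from langchain.chains import load_chain

chain = load_chain("path/chain.yaml")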
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_naming » all fields¶
Fix backwards compatibility in naming.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain¶
class langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, combine_docs_chain: BaseCombineDocumentsChain, question_generator: LLMChain, output_key: str = 'answer', rephrase_question: bool = True, return_source_documents: bool = False, return_generated_question: bool = False, get_chat_history: Optional[Callable[[Union[Tuple[str, str], BaseMessage]], str]] = None, retriever: BaseRetriever, max_tokens_limit: Optional[int] = None)[source]¶
Bases: BaseConversationalRetrievalChain
Chain for having a conversation based on retrieved documents.
This chain takes in chat history (a list of messages) and new questions,
and then returns an answer to that question.
The algorithm for this chain consists of three parts:
1. Use the chat history and the new question to create a "standalone question".
This is done so that this question can be passed into the retrieval step to fetch
relevant documents. If only the new question was passed in, then relevant context
may be lacking. If the whole conversation was passed into retrieval, there may
be unnecessary information there that would distract from retrieval.
2. This new question is passed to the retriever and relevant documents are
returned.
3. The retrieved documents are passed to an LLM along with either the new question
(default behavior) or the original question and chat history to generate a final
response.
Example
from langchain.chains import (
    StuffDocumentsChain, LLMChain, ConversationalRetrievalChain
)
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
combine_docs_chain = StuffDocumentsChain(...)
vectorstore = ...
retriever = vectorstore.as_retriever()
# This controls how the standalone question is generated.
# Should take `chat_history` and `question` as input variables.
template = (
    "Combine the chat history and follow up question into "
    "a standalone question. Chat History: {chat_history}"
    "Follow up question: {question}"
)
prompt = PromptTemplate.from_template(template)
llm = OpenAI()
question_generator = LLMChain(llm=llm, prompt=prompt)
chain = ConversationalRetrievalChain(
    combine_docs_chain=combine_docs_chain,
    retriever=retriever,
    question_generator=question_generator,
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param combine_docs_chain: BaseCombineDocumentsChain [Required]¶
The chain used to combine any retrieved documents.
param get_chat_history: Optional[Callable[[CHAT_TURN_TYPE], str]] = None¶
An optional function to get a string of the chat history.
If None is provided, will use a default.
param max_tokens_limit: Optional[int] = None¶
If set, enforces that the documents returned are less than this limit.
This is only enforced if combine_docs_chain is of type StuffDocumentsChain.
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain.
Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to e.g. identify a specific instance of a chain with its use case.
param output_key: str = 'answer'¶
The output key to return the final answer of this chain in.
param question_generator: LLMChain [Required]¶
The chain used to generate a new question for the sake of retrieval.
This chain will take in the current question (with variable question)
and any chat history (with variable chat_history) and will produce
a new standalone question to be used later on.
param rephrase_question: bool = True¶
Whether or not to pass the new generated question to the combine_docs_chain.
If True, will pass the new generated question along.
If False, will only use the new generated question for retrieval and pass the
original question along to the combine_docs_chain.
param retriever: BaseRetriever [Required]¶
Retriever to use to fetch documents.
param return_generated_question: bool = False¶
Return the generated question as part of the final result.
param return_source_documents: bool = False¶
Return the retrieved source documents as part of the final result.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to e.g. identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain's
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks.
These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain's
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method
can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks.
These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_llm(llm: BaseLanguageModel, retriever: BaseRetriever, condense_question_prompt: BasePromptTemplate = PromptTemplate(input_variables=['chat_history', 'question'], output_parser=None, partial_variables={}, template='Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\n\nChat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone question:', template_format='f-string', validate_template=True), chain_type: str = 'stuff', verbose: bool = False, condense_question_llm: Optional[BaseLanguageModel] = None, combine_docs_chain_kwargs: Optional[Dict] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → BaseConversationalRetrievalChain[source]¶
Convenience method to load chain from LLM and retriever.
This provides some logic to create the question_generator chain
as well as the combine_docs_chain.
Parameters
llm – The default language model to use at every part of this chain
(e.g. in both the question generation and the answering)
retriever – The retriever to use to fetch relevant documents from.
condense_question_prompt – The prompt to use to condense the chat history
and new question into a standalone question.
chain_type – The chain type to use to create the combine_docs_chain, will
be sent to load_qa_chain.
verbose – Verbosity flag for logging to stdout.
condense_question_llm – The language model to use for condensing the chat
history and new question into a standalone question.
If none is
provided, will default to llm.
combine_docs_chain_kwargs – Parameters to pass as kwargs to load_qa_chain
when constructing the combine_docs_chain.
callbacks – Callbacks to pass to all subchains.
**kwargs – Additional parameters to pass when initializing
ConversationalRetrievalChain
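A typical from_llm setup and conversational loop, as a sketch; vectorstore stands in for any existing vector store:
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI

vectorstore = ...  # any existing vector store (e.g. FAISS, Chroma)
chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(temperature=0),
    retriever=vectorstore.as_retriever(),
)
chat_history = []
query = "..."
result = chain({"question": query, "chat_history": chat_history})
# Carry the turn forward so follow-up questions can be condensed.
chat_history.append((query, result["answer"]))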
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain's
memory.
Returns
A dictionary of all inputs, including those added by the chain's memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method
can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property input_keys: List[str]¶
Input keys.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
{"id": "43343692a9b5-10", "text": "Example\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty input_keys: List[str]\u00b6\nInput keys.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\nallow_population_by_field_name = True\u00b6\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain.html"} {"id": "f00be2e70ba8-0", "text": "langchain.chains.prompt_selector.is_chat_model\u00b6\nlangchain.chains.prompt_selector.is_chat_model(llm: BaseLanguageModel) \u2192 bool[source]\u00b6\nCheck if the language model is a chat model.\nParameters\nllm \u2013 Language model to check.\nReturns\nTrue if the language model is a BaseChatModel, False otherwise.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.prompt_selector.is_chat_model.html"} {"id": "eccd9857ebcf-0", "text": "langchain.chains.combine_documents.reduce.ReduceDocumentsChain\u00b6\nclass langchain.chains.combine_documents.reduce.ReduceDocumentsChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, input_key: str = 'input_documents', output_key: str = 'output_text', combine_documents_chain: BaseCombineDocumentsChain, collapse_documents_chain: Optional[BaseCombineDocumentsChain] = None, token_max: int = 3000)[source]\u00b6\nBases: BaseCombineDocumentsChain\nCombining documents by recursively reducing them.\nThis involves\ncombine_documents_chain\ncollapse_documents_chain\ncombine_documents_chain is ALWAYS provided. This is the final chain that is called.\nWe pass all previous results to this chain, and the output of this chain is\nreturned as a final result.\ncollapse_documents_chain is used if the documents passed in are too many to all\nbe passed to combine_documents_chain in one go. In this case,\ncollapse_documents_chain is called recursively on groups of documents that are\nas large as allowed.\nExample\nfrom langchain.chains import (\n StuffDocumentsChain, LLMChain, ReduceDocumentsChain\n)\nfrom langchain.prompts import PromptTemplate\nfrom langchain.llms import OpenAI\n# This controls how each document will be formatted. 
Specifically,\n# it will be passed to `format_document` - see that function for more\n# details.\ndocument_prompt = PromptTemplate(\n input_variables=[\"page_content\"],\n template=\"{page_content}\"\n)\ndocument_variable_name = \"context\"\nllm = OpenAI()\n# The prompt here should take as an input variable the\n# `document_variable_name`", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.reduce.ReduceDocumentsChain.html"} {"id": "eccd9857ebcf-1", "text": "# The prompt here should take as an input variable the\n# `document_variable_name`\nprompt = PromptTemplate.from_template(\n \"Summarize this content: {context}\"\n)\nllm_chain = LLMChain(llm=llm, prompt=prompt)\ncombine_documents_chain = StuffDocumentsChain(\n llm_chain=llm_chain,\n document_prompt=document_prompt,\n document_variable_name=document_variable_name\n)\nchain = ReduceDocumentsChain(\n combine_documents_chain=combine_documents_chain,\n)\n# If we wanted to, we could also pass in collapse_documents_chain\n# which is specifically aimed at collapsing documents BEFORE\n# the final call.\nprompt = PromptTemplate.from_template(\n \"Collapse this content: {context}\"\n)\nllm_chain = LLMChain(llm=llm, prompt=prompt)\ncollapse_documents_chain = StuffDocumentsChain(\n llm_chain=llm_chain,\n document_prompt=document_prompt,\n document_variable_name=document_variable_name\n)\nchain = ReduceDocumentsChain(\n combine_documents_chain=combine_documents_chain,\n collapse_documents_chain=collapse_documents_chain,\n)\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam collapse_documents_chain: Optional[BaseCombineDocumentsChain] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.reduce.ReduceDocumentsChain.html"} {"id": "eccd9857ebcf-2", "text": "param collapse_documents_chain: Optional[BaseCombineDocumentsChain] = None\u00b6\nChain to use to collapse documents if needed until they can all fit.\nIf None, will use the combine_documents_chain.\nThis is typically a StuffDocumentsChain.\nparam combine_documents_chain: BaseCombineDocumentsChain [Required]\u00b6\nFinal chain to call to combine documents.\nThis is typically a StuffDocumentsChain.\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. 
Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam token_max: int = 3000\u00b6\nThe maximum number of tokens to group documents into. For example, if\nset to 3000 then documents will be grouped into chunks of no greater than\n3000 tokens before trying to combine them into a smaller chunk.\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.reduce.ReduceDocumentsChain.html"} {"id": "eccd9857ebcf-3", "text": "Whether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.reduce.ReduceDocumentsChain.html"}
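Example (illustrative sketch): calling the chain through __call__ with the default keys from the signature above (input_key='input_documents', output_key='output_text'); chain and docs are assumed from the surrounding example.
result = chain({"input_documents": docs}, return_only_outputs=True)
print(result["output_text"])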
{"id": "eccd9857ebcf-4", "text": "Returns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nasync acombine_docs(docs: List[Document], token_max: Optional[int] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Tuple[str, dict][source]\u00b6\nCombine multiple documents recursively.\nParameters", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.reduce.ReduceDocumentsChain.html"} {"id": "eccd9857ebcf-5", "text": "Combine multiple documents recursively.\nParameters\ndocs \u2013 List of documents to combine, assumed that each one is less than\ntoken_max.\ntoken_max \u2013 Recursively creates groups of documents less than this number\nof tokens.\ncallbacks \u2013 Callbacks to be passed through\n**kwargs \u2013 additional parameters to be passed to LLM calls (like other\ninput variables besides the documents)\nReturns\nThe first element returned is the single string output. The second\nelement returned is a dictionary of other keys to return.
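Example (illustrative sketch of combine_docs/acombine_docs): documents are batched into groups of at most token_max tokens, each group is collapsed, and the process repeats until one final combine call fits; the Document inputs here are made up.
from langchain.schema import Document

docs = [Document(page_content=f"Section {i} ...") for i in range(12)]
# Override the chain-level token_max for this call only.
summary, extra = chain.combine_docs(docs, token_max=1000)
print(summary)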
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ncombine_docs(docs: List[Document], token_max: Optional[int] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Tuple[str, dict][source]\u00b6\nCombine multiple documents recursively.\nParameters\ndocs \u2013 List of documents to combine, assumed that each one is less than\ntoken_max.\ntoken_max \u2013 Recursively creates groups of documents less than this number\nof tokens.\ncallbacks \u2013 Callbacks to be passed through\n**kwargs \u2013 additional parameters to be passed to LLM calls (like other\ninput variables besides the documents)\nReturns\nThe first element returned is the single string output. The second\nelement returned is a dictionary of other keys to return.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.reduce.ReduceDocumentsChain.html"} {"id": "eccd9857ebcf-7", "text": "dict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. 
Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nprompt_length(docs: List[Document], **kwargs: Any) \u2192 Optional[int]\u00b6\nReturn the prompt length given the documents passed in.\nThis can be used by a caller to determine whether passing in a list\nof documents would exceed a certain prompt length. This is useful when\ntrying to ensure that the size of a prompt remains below a certain\ncontext limit.\nParameters", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.reduce.ReduceDocumentsChain.html"} {"id": "eccd9857ebcf-8", "text": "trying to ensure that the size of a prompt remains below a certain\ncontext limit.\nParameters\ndocs \u2013 List[Document], a list of documents to use to calculate the\ntotal prompt length.\nReturns\nReturns None if the method does not depend on the prompt length,\notherwise the length of the prompt in tokens.
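Example (illustrative sketch): using prompt_length on the underlying combine_documents_chain to decide whether a batch of documents still fits in one call; the 3000 threshold mirrors the token_max default above.
length = chain.combine_documents_chain.prompt_length(docs)
if length is None or length <= 3000:
    # Small enough for a single call to the final combine chain.
    output, _ = chain.combine_documents_chain.combine_docs(docs)
else:
    # Too large; let ReduceDocumentsChain collapse the documents first.
    output, _ = chain.combine_docs(docs)
print(output)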
validator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.reduce.ReduceDocumentsChain.html"} {"id": "eccd9857ebcf-9", "text": "directly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.reduce.ReduceDocumentsChain.html"} {"id": "eccd9857ebcf-10", "text": "model Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.reduce.ReduceDocumentsChain.html"} {"id": "bb45f37225a8-0", "text": "langchain.chains.query_constructor.ir.Comparator\u00b6\nclass langchain.chains.query_constructor.ir.Comparator(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]\u00b6\nBases: str, Enum\nEnumerator of the comparison operators.\nMethods\n__init__(*args,\u00a0**kwds)\ncapitalize()\nReturn a capitalized version of the string.\ncasefold()\nReturn a version of the string suitable for caseless comparisons.\ncenter(width[,\u00a0fillchar])\nReturn a centered string of length width.\ncount(sub[,\u00a0start[,\u00a0end]])\nReturn the number of non-overlapping occurrences of substring sub in string S[start:end].\nencode([encoding,\u00a0errors])\nEncode the string using the codec registered for encoding.\nendswith(suffix[,\u00a0start[,\u00a0end]])\nReturn True if S ends with the specified suffix, False otherwise.\nexpandtabs([tabsize])\nReturn a copy where all tab characters are expanded using spaces.\nfind(sub[,\u00a0start[,\u00a0end]])\nReturn the lowest index in S where substring sub is found, such that sub is contained within S[start:end].\nformat(*args,\u00a0**kwargs)\nReturn a formatted version of S, using substitutions from args and kwargs.\nformat_map(mapping)\nReturn a formatted version of S, using substitutions from mapping.\nindex(sub[,\u00a0start[,\u00a0end]])\nReturn the lowest index in S where substring sub is found, such that sub is contained within S[start:end].\nisalnum()\nReturn True if the string is an alpha-numeric string, False otherwise.\nisalpha()\nReturn True if the string is an alphabetic string, False otherwise.\nisascii()\nReturn True if all characters in the string are ASCII, False otherwise.\nisdecimal()\nReturn True if the string is a decimal string, False otherwise.\nisdigit()", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Comparator.html"} {"id": "bb45f37225a8-1", "text": "Return True if the string is a decimal string, False otherwise.\nisdigit()\nReturn True if the string is a digit string, False otherwise.\nisidentifier()\nReturn True if the string is a valid Python identifier, False otherwise.\nislower()\nReturn True if the string is a lowercase string, False otherwise.\nisnumeric()\nReturn True if the string is a numeric string, False otherwise.\nisprintable()\nReturn True if the string is printable, False otherwise.\nisspace()\nReturn True if the string is a whitespace string, False otherwise.\nistitle()\nReturn True if the string is a title-cased string, False otherwise.\nisupper()\nReturn True if the string is an uppercase string, False otherwise.\njoin(iterable,\u00a0/)\nConcatenate any number of strings.\nljust(width[,\u00a0fillchar])\nReturn a left-justified string of length width.\nlower()\nReturn a copy of the string converted to lowercase.\nlstrip([chars])\nReturn a copy of the string with leading whitespace 
removed.\nmaketrans\nReturn a translation table usable for str.translate().\npartition(sep,\u00a0/)\nPartition the string into three parts using the given separator.\nremoveprefix(prefix,\u00a0/)\nReturn a str with the given prefix string removed if present.\nremovesuffix(suffix,\u00a0/)\nReturn a str with the given suffix string removed if present.\nreplace(old,\u00a0new[,\u00a0count])\nReturn a copy with all occurrences of substring old replaced by new.\nrfind(sub[,\u00a0start[,\u00a0end]])\nReturn the highest index in S where substring sub is found, such that sub is contained within S[start:end].\nrindex(sub[,\u00a0start[,\u00a0end]])\nReturn the highest index in S where substring sub is found, such that sub is contained within S[start:end].", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Comparator.html"} {"id": "bb45f37225a8-2", "text": "rjust(width[,\u00a0fillchar])\nReturn a right-justified string of length width.\nrpartition(sep,\u00a0/)\nPartition the string into three parts using the given separator.\nrsplit([sep,\u00a0maxsplit])\nReturn a list of the substrings in the string, using sep as the separator string.\nrstrip([chars])\nReturn a copy of the string with trailing whitespace removed.\nsplit([sep,\u00a0maxsplit])\nReturn a list of the substrings in the string, using sep as the separator string.\nsplitlines([keepends])\nReturn a list of the lines in the string, breaking at line boundaries.\nstartswith(prefix[,\u00a0start[,\u00a0end]])\nReturn True if S starts with the specified prefix, False otherwise.\nstrip([chars])\nReturn a copy of the string with leading and trailing whitespace removed.\nswapcase()\nConvert uppercase characters to lowercase and lowercase characters to uppercase.\ntitle()\nReturn a version of the string where each word is titlecased.\ntranslate(table,\u00a0/)\nReplace each character in the string using the given translation table.\nupper()\nReturn a copy of the string converted to uppercase.\nzfill(width,\u00a0/)\nPad a numeric string with zeros on the left, to fill a field of the given width.\nAttributes\nEQ\nGT\nGTE\nLT\nLTE\nCONTAIN\nLIKE\ncapitalize()\u00b6\nReturn a capitalized version of the string.\nMore specifically, make the first character have upper case and the rest lower\ncase.\ncasefold()\u00b6\nReturn a version of the string suitable for caseless comparisons.\ncenter(width, fillchar=' ', /)\u00b6\nReturn a centered string of length width.\nPadding is done using the specified fill character (default is a space).\ncount(sub[, start[, end]]) \u2192 int\u00b6\nReturn the number of non-overlapping occurrences of substring sub in", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Comparator.html"} {"id": "bb45f37225a8-3", "text": "Return the number of non-overlapping occurrences of substring sub in\nstring S[start:end]. Optional arguments start and end are\ninterpreted as in slice notation.\nencode(encoding='utf-8', errors='strict')\u00b6\nEncode the string using the codec registered for encoding.\nencoding \u2013 The encoding in which to encode the string.\nerrors \u2013 The error handling scheme to use for encoding errors.\nThe default is \u2018strict\u2019 meaning that encoding errors raise a\nUnicodeEncodeError. 
Other possible values are \u2018ignore\u2019, \u2018replace\u2019 and\n\u2018xmlcharrefreplace\u2019 as well as any other name registered with\ncodecs.register_error that can handle UnicodeEncodeErrors.\nendswith(suffix[, start[, end]]) \u2192 bool\u00b6\nReturn True if S ends with the specified suffix, False otherwise.\nWith optional start, test S beginning at that position.\nWith optional end, stop comparing S at that position.\nsuffix can also be a tuple of strings to try.\nexpandtabs(tabsize=8)\u00b6\nReturn a copy where all tab characters are expanded using spaces.\nIf tabsize is not given, a tab size of 8 characters is assumed.\nfind(sub[, start[, end]]) \u2192 int\u00b6\nReturn the lowest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nReturn -1 on failure.\nformat(*args, **kwargs) \u2192 str\u00b6\nReturn a formatted version of S, using substitutions from args and kwargs.\nThe substitutions are identified by braces (\u2018{\u2019 and \u2018}\u2019).\nformat_map(mapping) \u2192 str\u00b6\nReturn a formatted version of S, using substitutions from mapping.\nThe substitutions are identified by braces (\u2018{\u2019 and \u2018}\u2019).\nindex(sub[, start[, end]]) \u2192 int\u00b6\nReturn the lowest index in S where substring sub is found,", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Comparator.html"} {"id": "bb45f37225a8-4", "text": "Return the lowest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nRaises ValueError when the substring is not found.\nisalnum()\u00b6\nReturn True if the string is an alpha-numeric string, False otherwise.\nA string is alpha-numeric if all characters in the string are alpha-numeric and\nthere is at least one character in the string.\nisalpha()\u00b6\nReturn True if the string is an alphabetic string, False otherwise.\nA string is alphabetic if all characters in the string are alphabetic and there\nis at least one character in the string.\nisascii()\u00b6\nReturn True if all characters in the string are ASCII, False otherwise.\nASCII characters have code points in the range U+0000-U+007F.\nEmpty string is ASCII too.\nisdecimal()\u00b6\nReturn True if the string is a decimal string, False otherwise.\nA string is a decimal string if all characters in the string are decimal and\nthere is at least one character in the string.\nisdigit()\u00b6\nReturn True if the string is a digit string, False otherwise.\nA string is a digit string if all characters in the string are digits and there\nis at least one character in the string.\nisidentifier()\u00b6\nReturn True if the string is a valid Python identifier, False otherwise.\nCall keyword.iskeyword(s) to test whether string s is a reserved identifier,\nsuch as \u201cdef\u201d or \u201cclass\u201d.\nislower()\u00b6\nReturn True if the string is a lowercase string, False otherwise.\nA string is lowercase if all cased characters in the string are lowercase and\nthere is at least one cased character in the string.\nisnumeric()\u00b6\nReturn True if the string is a numeric string, False otherwise.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Comparator.html"} {"id": "bb45f37225a8-5", "text": "isnumeric()\u00b6\nReturn True if the string is a numeric string, False otherwise.\nA string is 
numeric if all characters in the string are numeric and there is at\nleast one character in the string.\nisprintable()\u00b6\nReturn True if the string is printable, False otherwise.\nA string is printable if all of its characters are considered printable in\nrepr() or if it is empty.\nisspace()\u00b6\nReturn True if the string is a whitespace string, False otherwise.\nA string is whitespace if all characters in the string are whitespace and there\nis at least one character in the string.\nistitle()\u00b6\nReturn True if the string is a title-cased string, False otherwise.\nIn a title-cased string, upper- and title-case characters may only\nfollow uncased characters and lowercase characters only cased ones.\nisupper()\u00b6\nReturn True if the string is an uppercase string, False otherwise.\nA string is uppercase if all cased characters in the string are uppercase and\nthere is at least one cased character in the string.\njoin(iterable, /)\u00b6\nConcatenate any number of strings.\nThe string whose method is called is inserted in between each given string.\nThe result is returned as a new string.\nExample: \u2018.\u2019.join([\u2018ab\u2019, \u2018pq\u2019, \u2018rs\u2019]) -> \u2018ab.pq.rs\u2019\nljust(width, fillchar=' ', /)\u00b6\nReturn a left-justified string of length width.\nPadding is done using the specified fill character (default is a space).\nlower()\u00b6\nReturn a copy of the string converted to lowercase.\nlstrip(chars=None, /)\u00b6\nReturn a copy of the string with leading whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nstatic maketrans()\u00b6\nReturn a translation table usable for str.translate().", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Comparator.html"} {"id": "bb45f37225a8-6", "text": "static maketrans()\u00b6\nReturn a translation table usable for str.translate().\nIf there is only one argument, it must be a dictionary mapping Unicode\nordinals (integers) or characters to Unicode ordinals, strings or None.\nCharacter keys will be then converted to ordinals.\nIf there are two arguments, they must be strings of equal length, and\nin the resulting dictionary, each character in x will be mapped to the\ncharacter at the same position in y. If there is a third argument, it\nmust be a string, whose characters will be mapped to None in the result.\npartition(sep, /)\u00b6\nPartition the string into three parts using the given separator.\nThis will search for the separator in the string. If the separator is found,\nreturns a 3-tuple containing the part before the separator, the separator\nitself, and the part after it.\nIf the separator is not found, returns a 3-tuple containing the original string\nand two empty strings.\nremoveprefix(prefix, /)\u00b6\nReturn a str with the given prefix string removed if present.\nIf the string starts with the prefix string, return string[len(prefix):].\nOtherwise, return a copy of the original string.\nremovesuffix(suffix, /)\u00b6\nReturn a str with the given suffix string removed if present.\nIf the string ends with the suffix string and that suffix is not empty,\nreturn string[:-len(suffix)]. 
Otherwise, return a copy of the original\nstring.\nreplace(old, new, count=-1, /)\u00b6\nReturn a copy with all occurrences of substring old replaced by new.\ncount \u2013 Maximum number of occurrences to replace.\n-1 (the default value) means replace all occurrences.\nIf the optional argument count is given, only the first count occurrences are\nreplaced.\nrfind(sub[, start[, end]]) \u2192 int\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Comparator.html"} {"id": "bb45f37225a8-7", "text": "replaced.\nrfind(sub[, start[, end]]) \u2192 int\u00b6\nReturn the highest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nReturn -1 on failure.\nrindex(sub[, start[, end]]) \u2192 int\u00b6\nReturn the highest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nRaises ValueError when the substring is not found.\nrjust(width, fillchar=' ', /)\u00b6\nReturn a right-justified string of length width.\nPadding is done using the specified fill character (default is a space).\nrpartition(sep, /)\u00b6\nPartition the string into three parts using the given separator.\nThis will search for the separator in the string, starting at the end. If\nthe separator is found, returns a 3-tuple containing the part before the\nseparator, the separator itself, and the part after it.\nIf the separator is not found, returns a 3-tuple containing two empty strings\nand the original string.\nrsplit(sep=None, maxsplit=-1)\u00b6\nReturn a list of the substrings in the string, using sep as the separator string.\nsep \u2013 The separator used to split the string.\nWhen set to None (the default value), will split on any whitespace\ncharacter (including \\n \\r \\t \\f and spaces) and will discard\nempty strings from the result.\nmaxsplit \u2013 Maximum number of splits (starting from the left).\n-1 (the default value) means no limit.\nSplitting starts at the end of the string and works to the front.\nrstrip(chars=None, /)\u00b6\nReturn a copy of the string with trailing whitespace removed.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Comparator.html"} {"id": "bb45f37225a8-8", "text": "rstrip(chars=None, /)\u00b6\nReturn a copy of the string with trailing whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nsplit(sep=None, maxsplit=-1)\u00b6\nReturn a list of the substrings in the string, using sep as the separator string.\nsep \u2013 The separator used to split the string.\nWhen set to None (the default value), will split on any whitespace\ncharacter (including \\n \\r \\t \\f and spaces) and will discard\nempty strings from the result.\nmaxsplit \u2013 Maximum number of splits (starting from the left).\n-1 (the default value) means no limit.\nNote, str.split() is mainly useful for data that has been intentionally\ndelimited. 
With natural text that includes punctuation, consider using\nthe regular expression module.\nsplitlines(keepends=False)\u00b6\nReturn a list of the lines in the string, breaking at line boundaries.\nLine breaks are not included in the resulting list unless keepends is given and\ntrue.\nstartswith(prefix[, start[, end]]) \u2192 bool\u00b6\nReturn True if S starts with the specified prefix, False otherwise.\nWith optional start, test S beginning at that position.\nWith optional end, stop comparing S at that position.\nprefix can also be a tuple of strings to try.\nstrip(chars=None, /)\u00b6\nReturn a copy of the string with leading and trailing whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nswapcase()\u00b6\nConvert uppercase characters to lowercase and lowercase characters to uppercase.\ntitle()\u00b6\nReturn a version of the string where each word is titlecased.\nMore specifically, words start with uppercased characters and all remaining\ncased characters have lower case.\ntranslate(table, /)\u00b6\nReplace each character in the string using the given translation table.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Comparator.html"} {"id": "bb45f37225a8-9", "text": "translate(table, /)\u00b6\nReplace each character in the string using the given translation table.\ntable \u2013 Translation table, which must be a mapping of Unicode ordinals to\nUnicode ordinals, strings, or None.\nThe table must implement lookup/indexing via __getitem__, for instance a\ndictionary or list. If this operation raises LookupError, the character is\nleft untouched. Characters mapped to None are deleted.\nupper()\u00b6\nReturn a copy of the string converted to uppercase.\nzfill(width, /)\u00b6\nPad a numeric string with zeros on the left, to fill a field of the given width.\nThe string is never truncated.\nCONTAIN = 'contain'\u00b6\nEQ = 'eq'\u00b6\nGT = 'gt'\u00b6\nGTE = 'gte'\u00b6\nLIKE = 'like'\u00b6\nLT = 'lt'\u00b6\nLTE = 'lte'\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Comparator.html"} {"id": "337ba765d410-0", "text": "langchain.chains.openai_functions.extraction.create_extraction_chain_pydantic\u00b6\nlangchain.chains.openai_functions.extraction.create_extraction_chain_pydantic(pydantic_schema: Any, llm: BaseLanguageModel) \u2192 Chain[source]\u00b6\nCreates a chain that extracts information from a passage using pydantic schema.\nParameters\npydantic_schema \u2013 The pydantic schema of the entities to extract.\nllm \u2013 The language model to use.\nReturns\nChain that can be used to extract information from a passage.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.extraction.create_extraction_chain_pydantic.html"} {"id": "d64cc1039042-0", "text": "langchain.chains.combine_documents.reduce.CombineDocsProtocol\u00b6\nclass langchain.chains.combine_documents.reduce.CombineDocsProtocol(*args, **kwargs)[source]\u00b6\nBases: Protocol\nInterface for the combine_docs method.\nMethods\n__init__(*args,\u00a0**kwargs)\n__call__(docs: List[Document], **kwargs: Any) \u2192 str[source]\u00b6\nInterface for the combine_docs method.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.reduce.CombineDocsProtocol.html"}
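Example (illustrative sketch): CombineDocsProtocol is a structural Protocol, so any callable matching the __call__ signature above satisfies it; a plain function is enough.
from typing import Any, List
from langchain.schema import Document

def join_docs(docs: List[Document], **kwargs: Any) -> str:
    # A trivial combine_docs implementation: concatenate page contents.
    return "\n\n".join(doc.page_content for doc in docs)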
{"id": "7da02368b242-0", "text": "langchain.chains.api.openapi.requests_chain.APIRequesterChain\u00b6\nclass langchain.chains.api.openapi.requests_chain.APIRequesterChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, prompt: BasePromptTemplate, llm: BaseLanguageModel, output_key: str = 'text', output_parser: BaseLLMOutputParser = None, return_final_only: bool = True, llm_kwargs: dict = None)[source]\u00b6\nBases: LLMChain\nGet the request parser.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam llm: BaseLanguageModel [Required]\u00b6\nLanguage model to call.\nparam llm_kwargs: dict [Optional]\u00b6\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.requests_chain.APIRequesterChain.html"} {"id": "7da02368b242-1", "text": "There are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam output_key: str = 'text'\u00b6\nparam output_parser: BaseLLMOutputParser [Optional]\u00b6\nOutput parser to use.\nDefaults to one that takes the most likely string but does not change it\notherwise.\nparam prompt: BasePromptTemplate [Required]\u00b6\nPrompt object to use.\nparam return_final_only: bool = True\u00b6\nWhether to return only the final parsed result. Defaults to True.\nIf false, will return a bunch of extra information about the generation.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. 
Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.requests_chain.APIRequesterChain.html"} {"id": "7da02368b242-2", "text": "Execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nasync aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nUtilize the LLM generate method for speed gains.\nasync aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Union[str, List[str], Dict[str, str]]]\u00b6\nCall apply and then parse the results.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.requests_chain.APIRequesterChain.html"} {"id": "7da02368b242-3", "text": "Call apply and then parse the results.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nasync agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) \u2192 LLMResult\u00b6\nGenerate LLM result from inputs.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.requests_chain.APIRequesterChain.html"} {"id": "7da02368b242-4", "text": "Generate LLM result from inputs.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nUtilize the LLM generate method for speed gains.\napply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Union[str, List[str], Dict[str, str]]]\u00b6\nCall apply and then parse the results.\nasync apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 str\u00b6\nFormat prompt with kwargs and pass to LLM.\nParameters\ncallbacks \u2013 Callbacks to pass to LLMChain\n**kwargs \u2013 Keys to pass to prompt template.\nReturns\nCompletion from LLM.\nExample\ncompletion = llm.predict(adjective=\"funny\")\nasync apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[str, List[str], Dict[str, str]]\u00b6\nCall apredict and then parse the results.\nasync aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) \u2192 Tuple[List[PromptValue], Optional[List[str]]]\u00b6\nPrepare prompts from inputs.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.requests_chain.APIRequesterChain.html"} {"id": "7da02368b242-5", "text": "Convenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.requests_chain.APIRequesterChain.html"} {"id": "7da02368b242-6", "text": "# -> \"The temperature in Boise is...\"\ncreate_outputs(llm_result: LLMResult) \u2192 List[Dict[str, Any]]\u00b6\nCreate outputs from response.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to benull.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n..code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nclassmethod from_llm_and_typescript(llm: BaseLanguageModel, typescript_definition: str, verbose: bool = True, **kwargs: Any) \u2192 LLMChain[source]\u00b6\nGet the request parser.\nclassmethod from_string(llm: BaseLanguageModel, template: str) \u2192 LLMChain\u00b6\nCreate LLMChain from LLM and template.\ngenerate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) \u2192 LLMResult\u00b6\nGenerate LLM result from inputs.\npredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 str\u00b6\nFormat prompt with kwargs and pass to LLM.\nParameters\ncallbacks \u2013 Callbacks to pass to LLMChain\n**kwargs \u2013 Keys to pass to prompt template.\nReturns\nCompletion from LLM.\nExample\ncompletion = llm.predict(adjective=\"funny\")\npredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[str, List[str], Dict[str, Any]]\u00b6\nCall predict and then parse the results.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.requests_chain.APIRequesterChain.html"} {"id": "7da02368b242-7", "text": "Call predict and then parse the results.\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. 
{"id": "7da02368b242-7", "text": "Call predict and then parse the results.\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) \u2192 Tuple[List[PromptValue], Optional[List[str]]]\u00b6\nPrepare prompts from inputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.requests_chain.APIRequesterChain.html"} {"id": "7da02368b242-8", "text": "has more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.requests_chain.APIRequesterChain.html"} {"id": "7da02368b242-9", "text": "Example\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
langchain.chains.natbot.base.NatBotChain

class langchain.chains.natbot.base.NatBotChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, llm_chain: LLMChain, objective: str, llm: Optional[BaseLanguageModel] = None, input_url_key: str = 'url', input_browser_content_key: str = 'browser_content', previous_command: str = '', output_key: str = 'command')
Bases: Chain
Implement an LLM-driven browser.
Example
from langchain import NatBotChain
natbot = NatBotChain.from_default("Buy me a new hat.")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.

param callback_manager: Optional[BaseCallbackManager] = None
Deprecated; use callbacks instead.

param callbacks: Callbacks = None
Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start and ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods; see the Callback docs for full details.

param llm: Optional[BaseLanguageModel] = None
[Deprecated] LLM wrapper to use.

param llm_chain: LLMChain [Required]

param memory: Optional[BaseMemory] = None
Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory; please see the memory docs for the full catalog.

param metadata: Optional[Dict[str, Any]] = None
Optional metadata associated with the chain. Defaults to None. This metadata will be associated with each call to this chain and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case.

param objective: str [Required]
Objective that NatBot is tasked with completing.

param tags: Optional[List[str]] = None
Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case.

param verbose: bool [Optional]
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if the chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.

async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if the chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
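Because NatBotChain declares input_url_key='url', input_browser_content_key='browser_content', and output_key='command', a dict-style invocation via __call__ looks like the following sketch (the page content is illustrative, and an LLM credential is assumed to be configured):

from langchain.chains.natbot.base import NatBotChain

natbot = NatBotChain.from_default("Buy me a new hat.")  # builds the default LLMChain
result = natbot(
    {"url": "https://example.com", "browser_content": "<button>Hats</button>"},
    return_only_outputs=True,  # return just the 'command' output key
)
print(result["command"])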
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]
Call the chain on all inputs in the list.

async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing a chain when there is a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly as positional or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."

# Suppose we have a multi-input chain that takes a 'question' string
# and a 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."

dict(**kwargs: Any) → Dict
Return dictionary representation of chain.
Expects the Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to the default pydantic.BaseModel.dict method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}

execute(url: str, browser_content: str) → str
Figure out the next browser command to run.
Parameters
url – URL of the site the browser is currently on.
browser_content – Content of the page as currently displayed by the browser.
Returns
Next browser command to run.
Example
browser_content = "...."
llm_command = natbot.execute("www.google.com", browser_content)

classmethod from_default(objective: str, **kwargs: Any) → NatBotChain
Load with the default LLMChain.

classmethod from_llm(llm: BaseLanguageModel, objective: str, **kwargs: Any) → NatBotChain
Load from an LLM.
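execute() is the primitive a caller drives in a loop: render a page, ask NatBot for the next command, apply it, repeat. A hedged sketch of that loop, where get_page_content and apply_command are hypothetical helpers standing in for a real browser integration:

natbot = NatBotChain.from_default("Find the cheapest hat.")
url = "https://example.com"
for _ in range(5):
    browser_content = get_page_content(url)          # hypothetical: fetch/render the page
    command = natbot.execute(url, browser_content)   # next browser command from the LLM
    url = apply_command(url, command)                # hypothetical: run the command, get new URL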
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if the chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
Returns
A dictionary of all inputs, including those added by the chain's memory.

prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.

validator raise_callback_manager_deprecation » all fields
Raise a deprecation warning if callback_manager is used.

validator raise_deprecation » all fields
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing a chain when there is a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly as positional or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."

# Suppose we have a multi-input chain that takes a 'question' string
# and a 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."

save(file_path: Union[Path, str]) → None
Save the chain.
Expects the Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")

validator set_verbose » verbose
Set the chain verbosity.
Defaults to the global setting if not specified by the user.

to_json() → Union[SerializedConstructor, SerializedNotImplemented]

to_json_not_implemented() → SerializedNotImplemented

property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.

property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]

property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}

property lc_serializable: bool
Return whether or not the class is serializable.

model Config
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True
extra = 'forbid'
langchain.chains.query_constructor.ir.Operation

class langchain.chains.query_constructor.ir.Operation(*, operator: Operator, arguments: List[FilterDirective])
Bases: FilterDirective
A logical operation over other directives.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.

param arguments: List[langchain.chains.query_constructor.ir.FilterDirective] [Required]

param operator: langchain.chains.query_constructor.ir.Operator [Required]

accept(visitor: Visitor) → Any

langchain.chains.openai_functions.citation_fuzzy_match.QuestionAnswer

class langchain.chains.openai_functions.citation_fuzzy_match.QuestionAnswer(*, question: str, answer: List[FactWithEvidence])
Bases: BaseModel
A question and its answer as a list of facts, each of which should have a source. Each sentence contains a body and a list of sources.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.

param answer: List[langchain.chains.openai_functions.citation_fuzzy_match.FactWithEvidence] [Required]
Body of the answer; each fact should be its own separate object with a body and a list of sources.

param question: str [Required]
Question that was asked.

langchain.chains.query_constructor.ir.FilterDirective

class langchain.chains.query_constructor.ir.FilterDirective
Bases: Expr, ABC
A filtering expression.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.

accept(visitor: Visitor) → Any
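Operation composes leaf directives into a boolean filter tree. As a hedged sketch, assuming the module's Comparison and Comparator companions (which are not documented in this excerpt), an AND of two attribute filters could look like:

from langchain.chains.query_constructor.ir import (
    Comparator, Comparison, Operation, Operator,
)

# filter: genre == "scifi" AND year > 1990
filter_directive = Operation(
    operator=Operator.AND,
    arguments=[
        Comparison(comparator=Comparator.EQ, attribute="genre", value="scifi"),
        Comparison(comparator=Comparator.GT, attribute="year", value=1990),
    ],
)
# A store-specific translator (a Visitor) turns this tree into the vector
# store's native filter syntax via filter_directive.accept(visitor).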
langchain.chains.qa_with_sources.vector_db.VectorDBQAWithSourcesChain

class langchain.chains.qa_with_sources.vector_db.VectorDBQAWithSourcesChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, combine_documents_chain: BaseCombineDocumentsChain, question_key: str = 'question', input_docs_key: str = 'docs', answer_key: str = 'answer', sources_answer_key: str = 'sources', return_source_documents: bool = False, vectorstore: VectorStore, k: int = 4, reduce_k_below_max_tokens: bool = False, max_tokens_limit: int = 3375, search_kwargs: Dict[str, Any] = None)
Bases: BaseQAWithSourcesChain
Question-answering with sources over a vector database.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.

param callback_manager: Optional[BaseCallbackManager] = None
Deprecated; use callbacks instead.

param callbacks: Callbacks = None
Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start and ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods; see the Callback docs for full details.

param combine_documents_chain: BaseCombineDocumentsChain [Required]
Chain to use to combine documents.

param k: int = 4
Number of results to return from the store.

param max_tokens_limit: int = 3375
Restrict the docs to return from the store based on tokens; enforced only for StuffDocumentsChain and only if reduce_k_below_max_tokens is set to True.

param memory: Optional[BaseMemory] = None
Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory; please see the memory docs for the full catalog.

param metadata: Optional[Dict[str, Any]] = None
Optional metadata associated with the chain. Defaults to None. This metadata will be associated with each call to this chain and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case.

param reduce_k_below_max_tokens: bool = False
Reduce the number of results to return from the store based on the tokens limit.

param return_source_documents: bool = False
Return the source documents.

param search_kwargs: Dict[str, Any] [Optional]
Extra search args.

param tags: Optional[List[str]] = None
Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case.

param vectorstore: langchain.vectorstores.base.VectorStore [Required]
Vector database to connect to.

param verbose: bool [Optional]
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value.
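To construct the chain directly, the required combine_documents_chain can be built with load_qa_with_sources_chain; that loader (from langchain.chains.qa_with_sources.loading) is named here as an assumption, since it is not documented in this excerpt. `vectorstore` stands for any pre-built VectorStore:

from langchain.chains.qa_with_sources.loading import load_qa_with_sources_chain
from langchain.chains.qa_with_sources.vector_db import VectorDBQAWithSourcesChain
from langchain.llms import OpenAI

combine_chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")
chain = VectorDBQAWithSourcesChain(
    combine_documents_chain=combine_chain,
    vectorstore=vectorstore,            # assumed pre-built VectorStore (e.g. FAISS, Chroma)
    k=4,                                # documents fetched per query
    reduce_k_below_max_tokens=True,     # trim docs to stay under max_tokens_limit
)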
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if the chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.

async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if the chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]
Call the chain on all inputs in the list.

async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing a chain when there is a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly as positional or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."

# Suppose we have a multi-input chain that takes a 'question' string
# and a 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."

dict(**kwargs: Any) → Dict
Return dictionary representation of chain.
Expects the Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to the default pydantic.BaseModel.dict method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}

classmethod from_chain_type(llm: BaseLanguageModel, chain_type: str = 'stuff', chain_type_kwargs: Optional[dict] = None, **kwargs: Any) → BaseQAWithSourcesChain
Load chain from chain type.
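Putting the pieces together, a hedged end-to-end sketch: from_chain_type builds the combine-documents chain internally, extra keyword arguments (such as vectorstore) are forwarded to the constructor, and the default question/answer/sources keys are used when querying. `vectorstore` is again assumed to already exist:

from langchain.chains.qa_with_sources.vector_db import VectorDBQAWithSourcesChain
from langchain.llms import OpenAI

chain = VectorDBQAWithSourcesChain.from_chain_type(
    OpenAI(temperature=0),
    chain_type="stuff",
    vectorstore=vectorstore,        # assumed pre-built VectorStore
    return_source_documents=True,
)
result = chain({"question": "Which law governs the contract?"})
print(result["answer"])   # final answer text
print(result["sources"])  # e.g. "28-pl"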
classmethod from_llm(llm: BaseLanguageModel, document_prompt: BasePromptTemplate = PromptTemplate(input_variables=['page_content', 'source'], output_parser=None, partial_variables={}, template='Content: {page_content}\nSource: {source}', template_format='f-string', validate_template=True), question_prompt: BasePromptTemplate = PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template='Use the following portion of a long document to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\n{context}\nQuestion: {question}\nRelevant text, if any:', template_format='f-string', validate_template=True), combine_prompt: BasePromptTemplate = PromptTemplate(input_variables=['summaries', 'question'], output_parser=None, partial_variables={}, template=<the few-shot prompt shown below>, template_format='f-string', validate_template=True), **kwargs: Any) → BaseQAWithSourcesChain
Construct the chain from an LLM.
The default combine_prompt template is the following few-shot prompt:

    Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES").
    If you don't know the answer, just say that you don't know. Don't try to make up an answer.
    ALWAYS return a "SOURCES" part in your answer.

    QUESTION: Which state/country's law governs the interpretation of the contract?
    =========
    Content: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts in relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for an injunction or other relief to protect its Intellectual Property Rights.
    Source: 28-pl
    Content: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other) right or remedy.

    11.7 Severability. The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall not affect the continuation in force of the remainder of the term (if any) and this Agreement.

    11.8 No Agency. Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of any kind between the parties.

    11.9 No Third-Party Beneficiaries.
    Source: 30-pl
    Content: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (as defined in Clause 8.5) or that such a violation is reasonably likely to occur,
    Source: 4-pl
    =========
    FINAL ANSWER: This Agreement is governed by English law.
    SOURCES: 28-pl

    QUESTION: What did the president say about Michael Jackson?
    =========
    Content: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.

    Last year COVID-19 kept us apart. This year we are finally together again.

    Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.

    With a duty to one another to the American people to the Constitution.

    And with an unwavering resolve that freedom will always triumph over tyranny.

    Six days ago, Russia's Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated.

    He thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined.

    He met the Ukrainian people.

    From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.

    Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.
    Source: 0-pl
    Content: And we won't stop.

    We have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life.

    Let's use this moment to reset. Let's stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease.

    Let's stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans.

    We can't change how divided we've been. But we can change how we move forward—on COVID-19 and other issues we must face together.

    I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera.

    They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun.

    Officer Mora was 27 years old.

    Officer Rivera was 22.

    Both Dominican Americans who'd grown up on the same streets they later chose to patrol as police officers.

    I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.
    Source: 24-pl
    Content: And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards.

    To all Americans, I will be honest with you, as I've always promised. A Russian dictator, invading a foreign country, has costs around the world.

    And I'm taking robust action to make sure the pain of our sanctions is targeted at Russia's economy. And I will use every tool at our disposal to protect American businesses and consumers.

    Tonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world.

    America will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies.

    These steps will help blunt gas prices here at home. And I know the news about what's happening can seem alarming.

    But I want you to know that we are going to be okay.
    Source: 5-pl
    Content: More support for patients and families.

    To get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health.

    It's based on DARPA—the Defense Department project that led to the Internet, GPS, and so much more.

    ARPA-H will have a singular purpose—to drive breakthroughs in cancer, Alzheimer's, diabetes, and more.

    A unity agenda for the nation.

    We can do this.

    My fellow Americans—tonight, we have gathered in a sacred space—the citadel of our democracy.

    In this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things.

    We have fought for freedom, expanded liberty, defeated totalitarianism and terror.

    And built the strongest, freest, and most prosperous nation the world has ever known.

    Now is the hour.

    Our moment of responsibility.

    Our test of resolve and conscience, of history itself.

    It is in this moment that our character is formed. Our purpose is found. Our future is forged.

    Well I know this nation.
    Source: 34-pl
    =========
    FINAL ANSWER: The president did not mention Michael Jackson.
    SOURCES:

    QUESTION: {question}
    =========
    {summaries}
    =========
    FINAL ANSWER:

prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if the chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
Returns
A dictionary of all inputs, including those added by the chain's memory.

prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.

validator raise_callback_manager_deprecation » all fields
Raise a deprecation warning if callback_manager is used.

validator raise_deprecation » all fields
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing a chain when there is a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly as positional or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."

# Suppose we have a multi-input chain that takes a 'question' string
# and a 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."

save(file_path: Union[Path, str]) → None
Save the chain.
Expects the Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")

validator set_verbose » verbose
Set the chain verbosity.
Defaults to the global setting if not specified by the user.

to_json() → Union[SerializedConstructor, SerializedNotImplemented]

to_json_not_implemented() → SerializedNotImplemented

validator validate_naming » all fields
Fix backwards compatibility in naming.

property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.

property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]

property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.

model Config
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True
extra = 'forbid'

langchain.chains.loading.load_chain

langchain.chains.loading.load_chain(path: Union[str, Path], **kwargs: Any) → Chain
Unified method for loading a chain from LangChainHub or the local filesystem.
langchain.chains.qa_with_sources.base.BaseQAWithSourcesChain

class langchain.chains.qa_with_sources.base.BaseQAWithSourcesChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, combine_documents_chain: BaseCombineDocumentsChain, question_key: str = 'question', input_docs_key: str = 'docs', answer_key: str = 'answer', sources_answer_key: str = 'sources', return_source_documents: bool = False)
Bases: Chain, ABC
Question answering with sources over documents.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.

param callback_manager: Optional[BaseCallbackManager] = None
Deprecated; use callbacks instead.

param callbacks: Callbacks = None
Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start and ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods; see the Callback docs for full details.

param combine_documents_chain: BaseCombineDocumentsChain [Required]
Chain to use to combine documents.

param memory: Optional[BaseMemory] = None
Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory; please see the memory docs for the full catalog.

param metadata: Optional[Dict[str, Any]] = None
Optional metadata associated with the chain. Defaults to None. This metadata will be associated with each call to this chain and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case.

param return_source_documents: bool = False
Return the source documents.

param tags: Optional[List[str]] = None
Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case.

param verbose: bool [Optional]
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value.

__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if the chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if the chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.

apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]
Call the chain on all inputs in the list.
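Since acall and arun mirror __call__ and run, several questions can be answered concurrently with asyncio. A minimal sketch, assuming `chain` is an instance of any BaseQAWithSourcesChain subclass built as shown earlier:

import asyncio

async def answer_all(chain, questions):
    # Fire all QA-with-sources calls concurrently and gather the results.
    results = await asyncio.gather(
        *(chain.acall({"question": q}) for q in questions)
    )
    for res in results:
        print(res["answer"], "->", res["sources"])

# asyncio.run(answer_all(chain, ["Q1?", "Q2?"]))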
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing a chain when there is a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly as positional or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."

# Suppose we have a multi-input chain that takes a 'question' string
# and a 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."

dict(**kwargs: Any) → Dict
Return dictionary representation of chain.
Expects the Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to the default pydantic.BaseModel.dict method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}

classmethod from_chain_type(llm: BaseLanguageModel, chain_type: str = 'stuff', chain_type_kwargs: Optional[dict] = None, **kwargs: Any) → BaseQAWithSourcesChain
Load chain from chain type.

classmethod from_llm(llm: BaseLanguageModel, document_prompt: BasePromptTemplate = ..., question_prompt: BasePromptTemplate = ..., combine_prompt: BasePromptTemplate = ..., **kwargs: Any) → BaseQAWithSourcesChain
Construct the chain from an LLM. The default document_prompt, question_prompt, and combine_prompt are the PromptTemplate defaults shown above under VectorDBQAWithSourcesChain.from_llm, which inherits this method.
\\n\\nWell I know this nation.\\nSource: 34-pl\\n=========\\nFINAL ANSWER: The president did not mention Michael Jackson.\\nSOURCES:\\n\\nQUESTION: {question}\\n=========\\n{summaries}\\n=========\\nFINAL ANSWER:', template_format='f-string', validate_template=True), **kwargs: Any) \u2192 BaseQAWithSourcesChain[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.qa_with_sources.base.BaseQAWithSourcesChain.html"} {"id": "2bf3810292b3-10", "text": "Construct the chain from an LLM.
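Example
A construction sketch (not from this page): it assumes the RetrievalQAWithSourcesChain subclass, an OpenAI API key in the environment, and an already-populated vector store; `docsearch` is an illustrative name.
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.llms import OpenAI

# `docsearch` is an assumed, pre-built vector store.
chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",  # other chain types: "map_reduce", "refine", "map_rerank"
    retriever=docsearch.as_retriever(),
)
result = chain({"question": "Which state's law governs the contract?"})
# result is a dict with "answer" and "sources" keys.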
prep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.qa_with_sources.base.BaseQAWithSourcesChain.html"} {"id": "2bf3810292b3-11", "text": "The other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.qa_with_sources.base.BaseQAWithSourcesChain.html"} {"id": "2bf3810292b3-12", "text": "Set the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_naming\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nFix backwards compatibility in naming.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.qa_with_sources.base.BaseQAWithSourcesChain.html"}
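Example
A save/load round-trip sketch: save() requires an implemented _chain_type and memory set to None. LLMChain is used here only because it is simple to construct; load_chain is the standard loading utility.
from langchain.chains import LLMChain, load_chain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

chain = LLMChain(
    llm=OpenAI(temperature=0),
    prompt=PromptTemplate.from_template("Define {word} in one sentence."),
)
chain.save("chain.yaml")             # .json is also supported
restored = load_chain("chain.yaml")  # rebuilt from the saved _chain_type
print(restored.run(word="idempotent"))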
{"id": "1ff4528f1f8c-0", "text": "langchain.chains.combine_documents.base.BaseCombineDocumentsChain\u00b6\nclass langchain.chains.combine_documents.base.BaseCombineDocumentsChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, input_key: str = 'input_documents', output_key: str = 'output_text')[source]\u00b6\nBases: Chain, ABC\nBase interface for chains combining documents.\nSubclasses of this chain deal with combining documents in a variety of\nways. This base class exists to add some uniformity in the interface these types\nof chains should expose. Namely, they expect an input key related to the documents\nto use (default input_documents), and then also expose a method to calculate\nthe length of a prompt from documents (useful for outside callers to use to\ndetermine whether it\u2019s safe to pass a list of documents into this chain or whether\nthat will be longer than the context length).\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.base.BaseCombineDocumentsChain.html"} {"id": "1ff4528f1f8c-1", "text": "Optional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.base.BaseCombineDocumentsChain.html"} {"id": "1ff4528f1f8c-2", "text": "memory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. 
These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.base.BaseCombineDocumentsChain.html"} {"id": "1ff4528f1f8c-3", "text": "callbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.
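Example
To make the construction-time versus runtime distinction concrete, a sketch (it assumes an already-built combine-documents `chain` and a list `docs`; the handler is the stock stdout handler):
from langchain.callbacks import StdOutCallbackHandler

output = chain(
    {"input_documents": docs, "question": "What changed?"},
    callbacks=[StdOutCallbackHandler()],  # used for this run only
    tags=["combine-docs", "demo"],        # forwarded to every callback
    return_only_outputs=True,
)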
abstract async acombine_docs(docs: List[Document], **kwargs: Any) \u2192 Tuple[str, dict][source]\u00b6\nCombine documents into a single string.\nParameters\ndocs \u2013 List[Document], the documents to combine\n**kwargs \u2013 Other parameters to use in combining documents, often\nother inputs to the prompt.\nReturns\nThe first element returned is the single string output. The second\nelement returned is a dictionary of other keys to return.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.base.BaseCombineDocumentsChain.html"} {"id": "1ff4528f1f8c-4", "text": "has more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nabstract combine_docs(docs: List[Document], **kwargs: Any) \u2192 Tuple[str, dict][source]\u00b6\nCombine documents into a single string.\nParameters\ndocs \u2013 List[Document], the documents to combine\n**kwargs \u2013 Other parameters to use in combining documents, often", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.base.BaseCombineDocumentsChain.html"} {"id": "1ff4528f1f8c-5", "text": "**kwargs \u2013 Other parameters to use in combining documents, often\nother inputs to the prompt.\nReturns\nThe first element returned is the single string output. The second\nelement returned is a dictionary of other keys to return.
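Example
A sketch of the smallest conforming subclass (illustrative only; real subclasses such as the stuff and map-reduce chains call an LLM here):
from typing import Any, List, Tuple
from langchain.chains.combine_documents.base import BaseCombineDocumentsChain
from langchain.schema import Document

class ConcatDocumentsChain(BaseCombineDocumentsChain):
    """Combine documents by plain concatenation (no LLM call)."""

    def combine_docs(self, docs: List[Document], **kwargs: Any) -> Tuple[str, dict]:
        # First element: the combined string; second: extra output keys.
        return "\n\n".join(d.page_content for d in docs), {}

    async def acombine_docs(self, docs: List[Document], **kwargs: Any) -> Tuple[str, dict]:
        return self.combine_docs(docs, **kwargs)

docs = [Document(page_content="first"), Document(page_content="second")]
print(ConcatDocumentsChain().run(input_documents=docs))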
dict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nprompt_length(docs: List[Document], **kwargs: Any) \u2192 Optional[int][source]\u00b6\nReturn the prompt length given the documents passed in.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.base.BaseCombineDocumentsChain.html"} {"id": "1ff4528f1f8c-6", "text": "Return the prompt length given the documents passed in.\nThis can be used by a caller to determine whether passing in a list\nof documents would exceed a certain prompt length. This is useful when\ntrying to ensure that the size of a prompt remains below a certain\ncontext limit.\nParameters\ndocs \u2013 List[Document], a list of documents to use to calculate the\ntotal prompt length.\nReturns\nReturns None if the method does not depend on the prompt length,\notherwise the length of the prompt in tokens.
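Example
A sketch of the gating pattern prompt_length enables (load_qa_chain returns a BaseCombineDocumentsChain; `docs` and the token budget are illustrative, and the extra kwargs are assumed to feed the prompt):
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI

chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
budget = 3500  # illustrative prompt budget for the model's context window

length = chain.prompt_length(docs, question="What changed?")
if length is None or length <= budget:
    answer = chain.run(input_documents=docs, question="What changed?")
else:
    ...  # split the documents further, or switch to a map_reduce chain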
validator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.base.BaseCombineDocumentsChain.html"} {"id": "1ff4528f1f8c-7", "text": "tags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.base.BaseCombineDocumentsChain.html"} {"id": "1ff4528f1f8c-8", "text": "property lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.base.BaseCombineDocumentsChain.html"} {"id": "5a0e461be39d-0", "text": "langchain.chains.moderation.OpenAIModerationChain\u00b6\nclass langchain.chains.moderation.OpenAIModerationChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, model_name: Optional[str] = None, error: bool = False, input_key: str = 'input', output_key: str = 'output', openai_api_key: Optional[str] = None, openai_organization: Optional[str] = None)[source]\u00b6\nBases: Chain\nPass input through a moderation endpoint.\nTo use, you should have the openai python package installed, and the\nenvironment variable OPENAI_API_KEY set with your API key.\nAny parameters that are valid to be passed to the openai.create call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain.chains import OpenAIModerationChain\nmoderation = OpenAIModerationChain()\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam error: bool = False\u00b6\nWhether or not to error if bad content was found.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.moderation.OpenAIModerationChain.html"} {"id": "5a0e461be39d-1", "text": "param error: bool = False\u00b6\nWhether or not to error if bad content was found.\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam model_name: Optional[str] = None\u00b6\nModeration model name to use.\nparam openai_api_key: Optional[str] = None\u00b6\nparam openai_organization: Optional[str] = None\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. 
Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.moderation.OpenAIModerationChain.html"} {"id": "5a0e461be39d-2", "text": "will be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.moderation.OpenAIModerationChain.html"} {"id": "5a0e461be39d-3", "text": "Returns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.moderation.OpenAIModerationChain.html"} {"id": "5a0e461be39d-4", "text": "Call the chain on all inputs in the list.
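Example
A batch-moderation sketch with apply() (this chain's input key is 'input'; the strings are placeholders):
from langchain.chains import OpenAIModerationChain

moderation = OpenAIModerationChain()
results = moderation.apply([{"input": "text one"}, {"input": "text two"}])
# -> one dict per input, each including the 'output' key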
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.moderation.OpenAIModerationChain.html"} {"id": "5a0e461be39d-5", "text": "# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.moderation.OpenAIModerationChain.html"} {"id": "5a0e461be39d-6", "text": "Raise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. 
If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.moderation.OpenAIModerationChain.html"} {"id": "5a0e461be39d-7", "text": "# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that api key and python package exist in the environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.moderation.OpenAIModerationChain.html"}
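Example
A usage sketch beyond the one-liner above (it assumes OPENAI_API_KEY is set; the flagged text is a placeholder):
from langchain.chains import OpenAIModerationChain

moderation = OpenAIModerationChain()          # error=False: report, don't raise
print(moderation.run("a harmless sentence"))  # unflagged text is returned as-is

strict = OpenAIModerationChain(error=True)    # error=True: raise on flagged input
try:
    strict.run("...text that violates the content policy...")
except ValueError as exc:
    print("moderation flagged the input:", exc)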
{"id": "0a875a028f1b-0", "text": "langchain.chains.retrieval_qa.base.BaseRetrievalQA\u00b6\nclass langchain.chains.retrieval_qa.base.BaseRetrievalQA(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, combine_documents_chain: BaseCombineDocumentsChain, input_key: str = 'query', output_key: str = 'result', return_source_documents: bool = False)[source]\u00b6\nBases: Chain\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam combine_documents_chain: BaseCombineDocumentsChain [Required]\u00b6\nChain to use to combine the documents.\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.BaseRetrievalQA.html"} {"id": "0a875a028f1b-1", "text": "for the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam return_source_documents: bool = False\u00b6\nReturn the source documents.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. 
Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.BaseRetrievalQA.html"} {"id": "0a875a028f1b-2", "text": "callbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.BaseRetrievalQA.html"} {"id": "0a875a028f1b-3", "text": "addition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. 
Should contain all outputs specified in Chain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.BaseRetrievalQA.html"} {"id": "0a875a028f1b-4", "text": "addition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nclassmethod from_chain_type(llm: BaseLanguageModel, chain_type: str = 'stuff', chain_type_kwargs: Optional[dict] = None, **kwargs: Any) \u2192 BaseRetrievalQA[source]\u00b6\nLoad chain from chain type.\nclassmethod from_llm(llm: BaseLanguageModel, prompt: Optional[PromptTemplate] = None, **kwargs: Any) \u2192 BaseRetrievalQA[source]\u00b6\nInitialize from LLM.
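Example
A construction sketch (it assumes the RetrievalQA subclass and an existing vector store named `vectorstore`; the input and output keys follow the params above):
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),  # assumed pre-built store
    return_source_documents=True,
)
out = qa({"query": "What is the refund policy?"})
print(out["result"])            # the answer string
print(out["source_documents"])  # the retrieved Documents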
prep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.BaseRetrievalQA.html"} {"id": "0a875a028f1b-5", "text": "Validate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.BaseRetrievalQA.html"} {"id": "0a875a028f1b-6", "text": "a single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.BaseRetrievalQA.html"} {"id": "0a875a028f1b-7", "text": "to_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nallow_population_by_field_name = True\u00b6\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.BaseRetrievalQA.html"} {"id": "d4ea537dda55-0", "text": "langchain.chains.graph_qa.base.GraphQAChain\u00b6\nclass langchain.chains.graph_qa.base.GraphQAChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, graph: NetworkxEntityGraph, entity_extraction_chain: LLMChain, qa_chain: LLMChain, input_key: str = 'query', output_key: str = 'result')[source]\u00b6\nBases: Chain\nChain for question-answering against a graph.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). 
Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam entity_extraction_chain: LLMChain [Required]\u00b6\nparam graph: NetworkxEntityGraph [Required]\u00b6\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.base.GraphQAChain.html"} {"id": "d4ea537dda55-1", "text": "There are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam qa_chain: LLMChain [Required]\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.base.GraphQAChain.html"} {"id": "d4ea537dda55-2", "text": "chain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. 
Should contain all outputs specified inChain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.base.GraphQAChain.html"} {"id": "d4ea537dda55-3", "text": "tags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified inChain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.base.GraphQAChain.html"} {"id": "d4ea537dda55-4", "text": "these runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
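A short illustration of apply, reusing the hypothetical LLMChain from the sketch above: each input dict produces one output dict.

inputs = [
    {"question": "What is a knowledge triplet?"},
    {"question": "What is an entity graph?"},
]
results = chain.apply(inputs)
# LLMChain's default output key is 'text':
# -> [{"text": "..."}, {"text": "..."}]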
{"id": "d4ea537dda55-4", "text": "these runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n.. code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.base.GraphQAChain.html"} {"id": "d4ea537dda55-5", "text": "# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nclassmethod from_llm(llm: BaseLanguageModel, qa_prompt: BasePromptTemplate = PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template=\"Use the following knowledge triplets to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\\n\\n{context}\\n\\nQuestion: {question}\\nHelpful Answer:\", template_format='f-string', validate_template=True), entity_prompt: BasePromptTemplate = PromptTemplate(input_variables=['input'], output_parser=None, partial_variables={}, template=\"Extract all entities from the following text. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\\n\\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return.\\n\\nEXAMPLE\\ni'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\\nOutput: Langchain\\nEND OF EXAMPLE\\n\\nEXAMPLE\\ni'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I'm working with Sam.\\nOutput: Langchain, Sam\\nEND OF EXAMPLE\\n\\nBegin!\\n\\n{input}\\nOutput:\", template_format='f-string', validate_template=True), **kwargs: Any) \u2192 GraphQAChain[source]\u00b6\nInitialize from LLM.
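A construction sketch under stated assumptions: GraphIndexCreator (a separate helper in langchain.indexes) is one documented way to build the NetworkxEntityGraph of knowledge triplets that from_llm expects; the input text and question are made up.

from langchain.chains import GraphQAChain
from langchain.indexes import GraphIndexCreator
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
# Extract knowledge triplets from raw text into a NetworkxEntityGraph.
index_creator = GraphIndexCreator(llm=llm)
graph = index_creator.from_text("Langchain's interfaces are being improved by Sam.")
chain = GraphQAChain.from_llm(llm, graph=graph, verbose=True)
chain.run("Who is working on Langchain?")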
prep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.base.GraphQAChain.html"} {"id": "d4ea537dda55-6", "text": "only one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.base.GraphQAChain.html"} {"id": "d4ea537dda55-7", "text": "callbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.base.GraphQAChain.html"}
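A hedged sketch of the save/load round trip described above (the file name is illustrative; this only works for chains that implement Chain._chain_type, have no memory attached, and have a loader registered for their type):

from langchain.chains import load_chain

chain.save(file_path="my_chain.yaml")       # serializes the chain's dict() form
loaded_chain = load_chain("my_chain.yaml")  # rebuilds the chain from the file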
{"id": "d4ea537dda55-8", "text": "serialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.base.GraphQAChain.html"} {"id": "316a240b522b-0", "text": "langchain.chains.graph_qa.cypher.extract_cypher\u00b6\nlangchain.chains.graph_qa.cypher.extract_cypher(text: str) \u2192 str[source]\u00b6\nExtract Cypher code from a text.\nParameters\ntext \u2013 Text to extract Cypher code from.\nReturns\nCypher code extracted from the text.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.cypher.extract_cypher.html"}
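An illustrative sketch (the LLM output is made up): the helper pulls a query fenced in triple backticks out of a model response, and appears to return the text unchanged when no fence is present.

from langchain.chains.graph_qa.cypher import extract_cypher

llm_output = "Here is the query:\n```MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN m.title```"
print(extract_cypher(llm_output))
# -> MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN m.title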
{"id": "1011f5077b49-0", "text": "langchain.chains.query_constructor.parser.get_parser\u00b6\nlangchain.chains.query_constructor.parser.get_parser(allowed_comparators: Optional[Sequence[Comparator]] = None, allowed_operators: Optional[Sequence[Operator]] = None) \u2192 object[source]\u00b6\nReturns a parser for the query language.\nParameters\nallowed_comparators \u2013 Optional[Sequence[Comparator]]\nallowed_operators \u2013 Optional[Sequence[Operator]]\nReturns\nLark parser for the query language.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.parser.get_parser.html"} {"id": "56454b5285e8-0", "text": "langchain.chains.openai_functions.tagging.create_tagging_chain_pydantic\u00b6\nlangchain.chains.openai_functions.tagging.create_tagging_chain_pydantic(pydantic_schema: Any, llm: BaseLanguageModel) \u2192 Chain[source]\u00b6\nCreates a chain that extracts information from a passage.\nParameters\npydantic_schema \u2013 The pydantic schema of the entities to extract.\nllm \u2013 The language model to use.\nReturns\nChain (LLMChain) that can be used to extract information from a passage.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.tagging.create_tagging_chain_pydantic.html"}
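A minimal sketch, assuming an OpenAI functions-capable chat model (the schema, model name and sample sentence are illustrative):

from pydantic import BaseModel, Field
from langchain.chains import create_tagging_chain_pydantic
from langchain.chat_models import ChatOpenAI

class Tags(BaseModel):
    sentiment: str = Field(..., description="happy, neutral or sad")
    language: str = Field(..., description="ISO 639-1 code of the text's language")

llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
chain = create_tagging_chain_pydantic(Tags, llm)
chain.run("Estoy increiblemente contento de haberte conocido!")
# -> Tags(sentiment='happy', language='es')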
{"id": "15342b702753-0", "text": "langchain.chains.api.base.APIChain\u00b6\nclass langchain.chains.api.base.APIChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, api_request_chain: LLMChain, api_answer_chain: LLMChain, requests_wrapper: TextRequestsWrapper, api_docs: str, question_key: str = 'question', output_key: str = 'output')[source]\u00b6\nBases: Chain\nChain that makes API calls and summarizes the responses to answer a question.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_answer_chain: LLMChain [Required]\u00b6\nparam api_docs: str [Required]\u00b6\nparam api_request_chain: LLMChain [Required]\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.base.APIChain.html"} {"id": "15342b702753-1", "text": "There are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None.\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam requests_wrapper: TextRequestsWrapper [Required]\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None.\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.base.APIChain.html"} {"id": "15342b702753-2", "text": "chain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None.\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.base.APIChain.html"} {"id": "15342b702753-3", "text": "tags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None.\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.base.APIChain.html"} {"id": "15342b702753-4", "text": "these runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n.. code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.base.APIChain.html"} {"id": "15342b702753-5", "text": "# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nclassmethod from_llm_and_api_docs(llm: BaseLanguageModel, api_docs: str, headers: Optional[dict] = None, api_url_prompt: BasePromptTemplate = PromptTemplate(input_variables=['api_docs', 'question'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\\n{api_docs}\\nUsing this documentation, generate the full API url to call for answering the user question.\\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\\n\\nQuestion:{question}\\nAPI url:', template_format='f-string', validate_template=True), api_response_prompt: BasePromptTemplate = PromptTemplate(input_variables=['api_docs', 'question', 'api_url', 'api_response'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\\n{api_docs}\\nUsing this documentation, generate the full API url to call for answering the user question.\\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. 
Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\\n\\nQuestion:{question}\\nAPI url: {api_url}\\n\\nHere is the response from the API:\\n\\n{api_response}\\n\\nSummarize this response to answer the original question.\\n\\nSummary:', template_format='f-string', validate_template=True), **kwargs: Any) \u2192 APIChain[source]\u00b6\nLoad chain from just an LLM and the API docs.
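As a usage sketch, the Open-Meteo documentation string bundled with langchain is one ready-made api_docs value (the question is illustrative):

from langchain.chains import APIChain
from langchain.chains.api import open_meteo_docs
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
chain = APIChain.from_llm_and_api_docs(llm, open_meteo_docs.OPEN_METEO_DOCS, verbose=True)
chain.run("What is the weather like right now in Munich, Germany in degrees Celsius?")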
prep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.base.APIChain.html"} {"id": "15342b702753-6", "text": "Validate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.base.APIChain.html"} {"id": "15342b702753-7", "text": "callbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.base.APIChain.html"} {"id": "15342b702753-8", "text": "to_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_api_answer_prompt\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nCheck that the API answer prompt expects the right variables.\nvalidator validate_api_request_prompt\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nCheck that the API request prompt expects the right variables.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.base.APIChain.html"} {"id": "282bbcc90a5c-0", "text": "langchain.chains.conversational_retrieval.base.ChatVectorDBChain\u00b6\nclass langchain.chains.conversational_retrieval.base.ChatVectorDBChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, combine_docs_chain: BaseCombineDocumentsChain, question_generator: LLMChain, output_key: str = 'answer', rephrase_question: bool = True, return_source_documents: bool = False, return_generated_question: bool = False, get_chat_history: Optional[Callable[[Union[Tuple[str, str], BaseMessage]], str]] = None, vectorstore: VectorStore, top_k_docs_for_context: int = 4, search_kwargs: dict = None)[source]\u00b6\nBases: BaseConversationalRetrievalChain\nChain for chatting with a vector database.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam combine_docs_chain: BaseCombineDocumentsChain [Required]\u00b6\nThe chain used to combine any retrieved documents.\nparam get_chat_history: Optional[Callable[[CHAT_TURN_TYPE], str]] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.conversational_retrieval.base.ChatVectorDBChain.html"} {"id": "282bbcc90a5c-1", "text": "param get_chat_history: Optional[Callable[[CHAT_TURN_TYPE], str]] = None\u00b6\nAn optional function to get a string of the chat history.\nIf None is provided, will use a default.\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. 
Defaults to None.\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam output_key: str = 'answer'\u00b6\nThe output key to return the final answer of this chain in.\nparam question_generator: LLMChain [Required]\u00b6\nThe chain used to generate a new question for the sake of retrieval.\nThis chain will take in the current question (with variable question)\nand any chat history (with variable chat_history) and will produce\na new standalone question to be used later on.\nparam rephrase_question: bool = True\u00b6\nWhether or not to pass the new generated question to the combine_docs_chain.\nIf True, will pass the new generated question along.\nIf False, will only use the new generated question for retrieval and pass the\noriginal question along to the combine_docs_chain.\nparam return_generated_question: bool = False\u00b6\nReturn the generated question as part of the final result.\nparam return_source_documents: bool = False\u00b6\nReturn the retrieved source documents as part of the final result.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.conversational_retrieval.base.ChatVectorDBChain.html"} {"id": "282bbcc90a5c-2", "text": "Return the retrieved source documents as part of the final result.\nparam search_kwargs: dict [Optional]\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None.\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam top_k_docs_for_context: int = 4\u00b6\nparam vectorstore: VectorStore [Required]\u00b6\nparam verbose: bool [Optional]\u00b6\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.conversational_retrieval.base.ChatVectorDBChain.html"}
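An illustrative call sketch, assuming chain is a ChatVectorDBChain constructed as in the from_llm sketch further below; this chain's two input keys are question and chat_history:

result = chain(
    {"question": "What is LangChain?", "chat_history": []},
    return_only_outputs=True,
)
answer = result["answer"]  # output_key defaults to 'answer'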
{"id": "282bbcc90a5c-3", "text": "tags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None.\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None.\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.conversational_retrieval.base.ChatVectorDBChain.html"} {"id": "282bbcc90a5c-4", "text": "to False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.conversational_retrieval.base.ChatVectorDBChain.html"} {"id": "282bbcc90a5c-5", "text": "directly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n.. code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nclassmethod from_llm(llm: BaseLanguageModel, vectorstore: VectorStore, condense_question_prompt: BasePromptTemplate = PromptTemplate(input_variables=['chat_history', 'question'], output_parser=None, partial_variables={}, template='Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\\n\\nChat History:\\n{chat_history}\\nFollow Up Input: {question}\\nStandalone question:', template_format='f-string', validate_template=True), chain_type: str = 'stuff', combine_docs_chain_kwargs: Optional[Dict] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 BaseConversationalRetrievalChain[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.conversational_retrieval.base.ChatVectorDBChain.html"} {"id": "282bbcc90a5c-6", "text": "Load chain from LLM.
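A minimal construction sketch (the embedded text is illustrative, and FAISS requires the faiss package); note the raise_deprecation validator below: ConversationalRetrievalChain supersedes this class.

from langchain.chains import ChatVectorDBChain
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

# Build a small vector store to chat against.
vectorstore = FAISS.from_texts(
    ["LangChain is a framework for building LLM applications."],
    OpenAIEmbeddings(),
)
chain = ChatVectorDBChain.from_llm(OpenAI(temperature=0), vectorstore=vectorstore)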
prep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.conversational_retrieval.base.ChatVectorDBChain.html"} {"id": "282bbcc90a5c-7", "text": "The other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.conversational_retrieval.base.ChatVectorDBChain.html"} {"id": "282bbcc90a5c-8", "text": "Set the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty input_keys: List[str]\u00b6\nInput keys.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. 
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\nallow_population_by_field_name = True\u00b6\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.conversational_retrieval.base.ChatVectorDBChain.html"} {"id": "299fae5601d8-0", "text": "langchain.chains.api.openapi.chain.OpenAPIEndpointChain\u00b6\nclass langchain.chains.api.openapi.chain.OpenAPIEndpointChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, api_request_chain: LLMChain, api_response_chain: Optional[LLMChain] = None, api_operation: APIOperation, requests: Requests = None, param_mapping: _ParamMapping, return_intermediate_steps: bool = False, instructions_key: str = 'instructions', output_key: str = 'output', max_text_length: Optional[ConstrainedIntValue] = None)[source]\u00b6\nBases: Chain, BaseModel\nChain that interacts with an OpenAPI endpoint using natural language.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_operation: APIOperation [Required]\u00b6\nparam api_request_chain: LLMChain [Required]\u00b6\nparam api_response_chain: Optional[LLMChain] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.chain.OpenAPIEndpointChain.html"} {"id": "299fae5601d8-1", "text": "Optional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. 
Defaults to None.\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam param_mapping: _ParamMapping [Required]\u00b6\nparam requests: Requests [Optional]\u00b6\nparam return_intermediate_steps: bool = False\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None.\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.chain.OpenAPIEndpointChain.html"} {"id": "299fae5601d8-2", "text": "only one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.chain.OpenAPIEndpointChain.html"} {"id": "299fae5601d8-3", "text": "chain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None.\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.chain.OpenAPIEndpointChain.html"} {"id": "299fae5601d8-4", "text": "sole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndeserialize_json_input(serialized_args: str) \u2192 dict[source]\u00b6\nUse the serialized TypeScript dictionary.\nResolve the path, query params dict, and optional requestBody dict.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n.. code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.chain.OpenAPIEndpointChain.html"} {"id": "299fae5601d8-5", "text": "# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nclassmethod from_api_operation(operation: APIOperation, llm: BaseLanguageModel, requests: Optional[Requests] = None, verbose: bool = False, return_intermediate_steps: bool = False, raw_response: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 OpenAPIEndpointChain[source]\u00b6\nCreate an OpenAPIEndpointChain from an operation and a spec.\nclassmethod from_url_and_method(spec_url: str, path: str, method: str, llm: BaseLanguageModel, requests: Optional[Requests] = None, return_intermediate_steps: bool = False, **kwargs: Any) \u2192 OpenAPIEndpointChain[source]\u00b6\nCreate an OpenAPIEndpointChain from a spec at the specified URL.
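A hedged sketch of building the chain from a spec URL; the URL, path and method are hypothetical placeholders, and run takes the natural-language instructions (instructions_key defaults to 'instructions'):

from langchain.chains.api.openapi.chain import OpenAPIEndpointChain
from langchain.llms import OpenAI

chain = OpenAPIEndpointChain.from_url_and_method(
    spec_url="https://example.com/openapi.json",  # hypothetical OpenAPI spec
    path="/weather",                              # hypothetical operation path
    method="get",
    llm=OpenAI(temperature=0),
)
chain.run("What is the weather like in Boise, Idaho?")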
prep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.chain.OpenAPIEndpointChain.html"} {"id": "299fae5601d8-6", "text": "Returns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.chain.OpenAPIEndpointChain.html"} {"id": "299fae5601d8-7", "text": "# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. 
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.chain.OpenAPIEndpointChain.html"} {"id": "4e7b78b7e952-0", "text": "langchain.chains.llm_checker.base.LLMCheckerChain\u00b6\nclass langchain.chains.llm_checker.base.LLMCheckerChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, question_to_checked_assertions_chain: SequentialChain, llm: Optional[BaseLanguageModel] = None, create_draft_answer_prompt: PromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='{question}\\n\\n', template_format='f-string', validate_template=True), list_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['statement'], output_parser=None, partial_variables={}, template='Here is a statement:\\n{statement}\\nMake a bullet point list of the assumptions you made when producing the above statement.\\n\\n', template_format='f-string', validate_template=True), check_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='Here is a bullet point list of assertions:\\n{assertions}\\nFor each assertion, determine whether it is true or false. If it is false, explain why.\\n\\n', template_format='f-string', validate_template=True), revised_answer_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'question'], output_parser=None, partial_variables={}, template=\"{checked_assertions}\\n\\nQuestion: In light of the above assertions and checks, how would you answer the question '{question}'?\\n\\nAnswer:\", template_format='f-string', validate_template=True), input_key: str = 'query', output_key: str = 'result')[source]\u00b6\nBases: Chain\nChain for question-answering with self-verification.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.llm_checker.base.LLMCheckerChain.html"} {"id": "4e7b78b7e952-1", "text": "Bases: Chain\nChain for question-answering with self-verification.\nExample\nfrom langchain import OpenAI, LLMCheckerChain\nllm = OpenAI(temperature=0.7)\nchecker_chain = LLMCheckerChain.from_llm(llm)\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). 
Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam check_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='Here is a bullet point list of assertions:\\n{assertions}\\nFor each assertion, determine whether it is true or false. If it is false, explain why.\\n\\n', template_format='f-string', validate_template=True)\u00b6\n[Deprecated]\nparam create_draft_answer_prompt: PromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='{question}\\n\\n', template_format='f-string', validate_template=True)\u00b6\n[Deprecated]\nparam list_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['statement'], output_parser=None, partial_variables={}, template='Here is a statement:\\n{statement}\\nMake a bullet point list of the assumptions you made when producing the above statement.\\n\\n', template_format='f-string', validate_template=True)\u00b6\n[Deprecated]\nparam llm: Optional[BaseLanguageModel] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.llm_checker.base.LLMCheckerChain.html"} {"id": "4e7b78b7e952-2", "text": "[Deprecated]\nparam llm: Optional[BaseLanguageModel] = None\u00b6\n[Deprecated] LLM wrapper to use.\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam question_to_checked_assertions_chain: SequentialChain [Required]\u00b6\nparam revised_answer_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'question'], output_parser=None, partial_variables={}, template=\"{checked_assertions}\\n\\nQuestion: In light of the above assertions and checks, how would you answer the question '{question}'?\\n\\nAnswer:\", template_format='f-string', validate_template=True)\u00b6\n[Deprecated] Prompt to use when questioning the documents.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.llm_checker.base.LLMCheckerChain.html"} {"id": "4e7b78b7e952-3", "text": "will be printed to the console. 
Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified inChain.output_keys.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.llm_checker.base.LLMCheckerChain.html"} {"id": "4e7b78b7e952-4", "text": "Returns\nA dict of named outputs. Should contain all outputs specified inChain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. 
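A minimal batching sketch (not from the reference), reusing the checker_chain built in the Example above; 'query' and 'result' are LLMCheckerChain's default input_key and output_key:
results = checker_chain.apply([
    {"query": "What color is the sky?"},
    {"query": "How many moons does Mars have?"},
])
# -> one output dict per input, e.g. [{"result": "..."}, {"result": "..."}]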
{"id": "4e7b78b7e952-5", "text": "Call the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.llm_checker.base.LLMCheckerChain.html"} {"id": "4e7b78b7e952-6", "text": "# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n.. code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.llm_checker.base.LLMCheckerChain.html"} {"id": "4e7b78b7e952-7", "text": "# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nclassmethod from_llm(llm: BaseLanguageModel, create_draft_answer_prompt: PromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='{question}\\n\\n', template_format='f-string', 
validate_template=True), list_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['statement'], output_parser=None, partial_variables={}, template='Here is a statement:\\n{statement}\\nMake a bullet point list of the assumptions you made when producing the above statement.\\n\\n', template_format='f-string', validate_template=True), check_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='Here is a bullet point list of assertions:\\n{assertions}\\nFor each assertion, determine whether it is true or false. If it is false, explain why.\\n\\n', template_format='f-string', validate_template=True), revised_answer_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'question'], output_parser=None, partial_variables={}, template=\"{checked_assertions}\\n\\nQuestion: In light of the above assertions and checks, how would you answer the question '{question}'?\\n\\nAnswer:\", template_format='f-string', validate_template=True), **kwargs: Any) \u2192 LLMCheckerChain[source]\u00b6\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.llm_checker.base.LLMCheckerChain.html"} {"id": "4e7b78b7e952-8", "text": "Returns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. 
These will be called in\naddition to callbacks passed to the chain during construction, but only", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.llm_checker.base.LLMCheckerChain.html"} {"id": "4e7b78b7e952-9", "text": "addition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.llm_checker.base.LLMCheckerChain.html"} {"id": "4e7b78b7e952-10", "text": "property lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.llm_checker.base.LLMCheckerChain.html"} {"id": "ed15c6e4bcc6-0", "text": "langchain.chains.openai_functions.qa_with_structure.AnswerWithSources\u00b6\nclass langchain.chains.openai_functions.qa_with_structure.AnswerWithSources(*, answer: str, sources: List[str])[source]\u00b6\nBases: BaseModel\nAn answer to the question being asked, with sources.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam answer: str [Required]\u00b6\nAnswer to the question that was asked\nparam sources: List[str] [Required]\u00b6\nList of sources used to answer the question", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.qa_with_structure.AnswerWithSources.html"} {"id": "e4110d1f8117-0", "text": "langchain.chains.query_constructor.parser.QueryTransformer\u00b6\nlangchain.chains.query_constructor.parser.QueryTransformer\u00b6\nalias of None", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.parser.QueryTransformer.html"} {"id": "9b53ee7ef484-0", "text": "langchain.chains.flare.base.QuestionGeneratorChain\u00b6\nclass langchain.chains.flare.base.QuestionGeneratorChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, prompt: BasePromptTemplate = PromptTemplate(input_variables=['user_input', 'current_response', 'uncertain_span'], output_parser=None, partial_variables={}, template='Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\\n\\n>>> USER INPUT: {user_input}\\n>>> EXISTING PARTIAL RESPONSE: {current_response}\\n\\nThe question to which the answer is the term/entity/phrase \"{uncertain_span}\" is:', template_format='f-string', validate_template=True), llm: BaseLanguageModel, output_key: str = 'text', output_parser: BaseLLMOutputParser = None, return_final_only: bool = True, llm_kwargs: dict = None)[source]\u00b6\nBases: LLMChain\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). 
Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam llm: BaseLanguageModel [Required]\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.flare.base.QuestionGeneratorChain.html"} {"id": "9b53ee7ef484-1", "text": "for full details.\nparam llm: BaseLanguageModel [Required]\u00b6\nLanguage model to call.\nparam llm_kwargs: dict [Optional]\u00b6\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam output_parser: BaseLLMOutputParser [Optional]\u00b6\nOutput parser to use.\nDefaults to one that takes the most likely string but does not change it\notherwise.\nparam prompt: BasePromptTemplate = PromptTemplate(input_variables=['user_input', 'current_response', 'uncertain_span'], output_parser=None, partial_variables={}, template='Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\\n\\n>>> USER INPUT: {user_input}\\n>>> EXISTING PARTIAL RESPONSE: {current_response}\\n\\nThe question to which the answer is the term/entity/phrase \"{uncertain_span}\" is:', template_format='f-string', validate_template=True)\u00b6\nPrompt object to use.\nparam return_final_only: bool = True\u00b6\nWhether to return only the final parsed result. Defaults to True.\nIf false, will return a bunch of extra information about the generation.\nparam tags: Optional[List[str]] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.flare.base.QuestionGeneratorChain.html"} {"id": "9b53ee7ef484-2", "text": "param tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. 
Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.flare.base.QuestionGeneratorChain.html"} {"id": "9b53ee7ef484-3", "text": "metadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nasync aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nUtilize the LLM generate method for speed gains.\nasync aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Union[str, List[str], Dict[str, str]]]\u00b6\nCall apply and then parse the results.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.flare.base.QuestionGeneratorChain.html"} {"id": "9b53ee7ef484-4", "text": "these runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.
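A minimal async sketch (not from the reference), assuming the langchain.llms.OpenAI wrapper and an async caller; the input keys match the default prompt's input_variables shown above, and the default output key is 'text':
from langchain.llms import OpenAI
from langchain.chains.flare.base import QuestionGeneratorChain

chain = QuestionGeneratorChain(llm=OpenAI(temperature=0))
# inside an async function:
result = await chain.acall({
    "user_input": "Tell me about Boise's climate.",
    "current_response": "Boise has a semi-arid climate with hot, dry summers.",
    "uncertain_span": "semi-arid",
})
# -> dict of inputs plus the generated question under the 'text' key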
async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) \u2192 LLMResult\u00b6\nGenerate LLM result from inputs.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nUtilize the LLM generate method for speed gains.\napply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Union[str, List[str], Dict[str, str]]]\u00b6\nCall apply and then parse the results.\nasync apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 str\u00b6\nFormat prompt with kwargs and pass to LLM.\nParameters\ncallbacks \u2013 Callbacks to pass to LLMChain\n**kwargs \u2013 Keys to pass to prompt template.\nReturns\nCompletion from LLM.\nExample\ncompletion = llm.predict(adjective=\"funny\")\nasync apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[str, List[str], Dict[str, str]]\u00b6\nCall apredict and then parse the results.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.flare.base.QuestionGeneratorChain.html"} {"id": "9b53ee7ef484-5", "text": "Call apredict and then parse the results.\nasync aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) \u2192 Tuple[List[PromptValue], Optional[List[str]]]\u00b6\nPrepare prompts from inputs.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.flare.base.QuestionGeneratorChain.html"} {"id": "9b53ee7ef484-6", "text": "Example\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ncreate_outputs(llm_result: LLMResult) \u2192 List[Dict[str, Any]]\u00b6\nCreate outputs from response.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n.. code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nclassmethod from_string(llm: BaseLanguageModel, template: str) \u2192 LLMChain\u00b6\nCreate LLMChain from LLM and template.
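A minimal from_string sketch (not from the reference), assuming the langchain.llms.OpenAI wrapper; the template's f-string variables become the chain's input keys:
from langchain.llms import OpenAI
from langchain.chains import LLMChain

# Build an LLMChain directly from a template string.
chain = LLMChain.from_string(llm=OpenAI(temperature=0), template="Tell me a {adjective} joke.")
completion = chain.predict(adjective="funny")  # fills {adjective} and calls the LLM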
generate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) \u2192 LLMResult\u00b6\nGenerate LLM result from inputs.\npredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 str\u00b6\nFormat prompt with kwargs and pass to LLM.\nParameters\ncallbacks \u2013 Callbacks to pass to LLMChain\n**kwargs \u2013 Keys to pass to prompt template.\nReturns\nCompletion from LLM.\nExample", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.flare.base.QuestionGeneratorChain.html"} {"id": "9b53ee7ef484-7", "text": "Returns\nCompletion from LLM.\nExample\ncompletion = llm.predict(adjective=\"funny\")\npredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[str, List[str], Dict[str, Any]]\u00b6\nCall predict and then parse the results.\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) \u2192 Tuple[List[PromptValue], Optional[List[str]]]\u00b6\nPrepare prompts from inputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.flare.base.QuestionGeneratorChain.html"} {"id": "9b53ee7ef484-8", "text": "Raise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.flare.base.QuestionGeneratorChain.html"} {"id": "9b53ee7ef484-9", "text": "# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. 
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.flare.base.QuestionGeneratorChain.html"} {"id": "9b53ee7ef484-9", "text": "property lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.flare.base.QuestionGeneratorChain.html"} {"id": "329da39b7ba1-0", "text": "langchain.chains.api.openapi.requests_chain.APIRequesterOutputParser\u00b6\nclass langchain.chains.api.openapi.requests_chain.APIRequesterOutputParser[source]\u00b6\nBases: BaseOutputParser\nParse the request and error tags.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str\u00b6\nInstructions on how the LLM output should be formatted.\nparse(llm_output: str) \u2192 str[source]\u00b6\nParse the request and error tags.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.requests_chain.APIRequesterOutputParser.html"} {"id": "329da39b7ba1-1", "text": "property lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.requests_chain.APIRequesterOutputParser.html"} {"id": "143b46dc50b3-0", "text": "langchain.docstore.in_memory.InMemoryDocstore\u00b6\nclass langchain.docstore.in_memory.InMemoryDocstore(_dict: Optional[Dict[str, Document]] = None)[source]\u00b6\nBases: Docstore, AddableMixin\nSimple in-memory docstore in the form of a dict.\nInitialize with dict.\nMethods\n__init__([_dict])\nInitialize with dict.\nadd(texts)\nAdd texts to the in-memory dictionary.\nsearch(search)\nSearch via direct lookup.\nadd(texts: Dict[str, Document]) \u2192 None[source]\u00b6\nAdd texts to the in-memory dictionary.\nParameters\ntexts \u2013 dictionary of id -> document.\nReturns\nNone\nsearch(search: str) \u2192 Union[str, Document][source]\u00b6\nSearch via direct lookup.\nParameters\nsearch \u2013 id of a document to search for.\nReturns\nDocument if found, else error message.", "source": "https://api.python.langchain.com/en/latest/docstore/langchain.docstore.in_memory.InMemoryDocstore.html"} {"id": "4b7f164dd404-0", "text": "langchain.docstore.base.Docstore\u00b6\nclass langchain.docstore.base.Docstore[source]\u00b6\nBases: ABC\nInterface for accessing a place that stores documents.\nMethods\n__init__()\nsearch(search)\nSearch for document.\nabstract search(search: str) \u2192 Union[str, Document][source]\u00b6\nSearch for document.\nIf page exists, return the page summary, and a Document object.\nIf page does not exist, return similar entries.", "source": "https://api.python.langchain.com/en/latest/docstore/langchain.docstore.base.Docstore.html"} {"id": "103d5fab8239-0", "text": "langchain.docstore.arbitrary_fn.DocstoreFn\u00b6\nclass langchain.docstore.arbitrary_fn.DocstoreFn(lookup_fn: Callable[[str], Union[Document, str]])[source]\u00b6\nBases: Docstore\nLangchain Docstore via arbitrary lookup function.\nThis is useful when:\nit\u2019s expensive to construct an InMemoryDocstore/dict\nyou retrieve documents from remote sources\nyou just want to reuse existing objects\nMethods\n__init__(lookup_fn)\nsearch(search)\nSearch for a document.\nsearch(search: str) \u2192 Document[source]\u00b6\nSearch for a document.\nParameters\nsearch \u2013 search string\nReturns\nDocument if found, else error message.", "source": "https://api.python.langchain.com/en/latest/docstore/langchain.docstore.arbitrary_fn.DocstoreFn.html"} {"id": "3c4a7a9b9e40-0", "text": "langchain.docstore.wikipedia.Wikipedia\u00b6\nclass langchain.docstore.wikipedia.Wikipedia[source]\u00b6\nBases: Docstore\nWrapper around the wikipedia API.\nCheck that wikipedia package is installed.\nMethods\n__init__()\nCheck that wikipedia package is installed.\nsearch(search)\nTry to search for wiki page.\nsearch(search: str) \u2192 Union[str, Document][source]\u00b6\nTry to search for wiki page.\nIf page exists, return the page summary, and a PageWithLookups object.\nIf page does not exist, return similar entries.\nParameters\nsearch \u2013 search string.\nReturns: a Document object or error message.", "source": "https://api.python.langchain.com/en/latest/docstore/langchain.docstore.wikipedia.Wikipedia.html"}
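A minimal docstore sketch (not from the reference) of the interface above, using InMemoryDocstore and assuming langchain.schema.Document:
from langchain.docstore.in_memory import InMemoryDocstore
from langchain.schema import Document

store = InMemoryDocstore({"1": Document(page_content="hello")})
store.add({"2": Document(page_content="world")})  # ids map to Document objects
store.search("2")  # -> Document(page_content="world"), via direct lookup
store.search("3")  # -> an error message string, since the id is missing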
{"id": "4c6551e1b3f4-0", "text": "langchain.docstore.base.AddableMixin\u00b6\nclass langchain.docstore.base.AddableMixin[source]\u00b6\nBases: ABC\nMixin class that supports adding texts.\nMethods\n__init__()\nadd(texts)\nAdd more documents.\nabstract add(texts: Dict[str, Document]) \u2192 None[source]\u00b6\nAdd more documents.", "source": "https://api.python.langchain.com/en/latest/docstore/langchain.docstore.base.AddableMixin.html"} {"id": "3fbe75e9e8df-0", "text": "langchain.chat_models.google_palm.ChatGooglePalmError\u00b6\nclass langchain.chat_models.google_palm.ChatGooglePalmError[source]\u00b6\nBases: Exception\nError raised when there is an issue with the Google PaLM API.\nadd_note()\u00b6\nException.add_note(note) \u2013\nadd a note to the exception\nwith_traceback()\u00b6\nException.with_traceback(tb) \u2013\nset self.__traceback__ to tb and return self.\nargs\u00b6", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.google_palm.ChatGooglePalmError.html"} {"id": "f9e392ebc1de-0", "text": "langchain.chat_models.azure_openai.AzureChatOpenAI\u00b6\nclass langchain.chat_models.azure_openai.AzureChatOpenAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, model: str = 'gpt-3.5-turbo', temperature: float = 0.7, model_kwargs: Dict[str, Any] = None, openai_api_key: str = '', openai_api_base: str = '', openai_organization: str = '', openai_proxy: str = '', request_timeout: Optional[Union[float, Tuple[float, float]]] = None, max_retries: int = 6, streaming: bool = False, n: int = 1, max_tokens: Optional[int] = None, tiktoken_model_name: Optional[str] = None, deployment_name: str = '', openai_api_type: str = 'azure', openai_api_version: str = '')[source]\u00b6\nBases: ChatOpenAI\nWrapper around Azure OpenAI Chat Completion API. To use this class you\nmust have a deployed model on Azure OpenAI. 
Use deployment_name in the\nconstructor to refer to the \u201cModel deployment name\u201d in the Azure portal.\nIn addition, you should have the openai python package installed, and the\nfollowing environment variables set or passed in constructor in lower case:\n- OPENAI_API_TYPE (default: azure)\n- OPENAI_API_KEY\n- OPENAI_API_BASE\n- OPENAI_API_VERSION\n- OPENAI_PROXY\nFor example, if you have gpt-35-turbo deployed, with the deployment name", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.azure_openai.AzureChatOpenAI.html"} {"id": "f9e392ebc1de-1", "text": "35-turbo-dev, the constructor should look like:\nAzureChatOpenAI(\n deployment_name=\"35-turbo-dev\",\n openai_api_version=\"2023-03-15-preview\",\n)\nBe aware the API version may change.\nAny parameters that are valid to be passed to the openai.create call can be passed\nin, even if not explicitly saved on this class.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam deployment_name: str = ''\u00b6\nparam max_retries: int = 6\u00b6\nMaximum number of retries to make when generating.\nparam max_tokens: Optional[int] = None\u00b6\nMaximum number of tokens to generate.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam model_kwargs: Dict[str, Any] [Optional]\u00b6\nHolds any model parameters valid for create call not explicitly specified.\nparam model_name: str = 'gpt-3.5-turbo' (alias 'model')\u00b6\nModel name to use.\nparam n: int = 1\u00b6\nNumber of chat completions to generate for each prompt.\nparam openai_api_base: str = ''\u00b6\nBase URL path for API requests,\nleave blank if not using a proxy or service emulator.\nparam openai_api_key: str = ''\u00b6\nparam openai_api_type: str = 'azure'\u00b6\nparam openai_api_version: str = ''\u00b6\nparam openai_organization: str = ''\u00b6\nparam openai_proxy: str = ''\u00b6", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.azure_openai.AzureChatOpenAI.html"} {"id": "f9e392ebc1de-2", "text": "param openai_organization: str = ''\u00b6\nparam openai_proxy: str = ''\u00b6\nparam request_timeout: Optional[Union[float, Tuple[float, float]]] = None\u00b6\nTimeout for requests to OpenAI completion API. Default is 600 seconds.\nparam streaming: bool = False\u00b6\nWhether to stream the results or not.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam temperature: float = 0.7\u00b6\nWhat sampling temperature to use.\nparam tiktoken_model_name: Optional[str] = None\u00b6\nThe model name to pass to tiktoken when using this class.\nTiktoken is used to count the number of tokens in documents to constrain\nthem to be under a certain limit. By default, when set to None, this will\nbe the same as the embedding model name. However, there are some cases\nwhere you may want to use this Embedding class with a model name not\nsupported by tiktoken. This can include when using Azure embeddings or\nwhen using one of the many model providers that expose an OpenAI-like\nAPI but with different models. In those cases, in order to avoid erroring\nwhen tiktoken is called, you can specify a model name to use here.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.
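A minimal call sketch (not from the reference), assuming the environment variables listed above are set and reusing the placeholder deployment from the constructor example; HumanMessage is assumed to come from langchain.schema:
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage

chat = AzureChatOpenAI(
    deployment_name="35-turbo-dev",            # placeholder deployment from the example above
    openai_api_version="2023-03-15-preview",
)
chat([HumanMessage(content="Hello!")])
# -> a BaseMessage with the model's reply (see __call__ below)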
__call__(messages: List[BaseMessage], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nCall self as a function.", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.azure_openai.AzureChatOpenAI.html"} {"id": "f9e392ebc1de-3", "text": "Call self as a function.\nasync agenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nTop Level call\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.azure_openai.AzureChatOpenAI.html"} {"id": "f9e392ebc1de-4", "text": "Asynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the top candidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the top candidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. 
validator build_extra » all fields
Build extra kwargs from additional params that were passed in.
call_as_llm(message: str, stop: Optional[List[str]] = None, **kwargs: Any) → str
completion_with_retry(**kwargs: Any) → Any
Use tenacity to retry the completion call.
dict(**kwargs: Any) → Dict
Return a dictionary of the LLM.
generate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Top-level call: generate a chat completion for each list of input messages.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you:
want to take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
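generate takes one list of messages per input and returns an LLMResult whose generations field holds the candidates for each input. A brief sketch, reusing the placeholder Azure deployment from above (any chat model on this page would work the same way):

from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = AzureChatOpenAI(
    deployment_name="35-turbo-dev",           # placeholder deployment
    openai_api_version="2023-03-15-preview",
)

# One inner list of messages per model input; the two inputs are batched.
batch = [
    [SystemMessage(content="You are terse."), HumanMessage(content="Define entropy.")],
    [HumanMessage(content="Name three prime numbers.")],
]

result = chat.generate(batch)              # -> LLMResult
for generations in result.generations:    # one list of candidates per input
    print(generations[0].text)             # top candidate for that input
print(result.llm_output)                   # provider-specific output, e.g. token usage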
get_num_tokens(text: str) → int
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model's context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
Calculate the number of tokens for gpt-3.5-turbo and gpt-4 with the tiktoken package.
Official documentation: https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb
get_token_ids(text: str) → List[int]
Get the tokens present in the text with the tiktoken package.
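These helpers are useful for checking an input against the model's context window before sending it. A short sketch, again with the placeholder deployment:

from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage

chat = AzureChatOpenAI(
    deployment_name="35-turbo-dev",           # placeholder deployment
    openai_api_version="2023-03-15-preview",
)

text = "How many tokens is this sentence?"
print(chat.get_num_tokens(text))    # token count for a raw string
print(chat.get_token_ids(text))     # the underlying tiktoken ids

# Counts tokens the way the chat API will see them, including per-message overhead.
print(chat.get_num_tokens_from_messages([HumanMessage(content=text)]))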
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
validator raise_deprecation » all fields
Raise a deprecation warning if callback_manager is used.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
validator validate_environment » all fields
Validate that the api key and python package exist in the environment.
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
Configuration for this pydantic object.
allow_population_by_field_name = True
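The difference between the string and message entry points, in one sketch. The model here is the placeholder deployment reused from above, and the stop behavior shown follows the parameter documentation rather than anything Azure-specific:

from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = AzureChatOpenAI(
    deployment_name="35-turbo-dev",           # placeholder deployment
    openai_api_version="2023-03-15-preview",
)

# String in, string out.
answer: str = chat.predict("Translate 'good morning' to French.")

# Messages in, message out (an AIMessage); use this to set roles explicitly.
reply = chat.predict_messages(
    [
        SystemMessage(content="Answer with a single word."),
        HumanMessage(content="Translate 'good morning' to French."),
    ],
    stop=["\n"],  # generation is cut off at the first newline
)
print(answer, reply.content)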
langchain.chat_models.anthropic.ChatAnthropic
class langchain.chat_models.anthropic.ChatAnthropic(*, client: Any = None, async_client: Any = None, model: str = 'claude-v1', max_tokens_to_sample: int = 256, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, streaming: bool = False, default_request_timeout: Optional[float] = None, anthropic_api_url: Optional[str] = None, anthropic_api_key: Optional[str] = None, HUMAN_PROMPT: Optional[str] = None, AI_PROMPT: Optional[str] = None, count_tokens: Optional[Callable[[str], int]] = None, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None)
Bases: BaseChatModel, _AnthropicCommon
Wrapper around Anthropic's large language model.
To use, you should have the anthropic python package installed, and the environment variable ANTHROPIC_API_KEY set with your API key, or pass it as a named parameter to the constructor.
Example
from langchain.chat_models import ChatAnthropic
model = ChatAnthropic(model="claude-v1", anthropic_api_key="my-api-key")
Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.
param AI_PROMPT: Optional[str] = None
param HUMAN_PROMPT: Optional[str] = None
param anthropic_api_key: Optional[str] = None
param anthropic_api_url: Optional[str] = None
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None
param count_tokens: Optional[Callable[[str], int]] = None
param default_request_timeout: Optional[float] = None
Timeout for requests to the Anthropic Completion API. Default is 600 seconds.
param max_tokens_to_sample: int = 256
Denotes the number of tokens to predict per generation.
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param model: str = 'claude-v1'
Model name to use.
param streaming: bool = False
Whether to stream the results.
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param temperature: Optional[float] = None
A non-negative float that tunes the degree of randomness in generation.
param top_k: Optional[int] = None
Number of most likely tokens to consider at each step.
param top_p: Optional[float] = None
Total probability mass of tokens to consider at each step.
param verbose: bool [Optional]
Whether to print out response text.
The generation methods (__call__, agenerate, agenerate_prompt, apredict, apredict_messages, call_as_llm, dict, generate, generate_prompt, predict, predict_messages) are inherited from the shared chat-model interface; their signatures and documentation are identical to those shown for AzureChatOpenAI above.
get_num_tokens(text: str) → int
Calculate the number of tokens.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model's context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
validator raise_deprecation » all fields
Raise a deprecation warning if callback_manager is used.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
validator validate_environment » all fields
Validate that the api key and python package exist in the environment.
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True
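A short usage sketch, assuming ANTHROPIC_API_KEY is set in the environment; the model name is the documented default:

from langchain.chat_models import ChatAnthropic
from langchain.schema import HumanMessage

chat = ChatAnthropic(
    model="claude-v1",          # documented default
    max_tokens_to_sample=256,   # tokens to predict per generation
    temperature=0.5,
)

message = chat([HumanMessage(content="Summarize the water cycle in two sentences.")])
print(message.content)

# get_num_tokens is backed by the provider's own token counter here.
print(chat.get_num_tokens(message.content))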
langchain.chat_models.fake.FakeListChatModel
class langchain.chat_models.fake.FakeListChatModel(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, responses: List, i: int = 0)
Bases: SimpleChatModel
Fake ChatModel for testing purposes.
Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param i: int = 0
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param responses: List [Required]
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param verbose: bool [Optional]
Whether to print out response text.
The methods and properties (__call__, agenerate, agenerate_prompt, apredict, apredict_messages, call_as_llm, dict, generate, generate_prompt, predict, predict_messages, the raise_deprecation validator, to_json, to_json_not_implemented, and the lc_attributes, lc_namespace, lc_secrets, and lc_serializable properties) are inherited from the shared chat-model interface; their signatures and documentation are identical to those shown for AzureChatOpenAI above. The token-counting methods get_num_tokens, get_num_tokens_from_messages, and get_token_ids use the generic implementations documented under BaseChatModel below rather than tiktoken.
model Config
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True
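A sketch of how the fake model can stand in for a real chat model in tests. That it plays back the canned responses in order, tracking its position with the i field, is inferred from the responses and i parameters above and is worth verifying against your installed version:

from langchain.chat_models.fake import FakeListChatModel
from langchain.schema import HumanMessage

# The model ignores the input and returns canned responses in sequence.
fake = FakeListChatModel(responses=["first canned reply", "second canned reply"])

print(fake([HumanMessage(content="anything")]).content)  # "first canned reply"
print(fake([HumanMessage(content="anything")]).content)  # "second canned reply"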
langchain.chat_models.human.HumanInputChatModel
class langchain.chat_models.human.HumanInputChatModel(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, input_func: Callable = None, message_func: Callable = None, separator: str = '\n', input_kwargs: Mapping[str, Any] = {}, message_kwargs: Mapping[str, Any] = {})
Bases: BaseChatModel
ChatModel wrapper that returns the user's input as the response.
Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param input_func: Callable [Optional]
param input_kwargs: Mapping[str, Any] = {}
param message_func: Callable [Optional]
param message_kwargs: Mapping[str, Any] = {}
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param separator: str = '\n'
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param verbose: bool [Optional]
Whether to print out response text.
The methods and properties (__call__, agenerate, agenerate_prompt, apredict, apredict_messages, call_as_llm, dict, generate, generate_prompt, predict, predict_messages, the raise_deprecation validator, to_json, to_json_not_implemented, and the lc_attributes, lc_namespace, lc_secrets, and lc_serializable properties) are inherited from the shared chat-model interface; their signatures and documentation are identical to those shown for AzureChatOpenAI above, and the token-counting methods use the generic implementations documented under BaseChatModel below.
model Config
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True
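A sketch of the human-in-the-loop model, which is handy for debugging chains and agents by letting a person play the role of the model. That the defaults display the incoming messages and read the reply from standard input is an assumption based on the input_func and message_func fields above:

from langchain.chat_models.human import HumanInputChatModel
from langchain.schema import HumanMessage

chat = HumanInputChatModel()

# The prompt messages are shown on the terminal; whatever you type
# becomes the content of the returned message.
reply = chat([HumanMessage(content="You are the model. Please answer: 2 + 2 = ?")])
print(reply.content)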
langchain.chat_models.promptlayer_openai.PromptLayerChatOpenAI
class langchain.chat_models.promptlayer_openai.PromptLayerChatOpenAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, model: str = 'gpt-3.5-turbo', temperature: float = 0.7, model_kwargs: Dict[str, Any] = None, openai_api_key: Optional[str] = None, openai_api_base: Optional[str] = None, openai_organization: Optional[str] = None, openai_proxy: Optional[str] = None, request_timeout: Optional[Union[float, Tuple[float, float]]] = None, max_retries: int = 6, streaming: bool = False, n: int = 1, max_tokens: Optional[int] = None, tiktoken_model_name: Optional[str] = None, pl_tags: Optional[List[str]] = None, return_pl_id: Optional[bool] = False)
Bases: ChatOpenAI
Wrapper around OpenAI Chat large language models and PromptLayer.
To use, you should have the openai and promptlayer python packages installed, and the environment variables OPENAI_API_KEY and PROMPTLAYER_API_KEY set with your OpenAI API key and PromptLayer key respectively.
All parameters that can be passed to the OpenAI LLM can also be passed here. PromptLayerChatOpenAI adds two optional parameters:
Parameters
pl_tags – List of strings to tag the request with.
return_pl_id – If True, the PromptLayer request ID will be returned in the generation_info field of the Generation object.
Example
from langchain.chat_models import PromptLayerChatOpenAI
openai = PromptLayerChatOpenAI(model_name="gpt-3.5-turbo")
Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param max_retries: int = 6
Maximum number of retries to make when generating.
param max_tokens: Optional[int] = None
Maximum number of tokens to generate.
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param model_kwargs: Dict[str, Any] [Optional]
Holds any model parameters valid for the create call not explicitly specified.
param model_name: str = 'gpt-3.5-turbo' (alias 'model')
Model name to use.
param n: int = 1
Number of chat completions to generate for each prompt.
param openai_api_base: Optional[str] = None
Base URL path for API requests; leave blank if not using a proxy or service emulator.
param openai_api_key: Optional[str] = None
param openai_organization: Optional[str] = None
param openai_proxy: Optional[str] = None
param pl_tags: Optional[List[str]] = None
param request_timeout: Optional[Union[float, Tuple[float, float]]] = None
Timeout for requests to the OpenAI completion API. Default is 600 seconds.
param return_pl_id: Optional[bool] = False
param streaming: bool = False
Whether to stream the results or not.
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param temperature: float = 0.7
What sampling temperature to use.
param tiktoken_model_name: Optional[str] = None
The model name to pass to tiktoken when using this class. Tiktoken is used to count the number of tokens in documents to constrain them to be under a certain limit. By default, when set to None, this is the same as the model name. However, there are some cases where you may want to use this class with a model name not supported by tiktoken. This can include when using Azure, or when using one of the many model providers that expose an OpenAI-like API but with different models. In those cases, to avoid erroring when tiktoken is called, you can specify a model name to use here.
param verbose: bool [Optional]
Whether to print out response text.
The generation methods (__call__, agenerate, agenerate_prompt, apredict, apredict_messages, call_as_llm, dict, generate, generate_prompt, predict, predict_messages), the raise_deprecation validator, to_json, and to_json_not_implemented are inherited with signatures and documentation identical to those shown for AzureChatOpenAI above, as are the following:
validator build_extra » all fields
Build extra kwargs from additional params that were passed in.
completion_with_retry(**kwargs: Any) → Any
Use tenacity to retry the completion call.
get_num_tokens(text: str) → int
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
Calculate the number of tokens for gpt-3.5-turbo and gpt-4 with the tiktoken package.
Official documentation: https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb
get_token_ids(text: str) → List[int]
Get the tokens present in the text with the tiktoken package.
validator validate_environment » all fields
Validate that the api key and python package exist in the environment.
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
Configuration for this pydantic object.
allow_population_by_field_name = True
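A sketch of tagging requests and recovering the PromptLayer request ID, assuming OPENAI_API_KEY and PROMPTLAYER_API_KEY are set in the environment. The "pl_request_id" key inside generation_info is an assumption about the exact field name, so check it against your installed version:

from langchain.chat_models import PromptLayerChatOpenAI
from langchain.schema import HumanMessage

chat = PromptLayerChatOpenAI(
    model_name="gpt-3.5-turbo",
    pl_tags=["docs-example"],   # shows up on the PromptLayer dashboard
    return_pl_id=True,          # attach the request ID to each Generation
)

result = chat.generate([[HumanMessage(content="Ping?")]])
generation = result.generations[0][0]
print(generation.text)
# generation_info carries the PromptLayer request ID when return_pl_id=True;
# "pl_request_id" is the assumed key name.
print((generation.generation_info or {}).get("pl_request_id"))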
langchain.chat_models.base.BaseChatModel
class langchain.chat_models.base.BaseChatModel(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None)
Bases: BaseLanguageModel, ABC
Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param verbose: bool [Optional]
Whether to print out response text.
This abstract base class defines the shared chat-model interface: __call__, agenerate, agenerate_prompt, apredict, apredict_messages, call_as_llm, dict, generate, generate_prompt, predict, predict_messages, the raise_deprecation validator, to_json, and to_json_not_implemented are all defined here, with the signatures and documentation shown for AzureChatOpenAI above.
get_num_tokens(text: str) → int
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model's context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model's context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.base.BaseChatModel.html"} {"id": "1c0676f77f36-0", "text": "langchain.chat_models.base.SimpleChatModel\u00b6\nclass langchain.chat_models.base.SimpleChatModel(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: BaseChatModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None\u00b6\nparam callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(messages: List[BaseMessage], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nCall self as a function.\nasync agenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nTop Level call", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.base.SimpleChatModel.html"} {"id": "1c0676f77f36-1", "text": "Top Level call\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
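SimpleChatModel narrows the subclassing surface to a single string-returning hook. A minimal illustrative subclass is sketched below; the hook names (_call, _llm_type) reflect this generation of the library, and some versions also declare an async hook, so treat this as a starting point rather than a complete recipe:
from typing import Any, List, Optional
from langchain.chat_models.base import SimpleChatModel
from langchain.schema import BaseMessage, HumanMessage
class EchoChatModel(SimpleChatModel):
    # Toy model: echoes the last message back.
    @property
    def _llm_type(self) -> str:
        return "echo-chat"
    def _call(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[Any] = None,  # callback run manager, if any
        **kwargs: Any,
    ) -> str:
        # SimpleChatModel only asks subclasses for a string; the base class
        # wraps it into the chat result types (AIMessage, ChatGeneration).
        return messages[-1].content
    async def _agenerate(self, messages, stop=None, run_manager=None, **kwargs):
        # Some versions declare an abstract async hook as well; a sync-only
        # toy model can simply refuse async use.
        raise NotImplementedError("EchoChatModel is sync-only.")
print(EchoChatModel().predict_messages([HumanMessage(content="hi")]).content)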
These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the topcandidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.base.SimpleChatModel.html"} {"id": "1c0676f77f36-2", "text": "first occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the topcandidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ncall_as_llm(message: str, stop: Optional[List[str]] = None, **kwargs: Any) \u2192 str\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nTop Level call\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.base.SimpleChatModel.html"} {"id": "1c0676f77f36-3", "text": "API.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
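To make the predict/predict_messages split concrete, a short sketch (assumes ChatOpenAI and an OPENAI_API_KEY in the environment):
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage
chat = ChatOpenAI()
# predict: raw text in, top completion text out.
text = chat.predict("Translate 'good morning' to Spanish.")
# predict_messages: typed chat messages in, a message out.
reply = chat.predict_messages(
    [
        SystemMessage(content="Answer in one word."),
        HumanMessage(content="What is the capital of France?"),
    ]
)
print(text)
print(reply.content)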
These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in order they occurin the text.", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.base.SimpleChatModel.html"} {"id": "1c0676f77f36-4", "text": "predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specifictypes of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text,use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.base.SimpleChatModel.html"} {"id": "1c0676f77f36-5", "text": "eg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.base.SimpleChatModel.html"} {"id": "fe4ff81e4504-0", "text": "langchain.chat_models.google_palm.chat_with_retry\u00b6\nlangchain.chat_models.google_palm.chat_with_retry(llm: ChatGooglePalm, **kwargs: Any) \u2192 Any[source]\u00b6\nUse tenacity to retry the completion call.", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.google_palm.chat_with_retry.html"} {"id": "aafb215d3488-0", "text": "langchain.chat_models.google_palm.ChatGooglePalm\u00b6\nclass langchain.chat_models.google_palm.ChatGooglePalm(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, model_name: str = 'models/chat-bison-001', google_api_key: Optional[str] = None, temperature: Optional[float] = None, top_p: Optional[float] = None, top_k: Optional[int] = None, n: int = 1)[source]\u00b6\nBases: BaseChatModel, BaseModel\nWrapper around Google\u2019s PaLM Chat API.\nTo use, you must have the google.generativeai Python package installed and\neither:\nThe GOOGLE_API_KEY environment variable set with your API key, or\nPass your API key using the google_api_key kwarg to the ChatGooglePalm\nconstructor.\nExample\nfrom langchain.chat_models import ChatGooglePalm\nchat = ChatGooglePalm()\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam google_api_key: Optional[str] = None\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam model_name: str = 'models/chat-bison-001'\u00b6\nModel name to use.\nparam n: int = 1\u00b6\nNumber of chat completions to generate for each prompt. Note that the API may", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.google_palm.ChatGooglePalm.html"} {"id": "aafb215d3488-1", "text": "Number of chat completions to generate for each prompt. Note that the API may\nnot return the full n completions if duplicates are generated.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam temperature: Optional[float] = None\u00b6\nRun inference with this temperature. Must be in the closed\ninterval [0.0, 1.0].\nparam top_k: Optional[int] = None\u00b6\nDecode using top-k sampling: consider the set of top_k most probable tokens.\nMust be positive.\nparam top_p: Optional[float] = None\u00b6\nDecode using nucleus sampling: consider the smallest set of tokens whose\nprobability sum is at least top_p. 
Must be in the closed interval [0.0, 1.0].\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(messages: List[BaseMessage], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nCall self as a function.\nasync agenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nTop Level call\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.google_palm.ChatGooglePalm.html"} {"id": "aafb215d3488-2", "text": "This method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the topcandidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.google_palm.ChatGooglePalm.html"} {"id": "aafb215d3488-3", "text": "Asynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the topcandidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. 
Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ncall_as_llm(message: str, stop: Optional[List[str]] = None, **kwargs: Any) \u2192 str\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nTop Level call\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.google_palm.ChatGooglePalm.html"} {"id": "aafb215d3488-4", "text": "converted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in order they occurin the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. 
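Pulling the ChatGooglePalm sampling parameters above together, an illustrative construction (requires the google.generativeai package; the key literal is a placeholder, and setting GOOGLE_API_KEY in the environment works as well):
from langchain.chat_models import ChatGooglePalm
from langchain.schema import HumanMessage
chat = ChatGooglePalm(
    google_api_key="...",  # placeholder; prefer the GOOGLE_API_KEY env var
    temperature=0.2,       # closed interval [0.0, 1.0]
    top_k=40,              # must be positive
    top_p=0.95,            # closed interval [0.0, 1.0]
    n=1,
)
reply = chat([HumanMessage(content="What can the PaLM chat API do?")])
print(reply.content)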
If you want to pass in specific types of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.google_palm.ChatGooglePalm.html"} {"id": "aafb215d3488-5", "text": "Parameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text, use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate api key, python package exists, temperature, top_p, and top_k.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.google_palm.ChatGooglePalm.html"} {"id": "aafb215d3488-6", "text": "Return a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.google_palm.ChatGooglePalm.html"} {"id": "00bb6162120a-0", "text": "langchain.chat_models.jinachat.JinaChat\u00b6\nclass langchain.chat_models.jinachat.JinaChat(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, temperature: float = 0.7, model_kwargs: Dict[str, Any] = None, jinachat_api_key: Optional[str] = None, request_timeout: Optional[Union[float, Tuple[float, float]]] = None, max_retries: int = 6, streaming: bool = False, max_tokens: Optional[int] = None)[source]\u00b6\nBases: BaseChatModel\nJinaChat is a wrapper for Jina AI\u2019s LLM service, providing cost-effective\nimage chat capabilities in comparison to other LLM APIs.\nTo use, you should have the openai python package installed, and the\nenvironment variable JINACHAT_API_KEY set with your API key.\nAny parameters that are valid to be passed to the openai.create call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain.chat_models import JinaChat\nchat = JinaChat()\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam jinachat_api_key: Optional[str] = None\u00b6\nBase URL path for API requests,\nleave blank if not using a proxy or service emulator.", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.jinachat.JinaChat.html"} {"id": "00bb6162120a-1", "text": "Base URL path for API requests,\nleave blank if not using a proxy or service emulator.\nparam max_retries: int = 6\u00b6\nMaximum number of retries to make when generating.\nparam max_tokens: Optional[int] = None\u00b6\nMaximum number of tokens to generate.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam model_kwargs: Dict[str, Any] [Optional]\u00b6\nHolds any model parameters valid for create call not explicitly specified.\nparam request_timeout: Optional[Union[float, Tuple[float, float]]] = None\u00b6\nTimeout for requests to JinaChat completion API. 
Default is 600 seconds.\nparam streaming: bool = False\u00b6\nWhether to stream the results or not.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam temperature: float = 0.7\u00b6\nWhat sampling temperature to use.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(messages: List[BaseMessage], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nCall self as a function.\nasync agenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nTop Level call\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.jinachat.JinaChat.html"} {"id": "00bb6162120a-2", "text": "Asynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the topcandidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.jinachat.JinaChat.html"} {"id": "00bb6162120a-3", "text": "Asynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the topcandidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator build_extra\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nBuild extra kwargs from additional params that were passed in.\ncall_as_llm(message: str, stop: Optional[List[str]] = None, **kwargs: Any) \u2192 str\u00b6\ncompletion_with_retry(**kwargs: Any) \u2192 Any[source]\u00b6\nUse tenacity to retry the completion call.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nTop Level call\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.jinachat.JinaChat.html"} {"id": "00bb6162120a-4", "text": "need more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
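As an illustrative usage sketch for JinaChat (assumes the openai package is installed and JINACHAT_API_KEY is set in the environment, per the class description above):
from langchain.chat_models import JinaChat
from langchain.schema import HumanMessage, SystemMessage
chat = JinaChat(temperature=0.7, max_tokens=256)
reply = chat(
    [
        SystemMessage(content="You are a concise assistant."),
        HumanMessage(content="Describe JinaChat in one sentence."),
    ]
)
print(reply.content)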
These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in order they occurin the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.jinachat.JinaChat.html"} {"id": "00bb6162120a-5", "text": "Pass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specifictypes of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text,use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that api key and python package exists in environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.jinachat.JinaChat.html"} {"id": "00bb6162120a-6", "text": "eg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nallow_population_by_field_name = True\u00b6", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.jinachat.JinaChat.html"} {"id": "7be9b63198fc-0", "text": "langchain.chat_models.vertexai.ChatVertexAI\u00b6\nclass langchain.chat_models.vertexai.ChatVertexAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: '_LanguageModel' = None, model_name: str = 'chat-bison', temperature: float = 0.0, max_output_tokens: int = 128, top_p: float = 0.95, top_k: int = 40, stop: Optional[List[str]] = None, project: Optional[str] = None, location: str = 'us-central1', credentials: Any = None, request_parallelism: int = 5, max_retries: int = 6)[source]\u00b6\nBases: _VertexAICommon, BaseChatModel\nWrapper around Vertex AI large language models.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam credentials: Any = None\u00b6\nThe default custom credentials (google.auth.credentials.Credentials) to use\nparam location: str = 'us-central1'\u00b6\nThe default location to use when making API calls.\nparam max_output_tokens: int = 128\u00b6\nToken limit determines the maximum amount of text output from one prompt.\nparam max_retries: int = 6\u00b6\nThe maximum number of retries to make when generating.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.vertexai.ChatVertexAI.html"} {"id": "7be9b63198fc-1", "text": "Metadata to add to the run trace.\nparam model_name: str = 'chat-bison'\u00b6\nModel name to use.\nparam project: Optional[str] = None\u00b6\nThe default GCP project to use when making Vertex API calls.\nparam request_parallelism: int = 5\u00b6\nThe amount of parallelism allowed for requests issued to VertexAI models.\nparam stop: Optional[List[str]] = None\u00b6\nOptional list of stop words to use when generating.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam temperature: float = 0.0\u00b6\nSampling temperature; it controls the degree of randomness in token selection.\nparam top_k: int = 40\u00b6\nHow the model selects tokens for output: the next token is selected from\namong the top_k most probable tokens.\nparam top_p: float = 0.95\u00b6\nTokens are selected from most probable to least until the sum of their\nprobabilities equals the top_p value.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(messages: List[BaseMessage], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nCall self as a function.\nasync agenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: 
Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nTop Level call\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.vertexai.ChatVertexAI.html"} {"id": "7be9b63198fc-2", "text": "Asynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the topcandidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.vertexai.ChatVertexAI.html"} {"id": "7be9b63198fc-3", "text": "Asynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the topcandidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
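An illustrative construction for ChatVertexAI as documented above (assumes Google Cloud credentials are configured, e.g. via application-default login; the project id below is a placeholder):
from langchain.chat_models import ChatVertexAI
from langchain.schema import HumanMessage
chat = ChatVertexAI(
    project="my-gcp-project",  # placeholder GCP project id
    location="us-central1",
    temperature=0.0,
    max_output_tokens=128,
)
print(chat([HumanMessage(content="Hello, Vertex AI!")]).content)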
These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ncall_as_llm(message: str, stop: Optional[List[str]] = None, **kwargs: Any) \u2192 str\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nTop Level call\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.vertexai.ChatVertexAI.html"} {"id": "7be9b63198fc-4", "text": "converted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in order they occurin the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. 
If you want to pass in specific types of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.vertexai.ChatVertexAI.html"} {"id": "7be9b63198fc-5", "text": "Parameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text, use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the python package exists in environment.\nproperty is_codey_model: bool\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.vertexai.ChatVertexAI.html"} {"id": "7be9b63198fc-6", "text": "Return a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\ntask_executor: ClassVar[Optional[Executor]] = None\u00b6\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.vertexai.ChatVertexAI.html"} {"id": "bc5872b43293-0", "text": "langchain.chat_models.openai.ChatOpenAI\u00b6\nclass langchain.chat_models.openai.ChatOpenAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, model: str = 'gpt-3.5-turbo', temperature: float = 0.7, model_kwargs: Dict[str, Any] = None, openai_api_key: Optional[str] = None, openai_api_base: Optional[str] = None, openai_organization: Optional[str] = None, openai_proxy: Optional[str] = None, request_timeout: Optional[Union[float, Tuple[float, float]]] = None, max_retries: int = 6, streaming: bool = False, n: int = 1, max_tokens: Optional[int] = None, tiktoken_model_name: Optional[str] = None)[source]\u00b6\nBases: BaseChatModel\nWrapper around OpenAI Chat large language models.\nTo use, you should have the openai python package installed, and the\nenvironment variable OPENAI_API_KEY set with your API key.\nAny parameters that are valid to be passed to the openai.create call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain.chat_models import ChatOpenAI\nopenai = ChatOpenAI(model_name=\"gpt-3.5-turbo\")\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.openai.ChatOpenAI.html"} {"id": "bc5872b43293-1", "text": "param callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam max_retries: int = 6\u00b6\nMaximum number of retries to make when generating.\nparam max_tokens: Optional[int] = None\u00b6\nMaximum number of tokens to generate.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam model_kwargs: Dict[str, Any] [Optional]\u00b6\nHolds any model parameters valid for create call not explicitly specified.\nparam model_name: str = 'gpt-3.5-turbo' (alias 'model')\u00b6\nModel name to use.\nparam n: int = 1\u00b6\nNumber of chat completions to generate for each prompt.\nparam openai_api_base: Optional[str] = None\u00b6\nparam openai_api_key: Optional[str] = None\u00b6\nBase URL path for API requests,\nleave blank if not using a proxy or service emulator.\nparam openai_organization: Optional[str] = None\u00b6\nparam openai_proxy: Optional[str] = None\u00b6\nparam request_timeout: Optional[Union[float, Tuple[float, float]]] = None\u00b6\nTimeout for requests to OpenAI completion API. 
Default is 600 seconds.\nparam streaming: bool = False\u00b6\nWhether to stream the results or not.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam temperature: float = 0.7\u00b6\nWhat sampling temperature to use.\nparam tiktoken_model_name: Optional[str] = None\u00b6\nThe model name to pass to tiktoken when using this class.\nTiktoken is used to count the number of tokens in documents to constrain\nthem to be under a certain limit. By default, when set to None, this will", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.openai.ChatOpenAI.html"} {"id": "bc5872b43293-2", "text": "them to be under a certain limit. By default, when set to None, this will\nbe the same as the embedding model name. However, there are some cases\nwhere you may want to use this Embedding class with a model name not\nsupported by tiktoken. This can include when using Azure embeddings or\nwhen using one of the many model providers that expose an OpenAI-like\nAPI but with different models. In those cases, in order to avoid erroring\nwhen tiktoken is called, you can specify a model name to use here.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(messages: List[BaseMessage], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nCall self as a function.\nasync agenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nTop Level call\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.openai.ChatOpenAI.html"} {"id": "bc5872b43293-3", "text": "Parameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
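The streaming and callbacks parameters documented above combine as follows; a minimal sketch, assuming an OPENAI_API_KEY in the environment:
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage
# With streaming=True the handler receives tokens as they arrive instead of
# waiting for the full completion.
chat = ChatOpenAI(
    model="gpt-3.5-turbo",
    temperature=0.7,
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
)
chat([HumanMessage(content="Write one sentence about the sea.")])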
These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the top candidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the top candidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.openai.ChatOpenAI.html"} {"id": "bc5872b43293-4", "text": "to the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator build_extra\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nBuild extra kwargs from additional params that were passed in.\ncall_as_llm(message: str, stop: Optional[List[str]] = None, **kwargs: Any) \u2192 str\u00b6\ncompletion_with_retry(**kwargs: Any) \u2192 Any[source]\u00b6\nUse tenacity to retry the completion call.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nTop-level call: generate chat completions for each list of messages.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nget more output from the model than just the top generated value,\nbuild chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.openai.ChatOpenAI.html"} {"id": "bc5872b43293-5", "text": "stop \u2013 Stop words to use when generating. 
Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int[source]\u00b6\nCalculate num tokens for gpt-3.5-turbo and gpt-4 with tiktoken package.\nOfficial documentation: https://github.com/openai/openai-cookbook/blob/\nmain/examples/How_to_format_inputs_to_ChatGPT_models.ipynb\nget_token_ids(text: str) \u2192 List[int][source]\u00b6\nGet the tokens present in the text with tiktoken package.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.openai.ChatOpenAI.html"} {"id": "bc5872b43293-6", "text": "to the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text, use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the api key and python package exist in the environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\ne.g. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\ne.g. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nallow_population_by_field_name = True\u00b6", "source": "https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.openai.ChatOpenAI.html"}
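To make the methods above concrete, here is a minimal usage sketch (assuming the openai package is installed and OPENAI_API_KEY is set; the prompts are illustrative):
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage
chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.7)
# __call__ takes a list of messages and returns a single BaseMessage
reply = chat([SystemMessage(content="You are a helpful assistant."),
              HumanMessage(content="Say hello.")])
print(reply.content)
# generate takes a batch: one list of messages per input, and returns an LLMResult
result = chat.generate([[HumanMessage(content="What is 1 + 1?")]])
print(result.generations[0][0].text)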
{"id": "ea5443a0a9e1-0", "text": "langchain.vectorstores.pgembedding.PGEmbedding\u00b6\nclass langchain.vectorstores.pgembedding.PGEmbedding(connection_string: str, embedding_function: Embeddings, collection_name: str = 'langchain', collection_metadata: Optional[dict] = None, pre_delete_collection: bool = False, logger: Optional[Logger] = None)[source]\u00b6\nBases: VectorStore\nVectorStore implementation using Postgres and the pg_embedding extension.\npg_embedding uses a sequential scan by default, but you can create an HNSW index\nusing the create_hnsw_index method.\n- connection_string is a postgres connection string.\n- embedding_function is any embedding function implementing the\nlangchain.embeddings.base.Embeddings interface.\ncollection_name is the name of the collection to use. (default: langchain)\nNOTE: This is not the name of the table, but the name of the collection. The tables will be created when initializing the store (if they do not exist),\nso make sure the user has the right permissions to create tables.\ndistance_strategy is the distance strategy to use. (default: EUCLIDEAN)\nEUCLIDEAN is the euclidean distance.\npre_delete_collection: if True, will delete the collection if it exists. (default: False)\n- Useful for testing.\nMethods\n__init__(connection_string,\u00a0embedding_function)\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_embeddings(texts,\u00a0embeddings,\u00a0metadatas,\u00a0...)\nadd_texts(texts[,\u00a0metadatas,\u00a0ids])\nRun more texts through the embeddings and add to the vectorstore.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgembedding.PGEmbedding.html"} {"id": "ea5443a0a9e1-1", "text": "afrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\nconnect()\ncreate_collection()\ncreate_hnsw_extension()\ncreate_hnsw_index([max_elements,\u00a0dims,\u00a0m,\u00a0...])\ncreate_tables_if_not_exists()\ndelete([ids])\nDelete by vector ID or other criteria.\ndelete_collection()\ndrop_tables()\nfrom_documents(documents,\u00a0embedding[,\u00a0...])\nReturn VectorStore initialized from documents and embeddings.\nfrom_embeddings(text_embeddings,\u00a0embedding)\nfrom_existing_index(embedding[,\u00a0...])\nfrom_texts(texts,\u00a0embedding[,\u00a0metadatas,\u00a0...])\nReturn VectorStore initialized from texts and embeddings.\nget_collection(session)\nget_connection_string(kwargs)\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgembedding.PGEmbedding.html"} {"id": "ea5443a0a9e1-2", "text": "search(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k,\u00a0filter])\nReturn docs most similar to query.\nsimilarity_search_by_vector(embedding[,\u00a0k,\u00a0...])\nReturn docs most similar to embedding vector.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores in the range [0, 1].\nsimilarity_search_with_score(query[,\u00a0k,\u00a0filter])\nsimilarity_search_with_score_by_vector(embedding)\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_embeddings(texts: List[str], embeddings: List[List[float]], metadatas: List[dict], ids: List[str], **kwargs: Any) \u2192 None[source]\u00b6\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) \u2192 List[str][source]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgembedding.PGEmbedding.html"} {"id": "ea5443a0a9e1-3", "text": "Run more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nkwargs \u2013 vectorstore specific parameters\nReturns\nList of ids from adding the texts into the vectorstore.\nasync classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal 
relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6\nasync asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgembedding.PGEmbedding.html"} {"id": "ea5443a0a9e1-4", "text": "Return docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.\nconnect() \u2192 Connection[source]\u00b6\ncreate_collection() \u2192 None[source]\u00b6\ncreate_hnsw_extension() \u2192 None[source]\u00b6\ncreate_hnsw_index(max_elements: int = 10000, dims: int = 1536, m: int = 8, ef_construction: int = 16, ef_search: int = 16) \u2192 None[source]\u00b6\ncreate_tables_if_not_exists() \u2192 None[source]\u00b6\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 Optional[bool]\u00b6\nDelete by vector ID or other criteria.\nParameters\nids \u2013 List of ids to delete.\n**kwargs \u2013 Other keyword arguments that subclasses might use.\nReturns\nTrue if deletion is successful,\nFalse otherwise, None if not implemented.\nReturn type\nOptional[bool]\ndelete_collection() \u2192 None[source]\u00b6\ndrop_tables() \u2192 None[source]\u00b6\nclassmethod from_documents(documents: List[Document], embedding: Embeddings, collection_name: str = 'langchain', ids: Optional[List[str]] = None, pre_delete_collection: bool = False, **kwargs: Any) \u2192 PGEmbedding[source]\u00b6\nReturn VectorStore initialized from documents and embeddings.\nclassmethod from_embeddings(text_embeddings: List[Tuple[str, List[float]]], embedding: Embeddings, metadatas: Optional[List[dict]] = None, collection_name: str = 'langchain', ids: Optional[List[str]] = None, pre_delete_collection: bool = False, **kwargs: Any) \u2192 PGEmbedding[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgembedding.PGEmbedding.html"} {"id": "ea5443a0a9e1-5", "text": "classmethod from_existing_index(embedding: Embeddings, collection_name: str = 'langchain', pre_delete_collection: bool = False, **kwargs: Any) \u2192 PGEmbedding[source]\u00b6\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, collection_name: str = 'langchain', ids: Optional[List[str]] = None, pre_delete_collection: bool = False, **kwargs: Any) \u2192 PGEmbedding[source]\u00b6\nReturn VectorStore initialized from texts and embeddings.\nget_collection(session: Session) \u2192 Optional[CollectionStore][source]\u00b6\nclassmethod get_connection_string(kwargs: Dict[str, Any]) \u2192 str[source]\u00b6\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong 
selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgembedding.PGEmbedding.html"} {"id": "ea5443a0a9e1-6", "text": "Maximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.
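As a hedged illustration of the trade-off that lambda_mult controls (store is assumed to be an already-initialized PGEmbedding instance; the query is illustrative):
# lambda_mult near 1 favors similarity to the query; near 0 favors diversity
diverse = store.max_marginal_relevance_search("analytics databases", k=4, fetch_k=20, lambda_mult=0.3)
focused = store.max_marginal_relevance_search("analytics databases", k=4, fetch_k=20, lambda_mult=0.9)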
search(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to query.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, filter: Optional[dict] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query vector.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgembedding.PGEmbedding.html"} {"id": "ea5443a0a9e1-7", "text": "**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 and 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)\nsimilarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None) \u2192 List[Tuple[Document, float]][source]\u00b6\nsimilarity_search_with_score_by_vector(embedding: List[float], k: int = 4, filter: Optional[dict] = None) \u2192 List[Tuple[Document, float]][source]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgembedding.PGEmbedding.html"}
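A minimal end-to-end sketch for this vector store (the connection string, texts, and collection name are illustrative; an embeddings implementation such as OpenAIEmbeddings is assumed, with connection_string forwarded through **kwargs):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.pgembedding import PGEmbedding
store = PGEmbedding.from_texts(
    texts=["hello world", "goodbye world"],
    embedding=OpenAIEmbeddings(),
    collection_name="langchain",
    connection_string="postgresql+psycopg2://user:pass@localhost:5432/db",
)
# optional: replace the default sequential scan with an HNSW index
store.create_hnsw_index(max_elements=10000, dims=1536, m=8, ef_construction=16, ef_search=16)
docs = store.similarity_search("hello", k=2)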
{"id": "d03572331d02-0", "text": "langchain.vectorstores.starrocks.StarRocksSettings\u00b6\nclass langchain.vectorstores.starrocks.StarRocksSettings(_env_file: Optional[Union[str, PathLike, List[Union[str, PathLike]], Tuple[Union[str, PathLike], ...]]] = '', _env_file_encoding: Optional[str] = None, _env_nested_delimiter: Optional[str] = None, _secrets_dir: Optional[Union[str, PathLike]] = None, *, host: str = 'localhost', port: int = 9030, username: str = 'root', password: str = '', column_map: Dict[str, str] = {'document': 'document', 'embedding': 'embedding', 'id': 'id', 'metadata': 'metadata'}, database: str = 'default', table: str = 'langchain')[source]\u00b6\nBases: BaseSettings\nStarRocks Client Configuration\nAttributes:\nhost (str) : A URL to connect to the StarRocks backend. Defaults to \u2018localhost\u2019.\nport (int) : URL port to connect with HTTP. Defaults to 9030.\nusername (str) : Username to login. Defaults to \u2018root\u2019.\npassword (str) : Password to login. Defaults to \u2018\u2019.\ndatabase (str) : Database name to find the table. Defaults to \u2018default\u2019.\ntable (str) : Table name to operate on.\nDefaults to \u2018langchain\u2019.\ncolumn_map (Dict) : Column map to project column names onto langchain semantics. Must have the keys id, document, embedding and metadata,\nand must be the same size as the number of columns. For example:\n{\u2018id\u2019: \u2018text_id\u2019,\n\u2018embedding\u2019: \u2018text_embedding\u2019,\n\u2018document\u2019: \u2018text_plain\u2019,\n\u2018metadata\u2019: \u2018metadata_dictionary_in_json\u2019,\n}\nDefaults to the identity map.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.starrocks.StarRocksSettings.html"} {"id": "d03572331d02-1", "text": "Create a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam column_map: Dict[str, str] = {'document': 'document', 'embedding': 'embedding', 'id': 'id', 'metadata': 'metadata'}\u00b6\nparam database: str = 'default'\u00b6\nparam host: str = 'localhost'\u00b6\nparam password: str = ''\u00b6\nparam port: int = 9030\u00b6\nparam table: str = 'langchain'\u00b6\nparam username: str = 'root'\u00b6\nmodel Config[source]\u00b6\nBases: object\nenv_file = '.env'\u00b6\nenv_file_encoding = 'utf-8'\u00b6\nenv_prefix = 'starrocks_'\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.starrocks.StarRocksSettings.html"}
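A hedged configuration sketch (the host and table values are illustrative): because env_prefix is 'starrocks_', each field can also be supplied through an environment variable instead of a constructor keyword:
import os
from langchain.vectorstores.starrocks import StarRocksSettings
os.environ["STARROCKS_HOST"] = "127.0.0.1"  # picked up via the 'starrocks_' env prefix
settings = StarRocksSettings(port=9030, database="default", table="langchain")
print(settings.host, settings.port)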
{"id": "449697a24a30-0", "text": "langchain.vectorstores.awadb.AwaDB\u00b6\nclass langchain.vectorstores.awadb.AwaDB(table_name: str = 'langchain_awadb', embedding: Optional[Embeddings] = None, log_and_data_dir: Optional[str] = None, client: Optional[awadb.Client] = None, **kwargs: Any)[source]\u00b6\nBases: VectorStore\nInterface implemented by AwaDB vector stores.\nInitialize with AwaDB client.\n:param table_name: Name of the table to use.\n:param embedding: Optional embedding function.\n:param log_and_data_dir: Optional directory for logging and data persistence.\n:param client: Optional AwaDB client.\n:param kwargs: any possible extra parameters in the future.\nReturns\nNone.\nMethods\n__init__([table_name,\u00a0embedding,\u00a0...])\nInitialize with AwaDB client.\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas,\u00a0is_duplicate_texts])\nRun more texts through the embeddings and add to the vectorstore.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.awadb.AwaDB.html"} {"id": "449697a24a30-1", "text": "asearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\ncreate_table(table_name,\u00a0**kwargs)\nCreate a new table.\ndelete([ids])\nDelete the documents which have the specified ids.\nfrom_documents(documents[,\u00a0embedding,\u00a0...])\nCreate an AwaDB vectorstore from a list of documents.\nfrom_texts(texts[,\u00a0embedding,\u00a0metadatas,\u00a0...])\nCreate an AwaDB vectorstore from raw texts.\nget(ids[,\u00a0not_include_fields])\nReturn docs according to ids.\nget_current_table(**kwargs)\nGet the current table.\nlist_tables(**kwargs)\nList all the tables created by the client.\nload_local(table_name,\u00a0**kwargs)\nLoad the specified local table.\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nsimilarity_search_by_vector([embedding,\u00a0k,\u00a0...])\nReturn docs most similar to embedding vector.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores\nsimilarity_search_with_score(query[,\u00a0k])", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.awadb.AwaDB.html"} {"id": "449697a24a30-2", "text": "The k most similar documents and scores for the specified query.\nupdate(ids,\u00a0texts[,\u00a0metadatas])\nUpdate the documents which have the specified ids.\nuse(table_name,\u00a0**kwargs)\nUse the specified table.\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, is_duplicate_texts: Optional[bool] = None, **kwargs: Any) \u2192 List[str][source]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\n:param texts: Iterable of strings to add to the vectorstore.\n:param metadatas: Optional list of metadatas associated with the texts.\n:param is_duplicate_texts: Optional, whether to allow duplicate texts.\n:param kwargs: any possible extra parameters in the future.\nReturns\nList of ids from adding the texts into the vectorstore.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.awadb.AwaDB.html"} {"id": "449697a24a30-3", "text": "async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nasync classmethod 
afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6\nasync asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.awadb.AwaDB.html"} {"id": "449697a24a30-4", "text": "create_table(table_name: str, **kwargs: Any) \u2192 bool[source]\u00b6\nCreate a new table.\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 Optional[bool][source]\u00b6\nDelete the documents which have the specified ids.\nParameters\nids \u2013 The ids of the embedding vectors.\n**kwargs \u2013 Other keyword arguments that subclasses might use.\nReturns\nTrue if deletion is successful.\nFalse otherwise, None if not implemented.\nReturn type\nOptional[bool]\nclassmethod from_documents(documents: List[Document], embedding: Optional[Embeddings] = None, table_name: str = 'langchain_awadb', log_and_data_dir: Optional[str] = None, client: Optional[awadb.Client] = None, **kwargs: Any) \u2192 AwaDB[source]\u00b6\nCreate an AwaDB vectorstore from a list of documents.\nIf a log_and_data_dir is specified, the table will be persisted there.\nParameters\ndocuments (List[Document]) \u2013 List of documents to add to the vectorstore.\nembedding (Optional[Embeddings]) \u2013 Embedding function. Defaults to None.\ntable_name (str) \u2013 Name of the table to create.\nlog_and_data_dir (Optional[str]) \u2013 Directory to persist the table.\nclient (Optional[awadb.Client]) \u2013 AwaDB client.\nkwargs \u2013 Any possible parameters in the future.\nReturns\nAwaDB vectorstore.\nReturn type\nAwaDB\nclassmethod from_texts(texts: List[str], embedding: Optional[Embeddings] = None, metadatas: Optional[List[dict]] = None, table_name: str = 'langchain_awadb', log_and_data_dir: Optional[str] = None, client: Optional[awadb.Client] = None, **kwargs: Any) \u2192 AwaDB[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.awadb.AwaDB.html"} {"id": "449697a24a30-5", "text": "Create an AwaDB vectorstore from raw texts.\nParameters\ntexts (List[str]) \u2013 List of texts to add to the table.\nembedding (Optional[Embeddings]) \u2013 Embedding function. Defaults to None.\nmetadatas (Optional[List[dict]]) \u2013 List of metadatas. 
Defaults to None.\ntable_name (str) \u2013 Name of the table to create.\nlog_and_data_dir (Optional[str]) \u2013 Directory of logging and persistence.\nclient (Optional[awadb.Client]) \u2013 AwaDB client\nReturns\nAwaDB vectorstore.\nReturn type\nAwaDB\nget(ids: List[str], not_include_fields: Optional[Set[str]] = None, **kwargs: Any) \u2192 Dict[str, Document][source]\u00b6\nReturn docs according to ids.\nParameters\nids \u2013 The ids of the embedding vectors.\nReturns\nDocuments which have the ids.\nget_current_table(**kwargs: Any) \u2192 str[source]\u00b6\nGet the current table.\nlist_tables(**kwargs: Any) \u2192 List[str][source]\u00b6\nList all the tables created by the client.\nload_local(table_name: str, **kwargs: Any) \u2192 bool[source]\u00b6\nLoad the specified local table.\nParameters\ntable_name \u2013 Table name\nkwargs \u2013 Any possible extra parameters in the future.\nReturns\nSuccess or failure of loading the specified local table\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.awadb.AwaDB.html"} {"id": "449697a24a30-6", "text": "fetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. 
Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to query.\nParameters\nquery \u2013 Text query.\nk \u2013 The maximum number of documents to return.\nkwargs \u2013 Any possible extra parameters in the future.\nReturns\nReturns the k most similar documents to the specified text query.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.awadb.AwaDB.html"} {"id": "449697a24a30-7", "text": "similarity_search_by_vector(embedding: Optional[List[float]] = None, k: int = 4, scores: Optional[list] = None, not_include_fields_in_metadata: Optional[Set[str]] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nscores \u2013 Scores for retrieved docs.\nnot_include_fields_in_metadata \u2013 Meta fields of each document to exclude from the results.\nReturns\nList of Documents which are the most similar to the query vector.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]][source]\u00b6\nReturn docs and relevance scores, which denote the InnerProduct distance and range from 0 to 1.\nParameters\nquery \u2013 Text query.\nk \u2013 Number of the most similar documents to return. Defaults to 4.\nReturns\nList of (Document, relevance_score) tuples similar to the text query.\nNote that relevance_score ranges from 0 to 1.\n0 is dissimilar, 1 is the most similar.\nsimilarity_search_with_score(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]][source]\u00b6\nThe k most similar documents and scores for the specified query.\nParameters\nquery \u2013 Text query.\nk \u2013 The k most similar documents to the text query.\nkwargs \u2013 Any possible extra parameters in the future.\nReturns\nThe k most similar documents to the specified text query.\n0 is dissimilar, 1 is the most similar.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.awadb.AwaDB.html"} {"id": "449697a24a30-8", "text": "update(ids: List[str], texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str][source]\u00b6\nUpdate the documents which have the specified ids.\nParameters\nids \u2013 The id list of the embedding vectors to update.\ntexts \u2013 The texts of the updating documents.\nmetadatas \u2013 The metadatas of the updating documents.\nReturns\nThe ids of the updated documents.\nuse(table_name: str, **kwargs: Any) \u2192 bool[source]\u00b6\nUse the specified table. If you don\u2019t know the table names, invoke list_tables first.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.awadb.AwaDB.html"}
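A short usage sketch (assumes the awadb package is installed; the texts and table name are illustrative):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.awadb import AwaDB
store = AwaDB.from_texts(
    texts=["hello world", "goodbye world"],
    embedding=OpenAIEmbeddings(),
    table_name="langchain_awadb",
)
docs = store.similarity_search("hello", k=1)
print(store.list_tables())  # inspect the tables the client has created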
{"id": "08a0264ec714-0", "text": "langchain.vectorstores.singlestoredb.DistanceStrategy\u00b6\nclass langchain.vectorstores.singlestoredb.DistanceStrategy(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]\u00b6\nBases: str, Enum\nEnumerator of the Distance strategies for SingleStoreDB.\nMethods\n__init__(*args,\u00a0**kwds)\ncapitalize()\nReturn a capitalized version of the string.\ncasefold()\nReturn a version of the string suitable for caseless comparisons.\ncenter(width[,\u00a0fillchar])\nReturn a centered string of length width.\ncount(sub[,\u00a0start[,\u00a0end]])\nReturn the number of non-overlapping occurrences of substring sub in string S[start:end].\nencode([encoding,\u00a0errors])\nEncode the string using the codec registered for encoding.\nendswith(suffix[,\u00a0start[,\u00a0end]])\nReturn True if S ends with the specified suffix, False otherwise.\nexpandtabs([tabsize])\nReturn a copy where all tab characters are expanded using spaces.\nfind(sub[,\u00a0start[,\u00a0end]])\nReturn the lowest index in S where substring sub is found, such that sub is contained within S[start:end].\nformat(*args,\u00a0**kwargs)\nReturn a formatted version of S, using substitutions from args and kwargs.\nformat_map(mapping)\nReturn a formatted version of S, using substitutions from mapping.\nindex(sub[,\u00a0start[,\u00a0end]])\nReturn the lowest index in S where substring sub is found, such that sub is contained within S[start:end].\nisalnum()\nReturn True if the string is an alpha-numeric string, False otherwise.\nisalpha()\nReturn True if the string is an alphabetic string, False otherwise.\nisascii()\nReturn True if all characters in the string are ASCII, False otherwise.\nisdecimal()\nReturn True if the string is a decimal string, False otherwise.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.DistanceStrategy.html"} {"id": "08a0264ec714-1", "text": "isdigit()\nReturn True if the string is a digit string, False otherwise.\nisidentifier()\nReturn True if the string is a valid Python identifier, False otherwise.\nislower()\nReturn True if the string is a lowercase string, False otherwise.\nisnumeric()\nReturn True if the string is a numeric string, False otherwise.\nisprintable()\nReturn True if the string is printable, False otherwise.\nisspace()\nReturn True if the string is a whitespace string, False otherwise.\nistitle()\nReturn True if the string is a title-cased string, False otherwise.\nisupper()\nReturn True if the string is an uppercase string, False otherwise.\njoin(iterable,\u00a0/)\nConcatenate any number of strings.\nljust(width[,\u00a0fillchar])\nReturn a left-justified string of length width.\nlower()\nReturn a copy of the string converted to lowercase.\nlstrip([chars])\nReturn a copy of the string with leading whitespace removed.\nmaketrans\nReturn a translation table usable for str.translate().\npartition(sep,\u00a0/)\nPartition the string into three parts using the given separator.\nremoveprefix(prefix,\u00a0/)\nReturn a str with the given prefix string removed if present.\nremovesuffix(suffix,\u00a0/)\nReturn a str with the given suffix string removed if present.\nreplace(old,\u00a0new[,\u00a0count])\nReturn a copy with all occurrences of 
substring old replaced by new.\nrfind(sub[,\u00a0start[,\u00a0end]])\nReturn the highest index in S where substring sub is found, such that sub is contained within S[start:end].\nrindex(sub[,\u00a0start[,\u00a0end]])\nReturn the highest index in S where substring sub is found, such that sub is contained within S[start:end].", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.DistanceStrategy.html"} {"id": "08a0264ec714-2", "text": "rjust(width[,\u00a0fillchar])\nReturn a right-justified string of length width.\nrpartition(sep,\u00a0/)\nPartition the string into three parts using the given separator.\nrsplit([sep,\u00a0maxsplit])\nReturn a list of the substrings in the string, using sep as the separator string.\nrstrip([chars])\nReturn a copy of the string with trailing whitespace removed.\nsplit([sep,\u00a0maxsplit])\nReturn a list of the substrings in the string, using sep as the separator string.\nsplitlines([keepends])\nReturn a list of the lines in the string, breaking at line boundaries.\nstartswith(prefix[,\u00a0start[,\u00a0end]])\nReturn True if S starts with the specified prefix, False otherwise.\nstrip([chars])\nReturn a copy of the string with leading and trailing whitespace removed.\nswapcase()\nConvert uppercase characters to lowercase and lowercase characters to uppercase.\ntitle()\nReturn a version of the string where each word is titlecased.\ntranslate(table,\u00a0/)\nReplace each character in the string using the given translation table.\nupper()\nReturn a copy of the string converted to uppercase.\nzfill(width,\u00a0/)\nPad a numeric string with zeros on the left, to fill a field of the given width.\nAttributes\nEUCLIDEAN_DISTANCE\nDOT_PRODUCT\ncapitalize()\u00b6\nReturn a capitalized version of the string.\nMore specifically, make the first character have upper case and the rest lower\ncase.\ncasefold()\u00b6\nReturn a version of the string suitable for caseless comparisons.\ncenter(width, fillchar=' ', /)\u00b6\nReturn a centered string of length width.\nPadding is done using the specified fill character (default is a space).\ncount(sub[, start[, end]]) \u2192 int\u00b6\nReturn the number of non-overlapping occurrences of substring sub in", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.DistanceStrategy.html"} {"id": "08a0264ec714-3", "text": "string S[start:end]. Optional arguments start and end are\ninterpreted as in slice notation.\nencode(encoding='utf-8', errors='strict')\u00b6\nEncode the string using the codec registered for encoding.\nencoding \u2013 The encoding in which to encode the string.\nerrors \u2013 The error handling scheme to use for encoding errors.\nThe default is \u2018strict\u2019 meaning that encoding errors raise a\nUnicodeEncodeError. 
Other possible values are \u2018ignore\u2019, \u2018replace\u2019 and\n\u2018xmlcharrefreplace\u2019 as well as any other name registered with\ncodecs.register_error that can handle UnicodeEncodeErrors.\nendswith(suffix[, start[, end]]) \u2192 bool\u00b6\nReturn True if S ends with the specified suffix, False otherwise.\nWith optional start, test S beginning at that position.\nWith optional end, stop comparing S at that position.\nsuffix can also be a tuple of strings to try.\nexpandtabs(tabsize=8)\u00b6\nReturn a copy where all tab characters are expanded using spaces.\nIf tabsize is not given, a tab size of 8 characters is assumed.\nfind(sub[, start[, end]]) \u2192 int\u00b6\nReturn the lowest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nReturn -1 on failure.\nformat(*args, **kwargs) \u2192 str\u00b6\nReturn a formatted version of S, using substitutions from args and kwargs.\nThe substitutions are identified by braces (\u2018{\u2019 and \u2018}\u2019).\nformat_map(mapping) \u2192 str\u00b6\nReturn a formatted version of S, using substitutions from mapping.\nThe substitutions are identified by braces (\u2018{\u2019 and \u2018}\u2019).\nindex(sub[, start[, end]]) \u2192 int\u00b6\nReturn the lowest index in S where substring sub is found,", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.DistanceStrategy.html"} {"id": "08a0264ec714-4", "text": "such that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nRaises ValueError when the substring is not found.\nisalnum()\u00b6\nReturn True if the string is an alpha-numeric string, False otherwise.\nA string is alpha-numeric if all characters in the string are alpha-numeric and\nthere is at least one character in the string.\nisalpha()\u00b6\nReturn True if the string is an alphabetic string, False otherwise.\nA string is alphabetic if all characters in the string are alphabetic and there\nis at least one character in the string.\nisascii()\u00b6\nReturn True if all characters in the string are ASCII, False otherwise.\nASCII characters have code points in the range U+0000-U+007F.\nEmpty string is ASCII too.\nisdecimal()\u00b6\nReturn True if the string is a decimal string, False otherwise.\nA string is a decimal string if all characters in the string are decimal and\nthere is at least one character in the string.\nisdigit()\u00b6\nReturn True if the string is a digit string, False otherwise.\nA string is a digit string if all characters in the string are digits and there\nis at least one character in the string.\nisidentifier()\u00b6\nReturn True if the string is a valid Python identifier, False otherwise.\nCall keyword.iskeyword(s) to test whether string s is a reserved identifier,\nsuch as \u201cdef\u201d or \u201cclass\u201d.\nislower()\u00b6\nReturn True if the string is a lowercase string, False otherwise.\nA string is lowercase if all cased characters in the string are lowercase and\nthere is at least one cased character in the string.\nisnumeric()\u00b6\nReturn True if the string is a numeric string, False 
otherwise.\nA string is numeric if all characters in the string are numeric and there is at\nleast one character in the string.\nisprintable()\u00b6\nReturn True if the string is printable, False otherwise.\nA string is printable if all of its characters are considered printable in\nrepr() or if it is empty.\nisspace()\u00b6\nReturn True if the string is a whitespace string, False otherwise.\nA string is whitespace if all characters in the string are whitespace and there\nis at least one character in the string.\nistitle()\u00b6\nReturn True if the string is a title-cased string, False otherwise.\nIn a title-cased string, upper- and title-case characters may only\nfollow uncased characters and lowercase characters only cased ones.\nisupper()\u00b6\nReturn True if the string is an uppercase string, False otherwise.\nA string is uppercase if all cased characters in the string are uppercase and\nthere is at least one cased character in the string.\njoin(iterable, /)\u00b6\nConcatenate any number of strings.\nThe string whose method is called is inserted in between each given string.\nThe result is returned as a new string.\nExample: \u2018.\u2019.join([\u2018ab\u2019, \u2018pq\u2019, \u2018rs\u2019]) -> \u2018ab.pq.rs\u2019\nljust(width, fillchar=' ', /)\u00b6\nReturn a left-justified string of length width.\nPadding is done using the specified fill character (default is a space).\nlower()\u00b6\nReturn a copy of the string converted to lowercase.\nlstrip(chars=None, /)\u00b6\nReturn a copy of the string with leading whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nstatic maketrans()\u00b6\nReturn a translation table usable for str.translate().", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.DistanceStrategy.html"} {"id": "08a0264ec714-6", "text": "If there is only one argument, it must be a dictionary mapping Unicode\nordinals (integers) or characters to Unicode ordinals, strings or None.\nCharacter keys will then be converted to ordinals.\nIf there are two arguments, they must be strings of equal length, and\nin the resulting dictionary, each character in x will be mapped to the\ncharacter at the same position in y. If there is a third argument, it\nmust be a string, whose characters will be mapped to None in the result.\npartition(sep, /)\u00b6\nPartition the string into three parts using the given separator.\nThis will search for the separator in the string. If the separator is found,\nreturns a 3-tuple containing the part before the separator, the separator\nitself, and the part after it.\nIf the separator is not found, returns a 3-tuple containing the original string\nand two empty strings.\nremoveprefix(prefix, /)\u00b6\nReturn a str with the given prefix string removed if present.\nIf the string starts with the prefix string, return string[len(prefix):].\nOtherwise, return a copy of the original string.\nremovesuffix(suffix, /)\u00b6\nReturn a str with the given suffix string removed if present.\nIf the string ends with the suffix string and that suffix is not empty,\nreturn string[:-len(suffix)]. 
Otherwise, return a copy of the original\nstring.\nreplace(old, new, count=-1, /)\u00b6\nReturn a copy with all occurrences of substring old replaced by new.\ncount \u2013 Maximum number of occurrences to replace.\n-1 (the default value) means replace all occurrences.\nIf the optional argument count is given, only the first count occurrences are\nreplaced.\nrfind(sub[, start[, end]]) \u2192 int\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.DistanceStrategy.html"} {"id": "08a0264ec714-7", "text": "Return the highest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nReturn -1 on failure.\nrindex(sub[, start[, end]]) \u2192 int\u00b6\nReturn the highest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nRaises ValueError when the substring is not found.\nrjust(width, fillchar=' ', /)\u00b6\nReturn a right-justified string of length width.\nPadding is done using the specified fill character (default is a space).\nrpartition(sep, /)\u00b6\nPartition the string into three parts using the given separator.\nThis will search for the separator in the string, starting at the end. If\nthe separator is found, returns a 3-tuple containing the part before the\nseparator, the separator itself, and the part after it.\nIf the separator is not found, returns a 3-tuple containing two empty strings\nand the original string.\nrsplit(sep=None, maxsplit=-1)\u00b6\nReturn a list of the substrings in the string, using sep as the separator string.\nsep \u2013 The separator used to split the string.\nWhen set to None (the default value), will split on any whitespace\ncharacter (including \\n \\r \\t \\f and spaces) and will discard\nempty strings from the result.\nmaxsplit \u2013 Maximum number of splits (starting from the left).\n-1 (the default value) means no limit.\nSplitting starts at the end of the string and works to the front.\nrstrip(chars=None, /)\u00b6\nReturn a copy of the string with trailing whitespace removed.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.DistanceStrategy.html"} {"id": "08a0264ec714-8", "text": "If chars is given and not None, remove characters in chars instead.\nsplit(sep=None, maxsplit=-1)\u00b6\nReturn a list of the substrings in the string, using sep as the separator string.\nsep \u2013 The separator used to split the string.\nWhen set to None (the default value), will split on any whitespace\ncharacter (including \\n \\r \\t \\f and spaces) and will discard\nempty strings from the result.\nmaxsplit \u2013 Maximum number of splits (starting from the left).\n-1 (the default value) means no limit.\nNote, str.split() is mainly useful for data that has been intentionally\ndelimited. 
With natural text that includes punctuation, consider using\nthe regular expression module.\nsplitlines(keepends=False)\u00b6\nReturn a list of the lines in the string, breaking at line boundaries.\nLine breaks are not included in the resulting list unless keepends is given and\ntrue.\nstartswith(prefix[, start[, end]]) \u2192 bool\u00b6\nReturn True if S starts with the specified prefix, False otherwise.\nWith optional start, test S beginning at that position.\nWith optional end, stop comparing S at that position.\nprefix can also be a tuple of strings to try.\nstrip(chars=None, /)\u00b6\nReturn a copy of the string with leading and trailing whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nswapcase()\u00b6\nConvert uppercase characters to lowercase and lowercase characters to uppercase.\ntitle()\u00b6\nReturn a version of the string where each word is titlecased.\nMore specifically, words start with uppercased characters and all remaining\ncased characters have lower case.\ntranslate(table, /)\u00b6\nReplace each character in the string using the given translation table.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.DistanceStrategy.html"} {"id": "08a0264ec714-9", "text": "table \u2013 Translation table, which must be a mapping of Unicode ordinals to\nUnicode ordinals, strings, or None.\nThe table must implement lookup/indexing via __getitem__, for instance a\ndictionary or list. If this operation raises LookupError, the character is\nleft untouched. Characters mapped to None are deleted.\nupper()\u00b6\nReturn a copy of the string converted to uppercase.\nzfill(width, /)\u00b6\nPad a numeric string with zeros on the left, to fill a field of the given width.\nThe string is never truncated.\nDOT_PRODUCT = 'DOT_PRODUCT'\u00b6\nEUCLIDEAN_DISTANCE = 'EUCLIDEAN_DISTANCE'\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.DistanceStrategy.html"}
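Because DistanceStrategy subclasses both str and Enum, its members behave like plain strings, which is how such values are typically passed to vector store constructors (a small hedged illustration):
from langchain.vectorstores.singlestoredb import DistanceStrategy
strategy = DistanceStrategy.DOT_PRODUCT
assert strategy == "DOT_PRODUCT"  # str-valued members compare equal to their value
print(strategy.value)  # 'DOT_PRODUCT'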
langchain.vectorstores.alibabacloud_opensearch.create_metadata¶
langchain.vectorstores.alibabacloud_opensearch.create_metadata(fields: Dict[str, Any]) → Dict[str, Any][source]¶
Create metadata from fields.
Parameters
fields – The fields of the document. The fields must be a dict.
Returns
The metadata of the document. The metadata must be a dict.
Return type
metadata
langchain.vectorstores.hologres.Hologres¶
class langchain.vectorstores.hologres.Hologres(connection_string: str, embedding_function: Embeddings, ndims: int = 1536, table_name: str = 'langchain_pg_embedding', pre_delete_table: bool = False, logger: Optional[Logger] = None)[source]¶
Bases: VectorStore
VectorStore implementation using Hologres.
connection_string is a Hologres connection string.
embedding_function is any embedding function implementing the langchain.embeddings.base.Embeddings interface.
ndims is the number of dimensions of the embedding output.
table_name is the name of the table in which to store embeddings and data (default: langchain_pg_embedding).
NOTE: The table will be created when initializing the store (if it does not exist), so make sure the user has the right permissions to create tables.
pre_delete_table: if True, will delete the table if it exists (default: False). Useful for testing.
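A minimal construction sketch based on the signature above (host, port, credentials, and database name are placeholders; connection_string_from_db_params is the helper documented under Methods below):

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.hologres import Hologres

# Placeholder connection parameters; substitute your own instance's values.
connection_string = Hologres.connection_string_from_db_params(
    host="localhost", port=80, database="postgres",
    user="user", password="***",
)
store = Hologres(connection_string, OpenAIEmbeddings(), ndims=1536)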
Methods
__init__(connection_string, embedding_function)
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_embeddings(texts, embeddings, metadatas, ...)
Add embeddings to the vectorstore.
add_texts(texts[, metadatas, ids])
Run more texts through the embeddings and add to the vectorstore.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
connection_string_from_db_params(host, port, ...)
Return connection string from database parameters.
create_table()
create_vector_extension()
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents, embedding[, ...])
Return VectorStore initialized from documents and embeddings.
from_embeddings(text_embeddings, embedding)
Construct Hologres wrapper from raw documents and pre-generated embeddings.
from_existing_index(embedding[, ndims, ...])
Get instance of an existing Hologres store. This method will return the instance of the store without inserting any new embeddings.
from_texts(texts, embedding[, metadatas, ...])
Return VectorStore initialized from texts and embeddings.
get_connection_string(kwargs)
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k, filter])
Run similarity search with Hologres with distance.
similarity_search_by_vector(embedding[, k, ...])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(query[, k, filter])
Return docs most similar to query.
similarity_search_with_score_by_vector(embedding)
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_embeddings(texts: Iterable[str], embeddings: List[List[float]], metadatas: List[dict], ids: List[str], **kwargs: Any) → None[source]¶
Add embeddings to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
embeddings – List of list of embedding vectors.
metadatas – List of metadatas associated with the texts.
kwargs – vectorstore specific parameters
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]¶
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
kwargs – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
classmethod connection_string_from_db_params(host: str, port: int, database: str, user: str, password: str) → str[source]¶
Return connection string from database parameters.
create_table() → None[source]¶
create_vector_extension() → None[source]¶
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful, False otherwise, None if not implemented.
Return type
Optional[bool]
classmethod from_documents(documents: List[Document], embedding: Embeddings, ndims: int = 1536, table_name: str = 'langchain_pg_embedding', ids: Optional[List[str]] = None, pre_delete_collection: bool = False, **kwargs: Any) → Hologres[source]¶
Return VectorStore initialized from documents and embeddings.
A Postgres connection string is required: either pass it as a parameter or set the HOLOGRES_CONNECTION_STRING environment variable.
classmethod from_embeddings(text_embeddings: List[Tuple[str, List[float]]], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ndims: int = 1536, table_name: str = 'langchain_pg_embedding', ids: Optional[List[str]] = None, pre_delete_table: bool = False, **kwargs: Any) → Hologres[source]¶
Construct Hologres wrapper from raw documents and pre-generated embeddings.
Return VectorStore initialized from documents and embeddings.
A Postgres connection string is required: either pass it as a parameter or set the HOLOGRES_CONNECTION_STRING environment variable.
Example
from langchain import Hologres
from langchain.embeddings import OpenAIEmbeddings
texts = ["foo", "bar"]  # the raw strings to index
embeddings = OpenAIEmbeddings()
text_embeddings = embeddings.embed_documents(texts)
text_embedding_pairs = list(zip(texts, text_embeddings))
store = Hologres.from_embeddings(text_embedding_pairs, embeddings)
classmethod from_existing_index(embedding: Embeddings, ndims: int = 1536, table_name: str = 'langchain_pg_embedding', pre_delete_table: bool = False, **kwargs: Any) → Hologres[source]¶
Get instance of an existing Hologres store. This method will return the instance of the store without inserting any new embeddings.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ndims: int = 1536, table_name: str = 'langchain_pg_embedding', ids: Optional[List[str]] = None, pre_delete_table: bool = False, **kwargs: Any) → Hologres[source]¶
Return VectorStore initialized from texts and embeddings.
A Postgres connection string is required: either pass it as a parameter or set the HOLOGRES_CONNECTION_STRING environment variable.
classmethod get_connection_string(kwargs: Dict[str, Any]) → str[source]¶
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
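A usage sketch of the signature just documented (store is a Hologres instance as constructed earlier; the query text is illustrative):

docs = store.max_marginal_relevance_search(
    "how are embeddings stored?",  # illustrative query
    k=4,              # documents to return
    fetch_k=20,       # candidates handed to the MMR algorithm
    lambda_mult=0.5,  # 0 = maximum diversity, 1 = minimum diversity
)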
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) → List[Document][source]¶
Run similarity search with Hologres with distance.
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query.
similarity_search_by_vector(embedding: List[float], k: int = 4, filter: Optional[dict] = None, **kwargs: Any) → List[Document][source]¶
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 to 1 to filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
similarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None) → List[Tuple[Document, float]][source]¶
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query and score for each
similarity_search_with_score_by_vector(embedding: List[float], k: int = 4, filter: Optional[dict] = None) → List[Tuple[Document, float]][source]¶
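Closing out the Hologres page, a short filtered-search sketch against the signatures above (the metadata key and value are placeholders):

docs = store.similarity_search("quarterly sales", k=4, filter={"source": "reports"})
docs_and_scores = store.similarity_search_with_score("quarterly sales", k=4)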
langchain.vectorstores.rocksetdb.Rockset¶
class langchain.vectorstores.rocksetdb.Rockset(client: Any, embeddings: Embeddings, collection_name: str, text_key: str, embedding_key: str)[source]¶
Bases: VectorStore
Wrapper around the Rockset vector database.
To use, you should have the rockset python package installed. Note that to use this, the collection being used must already exist in your Rockset instance. You must also ensure you use a Rockset ingest transformation to apply VECTOR_ENFORCE on the column being used to store embedding_key in the collection.
See: https://rockset.com/blog/introducing-vector-search-on-rockset/ for more details.
Everything below assumes the commons Rockset workspace.
TODO: Add support for workspace args.
Example
from langchain.vectorstores import Rockset
from langchain.embeddings.openai import OpenAIEmbeddings
import rockset
# Make sure you use the right host (region) for your Rockset instance
# and that your API key has read-write access to your collection.
rs = rockset.RocksetClient(host=rockset.Regions.use1a1, api_key="***")
collection_name = "langchain_demo"
embeddings = OpenAIEmbeddings()
vectorstore = Rockset(rs, embeddings, collection_name,
                      "description", "description_embedding")
Initialize with Rockset client.
:param client: Rockset client object
:param collection_name: Rockset collection to insert docs / query
:param embeddings: LangChain Embeddings object to use to generate embedding for given text.
Parameters
text_key – column in Rockset collection to use to store the text
embedding_key – column in Rockset collection to use to store the embedding. Note: We must apply VECTOR_ENFORCE() on this column via Rockset ingest transformation.
Methods
__init__(client, embeddings, ...)
Initialize with Rockset client. :param client: Rockset client object :param collection_name: Rockset collection to insert docs / query :param embeddings: LangChain Embeddings object to use to generate embedding for given text. :param text_key: column in Rockset collection to use to store the text :param embedding_key: column in Rockset collection to use to store the embedding.
Note: We must apply VECTOR_ENFORCE() on this column via Rockset ingest transformation.
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas, ids, batch_size])
Run more texts through the embeddings and add to the vectorstore.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
delete([ids])
Delete by vector ID or other criteria.
delete_texts(ids)
Delete a list of docs from the Rockset collection
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_texts(texts, embedding[, metadatas, ...])
Create Rockset wrapper with existing texts.
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k, distance_func, ...])
Same as similarity_search_with_relevance_scores but doesn't return the scores.
similarity_search_by_vector(embedding[, k, ...])
Accepts a query_embedding (vector), and returns documents with similar embeddings.
similarity_search_by_vector_with_relevance_scores(...)
Accepts a query_embedding (vector), and returns documents with similar embeddings along with their relevance scores.
similarity_search_with_relevance_scores(query)
Perform a similarity search with Rockset
class DistanceFunction(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Bases: Enum
order_by() → str[source]¶
COSINE_SIM = 'COSINE_SIM'¶
DOT_PRODUCT = 'DOT_PRODUCT'¶
EUCLIDEAN_DIST = 'EUCLIDEAN_DIST'¶
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, batch_size: int = 32, **kwargs: Any) → List[str][source]¶
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
ids – Optional list of ids to associate with the texts.
batch_size – Send documents in batches to Rockset.
Returns
List of ids from adding the texts into the vectorstore.
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful, False otherwise, None if not implemented.
Return type
Optional[bool]
delete_texts(ids: List[str]) → None[source]¶
Delete a list of docs from the Rockset collection
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, client: Any = None, collection_name: str = '', text_key: str = '', embedding_key: str = '', ids: Optional[List[str]] = None, batch_size: int = 32, **kwargs: Any) → Rockset[source]¶
Create Rockset wrapper with existing texts.
This is intended as a quicker way to get started.
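A from_texts sketch using the defaults above (rs and the column names follow the class-level example earlier; all values are placeholders):

vectorstore = Rockset.from_texts(
    ["rockset is a search and analytics database"],
    OpenAIEmbeddings(),
    client=rs,
    collection_name="langchain_demo",
    text_key="description",
    embedding_key="description_embedding",
)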
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, distance_func: DistanceFunction = DistanceFunction.COSINE_SIM, where_str: Optional[str] = None, **kwargs: Any) → List[Document][source]¶
Same as similarity_search_with_relevance_scores but doesn't return the scores.
similarity_search_by_vector(embedding: List[float], k: int = 4, distance_func: DistanceFunction = DistanceFunction.COSINE_SIM, where_str: Optional[str] = None, **kwargs: Any) → List[Document][source]¶
Accepts a query_embedding (vector), and returns documents with similar embeddings.
similarity_search_by_vector_with_relevance_scores(embedding: List[float], k: int = 4, distance_func: DistanceFunction = DistanceFunction.COSINE_SIM, where_str: Optional[str] = None, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Accepts a query_embedding (vector), and returns documents with similar embeddings along with their relevance scores.
similarity_search_with_relevance_scores(query: str, k: int = 4, distance_func: DistanceFunction = DistanceFunction.COSINE_SIM, where_str: Optional[str] = None, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Perform a similarity search with Rockset.
Parameters
query (str) – Text to look up documents similar to.
distance_func (DistanceFunction) – how to compute distance between two vectors in Rockset.
k (int, optional) – Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) – Metadata filters supplied as a SQL where condition string. Defaults to None. e.g. "price<=70.0 AND brand='Nintendo'"
NOTE: Do not let end users fill this in directly, and always be aware of SQL injection.
Returns
List of documents with their relevance score
Return type
List[Tuple[Document, float]]
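A filtered-search sketch against the signature above (the where_str is illustrative; as the note says, never build it from untrusted end-user input):

results = vectorstore.similarity_search_with_relevance_scores(
    "retro game consoles",
    k=4,
    distance_func=Rockset.DistanceFunction.COSINE_SIM,
    where_str="price <= 70.0 AND brand = 'Nintendo'",  # trusted, developer-supplied filter
)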
langchain.vectorstores.base.VectorStoreRetriever¶
class langchain.vectorstores.base.VectorStoreRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, vectorstore: VectorStore, search_type: str = 'similarity', search_kwargs: dict = None)[source]¶
Bases: BaseRetriever
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the retriever. Defaults to None.
This metadata will be associated with each call to this retriever, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a retriever with its use case.
param search_kwargs: dict [Optional]¶
param search_type: str = 'similarity'¶
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the retriever. Defaults to None.
These tags will be associated with each call to this retriever, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a retriever with its use case.
param vectorstore: langchain.vectorstores.base.VectorStore [Required]¶
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str][source]¶
Add documents to vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str][source]¶
Add documents to vectorstore.
async aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → List[Document]¶
Asynchronously get documents relevant to a query.
:param query: string to find relevant documents for
:param callbacks: Callback manager or list of callbacks
:param tags: Optional list of tags associated with the retriever. Defaults to None. These tags will be associated with each call to this retriever, and passed as arguments to the handlers defined in callbacks.
Parameters
metadata – Optional metadata associated with the retriever. Defaults to None. This metadata will be associated with each call to this retriever, and passed as arguments to the handlers defined in callbacks.
Returns
List of relevant documents
get_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → List[Document]¶
Retrieve documents relevant to a query.
:param query: string to find relevant documents for
:param callbacks: Callback manager or list of callbacks
:param tags: Optional list of tags associated with the retriever. Defaults to None. These tags will be associated with each call to this retriever, and passed as arguments to the handlers defined in callbacks.
Parameters
metadata – Optional metadata associated with the retriever. Defaults to None. This metadata will be associated with each call to this retriever, and passed as arguments to the handlers defined in callbacks.
Returns
List of relevant documents
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_search_type » all fields[source]¶
Validate search type.
allowed_search_types: ClassVar[Collection[str]] = ('similarity', 'similarity_score_threshold', 'mmr')¶
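A short sketch tying these fields together (vectorstore is any VectorStore from this module; as_retriever is the usual way to obtain this retriever rather than constructing it directly):

retriever = vectorstore.as_retriever(
    search_type="mmr",       # must be one of allowed_search_types above
    search_kwargs={"k": 4},  # forwarded to the underlying search call
)
docs = retriever.get_relevant_documents("what is a vector index?")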
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
langchain.vectorstores.azuresearch.AzureSearch¶
class langchain.vectorstores.azuresearch.AzureSearch(azure_search_endpoint: str, azure_search_key: str, index_name: str, embedding_function: Callable, search_type: str = 'hybrid', semantic_configuration_name: Optional[str] = None, semantic_query_language: str = 'en-us', **kwargs: Any)[source]¶
Bases: VectorStore
Initialize with necessary components.
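A minimal construction sketch from the signature above (endpoint, key, and index name are placeholders; passing an embedding model's embed_query method as the embedding_function Callable is an assumption on my part, not mandated by this page):

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.azuresearch import AzureSearch

store = AzureSearch(
    azure_search_endpoint="https://<service>.search.windows.net",  # placeholder
    azure_search_key="***",
    index_name="langchain-index",
    embedding_function=OpenAIEmbeddings().embed_query,  # assumed Callable
    search_type="hybrid",
)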
Methods
__init__(azure_search_endpoint, ...[, ...])
Initialize with necessary components.
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas])
Add texts data to an existing index.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_texts(texts, embedding[, metadatas, ...])
Return VectorStore initialized from texts and embeddings.
hybrid_search(query[, k])
Returns the most similar indexed documents to the query text.
hybrid_search_with_score(query[, k, filters])
Return docs most similar to query with a hybrid query.
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
semantic_hybrid_search(query[, k])
Returns the most similar indexed documents to the query text.
semantic_hybrid_search_with_score(query[, ...])
Return docs most similar to query with a hybrid query.
similarity_search(query[, k])
Return docs most similar to query.
similarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
vector_search(query[, k])
Returns the most similar indexed documents to the query text.
vector_search_with_score(query[, k, filters])
Return docs most similar to query.
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]¶
Add texts data to an existing index.
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful, False otherwise, None if not implemented.
Return type
Optional[bool]
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, azure_search_endpoint: str = '', azure_search_key: str = '', index_name: str = 'langchain-index', **kwargs: Any) → AzureSearch[source]¶
Return VectorStore initialized from texts and embeddings.
hybrid_search(query: str, k: int = 4, **kwargs: Any) → List[Document][source]¶
Returns the most similar indexed documents to the query text.
Parameters
query (str) – The query text for which to find similar documents.
k (int) – The number of documents to return. Default is 4.
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
hybrid_search_with_score(query: str, k: int = 4, filters: Optional[str] = None) → List[Tuple[Document, float]][source]¶
Return docs most similar to query with a hybrid query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query and score for each
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
semantic_hybrid_search(query: str, k: int = 4, **kwargs: Any) → List[Document][source]¶
Returns the most similar indexed documents to the query text.
Parameters
query (str) – The query text for which to find similar documents.
k (int) – The number of documents to return. Default is 4.
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
semantic_hybrid_search_with_score(query: str, k: int = 4, filters: Optional[str] = None) → List[Tuple[Document, float]][source]¶
Return docs most similar to query with a hybrid query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query and score for each
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document][source]¶
Return docs most similar to query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 to 1 to filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
vector_search(query: str, k: int = 4, **kwargs: Any) → List[Document][source]¶
Returns the most similar indexed documents to the query text.
Parameters
query (str) – The query text for which to find similar documents.
k (int) – The number of documents to return. Default is 4.
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
vector_search_with_score(query: str, k: int = 4, filters: Optional[str] = None) → List[Tuple[Document, float]][source]¶
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query and score for each
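A usage sketch of the search variants documented above (the query text is illustrative; store is the AzureSearch instance from the earlier sketch):

docs = store.similarity_search("what is a vector index?", k=4)
docs_and_scores = store.hybrid_search_with_score("what is a vector index?", k=4)
semantic_hits = store.semantic_hybrid_search("what is a vector index?", k=4)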
langchain.vectorstores.supabase.SupabaseVectorStore¶
class langchain.vectorstores.supabase.SupabaseVectorStore(client: supabase.client.Client, embedding: Embeddings, table_name: str, query_name: Union[str, None] = None)[source]¶
Bases: VectorStore
VectorStore for a Supabase Postgres database. Assumes you have the pgvector extension installed and a match_documents (or similar) function. For more details: https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase
You can implement your own match_documents function in order to limit the search space to a subset of documents based on your own authorization or business logic.
Note that the Supabase Python client does not yet support async operations.
If you'd like to use max_marginal_relevance_search, please review the instructions below on modifying the match_documents function to return matched embeddings.
Initialize with supabase client.
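A minimal construction sketch (assumes supabase-py's create_client helper; the URL and key are placeholders; the table and function names mirror the defaults documented under from_texts below):

from supabase.client import create_client
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import SupabaseVectorStore

supabase = create_client("https://<project>.supabase.co", "<api-key>")  # placeholders
store = SupabaseVectorStore(
    supabase,
    OpenAIEmbeddings(),
    table_name="documents",
    query_name="match_documents",
)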
Methods
__init__(client, embedding, table_name[, ...])
Initialize with supabase client.
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas, ids])
Run more texts through the embeddings and add to the vectorstore.
add_vectors(vectors, documents, ids)
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
delete([ids])
Delete by vector IDs.
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_texts(texts, embedding[, metadatas, ...])
Return VectorStore initialized from texts and embeddings.
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k])
Return docs most similar to query.
similarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
similarity_search_by_vector_returning_embeddings(...)
similarity_search_by_vector_with_relevance_scores(...)
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
Attributes
table_name
query_name
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict[Any, Any]]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]¶
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
kwargs – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
add_vectors(vectors: List[List[float]], documents: List[Document], ids: List[str]) → List[str][source]¶
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
delete(ids: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Delete by vector IDs.
Parameters
ids – List of ids to delete.
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, client: Optional[supabase.client.Client] = None, table_name: Optional[str] = 'documents', query_name: Union[str, None] = 'match_documents', ids: Optional[List[str]] = None, **kwargs: Any) → SupabaseVectorStore[source]¶
Return VectorStore initialized from texts and embeddings.
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document][source]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search requires that query_name returns matched embeddings alongside the match documents. The following function demonstrates how to do this:
```sql
CREATE FUNCTION match_documents_embeddings(query_embedding vector(1536),
                                           match_count int)
RETURNS TABLE(
    id uuid,
    content text,
    metadata jsonb,
    embedding vector(1536),
    similarity float)
LANGUAGE plpgsql
AS $$
# variable_conflict use_column
BEGIN
    RETURN query
    SELECT
        id,
        content,
        metadata,
        embedding,
        1 - (docstore.embedding <=> query_embedding) AS similarity
    FROM
        docstore
    ORDER BY
        docstore.embedding <=> query_embedding
    LIMIT match_count;
END;
$$;
```
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document][source]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to query.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query vector.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.supabase.SupabaseVectorStore.html"} {"id": "74662000776f-6", "text": "similarity_search_by_vector_returning_embeddings(query: List[float], k: int) \u2192 List[Tuple[Document, float, ndarray[float32, Any]]][source]\u00b6\nsimilarity_search_by_vector_with_relevance_scores(query: List[float], k: int) \u2192 List[Tuple[Document, float]][source]\u00b6\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]][source]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 to 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)\nquery_name: str\u00b6\ntable_name: str\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.supabase.SupabaseVectorStore.html"}
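To use max_marginal_relevance_search, the store has to be built against a query function that returns embeddings alongside the matches, such as the match_documents_embeddings function sketched above. A hedged usage sketch, reusing the `supabase` client and imports from the earlier example:

```python
# Assumes match_documents_embeddings (shown above) has been created in the
# database; query_name simply points the store at that function.
mmr_store = SupabaseVectorStore(
    client=supabase,
    embedding=OpenAIEmbeddings(),
    table_name="documents",
    query_name="match_documents_embeddings",
)

# fetch_k candidates are retrieved first; k of them are then chosen so the
# results balance relevance against diversity (lambda_mult tunes the trade-off).
docs = mmr_store.max_marginal_relevance_search(
    "postgres vector search", k=4, fetch_k=20, lambda_mult=0.5
)
```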
{"id": "e9420ad047bc-0", "text": "langchain.vectorstores.pgvector.BaseModel\u00b6\nclass langchain.vectorstores.pgvector.BaseModel(**kwargs: Any)[source]\u00b6\nBases: Base\nA simple constructor that allows initialization from kwargs.\nSets attributes on the constructed instance using the names and\nvalues in kwargs.\nOnly keys that are present as\nattributes of the instance\u2019s class are allowed. These could be,\nfor example, any mapped columns or relationships.\nMethods\n__init__(**kwargs)\nA simple constructor that allows initialization from kwargs.\nAttributes\nmetadata\nregistry\nuuid\nmetadata: MetaData = MetaData()\u00b6\nregistry: RegistryType = \u00b6\nuuid = Column(None, UUID(), table=None, primary_key=True, nullable=False, default=CallableColumnDefault())\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgvector.BaseModel.html"} {"id": "3d0b8c6ec386-0", "text": "langchain.vectorstores.mongodb_atlas.MongoDBAtlasVectorSearch\u00b6\nclass langchain.vectorstores.mongodb_atlas.MongoDBAtlasVectorSearch(collection: Collection[MongoDBDocumentType], embedding: Embeddings, *, index_name: str = 'default', text_key: str = 'text', embedding_key: str = 'embedding')[source]\u00b6\nBases: VectorStore\nWrapper around MongoDB Atlas Vector Search.\nTo use, you should have both:\n- the pymongo python package installed\n- a connection string associated with a MongoDB Atlas Cluster having deployed an\nAtlas Search index\nExample\nfrom langchain.vectorstores import MongoDBAtlasVectorSearch\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom pymongo import MongoClient\nmongo_client = MongoClient(\"<connection_string>\")\ncollection = mongo_client[\"<db_name>\"][\"<collection_name>\"]\nembeddings = OpenAIEmbeddings()\nvectorstore = MongoDBAtlasVectorSearch(collection, embeddings)\nParameters\ncollection \u2013 MongoDB collection to add the texts to.\nembedding \u2013 Text embedding model to use.\ntext_key \u2013 MongoDB field that will contain the text for each\ndocument.\nembedding_key \u2013 MongoDB field that will contain the embedding for\neach document.\nMethods\n__init__(collection,\u00a0embedding,\u00a0*[,\u00a0...])\nparam collection\nMongoDB collection to add the texts to.\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.mongodb_atlas.MongoDBAtlasVectorSearch.html"} {"id": "3d0b8c6ec386-1", "text": "Return VectorStore initialized from documents and 
embeddings.\nfrom_texts(texts,\u00a0embedding[,\u00a0metadatas,\u00a0...])\nConstruct MongoDBAtlasVectorSearch wrapper from raw documents.\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k,\u00a0pre_filter,\u00a0...])\nReturn MongoDB documents most similar to query.\nsimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nsimilarity_search_with_relevance_scores(query)", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.mongodb_atlas.MongoDBAtlasVectorSearch.html"} {"id": "3d0b8c6ec386-2", "text": "Return docs and relevance scores in the range [0, 1].\nsimilarity_search_with_score(query,\u00a0*[,\u00a0k,\u00a0...])\nReturn MongoDB documents most similar to query, along with scores.\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_texts(texts: Iterable[str], metadatas: Optional[List[Dict[str, Any]]] = None, **kwargs: Any) \u2192 List[source]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nReturns\nList of ids from adding the texts into the vectorstore.\nasync classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.mongodb_atlas.MongoDBAtlasVectorSearch.html"} {"id": "3d0b8c6ec386-3", "text": "async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6\nasync asearch(query: str, search_type: str, 
**kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 Optional[bool]\u00b6\nDelete by vector ID or other criteria.\nParameters\nids \u2013 List of ids to delete.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.mongodb_atlas.MongoDBAtlasVectorSearch.html"} {"id": "3d0b8c6ec386-4", "text": "**kwargs \u2013 Other keyword arguments that subclasses might use.\nReturns\nTrue if deletion is successful,\nFalse otherwise, None if not implemented.\nReturn type\nOptional[bool]\nclassmethod from_connection_string(connection_string: str, namespace: str, embedding: Embeddings, **kwargs: Any) \u2192 MongoDBAtlasVectorSearch[source]\u00b6\nclassmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, collection: Optional[Collection[MongoDBDocumentType]] = None, **kwargs: Any) \u2192 MongoDBAtlasVectorSearch[source]\u00b6\nConstruct MongoDBAtlasVectorSearch wrapper from raw documents.\nThis is a user-friendly interface that:\nEmbeds documents.\nAdds the documents to a provided MongoDB Atlas Vector Search index (Lucene)\nThis is intended to be a quick way to get started.\nExample\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.mongodb_atlas.MongoDBAtlasVectorSearch.html"} {"id": "3d0b8c6ec386-5", "text": "List of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return.
Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, pre_filter: Optional[dict] = None, post_filter_pipeline: Optional[List[Dict]] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn MongoDB documents most similar to query.\nUse the knnBeta Operator available in MongoDB Atlas Search\nThis feature is in early access and available only for evaluation purposes, to\nvalidate functionality, and to gather feedback from a small closed group of\nearly access users. It is not recommended for production deployments as we may\nintroduce breaking changes.\nFor more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta\nParameters\nquery \u2013 Text to look up documents similar to.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.mongodb_atlas.MongoDBAtlasVectorSearch.html"} {"id": "3d0b8c6ec386-6", "text": "k \u2013 Optional Number of Documents to return. Defaults to 4.\npre_filter \u2013 Optional Dictionary of argument(s) to prefilter on document\nfields.\npost_filter_pipeline \u2013 Optional Pipeline of MongoDB aggregation stages\nfollowing the knnBeta search.\nReturns\nList of Documents most similar to the query and score for each\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query vector.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 to 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)\nsimilarity_search_with_score(query: str, *, k: int = 4, pre_filter: Optional[dict] = None, post_filter_pipeline: Optional[List[Dict]] = None) \u2192 List[Tuple[Document, float]][source]\u00b6\nReturn MongoDB documents most similar to query, along with scores.\nUse the knnBeta Operator available in MongoDB Atlas Search", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.mongodb_atlas.MongoDBAtlasVectorSearch.html"} {"id": "3d0b8c6ec386-7", "text": "This feature is in early access and available only for evaluation purposes, to\nvalidate functionality, and to gather feedback from a small closed group of\nearly access users. It is not recommended for production deployments as we\nmay introduce breaking changes.\nFor more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Optional Number of Documents to return. Defaults to 4.\npre_filter \u2013 Optional Dictionary of argument(s) to prefilter on document\nfields.\npost_filter_pipeline \u2013 Optional Pipeline of MongoDB aggregation stages\nfollowing the knnBeta search.\nReturns\nList of Documents most similar to the query and score for each", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.mongodb_atlas.MongoDBAtlasVectorSearch.html"}
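Putting the MongoDB Atlas entry together: from_connection_string is the quickest way to get a store against an existing cluster. A sketch under stated assumptions (the connection string, namespace, and index name below are all placeholders; index_name is forwarded to the constructor shown above):

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.mongodb_atlas import MongoDBAtlasVectorSearch

# Placeholders throughout: the namespace string is "<db>.<collection>".
vectorstore = MongoDBAtlasVectorSearch.from_connection_string(
    "mongodb+srv://<user>:<password>@<cluster>.mongodb.net",
    "my_db.my_collection",
    OpenAIEmbeddings(),
    index_name="default",  # name of the deployed Atlas Search index
)

# similarity_search runs the knnBeta search described above; pre_filter and
# post_filter_pipeline can narrow results before and after the kNN stage.
docs = vectorstore.similarity_search("deploying an Atlas Search index", k=4)
```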
{"id": "1ad242b674a3-0", "text": "langchain.vectorstores.myscale.MyScaleSettings\u00b6\nclass langchain.vectorstores.myscale.MyScaleSettings(_env_file: Optional[Union[str, PathLike, List[Union[str, PathLike]], Tuple[Union[str, PathLike], ...]]] = '', _env_file_encoding: Optional[str] = None, _env_nested_delimiter: Optional[str] = None, _secrets_dir: Optional[Union[str, PathLike]] = None, *, host: str = 'localhost', port: int = 8443, username: Optional[str] = None, password: Optional[str] = None, index_type: str = 'IVFFLAT', index_param: Optional[Dict[str, str]] = None, column_map: Dict[str, str] = {'id': 'id', 'metadata': 'metadata', 'text': 'text', 'vector': 'vector'}, database: str = 'default', table: str = 'langchain', metric: str = 'cosine')[source]\u00b6\nBases: BaseSettings\nMyScale Client Configuration\nAttributes:\nmyscale_host (str) : URL to connect to the MyScale backend. Defaults to \u2018localhost\u2019.\nmyscale_port (int) : URL port to connect with HTTP. Defaults to 8443.\nusername (str) : Username to login. Defaults to None.\npassword (str) : Password to login. Defaults to None.\nindex_type (str) : Index type string.\nindex_param (dict) : Index build parameters.\ndatabase (str) : Database name to find the table. Defaults to \u2018default\u2019.\ntable (str) : Table name to operate on. Defaults to \u2018langchain\u2019.\nmetric (str) : Metric to compute distance; supported are \u2018l2\u2019, \u2018cosine\u2019 and \u2018ip\u2019. Defaults to \u2018cosine\u2019.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.myscale.MyScaleSettings.html"} {"id": "1ad242b674a3-1", "text": "column_map (Dict) : Column type map to project column names onto langchain semantics. Must have keys: text, id, vector;\nmust be the same size as the number of columns. For example:
.. code-block:: python\n\n    {'id': 'text_id',\n     'vector': 'text_embedding',\n     'text': 'text_plain',\n     'metadata': 'metadata_dictionary_in_json'}\n\nDefaults to identity map.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam column_map: Dict[str, str] = {'id': 'id', 'metadata': 'metadata', 'text': 'text', 'vector': 'vector'}\u00b6\nparam database: str = 'default'\u00b6\nparam host: str = 'localhost'\u00b6\nparam index_param: Optional[Dict[str, str]] = None\u00b6\nparam index_type: str = 'IVFFLAT'\u00b6\nparam metric: str = 'cosine'\u00b6\nparam password: Optional[str] = None\u00b6\nparam port: int = 8443\u00b6\nparam table: str = 'langchain'\u00b6\nparam username: Optional[str] = None\u00b6\nmodel Config[source]\u00b6\nBases: object\nenv_file = '.env'\u00b6\nenv_file_encoding = 'utf-8'\u00b6\nenv_prefix = 'myscale_'\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.myscale.MyScaleSettings.html"}
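A short configuration sketch for MyScaleSettings. The host and credentials below are placeholders; because Config.env_prefix is 'myscale_', each field can also be supplied through an environment variable such as MYSCALE_HOST or MYSCALE_PASSWORD instead of a keyword argument:

```python
from langchain.vectorstores.myscale import MyScaleSettings

config = MyScaleSettings(
    host="msc-example.us-east-1.aws.myscale.com",  # placeholder endpoint
    port=8443,
    username="demo_user",      # placeholder credential
    password="demo_password",  # placeholder credential
    database="default",
    table="langchain",
    metric="cosine",           # one of 'l2', 'cosine', 'ip'
)
```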
{"id": "a45af96468ae-0", "text": "langchain.vectorstores.atlas.AtlasDB\u00b6\nclass langchain.vectorstores.atlas.AtlasDB(name: str, embedding_function: Optional[Embeddings] = None, api_key: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False)[source]\u00b6\nBases: VectorStore\nWrapper around Atlas: Nomic\u2019s neural database and rhizomatic instrument.\nTo use, you should have the nomic python package installed.\nExample\nfrom langchain.vectorstores import AtlasDB\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nvectorstore = AtlasDB(\"my_project\", embeddings.embed_query)\nInitialize the Atlas Client\nParameters\nname (str) \u2013 The name of your project. If the project already exists,\nit will be loaded.\nembedding_function (Optional[Callable]) \u2013 An optional function used for\nembedding your data. If None, data will be embedded with\nNomic\u2019s embed model.\napi_key (str) \u2013 Your nomic API key.\ndescription (str) \u2013 A description for your project.\nis_public (bool) \u2013 Whether your project is publicly accessible.\nTrue by default.\nreset_project_if_exists (bool) \u2013 Whether to reset this project if it\nalready exists. Default False.\nGenerally useful during development and testing.\nMethods\n__init__(name[,\u00a0embedding_function,\u00a0...])\nInitialize the Atlas Client\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.atlas.AtlasDB.html"} {"id": "a45af96468ae-1", "text": "add_texts(texts[,\u00a0metadatas,\u00a0ids,\u00a0refresh])\nRun more texts through the embeddings and add to the vectorstore.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\ncreate_index(**kwargs)\nCreates an index in your project.\ndelete([ids])\nDelete by vector ID or other criteria.\nfrom_documents(documents[,\u00a0embedding,\u00a0ids,\u00a0...])\nCreate an AtlasDB vectorstore from a list of documents.\nfrom_texts(texts[,\u00a0embedding,\u00a0metadatas,\u00a0...])\nCreate an AtlasDB vectorstore from raw documents.\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k])", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.atlas.AtlasDB.html"} {"id": "a45af96468ae-2", "text": "Run similarity search with AtlasDB\nsimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores in the range [0, 1].\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the
vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, refresh: bool = True, **kwargs: Any) \u2192 List[str][source]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Texts to add to the vectorstore.\nmetadatas (Optional[List[dict]], optional) \u2013 Optional list of metadatas.\nids (Optional[List[str]]) \u2013 An optional list of ids.\nrefresh (bool) \u2013 Whether or not to refresh indices with the updated data.\nDefault True.\nReturns\nList of IDs of the added texts.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.atlas.AtlasDB.html"} {"id": "a45af96468ae-3", "text": "Return type\nList[str]\nasync classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6\nasync asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.atlas.AtlasDB.html"} {"id": "a45af96468ae-4", "text": "Return docs most similar to query.\ncreate_index(**kwargs: Any) \u2192 Any[source]\u00b6\nCreates an index in your project.\nSee\nhttps://docs.nomic.ai/atlas_api.html#nomic.project.AtlasProject.create_index\nfor full detail.\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 Optional[bool]\u00b6\nDelete by vector ID or other criteria.\nParameters\nids \u2013 List of ids to delete.\n**kwargs \u2013 Other keyword arguments that subclasses might use.\nReturns\nTrue if deletion is successful,\nFalse otherwise, None if not implemented.\nReturn type\nOptional[bool]\nclassmethod from_documents(documents: List[Document], embedding: Optional[Embeddings] = None, ids: Optional[List[str]] = None, name: Optional[str] = None, api_key: Optional[str] = None, persist_directory: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False, index_kwargs: Optional[dict] = None, **kwargs: Any) \u2192 AtlasDB[source]\u00b6\nCreate an AtlasDB vectorstore from a list of 
documents.\nParameters\nname (str) \u2013 Name of the collection to create.\napi_key (str) \u2013 Your nomic API key.\ndocuments (List[Document]) \u2013 List of documents to add to the vectorstore.\nembedding (Optional[Embeddings]) \u2013 Embedding function. Defaults to None.\nids (Optional[List[str]]) \u2013 Optional list of document IDs. If None,\nids will be auto-created.\ndescription (str) \u2013 A description for your project.\nis_public (bool) \u2013 Whether your project is publicly accessible.\nTrue by default.\nreset_project_if_exists (bool) \u2013 Whether to reset this project if\nit already exists. Default False.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.atlas.AtlasDB.html"} {"id": "a45af96468ae-5", "text": "Generally useful during development and testing.\nindex_kwargs (Optional[dict]) \u2013 Dict of kwargs for index creation.\nSee https://docs.nomic.ai/atlas_api.html\nReturns\nNomic\u2019s neural database and finest rhizomatic instrument\nReturn type\nAtlasDB\nclassmethod from_texts(texts: List[str], embedding: Optional[Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, name: Optional[str] = None, api_key: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False, index_kwargs: Optional[dict] = None, **kwargs: Any) \u2192 AtlasDB[source]\u00b6\nCreate an AtlasDB vectorstore from raw documents.\nParameters\ntexts (List[str]) \u2013 The list of texts to ingest.\nname (str) \u2013 Name of the project to create.\napi_key (str) \u2013 Your nomic API key.\nembedding (Optional[Embeddings]) \u2013 Embedding function. Defaults to None.\nmetadatas (Optional[List[dict]]) \u2013 List of metadatas. Defaults to None.\nids (Optional[List[str]]) \u2013 Optional list of document IDs. If None,\nids will be auto-created.\ndescription (str) \u2013 A description for your project.\nis_public (bool) \u2013 Whether your project is publicly accessible.\nTrue by default.\nreset_project_if_exists (bool) \u2013 Whether to reset this project if it\nalready exists. Default False.\nGenerally useful during development and testing.\nindex_kwargs (Optional[dict]) \u2013 Dict of kwargs for index creation.\nSee https://docs.nomic.ai/atlas_api.html\nReturns\nNomic\u2019s neural database and finest rhizomatic instrument\nReturn type\nAtlasDB", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.atlas.AtlasDB.html"} {"id": "a45af96468ae-6", "text": "max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. 
Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.atlas.AtlasDB.html"} {"id": "a45af96468ae-7", "text": "Return docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document][source]\u00b6\nRun similarity search with AtlasDB\nParameters\nquery (str) \u2013 Query text to search for.\nk (int) \u2013 Number of results to return. Defaults to 4.\nReturns\nList of documents most similar to the query text.\nReturn type\nList[Document]\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query vector.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. 
Should include:\nscore_threshold: Optional, a floating point value between 0 to 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.atlas.AtlasDB.html"} {"id": "fe071b4aa937-0", "text": "langchain.vectorstores.sklearn.BaseSerializer\u00b6\nclass langchain.vectorstores.sklearn.BaseSerializer(persist_path: str)[source]\u00b6\nBases: ABC\nAbstract base class for saving and loading data.\nMethods\n__init__(persist_path)\nextension()\nThe file extension suggested by this serializer (without dot).\nload()\nLoads the data from the persist_path\nsave(data)\nSaves the data to the persist_path\nabstract classmethod extension() \u2192 str[source]\u00b6\nThe file extension suggested by this serializer (without dot).\nabstract load() \u2192 Any[source]\u00b6\nLoads the data from the persist_path\nabstract save(data: Any) \u2192 None[source]\u00b6\nSaves the data to the persist_path", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.sklearn.BaseSerializer.html"}
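Since BaseSerializer is an ABC, a concrete serializer implements extension, save, and load. A minimal illustrative subclass, assuming __init__ stores its argument as self.persist_path (SKLearnVectorStore already ships json/bson/parquet serializers; this sketch only demonstrates the interface):

```python
import json
from typing import Any

from langchain.vectorstores.sklearn import BaseSerializer


class JsonSerializer(BaseSerializer):
    """Persist arbitrary JSON-serializable data at persist_path."""

    @classmethod
    def extension(cls) -> str:
        # File extension suggested by this serializer, without the dot.
        return "json"

    def save(self, data: Any) -> None:
        # Assumption: BaseSerializer.__init__ stored persist_path on self.
        with open(self.persist_path, "w") as f:
            json.dump(data, f)

    def load(self) -> Any:
        with open(self.persist_path) as f:
            return json.load(f)
```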
{"id": "71da6fc43f08-0", "text": "langchain.vectorstores.azuresearch.AzureSearchVectorStoreRetriever\u00b6\nclass langchain.vectorstores.azuresearch.AzureSearchVectorStoreRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, vectorstore: AzureSearch, search_type: str = 'hybrid', k: int = 4)[source]\u00b6\nBases: BaseRetriever\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam k: int = 4\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam search_type: str = 'hybrid'\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam vectorstore: langchain.vectorstores.azuresearch.AzureSearch [Required]\u00b6\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.azuresearch.AzureSearchVectorStoreRetriever.html"} {"id": "71da6fc43f08-1", "text": "These tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_search_type\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate search type.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.azuresearch.AzureSearchVectorStoreRetriever.html"} {"id": "71da6fc43f08-2", "text": "eg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.azuresearch.AzureSearchVectorStoreRetriever.html"} {"id": "97f04aa70c21-0", "text": "langchain.vectorstores.sklearn.SKLearnVectorStore\u00b6\nclass langchain.vectorstores.sklearn.SKLearnVectorStore(embedding: Embeddings, *, persist_path: Optional[str] = None, serializer: Literal['json', 'bson', 'parquet'] = 'json', metric: str = 'cosine', **kwargs: Any)[source]\u00b6\nBases: VectorStore\nA simple in-memory vector store based on the scikit-learn library\nNearestNeighbors implementation.\nMethods\n__init__(embedding,\u00a0*[,\u00a0persist_path,\u00a0...])\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas,\u00a0ids])\nRun more texts through the embeddings and add to the vectorstore.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.sklearn.SKLearnVectorStore.html"} {"id": "97f04aa70c21-1", "text": "Return docs most similar to query.\ndelete([ids])\nDelete by vector ID or other criteria.\nfrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nfrom_texts(texts,\u00a0embedding[,\u00a0metadatas,\u00a0...])\nReturn VectorStore initialized from texts and embeddings.\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. :param query: Text to look up documents similar to. :param k: Number of Documents to return. Defaults to 4. :param fetch_k: Number of Documents to fetch to pass to MMR algorithm. :param lambda_mult: Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance. 
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. :param embedding: Embedding to look up documents similar to. :param k: Number of Documents to return. Defaults to 4. :param fetch_k: Number of Documents to fetch to pass to MMR algorithm. :param lambda_mult: Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.\npersist()\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.sklearn.SKLearnVectorStore.html"} {"id": "97f04aa70c21-2", "text": "similarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores in the range [0, 1].\nsimilarity_search_with_score(query,\u00a0*[,\u00a0k])\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) \u2192 List[str][source]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nkwargs \u2013 vectorstore specific parameters\nReturns\nList of ids from adding the texts into the vectorstore.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.sklearn.SKLearnVectorStore.html"} {"id": "97f04aa70c21-3", "text": "async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6\nasync 
asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.sklearn.SKLearnVectorStore.html"} {"id": "97f04aa70c21-4", "text": "delete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 Optional[bool]\u00b6\nDelete by vector ID or other criteria.\nParameters\nids \u2013 List of ids to delete.\n**kwargs \u2013 Other keyword arguments that subclasses might use.\nReturns\nTrue if deletion is successful,\nFalse otherwise, None if not implemented.\nReturn type\nOptional[bool]\nclassmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, persist_path: Optional[str] = None, **kwargs: Any) \u2192 SKLearnVectorStore[source]\u00b6\nReturn VectorStore initialized from texts and embeddings.\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\n:param query: Text to look up documents similar to.\n:param k: Number of Documents to return. Defaults to 4.\n:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n:param lambda_mult: Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.sklearn.SKLearnVectorStore.html"} {"id": "97f04aa70c21-5", "text": "max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\n:param embedding: Embedding to look up documents similar to.\n:param k: Number of Documents to return. 
Defaults to 4.\n:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n:param lambda_mult: Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\npersist() \u2192 None[source]\u00b6\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to query.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query vector.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.sklearn.SKLearnVectorStore.html"} {"id": "97f04aa70c21-6", "text": "Return docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 to 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)\nsimilarity_search_with_score(query: str, *, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]][source]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.sklearn.SKLearnVectorStore.html"} {"id": "653bf5552684-0", "text": "langchain.vectorstores.faiss.dependable_faiss_import\u00b6\nlangchain.vectorstores.faiss.dependable_faiss_import(no_avx2: Optional[bool] = None) \u2192 Any[source]\u00b6\nImport faiss if available, otherwise raise error.\nIf FAISS_NO_AVX2 environment variable is set, it will be considered\nto load FAISS with no AVX2 optimization.\nParameters\nno_avx2 \u2013 Load FAISS strictly with no AVX2 optimization\nso that the vectorstore is portable and compatible with other devices.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.faiss.dependable_faiss_import.html"} {"id": "c47a2a38673b-0", "text": "langchain.vectorstores.starrocks.has_mul_sub_str\u00b6\nlangchain.vectorstores.starrocks.has_mul_sub_str(s: str, *args: Any) \u2192 bool[source]\u00b6\nCheck if a string has multiple substrings.\n:param s: The string to check\n:param *args: The substrings to check for in the string\nReturns\nTrue if all substrings are present in the string, False otherwise\nReturn type\nbool", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.starrocks.has_mul_sub_str.html"}
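A quick usage sketch for has_mul_sub_str, following its documented contract (True only when every substring passed in *args occurs somewhere in s):

```python
from langchain.vectorstores.starrocks import has_mul_sub_str

# All substrings present -> True
print(has_mul_sub_str("SELECT id, name FROM users", "SELECT", "FROM"))  # True
# One substring missing -> False
print(has_mul_sub_str("SELECT id FROM users", "name"))                  # False
```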
{"id": "454ef2ba590f-0", "text": "langchain.vectorstores.typesense.Typesense\u00b6\nclass langchain.vectorstores.typesense.Typesense(typesense_client: Client, embedding: Embeddings, *, typesense_collection_name: Optional[str] = None, text_key: str = 'text')[source]\u00b6\nBases: VectorStore\nWrapper around Typesense vector search.\nTo use, you should have the typesense python package installed.\nExample\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.vectorstores import Typesense\nimport typesense\nnode = {\n    \"host\": \"localhost\",  # For Typesense Cloud use xxx.a1.typesense.net\n    \"port\": \"8108\",       # For Typesense Cloud use 443\n    \"protocol\": \"http\"    # For Typesense Cloud use https\n}\ntypesense_client = typesense.Client(\n    {\n        \"nodes\": [node],\n        \"api_key\": \"<API_KEY>\",\n        \"connection_timeout_seconds\": 2\n    }\n)\ntypesense_collection_name = \"langchain-memory\"\nembedding = OpenAIEmbeddings()\nvectorstore = Typesense(\n    typesense_client=typesense_client,\n    embedding=embedding,\n    typesense_collection_name=typesense_collection_name,\n    text_key=\"text\",\n)\nInitialize with Typesense client.\nMethods\n__init__(typesense_client,\u00a0embedding,\u00a0*[,\u00a0...])\nInitialize with Typesense client.\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas,\u00a0ids])", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.typesense.Typesense.html"} {"id": "454ef2ba590f-1", "text": "Run more texts through the embedding and add to the vectorstore.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\ndelete([ids])\nDelete by vector ID or other criteria.\nfrom_client_params(embedding,\u00a0*[,\u00a0host,\u00a0...])\nInitialize Typesense directly from client parameters.\nfrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nfrom_texts(texts,\u00a0embedding[,\u00a0metadatas,\u00a0...])\nConstruct Typesense wrapper from raw text.\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k,\u00a0filter])\nReturn typesense documents most similar to query.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.typesense.Typesense.html"} {"id": "454ef2ba590f-2", "text": "similarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores in the range 
[0, 1].\nsimilarity_search_with_score(query[,\u00a0k,\u00a0filter])\nReturn typesense documents most similar to query, along with scores.\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) \u2192 List[str][source]\u00b6\nRun more texts through the embedding and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nids \u2013 Optional list of ids to associate with the texts.\nReturns\nList of ids from adding the texts into the vectorstore.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.typesense.Typesense.html"} {"id": "454ef2ba590f-3", "text": "async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6\nasync asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.typesense.Typesense.html"} {"id": "454ef2ba590f-4", "text": "delete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 Optional[bool]\u00b6\nDelete by vector ID or other criteria.\nParameters\nids \u2013 List of ids to delete.\n**kwargs \u2013 Other keyword arguments that subclasses 
might use.\nReturns\nTrue if deletion is successful,\nFalse otherwise, None if not implemented.\nReturn type\nOptional[bool]\nclassmethod from_client_params(embedding: Embeddings, *, host: str = 'localhost', port: Union[str, int] = '8108', protocol: str = 'http', typesense_api_key: Optional[str] = None, connection_timeout_seconds: int = 2, **kwargs: Any) \u2192 Typesense[source]\u00b6\nInitialize Typesense directly from client parameters.\nExample\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.vectorstores import Typesense\n# Pass in typesense_api_key as kwarg or set env var \"TYPESENSE_API_KEY\".\nvectorstore = Typesense.from_client_params(\n OpenAIEmbeddings(),\n host=\"localhost\",\n port=\"8108\",\n protocol=\"http\",\n typesense_collection_name=\"langchain-memory\",\n)\nclassmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, typesense_client: Optional[Client] = None, typesense_client_params: Optional[dict] = None, typesense_collection_name: Optional[str] = None, text_key: str = 'text', **kwargs: Any) \u2192 Typesense[source]\u00b6\nConstruct Typesense wrapper from raw text.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.typesense.Typesense.html"} {"id": "454ef2ba590f-5", "text": "Construct Typesense wrapper from raw text.\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. 
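A minimal usage sketch for the from_texts constructor documented above, assuming a locally running Typesense server; the sample texts, API key, and collection name are illustrative, and the typesense_client_params keys are assumed to mirror the from_client_params keyword arguments shown above:
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Typesense

texts = ["harrison worked at kensho", "bears like honey"]  # sample data (hypothetical)
vectorstore = Typesense.from_texts(
    texts,
    OpenAIEmbeddings(),
    metadatas=[{"source": "notes"}] * len(texts),
    typesense_client_params={
        "host": "localhost",         # assumes a local Typesense server
        "port": "8108",
        "protocol": "http",
        "typesense_api_key": "xyz",  # placeholder key
    },
    typesense_collection_name="langchain-demo",  # hypothetical collection
)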
Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.typesense.Typesense.html"} {"id": "454ef2ba590f-6", "text": "Return docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 10, filter: Optional[str] = '', **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn typesense documents most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 10.\nA minimum of 10 results will be returned.\nfilter \u2013 Typesense filter_by expression to filter documents on.\nReturns\nList of Documents most similar to the query.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query vector.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 and 1 to\nfilter the resulting set of retrieved docs.\nReturns\nList of Tuples of (doc, similarity_score)\nsimilarity_search_with_score(query: str, k: int = 10, filter: Optional[str] = '') \u2192 List[Tuple[Document, float]][source]\u00b6\nReturn typesense documents most similar to query, along with scores.\nParameters\nquery \u2013 Text to look up documents similar to.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.typesense.Typesense.html"} {"id": "454ef2ba590f-7", "text": "Parameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. 
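A brief sketch of the filter parameter just described, assuming the vectorstore built in the earlier example and a "source" metadata field indexed in the collection (both assumptions); the Typesense filter_by syntax "source:=notes" is used illustratively:
docs = vectorstore.similarity_search(
    "where did harrison work?",
    k=10,  # Typesense returns a minimum of 10 results
    filter="source:=notes",  # assumed metadata field
)
docs_and_scores = vectorstore.similarity_search_with_score("where did harrison work?", k=10)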
Defaults to 10.\nA minimum of 10 results will be returned.\nfilter \u2013 Typesense filter_by expression to filter documents on.\nReturns\nList of Documents most similar to the query, with a score for each.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.typesense.Typesense.html"} {"id": "e89fc2fb67d4-0", "text": "langchain.vectorstores.clickhouse.ClickhouseSettings\u00b6\nclass langchain.vectorstores.clickhouse.ClickhouseSettings(_env_file: Optional[Union[str, PathLike, List[Union[str, PathLike]], Tuple[Union[str, PathLike], ...]]] = '', _env_file_encoding: Optional[str] = None, _env_nested_delimiter: Optional[str] = None, _secrets_dir: Optional[Union[str, PathLike]] = None, *, host: str = 'localhost', port: int = 8123, username: Optional[str] = None, password: Optional[str] = None, index_type: str = 'annoy', index_param: Optional[Union[List, Dict]] = [\"'L2Distance'\", 100], index_query_params: Dict[str, str] = {}, column_map: Dict[str, str] = {'document': 'document', 'embedding': 'embedding', 'id': 'id', 'metadata': 'metadata', 'uuid': 'uuid'}, database: str = 'default', table: str = 'langchain', metric: str = 'angular')[source]\u00b6\nBases: BaseSettings\nClickHouse Client Configuration\nAttribute:\nclickhouse_host (str) : URL to connect to the ClickHouse backend. Defaults to \u2018localhost\u2019.\nclickhouse_port (int) : URL port to connect with HTTP. Defaults to 8123.\nusername (str) : Username to login. Defaults to None.\npassword (str) : Password to login. Defaults to None.\nindex_type (str) : index type string.\nindex_param (list) : index build parameter.\nindex_query_params (dict) : index query parameters.\ndatabase (str) : Database name to find the table. Defaults to \u2018default\u2019.\ntable (str) : Table name to operate on.\nDefaults to \u2018langchain\u2019.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clickhouse.ClickhouseSettings.html"} {"id": "e89fc2fb67d4-1", "text": "table (str) : Table name to operate on.\nDefaults to \u2018langchain\u2019.\nmetric (str) : Metric to compute distance; supported are (\u2018angular\u2019, \u2018euclidean\u2019, \u2018manhattan\u2019, \u2018hamming\u2019,\n\u2018dot\u2019). Defaults to \u2018angular\u2019.\nhttps://github.com/spotify/annoy/blob/main/src/annoymodule.cc#L149-L169\ncolumn_map (Dict) : Column type map to project column names onto langchain semantics. Must have keys: text, id, vector;\nmust be the same size as the number of columns. For example:
.. code-block:: python\n{\u2018id\u2019: \u2018text_id\u2019,\n\u2018uuid\u2019: \u2018global_unique_id\u2019,\n\u2018embedding\u2019: \u2018text_embedding\u2019,\n\u2018document\u2019: \u2018text_plain\u2019,\n\u2018metadata\u2019: \u2018metadata_dictionary_in_json\u2019,\n}\nDefaults to identity map.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam column_map: Dict[str, str] = {'document': 'document', 'embedding': 'embedding', 'id': 'id', 'metadata': 'metadata', 'uuid': 'uuid'}\u00b6\nparam database: str = 'default'\u00b6\nparam host: str = 'localhost'\u00b6\nparam index_param: Optional[Union[List, Dict]] = [\"'L2Distance'\", 100]\u00b6\nparam index_query_params: Dict[str, str] = {}\u00b6\nparam index_type: str = 'annoy'\u00b6\nparam metric: str = 'angular'\u00b6\nparam password: Optional[str] = None\u00b6\nparam port: int = 8123\u00b6\nparam table: str = 'langchain'\u00b6\nparam username: Optional[str] = None\u00b6\nmodel Config[source]\u00b6\nBases: object\nenv_file = '.env'\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clickhouse.ClickhouseSettings.html"} {"id": "e89fc2fb67d4-2", "text": "model Config[source]\u00b6\nBases: object\nenv_file = '.env'\u00b6\nenv_file_encoding = 'utf-8'\u00b6\nenv_prefix = 'clickhouse_'\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clickhouse.ClickhouseSettings.html"} {"id": "61a995388ec7-0", "text": "langchain.vectorstores.cassandra.Cassandra\u00b6\nclass langchain.vectorstores.cassandra.Cassandra(embedding: Embeddings, session: Session, keyspace: str, table_name: str, ttl_seconds: Optional[int] = None)[source]\u00b6\nBases: VectorStore\nWrapper around Apache Cassandra, used as a vector store.\nThere is no notion of a default table name, since each embedding\nfunction implies its own vector dimension, which is part of the schema.\nExample\nfrom langchain.vectorstores import Cassandra\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nsession = ...\nkeyspace = 'my_keyspace'\nvectorstore = Cassandra(embeddings, session, keyspace, 'my_doc_archive')\nMethods\n__init__(embedding,\u00a0session,\u00a0keyspace,\u00a0...)\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas,\u00a0ids,\u00a0...])\nRun more texts through the embeddings and add to the vectorstore.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.cassandra.Cassandra.html"} {"id": "61a995388ec7-1", "text": "asearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to 
query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\nclear()\nEmpty the collection.\ndelete([ids])\nDelete by vector IDs.\ndelete_by_document_id(document_id)\ndelete_collection()\nJust an alias for clear (to better align with other VectorStore implementations).\nfrom_documents(documents,\u00a0embedding[,\u00a0...])\nCreate a Cassandra vectorstore from a document list.\nfrom_texts(texts,\u00a0embedding[,\u00a0metadatas,\u00a0...])\nCreate a Cassandra vectorstore from raw texts.\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. :param query: Text to look up documents similar to. :param k: Number of Documents to return. :param fetch_k: Number of Documents to fetch to pass to MMR algorithm. :param lambda_mult: Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Optional.\nmax_marginal_relevance_search_by_vector(...)", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.cassandra.Cassandra.html"} {"id": "61a995388ec7-2", "text": "max_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. :param embedding: Embedding to look up documents similar to. :param k: Number of Documents to return. :param fetch_k: Number of Documents to fetch to pass to MMR algorithm. 
:param lambda_mult: Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nsimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores in the range [0, 1].\nsimilarity_search_with_score(query[,\u00a0k])\nsimilarity_search_with_score_by_vector(embedding)\nReturn docs most similar to embedding vector.\nsimilarity_search_with_score_id(query[,\u00a0k])\nsimilarity_search_with_score_id_by_vector(...)\nReturn docs most similar to embedding vector.\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6\nRun more texts through the embeddings and add to the vectorstore.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.cassandra.Cassandra.html"} {"id": "61a995388ec7-3", "text": "Run more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, batch_size: int = 16, ttl_seconds: Optional[int] = None, **kwargs: Any) \u2192 List[str][source]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Texts to add to the vectorstore.\nmetadatas (Optional[List[dict]], optional) \u2013 Optional list of metadatas.\nids (Optional[List[str]], optional) \u2013 Optional list of IDs.\nbatch_size (int) \u2013 Number of concurrent requests to send to the server.\nttl_seconds (Optional[int], optional) \u2013 Optional time-to-live\nfor the added texts.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.cassandra.Cassandra.html"} {"id": "61a995388ec7-4", "text": "Return VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 
List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6\nasync asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.\nclear() \u2192 None[source]\u00b6\nEmpty the collection.\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 Optional[bool][source]\u00b6\nDelete by vector IDs.\nParameters\nids \u2013 List of ids to delete.\nReturns\nTrue if deletion is successful,\nFalse otherwise, None if not implemented.\nReturn type\nOptional[bool]", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.cassandra.Cassandra.html"} {"id": "61a995388ec7-5", "text": "False otherwise, None if not implemented.\nReturn type\nOptional[bool]\ndelete_by_document_id(document_id: str) \u2192 None[source]\u00b6\ndelete_collection() \u2192 None[source]\u00b6\nJust an alias for clear\n(to better align with other VectorStore implementations).\nclassmethod from_documents(documents: List[Document], embedding: Embeddings, batch_size: int = 16, **kwargs: Any) \u2192 CVST[source]\u00b6\nCreate a Cassandra vectorstore from a document list.\nNo support for specifying text IDs\nReturns\na Cassandra vectorstore.\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, batch_size: int = 16, **kwargs: Any) \u2192 CVST[source]\u00b6\nCreate a Cassandra vectorstore from raw texts.\nNo support for specifying text IDs\nReturns\na Cassandra vectorstore.\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\n:param query: Text to look up documents similar to.\n:param k: Number of Documents to return.\n:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n:param lambda_mult: Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nOptional.\nReturns\nList of Documents selected by maximal marginal relevance.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.cassandra.Cassandra.html"} {"id": "61a995388ec7-6", "text": "Optional.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\n:param embedding: Embedding to look up documents similar to.\n:param k: Number of Documents to return.\n:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n:param 
lambda_mult: Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nReturns\nList of Documents selected by maximal marginal relevance.\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to query.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query vector.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.cassandra.Cassandra.html"} {"id": "61a995388ec7-7", "text": "0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 to 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)\nsimilarity_search_with_score(query: str, k: int = 4) \u2192 List[Tuple[Document, float]][source]\u00b6\nsimilarity_search_with_score_by_vector(embedding: List[float], k: int = 4) \u2192 List[Tuple[Document, float]][source]\u00b6\nReturn docs most similar to embedding vector.\nNo support for filter query (on metadata) along with vector search.\nParameters\nembedding (str) \u2013 Embedding to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of (Document, score), the most similar to the query vector.\nsimilarity_search_with_score_id(query: str, k: int = 4) \u2192 List[Tuple[Document, float, str]][source]\u00b6\nsimilarity_search_with_score_id_by_vector(embedding: List[float], k: int = 4) \u2192 List[Tuple[Document, float, str]][source]\u00b6\nReturn docs most similar to embedding vector.\nNo support for filter query (on metadata) along with vector search.\nParameters\nembedding (str) \u2013 Embedding to look up documents similar to.\nk (int) \u2013 Number of Documents to return. 
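A minimal usage sketch tying the Cassandra methods above together, assuming a vectorstore constructed as in the class example (the session setup is elided there as well); the texts, metadata, and TTL below are illustrative:
ids = vectorstore.add_texts(
    ["doc one", "doc two"],
    metadatas=[{"tag": "a"}, {"tag": "b"}],
    batch_size=16,     # number of concurrent requests per batch
    ttl_seconds=3600,  # optional time-to-live for the added texts
)
# Diversified retrieval: fetch 20 candidates, return the 4 most diverse.
docs = vectorstore.max_marginal_relevance_search("query text", k=4, fetch_k=20, lambda_mult=0.5)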
Defaults to 4.\nReturns\nList of (Document, score, id), the most similar to the query vector.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.cassandra.Cassandra.html"} {"id": "12cdab8b3f49-0", "text": "langchain.vectorstores.redis.RedisVectorStoreRetriever\u00b6\nclass langchain.vectorstores.redis.RedisVectorStoreRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, vectorstore: Redis, search_type: str = 'similarity', search_kwargs: dict = None, k: int = 4, score_threshold: float = 0.4)[source]\u00b6\nBases: VectorStoreRetriever\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam k: int = 4\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam score_threshold: float = 0.4\u00b6\nparam search_kwargs: dict [Optional]\u00b6\nparam search_type: str = 'similarity'\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam vectorstore: Redis [Required]\u00b6\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str][source]\u00b6\nAdd documents to vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str][source]\u00b6\nAdd documents to vectorstore.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.redis.RedisVectorStoreRetriever.html"} {"id": "12cdab8b3f49-1", "text": "Add documents to vectorstore.\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. 
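A short sketch of how this retriever is typically obtained and called; it assumes a Redis vectorstore already exists (its construction is documented separately), and the threshold value is illustrative:
retriever = vectorstore.as_retriever(
    search_type="similarity_score_threshold",  # one of the allowed search types
    search_kwargs={"score_threshold": 0.6},    # drop weak matches (assumed value)
)
docs = retriever.get_relevant_documents(
    "my query",
    tags=["demo"],                # passed through to callback handlers
    metadata={"run": "example"},  # likewise attached to this call
)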
Defaults to None.\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_search_type\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate search type.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.redis.RedisVectorStoreRetriever.html"} {"id": "12cdab8b3f49-2", "text": "validator validate_search_type\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate search type.\nallowed_search_types: ClassVar[Collection[str]] = ('similarity', 'similarity_score_threshold', 'mmr')\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\ne.g. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\ne.g. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.redis.RedisVectorStoreRetriever.html"} {"id": "2791f048180d-0", "text": "langchain.vectorstores.singlestoredb.SingleStoreDB\u00b6\nclass langchain.vectorstores.singlestoredb.SingleStoreDB(embedding: Embeddings, *, distance_strategy: DistanceStrategy = DistanceStrategy.DOT_PRODUCT, table_name: str = 'embeddings', content_field: str = 'content', metadata_field: str = 'metadata', vector_field: str = 'vector', pool_size: int = 5, max_overflow: int = 10, timeout: float = 30, **kwargs: Any)[source]\u00b6\nBases: VectorStore\nThis class serves as a Pythonic interface to the SingleStore DB database.\nThe prerequisite for using this class is the installation of the singlestoredb\nPython package.\nThe SingleStoreDB vectorstore can be created by providing an embedding function and\nthe relevant parameters for the database connection, connection pool, and\noptionally, the names of the table and the fields to use.\nInitialize with necessary components.\nParameters\nembedding (Embeddings) \u2013 A text embedding model.\ndistance_strategy (DistanceStrategy, optional) \u2013 Determines the strategy employed for calculating\nthe distance between vectors in the embedding space.\nDefaults to DOT_PRODUCT.\nAvailable options are:\n- DOT_PRODUCT: Computes the scalar product of two vectors.\nThis is the default behavior.\n- EUCLIDEAN_DISTANCE: Computes the Euclidean distance between two vectors. 
This metric considers the geometric distance in\nthe vector space, and might be more suitable for embeddings\nthat rely on spatial relationships.\ntable_name (str, optional) \u2013 Specifies the name of the table in use.\nDefaults to \u201cembeddings\u201d.\ncontent_field (str, optional) \u2013 Specifies the field to store the content.\nDefaults to \u201ccontent\u201d.\nmetadata_field (str, optional) \u2013 Specifies the field to store metadata.\nDefaults to \u201cmetadata\u201d.\nvector_field (str, optional) \u2013 Specifies the field to store the vector.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.SingleStoreDB.html"} {"id": "2791f048180d-1", "text": "vector_field (str, optional) \u2013 Specifies the field to store the vector.\nDefaults to \u201cvector\u201d.\nThe following arguments pertain to the connection pool:\npool_size (int, optional) \u2013 Determines the number of active connections in\nthe pool. Defaults to 5.\nmax_overflow (int, optional) \u2013 Determines the maximum number of connections\nallowed beyond the pool_size. Defaults to 10.\ntimeout (float, optional) \u2013 Specifies the maximum wait time in seconds for\nestablishing a connection. Defaults to 30.\nThe following arguments pertain to the database connection:\nhost (str, optional) \u2013 Specifies the hostname, IP address, or URL for the\ndatabase connection. The default scheme is \u201cmysql\u201d.\nuser (str, optional) \u2013 Database username.\npassword (str, optional) \u2013 Database password.\nport (int, optional) \u2013 Database port. Defaults to 3306 for non-HTTP\nconnections, 80 for HTTP connections, and 443 for HTTPS connections.\ndatabase (str, optional) \u2013 Database name.\nAdditional optional arguments provide further customization over the\nconnection:\npure_python (bool, optional) \u2013 Toggles the connector mode. If True,\noperates in pure Python mode.\nlocal_infile (bool, optional) \u2013 Allows local file uploads.\ncharset (str, optional) \u2013 Specifies the character set for string values.\nssl_key (str, optional) \u2013 Specifies the path of the file containing the SSL\nkey.\nssl_cert (str, optional) \u2013 Specifies the path of the file containing the SSL\ncertificate.\nssl_ca (str, optional) \u2013 Specifies the path of the file containing the SSL\ncertificate authority.\nssl_cipher (str, optional) \u2013 Sets the SSL cipher list.\nssl_disabled (bool, optional) \u2013 Disables SSL usage.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.SingleStoreDB.html"} {"id": "2791f048180d-2", "text": "ssl_disabled (bool, optional) \u2013 Disables SSL usage.\nssl_verify_cert (bool, optional) \u2013 Verifies the server\u2019s certificate.\nAutomatically enabled if ssl_ca is specified.\nssl_verify_identity (bool, optional) \u2013 Verifies the server\u2019s identity.\nconv (dict[int, Callable], optional) \u2013 A dictionary of data conversion\nfunctions.\ncredential_type (str, optional) \u2013 Specifies the type of authentication to\nuse: auth.PASSWORD, auth.JWT, or auth.BROWSER_SSO.\nautocommit (bool, optional) \u2013 Enables autocommits.\nresults_type (str, optional) \u2013 Determines the structure of the query results:\ntuples, namedtuples, dicts.\nresults_format (str, optional) \u2013 Deprecated. 
This option has been renamed to\nresults_type.\nExamples\nBasic Usage:\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.vectorstores import SingleStoreDB\nvectorstore = SingleStoreDB(\n OpenAIEmbeddings(),\n host=\"https://user:password@127.0.0.1:3306/database\"\n)\nAdvanced Usage:\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.vectorstores import SingleStoreDB\nvectorstore = SingleStoreDB(\n OpenAIEmbeddings(),\n distance_strategy=DistanceStrategy.EUCLIDEAN_DISTANCE,\n host=\"127.0.0.1\",\n port=3306,\n user=\"user\",\n password=\"password\",\n database=\"db\",\n table_name=\"my_custom_table\",\n pool_size=10,\n timeout=60,\n)\nUsing environment variables:\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.vectorstores import SingleStoreDB", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.SingleStoreDB.html"} {"id": "2791f048180d-3", "text": "from langchain.vectorstores import SingleStoreDB\nimport os\nos.environ['SINGLESTOREDB_URL'] = 'me:p455w0rd@s2-host.com/my_db'\nvectorstore = SingleStoreDB(OpenAIEmbeddings())\nMethods\n__init__(embedding,\u00a0*[,\u00a0distance_strategy,\u00a0...])\nInitialize with necessary components.\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas,\u00a0embeddings])\nAdd more texts to the vectorstore.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\ndelete([ids])\nDelete by vector ID or other criteria.\nfrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.SingleStoreDB.html"} {"id": "2791f048180d-4", "text": "Return VectorStore initialized from documents and embeddings.\nfrom_texts(texts,\u00a0embedding[,\u00a0metadatas,\u00a0...])\nCreate a SingleStoreDB vectorstore from raw documents. This is a user-friendly interface that: 1. Embeds documents. 2. Creates a new table for the embeddings in SingleStoreDB. 3. Adds the documents to the newly created table. This is intended to be a quick way to get started. .. 
rubric:: Example.\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k,\u00a0filter])\nReturns the most similar indexed documents to the query text.\nsimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores in the range [0, 1].\nsimilarity_search_with_score(query[,\u00a0k,\u00a0filter])\nReturn docs most similar to query.\nAttributes\nvector_field\nPass the rest of the kwargs to the connection.\nconnection_kwargs\nAdd program name and version to connection attributes.\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.SingleStoreDB.html"} {"id": "2791f048180d-5", "text": "Run more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, embeddings: Optional[List[List[float]]] = None, **kwargs: Any) \u2192 List[str][source]\u00b6\nAdd more texts to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings/text to add to the vectorstore.\nmetadatas (Optional[List[dict]], optional) \u2013 Optional list of metadatas.\nDefaults to None.\nembeddings (Optional[List[List[float]]], optional) \u2013 Optional pre-generated\nembeddings. 
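A small sketch of the embeddings parameter above: pre-computed vectors can be passed alongside the texts, and a metadata filter can be applied on search (the model call, metadata, and filter values are illustrative assumptions; note that this add_texts is documented to return an empty list):
from langchain.embeddings import OpenAIEmbeddings

texts = ["alpha", "beta"]
vectors = OpenAIEmbeddings().embed_documents(texts)  # any Embeddings implementation works
vectorstore.add_texts(texts, metadatas=[{"idx": 0}, {"idx": 1}], embeddings=vectors)
# Metadata filtering on search, per the filter parameter documented below:
results = vectorstore.similarity_search("alpha", k=1, filter={"idx": 0})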
Defaults to None.\nReturns\nempty list\nReturn type\nList[str]\nasync classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.SingleStoreDB.html"} {"id": "2791f048180d-6", "text": "Return docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 SingleStoreDBRetriever[source]\u00b6\nasync asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 Optional[bool]\u00b6\nDelete by vector ID or other criteria.\nParameters\nids \u2013 List of ids to delete.\n**kwargs \u2013 Other keyword arguments that subclasses might use.\nReturns\nTrue if deletion is successful,\nFalse otherwise, None if not implemented.\nReturn type\nOptional[bool]\nclassmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.SingleStoreDB.html"} {"id": "2791f048180d-7", "text": "Return VectorStore initialized from documents and embeddings.\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, distance_strategy: DistanceStrategy = DistanceStrategy.DOT_PRODUCT, table_name: str = 'embeddings', content_field: str = 'content', metadata_field: str = 'metadata', vector_field: str = 'vector', pool_size: int = 5, max_overflow: int = 10, timeout: float = 30, **kwargs: Any) \u2192 SingleStoreDB[source]\u00b6\nCreate a SingleStoreDB vectorstore from raw documents.\nThis is a user-friendly interface that:\nEmbeds documents.\nCreates a new table for the embeddings in SingleStoreDB.\nAdds the documents to the newly created table.\nThis is intended to be a quick way to get started.\n.. 
rubric:: Example\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.SingleStoreDB.html"} {"id": "2791f048180d-8", "text": "Defaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturns the most similar indexed documents to the query text.\nUses cosine similarity.\nParameters\nquery (str) \u2013 The query text for which to find similar documents.\nk (int) \u2013 The number of documents to return. Default is 4.\nfilter (dict) \u2013 A dictionary of metadata fields and values to filter by.\nReturns\nA list of documents that are most similar to the query text.\nReturn type\nList[Document]\nExamples\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.SingleStoreDB.html"} {"id": "2791f048180d-9", "text": "Return docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query vector.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. 
Should include:\nscore_threshold: Optional, a floating point value between 0 to 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)\nsimilarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None) \u2192 List[Tuple[Document, float]][source]\u00b6\nReturn docs most similar to query. Uses cosine similarity.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfilter \u2013 A dictionary of metadata fields and values to filter by.\nDefaults to None.\nReturns\nList of Documents most similar to the query and score for each\nconnection_kwargs\u00b6\nAdd program name and version to connection attributes.\nvector_field\u00b6\nPass the rest of the kwargs to the connection.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.SingleStoreDB.html"} {"id": "76ceeb95675a-0", "text": "langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch\u00b6\nclass langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch(doc_index: BaseDocIndex, embedding: Embeddings)[source]\u00b6\nBases: DocArrayIndex\nWrapper around HnswLib storage.\nTo use it, you should have the docarray package with version >=0.32.0 installed.\nYou can install it with pip install \u201clangchain[docarray]\u201d.\nInitialize a vector store from DocArray\u2019s DocIndex.\nMethods\n__init__(doc_index,\u00a0embedding)\nInitialize a vector store from DocArray's DocIndex.\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch.html"} {"id": "76ceeb95675a-1", "text": "asimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\ndelete([ids])\nDelete by vector ID or other criteria.\nfrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nfrom_params(embedding,\u00a0work_dir,\u00a0n_dim[,\u00a0...])\nInitialize DocArrayHnswSearch store.\nfrom_texts(texts,\u00a0embedding[,\u00a0metadatas,\u00a0...])\nCreate an DocArrayHnswSearch store and insert data.\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal 
relevance.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nsimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores in the range [0, 1].\nsimilarity_search_with_score(query[,\u00a0k])\nReturn docs most similar to query.\nAttributes\ndoc_cls\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch.html"} {"id": "76ceeb95675a-2", "text": "Run more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nReturns\nList of ids from adding the texts into the vectorstore.\nasync classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch.html"} {"id": "76ceeb95675a-3", "text": "Return docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6\nasync asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync 
asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 Optional[bool]\u00b6\nDelete by vector ID or other criteria.\nParameters\nids \u2013 List of ids to delete.\n**kwargs \u2013 Other keyword arguments that subclasses might use.\nReturns\nTrue if deletion is successful,\nFalse otherwise, None if not implemented.\nReturn type\nOptional[bool]\nclassmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch.html"} {"id": "76ceeb95675a-4", "text": "Return VectorStore initialized from documents and embeddings.\nclassmethod from_params(embedding: Embeddings, work_dir: str, n_dim: int, dist_metric: Literal['cosine', 'ip', 'l2'] = 'cosine', max_elements: int = 1024, index: bool = True, ef_construction: int = 200, ef: int = 10, M: int = 16, allow_replace_deleted: bool = True, num_threads: int = 1, **kwargs: Any) \u2192 DocArrayHnswSearch[source]\u00b6\nInitialize DocArrayHnswSearch store.\nParameters\nembedding (Embeddings) \u2013 Embedding function.\nwork_dir (str) \u2013 path to the location where all the data will be stored.\nn_dim (int) \u2013 dimension of an embedding.\ndist_metric (str) \u2013 Distance metric for DocArrayHnswSearch can be one of:\n\u201ccosine\u201d, \u201cip\u201d, and \u201cl2\u201d. Defaults to \u201ccosine\u201d.\nmax_elements (int) \u2013 Maximum number of vectors that can be stored.\nDefaults to 1024.\nindex (bool) \u2013 Whether an index should be built for this field.\nDefaults to True.\nef_construction (int) \u2013 defines a construction time/accuracy trade-off.\nDefaults to 200.\nef (int) \u2013 parameter controlling query time/accuracy trade-off.\nDefaults to 10.\nM (int) \u2013 parameter that defines the maximum number of outgoing\nconnections in the graph. Defaults to 16.\nallow_replace_deleted (bool) \u2013 Enables replacing of deleted elements\nwith new added ones. Defaults to True.\nnum_threads (int) \u2013 Sets the number of cpu threads to use. 
Defaults to 1.\n**kwargs \u2013 Other keyword arguments to be passed to the get_doc_cls method.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch.html"} {"id": "76ceeb95675a-5", "text": "**kwargs \u2013 Other keyword arguments to be passed to the get_doc_cls method.\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, work_dir: Optional[str] = None, n_dim: Optional[int] = None, **kwargs: Any) \u2192 DocArrayHnswSearch[source]\u00b6\nCreate an DocArrayHnswSearch store and insert data.\nParameters\ntexts (List[str]) \u2013 Text data.\nembedding (Embeddings) \u2013 Embedding function.\nmetadatas (Optional[List[dict]]) \u2013 Metadata for each text if it exists.\nDefaults to None.\nwork_dir (str) \u2013 path to the location where all the data will be stored.\nn_dim (int) \u2013 dimension of an embedding.\n**kwargs \u2013 Other keyword arguments to be passed to the __init__ method.\nReturns\nDocArrayHnswSearch Vector Store\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch.html"} {"id": "76ceeb95675a-6", "text": "Defaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. 
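A minimal construction sketch for the from_texts factory above; the working directory is an assumed path, and n_dim must match the chosen embedding model (1536 fits OpenAI's default embeddings):
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DocArrayHnswSearch

store = DocArrayHnswSearch.from_texts(
    ["hello world", "goodbye world"],
    OpenAIEmbeddings(),
    work_dir="hnswlib_store/",  # on-disk location for the index (assumed path)
    n_dim=1536,                 # embedding dimension
)
docs = store.similarity_search("hello", k=2)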
Defaults to 4.\nReturns\nList of Documents most similar to the query.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query vector.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch.html"} {"id": "76ceeb95675a-7", "text": "Returns\nList of Documents most similar to the query vector.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 to 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)\nsimilarity_search_with_score(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of documents most similar to the query text and\ncosine distance in float for each.\nLower score represents more similarity.\nproperty doc_cls: Type[BaseDoc]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch.html"} {"id": "bf42aae20af3-0", "text": "langchain.vectorstores.marqo.Marqo\u00b6\nclass langchain.vectorstores.marqo.Marqo(client: marqo.Client, index_name: str, add_documents_settings: Optional[Dict[str, Any]] = None, searchable_attributes: Optional[List[str]] = None, page_content_builder: Optional[Callable[[Dict[str, Any]], str]] = None)[source]\u00b6\nBases: VectorStore\nWrapper around Marqo database.\nMarqo indexes have their own models associated with them to generate your\nembeddings. 
This means that you can select from a range of different models
and also use CLIP models to create multimodal indexes
with images and text together.
Marqo also supports more advanced queries with multiple weighted terms; see
https://docs.marqo.ai/latest/#searching-using-weights-in-queries.
This class can flexibly take strings or dictionaries for weighted queries
in its similarity search methods.
To use, you should have the marqo python package installed; you can do this with
pip install marqo.
Example
import marqo
from langchain.vectorstores import Marqo
client = marqo.Client(url=os.environ["MARQO_URL"], ...)
vectorstore = Marqo(client, index_name)
Initialize with Marqo client.
Methods
__init__(client, index_name[, ...])
Initialize with Marqo client.
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas])
Upload texts with metadata (properties) to Marqo.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
bulk_similarity_search(queries[, k])
Search the marqo index for the most similar documents in bulk with multiple queries.
bulk_similarity_search_with_score(queries[, k])
Return documents from Marqo that are similar to the query as well as their scores using a batch of queries.
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents[, embedding])
Return VectorStore initialized from documents.
from_texts(texts[, embedding, metadatas, ...])
Return Marqo initialized from texts.
get_indexes()
Helper to see your available indexes in marqo, useful if the from_texts method was used without an index name specified
get_number_of_documents()
Helper to see the number of documents in the index
marqo_bulk_similarity_search(queries[, k])
Return documents from Marqo using a bulk search, exposes Marqo's output directly
marqo_similarity_search(query[, k])
Return documents from Marqo exposing Marqo's output directly
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k])
Search the marqo index for the most similar documents.
similarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(query[, k])
Return documents from Marqo that are similar to the query as well as their scores.
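For orientation, a minimal end-to-end sketch follows. It is not part of the upstream reference; the URL, index name, texts, and metadata are illustrative, and it assumes a Marqo server is already running.
.. code-block:: python

    import marqo
    from langchain.vectorstores import Marqo

    # Connect to a locally running Marqo server (illustrative URL).
    client = marqo.Client(url="http://localhost:8882")
    vectorstore = Marqo(client, index_name="langchain-demo")

    # Index a text and run a plain-string similarity search.
    vectorstore.add_texts(["hello world"], metadatas=[{"topic": "greeting"}])
    docs = vectorstore.similarity_search("greeting", k=1)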
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]¶
Upload texts with metadata (properties) to Marqo.
You can either have marqo generate ids for each document or you can provide
your own by including a "_id" field in the metadata objects.
Parameters
texts (Iterable[str]) – an iterator of texts, assumed to preserve an order that matches the metadatas.
metadatas (Optional[List[dict]], optional) – a list of metadatas.
Raises
ValueError – if metadatas is provided and the number of metadatas differs
from the number of texts.
Returns
The list of ids that were added.
Return type
List[str]
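A short sketch of the "_id" convention described above; the ids and texts are illustrative, and vectorstore is the instance from the class example.
.. code-block:: python

    # Supply your own ids through the "_id" metadata field.
    ids = vectorstore.add_texts(
        texts=["first note", "second note"],
        metadatas=[{"_id": "note-1"}, {"_id": "note-2"}],
    )
    # ids should echo the supplied "_id" values.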
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
bulk_similarity_search(queries: Iterable[Union[str, Dict[str, float]]], k: int = 4, **kwargs: Any) → List[List[Document]][source]¶
Search the marqo index for the most similar documents in bulk with multiple
queries.
Parameters
queries (Iterable[Union[str, Dict[str, float]]]) – An iterable of queries to execute in bulk; queries in the list can be strings or dictionaries of weighted queries.
k (int, optional) – The number of documents to return for each query. Defaults to 4.
Returns
A list of results for each query.
Return type
List[List[Document]]
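A small sketch of mixing plain and weighted queries in a bulk search; the weights and texts are illustrative.
.. code-block:: python

    # One plain string and one dictionary of weighted terms.
    results = vectorstore.bulk_similarity_search(
        queries=["a plain query", {"dogs": 1.0, "cats": -0.5}],
        k=2,
    )
    # results is a List[List[Document]], one inner list per query.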
bulk_similarity_search_with_score(queries: Iterable[Union[str, Dict[str, float]]], k: int = 4, **kwargs: Any) → List[List[Tuple[Document, float]]][source]¶
Return documents from Marqo that are similar to the query as well as
their scores using a batch of queries.
Parameters
queries (Iterable[Union[str, Dict[str, float]]]) – An iterable of queries to execute in bulk; queries in the list can be strings or dictionaries of weighted queries.
k (int, optional) – The number of documents to return. Defaults to 4.
Returns
A list of lists of the matching
documents and their scores for each query
Return type
List[List[Tuple[Document, float]]]
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful,
False otherwise, None if not implemented.
Return type
Optional[bool]
classmethod from_documents(documents: List[Document], embedding: Optional[Embeddings] = None, **kwargs: Any) → Marqo[source]¶
Return VectorStore initialized from documents. Note that Marqo does not
need embeddings; we retain the parameter to adhere to the Liskov substitution
principle.
Parameters
documents (List[Document]) – Input documents
embedding (Any, optional) – Embeddings (not required). Defaults to None.
Returns
A Marqo vectorstore
Return type
VectorStore
classmethod from_texts(texts: List[str], embedding: Any = None, metadatas: Optional[List[dict]] = None, index_name: str = '', url: str = 'http://localhost:8882', api_key: str = '', add_documents_settings: Optional[Dict[str, Any]] = {}, searchable_attributes: Optional[List[str]] = None, page_content_builder: Optional[Callable[[Dict[str, str]], str]] = None, index_settings: Optional[Dict[str, Any]] = {}, verbose: bool = True, **kwargs: Any) → Marqo[source]¶
Return Marqo initialized from texts. Note that Marqo does not need
embeddings; we retain the parameter to adhere to the Liskov
substitution principle.
This is a quick way to get started with marqo - simply provide your texts and
metadatas and this will create an instance of the data store and index the
provided data.
To know the ids of your documents with this approach you will need to include
them under the key "_id" in your metadatas for each text
Example:
.. code-block:: python

    from langchain.vectorstores import Marqo

    datastore = Marqo.from_texts(
        texts=["text"],
        index_name="my-first-index",
        url="http://localhost:8882",
    )
Parameters
texts (List[str]) – A list of texts to index into marqo upon creation.
embedding (Any, optional) – Embeddings (not required). Defaults to None.
index_name (str, optional) – The name of the index to use; if none is specified, one is created for you. Defaults to None.
url (str, optional) – The URL for Marqo. Defaults to "http://localhost:8882".
api_key (str, optional) – The API key for Marqo. Defaults to "".
metadatas (Optional[List[dict]], optional) – A list of metadatas, to accompany the texts. Defaults to None.
(This setting is only used when a new index is being created. Defaults to "cpu"; can be "cpu" or "cuda".)
add_documents_settings (Optional[Dict[str, Any]], optional) – Settings for adding documents; see https://docs.marqo.ai/0.0.16/API-Reference/documents/#query-parameters. Defaults to {}.
index_settings (Optional[Dict[str, Any]], optional) – Index settings if the index doesn't exist; see https://docs.marqo.ai/0.0.16/API-Reference/indexes/#index-defaults-object. Defaults to {}.
Returns
An instance of the Marqo vector store
Return type
Marqo
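A sketch of the "_id" bookkeeping mentioned above when using from_texts; the index name, URL, ids, and texts are illustrative, and a running Marqo server is assumed.
.. code-block:: python

    from langchain.vectorstores import Marqo

    # Include "_id" in each metadata dict so the document ids are known upfront.
    datastore = Marqo.from_texts(
        texts=["first article", "second article"],
        metadatas=[{"_id": "article-1"}, {"_id": "article-2"}],
        index_name="my-first-index",
        url="http://localhost:8882",
    )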
get_indexes() → List[Dict[str, str]][source]¶
Helper to see your available indexes in marqo, useful if the
from_texts method was used without an index name specified
Returns
The list of indexes
Return type
List[Dict[str, str]]
get_number_of_documents() → int[source]¶
Helper to see the number of documents in the index
Returns
The number of documents
Return type
int
marqo_bulk_similarity_search(queries: Iterable[Union[str, Dict[str, float]]], k: int = 4) → Dict[str, List[Dict[str, List[Dict[str, str]]]]][source]¶
Return documents from Marqo using a bulk search, exposes Marqo's
output directly
Parameters
queries (Iterable[Union[str, Dict[str, float]]]) – A list of queries.
k (int, optional) – The number of documents to return for each query. Defaults to 4.
Returns
A bulk search results object
Return type
Dict[str, Dict[List[Dict[str, Dict[str, Any]]]]]
marqo_similarity_search(query: Union[str, Dict[str, float]], k: int = 4) → Dict[str, List[Dict[str, str]]][source]¶
Return documents from Marqo exposing Marqo's output directly
Parameters
query (str) – The query to search with.
k (int, optional) – The number of documents to return. Defaults to 4.
Returns
The hits from marqo.
Return type
List[Dict[str, Any]]
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
similarity_search(query: Union[str, Dict[str, float]], k: int = 4, **kwargs: Any) → List[Document][source]¶
Search the marqo index for the most similar documents.
Parameters
query (Union[str, Dict[str, float]]) – The query for the search, either as a string or a weighted query.
k (int, optional) – The number of documents to return. Defaults to 4.
Returns
k documents ordered from best to worst match.
Return type
List[Document]
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 to 1 to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
similarity_search_with_score(query: Union[str, Dict[str, float]], k: int = 4) → List[Tuple[Document, float]][source]¶
Return documents from Marqo that are similar to the query as well
as their scores.
Parameters
query (Union[str, Dict[str, float]]) – The query to search with, either as a string or a weighted query.
k (int, optional) – The number of documents to return. Defaults to 4.
Returns
The matching documents and their scores,
ordered by descending score.
Return type
List[Tuple[Document, float]]
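To close out the Marqo reference, a scoring sketch with a weighted query; the weights are illustrative, and vectorstore is the instance from the class example.
.. code-block:: python

    # Dictionaries express weighted terms; negative weights push topics away.
    scored = vectorstore.similarity_search_with_score(
        {"machine learning": 1.0, "deprecated": -0.3},
        k=3,
    )
    for doc, score in scored:
        print(score, doc.page_content[:40])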
langchain.vectorstores.faiss.FAISS¶
class langchain.vectorstores.faiss.FAISS(embedding_function: ~typing.Callable, index: ~typing.Any, docstore: ~langchain.docstore.base.Docstore, index_to_docstore_id: ~typing.Dict[int, str], relevance_score_fn: ~typing.Callable[[float], float] = <function _default_relevance_score_fn>, normalize_L2: bool = False)[source]¶
Bases: VectorStore
Wrapper around FAISS vector database.
To use, you should have the faiss python package installed.
Example
from langchain import FAISS
faiss = FAISS(embedding_function, index, docstore, index_to_docstore_id)
Initialize with necessary components.
Methods
__init__(embedding_function, index, ...[, ...])
Initialize with necessary components.
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_embeddings(text_embeddings[, metadatas, ids])
Run more texts through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas, ids])
Run more texts through the embeddings and add to the vectorstore.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_embeddings(text_embeddings, embedding)
Construct FAISS wrapper from raw documents.
from_texts(texts, embedding[, metadatas, ids])
Construct FAISS wrapper from raw documents.
load_local(folder_path, embeddings[, index_name])
Load FAISS index, docstore, and index_to_docstore_id from disk.
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_with_score_by_vector(...)
Return docs and their similarity scores selected using the maximal marginal relevance.
merge_from(target)
Merge another FAISS object with the current one.
save_local(folder_path[, index_name])
Save FAISS index, docstore, and index_to_docstore_id to disk.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k, filter, fetch_k])
Return docs most similar to query.
similarity_search_by_vector(embedding[, k, ...])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(query[, k, ...])
Return docs most similar to query.
similarity_search_with_score_by_vector(embedding)
Return docs most similar to query.
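A minimal round-trip sketch, not from the upstream reference; it assumes an OpenAI API key is configured and that texts is a list of strings.
.. code-block:: python

    from langchain import FAISS
    from langchain.embeddings import OpenAIEmbeddings

    embeddings = OpenAIEmbeddings()
    faiss = FAISS.from_texts(texts, embeddings)

    # Persist and restore the index, docstore, and id mapping.
    faiss.save_local("faiss_index")
    restored = FAISS.load_local("faiss_index", embeddings)
    docs = restored.similarity_search("query", k=4)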
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_embeddings(text_embeddings: Iterable[Tuple[str, List[float]]], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]¶
Run more texts through the embeddings and add to the vectorstore.
Parameters
text_embeddings – Iterable pairs of string and embedding to
add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
ids – Optional list of unique IDs.
Returns
List of ids from adding the texts into the vectorstore.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]¶
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
ids – Optional list of unique IDs.
Returns
List of ids from adding the texts into the vectorstore.
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], 
k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6\nasync asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.faiss.FAISS.html"} {"id": "560fd5288baf-4", "text": "Return docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 Optional[bool]\u00b6\nDelete by vector ID or other criteria.\nParameters\nids \u2013 List of ids to delete.\n**kwargs \u2013 Other keyword arguments that subclasses might use.\nReturns\nTrue if deletion is successful,\nFalse otherwise, None if not implemented.\nReturn type\nOptional[bool]\nclassmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nclassmethod from_embeddings(text_embeddings: List[Tuple[str, List[float]]], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) \u2192 FAISS[source]\u00b6\nConstruct FAISS wrapper from raw documents.\nThis is a user friendly interface that:\nEmbeds documents.\nCreates an in memory docstore\nInitializes the FAISS database\nThis is intended to be a quick way to get started.\nExample\nfrom langchain import FAISS\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\ntext_embeddings = embeddings.embed_documents(texts)", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.faiss.FAISS.html"} {"id": "560fd5288baf-5", "text": "embeddings = OpenAIEmbeddings()\ntext_embeddings = embeddings.embed_documents(texts)\ntext_embedding_pairs = list(zip(texts, text_embeddings))\nfaiss = FAISS.from_embeddings(text_embedding_pairs, embeddings)\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) \u2192 FAISS[source]\u00b6\nConstruct FAISS wrapper from raw documents.\nThis is a user friendly interface that:\nEmbeds documents.\nCreates an in memory docstore\nInitializes the FAISS database\nThis is intended to be a quick way to get started.\nExample\nfrom langchain import FAISS\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nfaiss = FAISS.from_texts(texts, embeddings)\nclassmethod load_local(folder_path: str, embeddings: Embeddings, index_name: str = 'index', **kwargs: Any) \u2192 FAISS[source]\u00b6\nLoad FAISS index, docstore, and index_to_docstore_id from disk.\nParameters\nfolder_path \u2013 folder path to load index, docstore,\nand index_to_docstore_id from.\nembeddings \u2013 Embeddings to use when generating queries\nindex_name \u2013 for saving with a specific index file 
name
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, filter: Optional[Dict[str, Any]] = None, **kwargs: Any) → List[Document][source]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch before filtering (if needed) to
pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, filter: Optional[Dict[str, Any]] = None, **kwargs: Any) → List[Document][source]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch before filtering to
pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_with_score_by_vector(embedding: List[float], *, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, filter: Optional[Dict[str, Any]] = None) → List[Tuple[Document, float]][source]¶
Return docs and their similarity scores selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. 
Defaults to 4.
fetch_k – Number of Documents to fetch before filtering to
pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents and similarity scores selected by maximal marginal relevance and score for each.
merge_from(target: FAISS) → None[source]¶
Merge another FAISS object with the current one.
Add the target FAISS to the current one.
Parameters
target – FAISS object you wish to merge into the current one
Returns
None.
save_local(folder_path: str, index_name: str = 'index') → None[source]¶
Save FAISS index, docstore, and index_to_docstore_id to disk.
Parameters
folder_path – folder path to save index, docstore,
and index_to_docstore_id to.
index_name – for saving with a specific index file name
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, filter: Optional[Dict[str, Any]] = None, fetch_k: int = 20, **kwargs: Any) → List[Document][source]¶
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter – (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
fetch_k – (Optional[int]) Number of Documents to fetch before filtering.
Defaults to 20.
Returns
List of Documents most similar to the query.
similarity_search_by_vector(embedding: List[float], k: int = 4, filter: Optional[Dict[str, Any]] = None, fetch_k: int = 20, **kwargs: Any) → List[Document][source]¶
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
fetch_k – (Optional[int]) Number of Documents to fetch before filtering.
Defaults to 20.
Returns
List of Documents most similar to the embedding.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 to 1 to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
similarity_search_with_score(query: str, k: int = 4, filter: Optional[Dict[str, Any]] = None, fetch_k: int = 20, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. 
Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nfetch_k \u2013 (Optional[int]) Number of Documents to fetch before filtering.\nDefaults to 20.\nReturns\nList of documents most similar to the query text with\nL2 distance in float. Lower score represents more similarity.\nsimilarity_search_with_score_by_vector(embedding: List[float], k: int = 4, filter: Optional[Dict[str, Any]] = None, fetch_k: int = 20, **kwargs: Any) \u2192 List[Tuple[Document, float]][source]\u00b6\nReturn docs most similar to query.\nParameters\nembedding \u2013 Embedding vector to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfilter (Optional[Dict[str, Any]]) \u2013 Filter by metadata. Defaults to None.\nfetch_k \u2013 (Optional[int]) Number of Documents to fetch before filtering.\nDefaults to 20.\n**kwargs \u2013 kwargs to be passed to similarity search. Can include:\nscore_threshold: Optional, a floating point value between 0 to 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of documents most similar to the query text and L2 distance\nin float for each. Lower score represents more similarity.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.faiss.FAISS.html"} {"id": "93a84c6275a3-0", "text": "langchain.vectorstores.vectara.Vectara\u00b6\nclass langchain.vectorstores.vectara.Vectara(vectara_customer_id: Optional[str] = None, vectara_corpus_id: Optional[str] = None, vectara_api_key: Optional[str] = None)[source]\u00b6\nBases: VectorStore\nImplementation of Vector Store using Vectara.\nSee (https://vectara.com).\nExample\nfrom langchain.vectorstores import Vectara\nvectorstore = Vectara(\n vectara_customer_id=vectara_customer_id,\n vectara_corpus_id=vectara_corpus_id,\n vectara_api_key=vectara_api_key\n)\nInitialize with Vectara API.\nMethods\n__init__([vectara_customer_id,\u00a0...])\nInitialize with Vectara API.\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_files(files_list[,\u00a0metadatas])\nVectara provides a way to add documents directly via our API where pre-processing and chunking occurs internally in an optimal way This method provides a way to use that API in LangChain\nadd_texts(texts[,\u00a0metadatas,\u00a0doc_metadata])\nRun more texts through the embeddings and add to the vectorstore.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.vectara.Vectara.html"} {"id": "93a84c6275a3-1", "text": "Return docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to 
query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_files(files[, embedding, metadatas])
Construct Vectara wrapper from raw documents.
from_texts(texts[, embedding, metadatas])
Construct Vectara wrapper from raw documents.
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k, lambda_val, ...])
Return Vectara documents most similar to query, along with scores.
similarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(query[, k, ...])
Return Vectara documents most similar to query, along with scores.
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_files(files_list: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]¶
Vectara provides a way to add documents directly via our API, where
pre-processing and chunking occur internally in an optimal way.
This method provides a way to use that API in LangChain.
Parameters
files_list – Iterable of strings, each representing a local file path.
Files could be text, HTML, PDF, markdown, doc/docx, ppt/pptx, etc.;
see the API docs for the full list.
metadatas – Optional list of metadatas associated with each file
Returns
List of ids associated with each of the files indexed
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, doc_metadata: Optional[dict] = None, **kwargs: Any) → List[str][source]¶
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
doc_metadata – optional metadata for the document
This function indexes all the input text strings in the Vectara corpus as a
single Vectara document, where each input text is considered a "part" and the
metadata are associated with each part.
If 'doc_metadata' is provided, it is associated with the Vectara document.
Returns
List of ids from adding the texts into the vectorstore.
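A short indexing-and-search sketch; the credentials are placeholders, and k and n_sentence_context mirror the defaults documented for similarity_search below.
.. code-block:: python

    from langchain.vectorstores import Vectara

    vectara = Vectara(
        vectara_customer_id=customer_id,
        vectara_corpus_id=corpus_id,
        vectara_api_key=api_key,
    )
    # Index two parts of one Vectara document, with document-level metadata.
    vectara.add_texts(["some text", "more text"], doc_metadata={"source": "notes"})
    docs = vectara.similarity_search("some text", k=5, n_sentence_context=0)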
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectaraRetriever[source]¶
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful,
False otherwise, None if not implemented.
Return type
Optional[bool]
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
classmethod from_files(files: List[str], embedding: Optional[Embeddings] = None, metadatas: Optional[List[dict]] = None, **kwargs: Any) → Vectara[source]¶
Construct Vectara wrapper from raw documents.
This is intended to be a quick way to get started.
.. rubric:: Example
from langchain import Vectara
vectara = Vectara.from_files(
    files_list,
    vectara_customer_id=customer_id,
    vectara_corpus_id=corpus_id,
    vectara_api_key=api_key,
)
classmethod from_texts(texts: List[str], embedding: Optional[Embeddings] = None, metadatas: Optional[List[dict]] = None, **kwargs: Any) → Vectara[source]¶
Construct Vectara wrapper from raw documents.
This is intended to be a quick way to get started.
.. 
rubric:: Example\nfrom langchain import Vectara\nvectara = Vectara.from_texts(\n texts,\n vectara_customer_id=customer_id,\n vectara_corpus_id=corpus_id,\n vectara_api_key=api_key,\n)\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.vectara.Vectara.html"} {"id": "93a84c6275a3-6", "text": "Maximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 5, lambda_val: float = 0.025, filter: Optional[str] = None, n_sentence_context: int = 0, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn Vectara documents most similar to query, along with scores.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 5.\nfilter \u2013 Dictionary of argument(s) to filter on metadata. For example a\nfilter can be \u201cdoc.rating > 3.0 and part.lang = \u2018deu\u2019\u201d} see\nhttps://docs.vectara.com/docs/search-apis/sql/filter-overview for more\ndetails.\nn_sentence_context \u2013 number of sentences before/after the matching segment\nto add\nReturns\nList of Documents most similar to the query\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.vectara.Vectara.html"} {"id": "93a84c6275a3-7", "text": "Parameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. 
Defaults to 4.\nReturns\nList of Documents most similar to the query vector.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 to 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)\nsimilarity_search_with_score(query: str, k: int = 5, lambda_val: float = 0.025, filter: Optional[str] = None, n_sentence_context: int = 0, **kwargs: Any) \u2192 List[Tuple[Document, float]][source]\u00b6\nReturn Vectara documents most similar to query, along with scores.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 5.\nlambda_val \u2013 lexical match parameter for hybrid search.\nfilter \u2013 Dictionary of argument(s) to filter on metadata. For example a\nfilter can be \u201cdoc.rating > 3.0 and part.lang = \u2018deu\u2019\u201d} see\nhttps://docs.vectara.com/docs/search-apis/sql/filter-overview\nfor more details.\nn_sentence_context \u2013 number of sentences before/after the matching segment\nto add\nReturns\nList of Documents most similar to the query and score for each.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.vectara.Vectara.html"} {"id": "7e82bdbce344-0", "text": "langchain.vectorstores.starrocks.get_named_result\u00b6\nlangchain.vectorstores.starrocks.get_named_result(connection: Any, query: str) \u2192 List[dict[str, Any]][source]\u00b6\nGet a named result from a query.\n:param connection: The connection to the database\n:param query: The query to execute\nReturns\nThe result of the query\nReturn type\nList[dict[str, Any]]", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.starrocks.get_named_result.html"} {"id": "71cb0055fd40-0", "text": "langchain.vectorstores.pgvector.PGVector\u00b6\nclass langchain.vectorstores.pgvector.PGVector(connection_string: str, embedding_function: Embeddings, collection_name: str = 'langchain', collection_metadata: Optional[dict] = None, distance_strategy: DistanceStrategy = DistanceStrategy.COSINE, pre_delete_collection: bool = False, logger: Optional[Logger] = None)[source]\u00b6\nBases: VectorStore\nVectorStore implementation using Postgres and pgvector.\nTo use, you should have the pgvector python package installed.\nParameters\nconnection_string \u2013 Postgres connection string.\nembedding_function \u2013 Any embedding function implementing\nlangchain.embeddings.base.Embeddings interface.\ncollection_name \u2013 The name of the collection to use. (default: langchain)\nNOTE: This is not the name of the table, but the name of the collection.\nThe tables will be created when initializing the store (if not exists)\nSo, make sure the user has the right permissions to create tables.\ndistance_strategy \u2013 The distance strategy to use. (default: COSINE)\npre_delete_collection \u2013 If True, will delete the collection if it exists.\n(default: False). 
Useful for testing.
Example
from langchain.vectorstores import PGVector
from langchain.embeddings.openai import OpenAIEmbeddings
CONNECTION_STRING = "postgresql+psycopg2://hwc@localhost:5432/test3"
COLLECTION_NAME = "state_of_the_union_test"
embeddings = OpenAIEmbeddings()
vectorstore = PGVector.from_documents(
    embedding=embeddings,
    documents=docs,
    collection_name=COLLECTION_NAME,
    connection_string=CONNECTION_STRING,
)
Methods
__init__(connection_string, embedding_function)
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_embeddings(texts, embeddings[, ...])
Add embeddings to the vectorstore.
add_texts(texts[, metadatas, ids])
Run more texts through the embeddings and add to the vectorstore.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
connect()
connection_string_from_db_params(driver, ...)
Return connection string from database parameters.
create_collection()
create_tables_if_not_exists()
create_vector_extension()
delete([ids])
Delete by vector ID or other criteria.
delete_collection()
drop_tables()
from_documents(documents, embedding[, ...])
Return VectorStore initialized from documents and embeddings.
from_embeddings(text_embeddings, embedding)
Construct PGVector wrapper from raw documents and pre-generated embeddings.
from_existing_index(embedding[, ...])
Get instance of an existing PGVector store. This method will return the instance of the store without inserting any new embeddings
from_texts(texts, embedding[, metadatas, ...])
Return VectorStore initialized from texts and embeddings.
get_collection(session)
get_connection_string(kwargs)
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k, filter])
Run similarity search with PGVector with distance.
similarity_search_by_vector(embedding[, k, ...])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(query[, k, filter])
Return docs most similar to query.
similarity_search_with_score_by_vector(embedding)
Attributes
distance_strategy
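The connection string in the example can also be assembled from parts; a sketch using connection_string_from_db_params, with credentials mirroring the illustrative URL above.
.. code-block:: python

    CONNECTION_STRING = PGVector.connection_string_from_db_params(
        driver="psycopg2",
        host="localhost",
        port=5432,
        database="test3",
        user="hwc",
        password="",  # illustrative; the example URL carries no password
    )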
similar to query using specified search type.\nsimilarity_search(query[,\u00a0k,\u00a0filter])\nRun similarity search with PGVector with distance.\nsimilarity_search_by_vector(embedding[,\u00a0k,\u00a0...])\nReturn docs most similar to embedding vector.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores in the range [0, 1].\nsimilarity_search_with_score(query[,\u00a0k,\u00a0filter])\nReturn docs most similar to query.\nsimilarity_search_with_score_by_vector(embedding)\nAttributes\ndistance_strategy\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgvector.PGVector.html"} {"id": "71cb0055fd40-3", "text": "Run more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_embeddings(texts: Iterable[str], embeddings: List[List[float]], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) \u2192 List[str][source]\u00b6\nAdd embeddings to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nembeddings \u2013 List of list of embedding vectors.\nmetadatas \u2013 List of metadatas associated with the texts.\nkwargs \u2013 vectorstore specific parameters\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) \u2192 List[str][source]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nkwargs \u2013 vectorstore specific parameters\nReturns\nList of ids from adding the texts into the vectorstore.\nasync classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgvector.PGVector.html"} {"id": "71cb0055fd40-4", "text": "Return VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6
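A minimal usage sketch for the as_retriever entry above, which returns a VectorStoreRetriever wired to this store. Here `store` stands for any initialized PGVector instance (see the Example earlier on this page), and the search_kwargs value is an illustrative assumption, not a documented default.

# Assumes `store` is an initialized PGVector instance.
retriever = store.as_retriever(search_kwargs={"k": 4})  # return the top-4 documents per query
docs = retriever.get_relevant_documents("What did the president say about the economy?")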
async asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.\nconnect() \u2192 Connection[source]\u00b6\nclassmethod connection_string_from_db_params(driver: str, host: str, port: int, database: str, user: str, password: str) \u2192 str[source]\u00b6\nReturn connection string from database parameters.\ncreate_collection() \u2192 None[source]\u00b6\ncreate_tables_if_not_exists() \u2192 None[source]\u00b6\ncreate_vector_extension() \u2192 None[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgvector.PGVector.html"} {"id": "71cb0055fd40-5", "text": "create_vector_extension() \u2192 None[source]\u00b6\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 Optional[bool]\u00b6\nDelete by vector ID or other criteria.\nParameters\nids \u2013 List of ids to delete.\n**kwargs \u2013 Other keyword arguments that subclasses might use.\nReturns\nTrue if deletion is successful,\nFalse otherwise, None if not implemented.\nReturn type\nOptional[bool]\ndelete_collection() \u2192 None[source]\u00b6\ndrop_tables() \u2192 None[source]\u00b6\nclassmethod from_documents(documents: List[Document], embedding: Embeddings, collection_name: str = 'langchain', distance_strategy: DistanceStrategy = DistanceStrategy.COSINE, ids: Optional[List[str]] = None, pre_delete_collection: bool = False, **kwargs: Any) \u2192 PGVector[source]\u00b6\nReturn VectorStore initialized from documents and embeddings.\nPostgres connection string is required.\nEither pass it as a parameter\nor set the PGVECTOR_CONNECTION_STRING environment variable.\nclassmethod from_embeddings(text_embeddings: List[Tuple[str, List[float]]], embedding: Embeddings, metadatas: Optional[List[dict]] = None, collection_name: str = 'langchain', distance_strategy: DistanceStrategy = DistanceStrategy.COSINE, ids: Optional[List[str]] = None, pre_delete_collection: bool = False, **kwargs: Any) \u2192 PGVector[source]\u00b6\nConstruct PGVector wrapper from raw documents and pre-generated embeddings.\nReturn VectorStore initialized from documents and embeddings.\nPostgres connection string is required.\nEither pass it as a parameter\nor set the PGVECTOR_CONNECTION_STRING environment variable.\nExample\nfrom langchain import PGVector\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\ntext_embeddings = embeddings.embed_documents(texts)\ntext_embedding_pairs = list(zip(texts, text_embeddings))", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgvector.PGVector.html"} {"id": "71cb0055fd40-6", "text": "text_embedding_pairs = list(zip(texts, text_embeddings))\npgvector = PGVector.from_embeddings(text_embedding_pairs, embeddings)\nclassmethod from_existing_index(embedding: Embeddings, collection_name: str = 'langchain', distance_strategy: DistanceStrategy = DistanceStrategy.COSINE, pre_delete_collection: bool = False, **kwargs: Any) \u2192 PGVector[source]\u00b6\nGet instance of an existing PGVector store. This method will\nreturn the instance of the store without inserting any new\nembeddings.
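A short sketch of reconnecting to a previously created collection via from_existing_index. The connection string and collection name below are illustrative assumptions; as noted above, the connection string may instead come from the PGVECTOR_CONNECTION_STRING environment variable.

from langchain.vectorstores.pgvector import PGVector
from langchain.embeddings.openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
# Reattach to an existing collection; no new embeddings are inserted.
store = PGVector.from_existing_index(
    embedding=embeddings,
    collection_name="state_of_the_union_test",  # hypothetical collection
    connection_string="postgresql+psycopg2://user:password@localhost:5432/testdb",  # hypothetical
)
docs = store.similarity_search("What did the president say?", k=4)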
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, collection_name: str = 'langchain', distance_strategy: DistanceStrategy = DistanceStrategy.COSINE, ids: Optional[List[str]] = None, pre_delete_collection: bool = False, **kwargs: Any) \u2192 PGVector[source]\u00b6\nReturn VectorStore initialized from texts and embeddings.\nPostgres connection string is required.\nEither pass it as a parameter\nor set the PGVECTOR_CONNECTION_STRING environment variable.\nget_collection(session: Session) \u2192 Optional[CollectionStore][source]\u00b6\nclassmethod get_connection_string(kwargs: Dict[str, Any]) \u2192 str[source]\u00b6\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgvector.PGVector.html"}
Defaults to None.\nReturns\nList of Documents most similar to the query.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgvector.PGVector.html"} {"id": "71cb0055fd40-8", "text": "Returns\nList of Documents most similar to the query.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, filter: Optional[dict] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nReturns\nList of Documents most similar to the query vector.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 to 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)\nsimilarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None) \u2192 List[Tuple[Document, float]][source]\u00b6\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nReturns\nList of Documents most similar to the query and score for each", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgvector.PGVector.html"} {"id": "71cb0055fd40-9", "text": "Returns\nList of Documents most similar to the query and score for each\nsimilarity_search_with_score_by_vector(embedding: List[float], k: int = 4, filter: Optional[dict] = None) \u2192 List[Tuple[Document, float]][source]\u00b6\nproperty distance_strategy: Any\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgvector.PGVector.html"} {"id": "b07fc884a05d-0", "text": "langchain.vectorstores.sklearn.JsonSerializer\u00b6\nclass langchain.vectorstores.sklearn.JsonSerializer(persist_path: str)[source]\u00b6\nBases: BaseSerializer\nSerializes data in JSON using the json package from the Python standard library.\nMethods\n__init__(persist_path)\nextension()\nThe file extension suggested by this serializer (without dot).\nload()\nLoads the data from the persist_path\nsave(data)\nSaves the data to the persist_path\nclassmethod extension() \u2192 str[source]\u00b6\nThe file extension suggested by this serializer (without dot).\nload() \u2192 Any[source]\u00b6\nLoads the data from the persist_path\nsave(data: Any) \u2192 None[source]\u00b6\nSaves the data to the persist_path", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.sklearn.JsonSerializer.html"}
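A small usage sketch for JsonSerializer. The persist path and payload below are illustrative assumptions; in practice the class is used by SKLearnVectorStore to persist its index rather than called directly.

from langchain.vectorstores.sklearn import JsonSerializer

serializer = JsonSerializer("/tmp/sklearn_vectorstore.json")  # hypothetical path
serializer.save({"texts": ["hello"], "embeddings": [[0.1, 0.2]]})  # any JSON-serializable payload
data = serializer.load()  # round-trips the saved payload
assert JsonSerializer.extension() == "json"  # suggested file extension, without the dot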
{"id": "f86798b6dcd9-0", "text": "langchain.vectorstores.qdrant.Qdrant\u00b6\nclass langchain.vectorstores.qdrant.Qdrant(client: Any, collection_name: str, embeddings: Optional[Embeddings] = None, content_payload_key: str = 'page_content', metadata_payload_key: str = 'metadata', vector_name: Optional[str] = None, embedding_function: Optional[Callable] = None)[source]\u00b6\nBases: VectorStore\nWrapper around Qdrant vector database.\nTo use, you should have the qdrant-client package installed.\nExample\nfrom qdrant_client import QdrantClient\nfrom langchain import Qdrant\nfrom langchain.embeddings import OpenAIEmbeddings\nclient = QdrantClient()\ncollection_name = \"MyCollection\"\nembeddings = OpenAIEmbeddings()\nqdrant = Qdrant(client, collection_name, embeddings)\nInitialize with necessary components.\nMethods\n__init__(client,\u00a0collection_name[,\u00a0...])\nInitialize with necessary components.\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas,\u00a0ids,\u00a0batch_size])\nRun more texts through the embeddings and add to the vectorstore.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html"} {"id": "f86798b6dcd9-1", "text": "Return docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\ndelete([ids])\nDelete by vector ID or other criteria.\nfrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nfrom_texts(texts,\u00a0embedding[,\u00a0metadatas,\u00a0...])\nConstruct Qdrant wrapper from a list of texts.\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k,\u00a0filter,\u00a0...])\nReturn docs most similar to query.\nsimilarity_search_by_vector(embedding[,\u00a0k,\u00a0...])\nReturn docs most similar to embedding vector.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores in the range [0, 1].\nsimilarity_search_with_score(query[,\u00a0k,\u00a0...])\nReturn docs most similar to query.\nsimilarity_search_with_score_by_vector(embedding)\nReturn docs most similar to embedding vector.\nAttributes\nCONTENT_KEY\nMETADATA_KEY\nVECTOR_NAME\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html"} {"id": "f86798b6dcd9-2", "text": "Run more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn 
type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[Sequence[str]] = None, batch_size: int = 64, **kwargs: Any) \u2192 List[str][source]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nids \u2013 Optional list of ids to associate with the texts. Ids have to be\nuuid-like strings.\nbatch_size \u2013 How many vectors to upload per request.\nDefault: 64\nReturns\nList of ids from adding the texts into the vectorstore.\nasync classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html"} {"id": "f86798b6dcd9-3", "text": "Return VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6\nasync asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 Optional[bool]\u00b6\nDelete by vector ID or other criteria.\nParameters\nids \u2013 List of ids to delete.\n**kwargs \u2013 Other keyword arguments that subclasses might use.\nReturns\nTrue if deletion is successful,\nFalse otherwise, None if not implemented.\nReturn type\nOptional[bool]", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html"} {"id": "f86798b6dcd9-4", "text": "False otherwise, None if not implemented.\nReturn type\nOptional[bool]\nclassmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nclassmethod from_texts(texts: List[str], 
embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[Sequence[str]] = None, location: Optional[str] = None, url: Optional[str] = None, port: Optional[int] = 6333, grpc_port: int = 6334, prefer_grpc: bool = False, https: Optional[bool] = None, api_key: Optional[str] = None, prefix: Optional[str] = None, timeout: Optional[float] = None, host: Optional[str] = None, path: Optional[str] = None, collection_name: Optional[str] = None, distance_func: str = 'Cosine', content_payload_key: str = 'page_content', metadata_payload_key: str = 'metadata', vector_name: Optional[str] = None, batch_size: int = 64, shard_number: Optional[int] = None, replication_factor: Optional[int] = None, write_consistency_factor: Optional[int] = None, on_disk_payload: Optional[bool] = None, hnsw_config: Optional[common_types.HnswConfigDiff] = None, optimizers_config: Optional[common_types.OptimizersConfigDiff] = None, wal_config: Optional[common_types.WalConfigDiff] = None, quantization_config: Optional[common_types.QuantizationConfig] = None, init_from: Optional[common_types.InitFrom] = None, **kwargs: Any) \u2192 Qdrant[source]\u00b6\nConstruct Qdrant wrapper from a list of texts.\nParameters\ntexts \u2013 A list of texts to be indexed in Qdrant.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html"} {"id": "f86798b6dcd9-5", "text": "Parameters\ntexts \u2013 A list of texts to be indexed in Qdrant.\nembedding \u2013 A subclass of Embeddings, responsible for text vectorization.\nmetadatas \u2013 An optional list of metadata. If provided, it has to be of the same\nlength as a list of texts.\nids \u2013 Optional list of ids to associate with the texts. Ids have to be\nuuid-like strings.\nlocation \u2013 If :memory: - use in-memory Qdrant instance.\nIf str - use it as a url parameter.\nIf None - fallback to relying on host and port parameters.\nurl \u2013 either host or str of \u201cOptional[scheme], host, Optional[port],\nOptional[prefix]\u201d. Default: None\nport \u2013 Port of the REST API interface. Default: 6333\ngrpc_port \u2013 Port of the gRPC interface. Default: 6334\nprefer_grpc \u2013 If true - use gRPC interface whenever possible in custom methods.\nDefault: False\nhttps \u2013 If true - use HTTPS(SSL) protocol. Default: None\napi_key \u2013 API key for authentication in Qdrant Cloud. Default: None\nprefix \u2013 If not None - add prefix to the REST URL path.\nExample: service/v1 will result in\nhttp://localhost:6333/service/v1/{qdrant-endpoint} for REST API.\nDefault: None\ntimeout \u2013 Timeout for REST and gRPC API requests.\nDefault: 5.0 seconds for REST and unlimited for gRPC\nhost \u2013 Host name of Qdrant service. If url and host are None, set to\n\u2018localhost\u2019. Default: None\npath \u2013 Path in which the vectors will be stored while using local mode.\nDefault: None\ncollection_name \u2013 Name of the Qdrant collection to be used. If not provided,\nit will be created randomly. Default: None", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html"} {"id": "f86798b6dcd9-6", "text": "it will be created randomly. Default: None\ndistance_func \u2013 Distance function. 
One of: \u201cCosine\u201d / \u201cEuclid\u201d / \u201cDot\u201d.\nDefault: \u201cCosine\u201d\ncontent_payload_key \u2013 A payload key used to store the content of the document.\nDefault: \u201cpage_content\u201d\nmetadata_payload_key \u2013 A payload key used to store the metadata of the document.\nDefault: \u201cmetadata\u201d\nvector_name \u2013 Name of the vector to be used internally in Qdrant.\nDefault: None\nbatch_size \u2013 How many vectors to upload per request.\nDefault: 64\nshard_number \u2013 Number of shards in collection. Default is 1, minimum is 1.\nreplication_factor \u2013 Replication factor for collection. Default is 1, minimum is 1.\nDefines how many copies of each shard will be created.\nHas effect only in distributed mode.\nwrite_consistency_factor \u2013 Write consistency factor for collection. Default is 1, minimum is 1.\nDefines how many replicas should apply the operation for us to consider\nit successful. Increasing this number will make the collection more\nresilient to inconsistencies, but will also make it fail if not enough\nreplicas are available.\nDoes not have any performance impact.\nHas effect only in distributed mode.\non_disk_payload \u2013 If true - point's payload will not be stored in memory.\nIt will be read from the disk every time it is requested.\nThis setting saves RAM by (slightly) increasing the response time.\nNote: those payload values that are involved in filtering and are\nindexed - remain in RAM.\nhnsw_config \u2013 Params for HNSW index\noptimizers_config \u2013 Params for optimizer\nwal_config \u2013 Params for Write-Ahead-Log\nquantization_config \u2013 Params for quantization, if None - quantization will be disabled\ninit_from \u2013 Use data stored in another collection to initialize this collection", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html"} {"id": "f86798b6dcd9-7", "text": "init_from \u2013 Use data stored in another collection to initialize this collection\n**kwargs \u2013 Additional arguments passed directly into REST client initialization\nThis is a user-friendly interface that:\n1. Creates embeddings, one for each text\n2. Initializes the Qdrant database as an in-memory docstore by default\n(and overridable to a remote docstore)\n3. Adds the text embeddings to the Qdrant database\nThis is intended to be a quick way to get started.\nExample\nfrom langchain import Qdrant\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nqdrant = Qdrant.from_texts(texts, embeddings, host=\"localhost\")
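Building on the quickstart above, a hedged sketch of the same flow running fully in memory; the texts and collection name are placeholders, and location=":memory:" is the documented way to get an ephemeral local instance.

from langchain.vectorstores import Qdrant
from langchain.embeddings.openai import OpenAIEmbeddings

texts = ["Qdrant stores vectors", "LangChain wraps vector stores"]  # placeholder data
embeddings = OpenAIEmbeddings()
# An in-process, ephemeral Qdrant instance; nothing is persisted.
qdrant = Qdrant.from_texts(
    texts,
    embeddings,
    location=":memory:",
    collection_name="demo_collection",  # hypothetical name
)
docs = qdrant.similarity_search("vector stores", k=2)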
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nDefaults to 20.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html"} {"id": "f86798b6dcd9-8", "text": "Return docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, filter: Optional[MetadataFilter] = None, search_params: Optional[common_types.SearchParams] = None, offset: int = 0, score_threshold: Optional[float] = None, consistency: Optional[common_types.ReadConsistency] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfilter \u2013 Filter by metadata. Defaults to None.\nsearch_params \u2013 Additional search params\noffset \u2013 Offset of the first result to return.\nMay be used to paginate results.\nNote: large offset values may cause performance issues.\nscore_threshold \u2013 Define a minimal score threshold for the result.\nIf defined, less similar results will not be returned.\nScore of the returned result might be higher or smaller than the\nthreshold depending on the Distance function used.\nE.g. for cosine similarity only higher scores will be returned.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html"} {"id": "f86798b6dcd9-9", "text": "E.g. for cosine similarity only higher scores will be returned.\nconsistency \u2013 Read consistency of the search. 
Defines how many replicas should be\nqueried before returning the result.\nValues:\n- int - number of replicas to query, values should be present in all\nqueried replicas\n\u2018majority\u2019 - query all replicas, but return values present in the majority of replicas\n\u2018quorum\u2019 - query the majority of replicas, return values present in all of them\n\u2018all\u2019 - query all replicas, and return values present in all replicas\nReturns\nList of Documents most similar to the query.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, filter: Optional[MetadataFilter] = None, search_params: Optional[common_types.SearchParams] = None, offset: int = 0, score_threshold: Optional[float] = None, consistency: Optional[common_types.ReadConsistency] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding vector to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfilter \u2013 Filter by metadata. Defaults to None.\nsearch_params \u2013 Additional search params\noffset \u2013 Offset of the first result to return.\nMay be used to paginate results.\nNote: large offset values may cause performance issues.\nscore_threshold \u2013 Define a minimal score threshold for the result.\nIf defined, less similar results will not be returned.\nScore of the returned result might be higher or smaller than the\nthreshold depending on the Distance function used.\nE.g. for cosine similarity only higher scores will be returned.\nconsistency \u2013 Read consistency of the search. Defines how many replicas should be\nqueried before returning the result.\nValues:\n- int - number of replicas to query, values should be present in all", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html"} {"id": "f86798b6dcd9-10", "text": "Values:\n- int - number of replicas to query, values should be present in all\nqueried replicas\n\u2018majority\u2019 - query all replicas, but return values present in the majority of replicas\n\u2018quorum\u2019 - query the majority of replicas, return values present in all of them\n\u2018all\u2019 - query all replicas, and return values present in all replicas\nReturns\nList of Documents most similar to the query.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 to 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)
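Because similarity_search_with_relevance_scores normalizes scores into [0, 1], the score_threshold kwarg described above can prune weak matches. A sketch, assuming `qdrant` is an initialized Qdrant store and 0.8 is an arbitrary illustrative threshold:

# Keep only documents whose normalized relevance score is at least 0.8.
results = qdrant.similarity_search_with_relevance_scores(
    "example query", k=4, score_threshold=0.8
)
for doc, score in results:
    print(score, doc.page_content)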
similarity_search_with_score(query: str, k: int = 4, filter: Optional[MetadataFilter] = None, search_params: Optional[common_types.SearchParams] = None, offset: int = 0, score_threshold: Optional[float] = None, consistency: Optional[common_types.ReadConsistency] = None, **kwargs: Any) \u2192 List[Tuple[Document, float]][source]\u00b6\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfilter \u2013 Filter by metadata. Defaults to None.\nsearch_params \u2013 Additional search params\noffset \u2013 Offset of the first result to return.\nMay be used to paginate results.\nNote: large offset values may cause performance issues.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html"} {"id": "f86798b6dcd9-11", "text": "May be used to paginate results.\nNote: large offset values may cause performance issues.\nscore_threshold \u2013 Define a minimal score threshold for the result.\nIf defined, less similar results will not be returned.\nScore of the returned result might be higher or smaller than the\nthreshold depending on the Distance function used.\nE.g. for cosine similarity only higher scores will be returned.\nconsistency \u2013 Read consistency of the search. Defines how many replicas should be\nqueried before returning the result.\nValues:\n- int - number of replicas to query, values should be present in all\nqueried replicas\n\u2018majority\u2019 - query all replicas, but return values present in the majority of replicas\n\u2018quorum\u2019 - query the majority of replicas, return values present in all of them\n\u2018all\u2019 - query all replicas, and return values present in all replicas\nReturns\nList of documents most similar to the query text and cosine\ndistance in float for each.\nLower score represents more similarity.\nsimilarity_search_with_score_by_vector(embedding: List[float], k: int = 4, filter: Optional[MetadataFilter] = None, search_params: Optional[common_types.SearchParams] = None, offset: int = 0, score_threshold: Optional[float] = None, consistency: Optional[common_types.ReadConsistency] = None, **kwargs: Any) \u2192 List[Tuple[Document, float]][source]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding vector to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfilter \u2013 Filter by metadata. Defaults to None.\nsearch_params \u2013 Additional search params\noffset \u2013 Offset of the first result to return.\nMay be used to paginate results.\nNote: large offset values may cause performance issues.\nscore_threshold \u2013 Define a minimal score threshold for the result.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html"} {"id": "f86798b6dcd9-12", "text": "score_threshold \u2013 Define a minimal score threshold for the result.\nIf defined, less similar results will not be returned.\nScore of the returned result might be higher or smaller than the\nthreshold depending on the Distance function used.\nE.g. for cosine similarity only higher scores will be returned.\nconsistency \u2013 Read consistency of the search. 
Defines how many replicas should be\nqueried before returning the result.\nValues:\n- int - number of replicas to query, values should be present in all\nqueried replicas\n\u2018majority\u2019 - query all replicas, but return values present in the majority of replicas\n\u2018quorum\u2019 - query the majority of replicas, return values present in all of them\n\u2018all\u2019 - query all replicas, and return values present in all replicas\nReturns\nList of documents most similar to the query text and cosine\ndistance in float for each.\nLower score represents more similarity.\nCONTENT_KEY = 'page_content'\u00b6\nMETADATA_KEY = 'metadata'\u00b6\nVECTOR_NAME = None\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.Qdrant.html"} {"id": "42f0b785e763-0", "text": "langchain.vectorstores.redis.Redis\u00b6\nclass langchain.vectorstores.redis.Redis(redis_url: str, index_name: str, embedding_function: ~typing.Callable, content_key: str = 'content', metadata_key: str = 'metadata', vector_key: str = 'content_vector', relevance_score_fn: ~typing.Optional[~typing.Callable[[float], float]] = <function _default_relevance_score>, **kwargs: ~typing.Any)[source]\u00b6\nBases: VectorStore\nWrapper around Redis vector database.\nTo use, you should have the redis python package installed.\nExample\nfrom langchain.vectorstores import Redis\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nvectorstore = Redis(\n redis_url=\"redis://username:password@localhost:6379\",\n index_name=\"my-index\",\n embedding_function=embeddings.embed_query,\n)\nInitialize with necessary components.\nMethods\n__init__(redis_url,\u00a0index_name,\u00a0...[,\u00a0...])\nInitialize with necessary components.\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas,\u00a0embeddings,\u00a0...])\nAdd more texts to the vectorstore.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.redis.Redis.html"} {"id": "42f0b785e763-1", "text": "Return docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\ndelete([ids])\nDelete a Redis entry.\ndrop_index(index_name,\u00a0delete_documents,\u00a0...)\nDrop a Redis search index.\nfrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nfrom_existing_index(embedding,\u00a0index_name[,\u00a0...])\nConnect to an existing 
Redis index.\nfrom_texts(texts,\u00a0embedding[,\u00a0metadatas,\u00a0...])\nCreate a Redis vectorstore from raw documents. This is a user-friendly interface that: 1. Embeds documents. 2. Creates a new index for the embeddings in Redis. 3. Adds the documents to the newly created Redis index. This is intended to be a quick way to get started.\nfrom_texts_return_keys(texts,\u00a0embedding[,\u00a0...])\nCreate a Redis vectorstore from raw documents. This is a user-friendly interface that: 1. Embeds documents. 2. Creates a new index for the embeddings in Redis. 3. Adds the documents to the newly created Redis index. 4. Returns the keys of the newly created documents. This is intended to be a quick way to get started.\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.redis.Redis.html"} {"id": "42f0b785e763-2", "text": "max_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k])\nReturns the most similar indexed documents to the query text.\nsimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nsimilarity_search_limit_score(query[,\u00a0k,\u00a0...])\nReturns the most similar indexed documents to the query text within the score_threshold range.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores in the range [0, 1].\nsimilarity_search_with_score(query[,\u00a0k])\nReturn docs most similar to query.\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.redis.Redis.html"} {"id": "42f0b785e763-3", "text": "Returns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, embeddings: Optional[List[List[float]]] = None, batch_size: int = 1000, **kwargs: Any) \u2192 List[str][source]\u00b6\nAdd more texts to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings/text to add to the vectorstore.\nmetadatas (Optional[List[dict]], optional) \u2013 Optional list of metadatas.\nDefaults to None.\nembeddings (Optional[List[List[float]]], optional) \u2013 Optional pre-generated\nembeddings. Defaults to None.\nkeys (List[str]) or ids (List[str]) \u2013 Identifiers of entries.\nDefaults to None.\nbatch_size (int, optional) \u2013 Batch size to use for writes. 
Defaults to 1000.\nReturns\nList of ids added to the vectorstore\nReturn type\nList[str]\nasync classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.redis.Redis.html"} {"id": "42f0b785e763-4", "text": "Return docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 RedisVectorStoreRetriever[source]\u00b6\nasync asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.\nstatic delete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 bool[source]\u00b6\nDelete a Redis entry.\nParameters\nids \u2013 List of ids (keys) to delete.\nReturns\nWhether or not the deletions were successful.\nReturn type\nbool\nstatic drop_index(index_name: str, delete_documents: bool, **kwargs: Any) \u2192 bool[source]\u00b6\nDrop a Redis search index.\nParameters\nindex_name (str) \u2013 Name of the index to drop.\ndelete_documents (bool) \u2013 Whether to drop the associated documents.\nReturns\nWhether or not the drop was successful.\nReturn type\nbool", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.redis.Redis.html"} {"id": "42f0b785e763-5", "text": "Returns\nWhether or not the drop was successful.\nReturn type\nbool\nclassmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nclassmethod from_existing_index(embedding: Embeddings, index_name: str, content_key: str = 'content', metadata_key: str = 'metadata', vector_key: str = 'content_vector', **kwargs: Any) \u2192 Redis[source]\u00b6\nConnect to an existing Redis index.\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, index_name: Optional[str] = None, content_key: str = 'content', metadata_key: str = 'metadata', vector_key: str = 'content_vector', **kwargs: Any) \u2192 Redis[source]\u00b6\nCreate a Redis vectorstore from raw documents.\nThis is a user-friendly interface that:\nEmbeds documents.\nCreates a new index for the embeddings in Redis.\nAdds the documents to the newly created Redis index.\nThis is intended to be a quick way to get 
started.\nclassmethod from_texts_return_keys(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, index_name: Optional[str] = None, content_key: str = 'content', metadata_key: str = 'metadata', vector_key: str = 'content_vector', distance_metric: Literal['COSINE', 'IP', 'L2'] = 'COSINE', **kwargs: Any) \u2192 Tuple[Redis, List[str]][source]\u00b6\nCreate a Redis vectorstore from raw documents.\nThis is a user-friendly interface that:\nEmbeds documents.\nCreates a new index for the embeddings in Redis.\nAdds the documents to the newly created Redis index.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.redis.Redis.html"} {"id": "42f0b785e763-6", "text": "Adds the documents to the newly created Redis index.\nReturns the keys of the newly created documents.\nThis is intended to be a quick way to get started.\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.redis.Redis.html"} {"id": "42f0b785e763-7", "text": "Defaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturns the most similar indexed documents to the query text.\nParameters\nquery (str) \u2013 The query text for which to find similar documents.\nk (int) \u2013 The number of documents to return. Default is 4.\nReturns\nA list of documents that are most similar to the query text.\nReturn type\nList[Document]\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query vector.
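A hedged sketch of similarity_search_by_vector for Redis: the texts, Redis URL, and index name are illustrative assumptions, and embed_query comes from the Embeddings interface used to build the index.

from langchain.vectorstores.redis import Redis
from langchain.embeddings.openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
rds = Redis.from_texts(
    ["foo", "bar", "baz"],  # placeholder documents
    embeddings,
    redis_url="redis://localhost:6379",  # hypothetical local instance
    index_name="demo_index",
)
query_vector = embeddings.embed_query("foo")  # embed the query with the same model
docs = rds.similarity_search_by_vector(query_vector, k=2)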
similarity_search_limit_score(query: str, k: int = 4, score_threshold: float = 0.2, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturns the most similar indexed documents to the query text within the\nscore_threshold range.\nParameters\nquery (str) \u2013 The query text for which to find similar documents.\nk (int) \u2013 The number of documents to return. Default is 4.\nscore_threshold (float) \u2013 The minimum matching score required for a document\nto be considered a match. Defaults to 0.2.\nBecause the similarity calculation algorithm is based on cosine\nsimilarity, the smaller the angle, the higher the similarity.\nReturns", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.redis.Redis.html"} {"id": "42f0b785e763-8", "text": "Because the similarity calculation algorithm is based on cosine similarity,\nthe smaller the angle, the higher the similarity.\nReturns\nA list of documents that are most similar to the query text,\nincluding the match score for each document.\nReturn type\nList[Document]\nNote\nIf there are no documents that satisfy the score_threshold value,\nan empty list is returned.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 to 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)\nsimilarity_search_with_score(query: str, k: int = 4) \u2192 List[Tuple[Document, float]][source]\u00b6\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query and score for each", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.redis.Redis.html"} {"id": "177dd930667d-0", "text": "langchain.vectorstores.milvus.Milvus\u00b6\nclass langchain.vectorstores.milvus.Milvus(embedding_function: Embeddings, collection_name: str = 'LangChainCollection', connection_args: Optional[dict[str, Any]] = None, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: Optional[bool] = False)[source]\u00b6\nBases: VectorStore\nInitialize wrapper around the milvus vector database.\nIn order to use this you need to have pymilvus installed and a\nrunning Milvus instance.\nSee the following documentation for how to run a Milvus instance:\nhttps://milvus.io/docs/install_standalone-docker.md\nIf looking for a hosted Milvus, take a look at this documentation:\nhttps://zilliz.com/cloud and make use of the Zilliz vectorstore found in\nthis project.\nIF USING L2/IP metric IT IS HIGHLY SUGGESTED TO NORMALIZE YOUR DATA.\nParameters\nembedding_function (Embeddings) \u2013 Function used to embed the text.\ncollection_name (str) \u2013 Which Milvus collection to use. 
Defaults to\n\u201cLangChainCollection\u201d.\nconnection_args (Optional[dict[str, any]]) \u2013 The connection args used for\nthis class come in the form of a dict.\nconsistency_level (str) \u2013 The consistency level to use for a collection.\nDefaults to \u201cSession\u201d.\nindex_params (Optional[dict]) \u2013 Which index params to use. Defaults to\nHNSW/AUTOINDEX depending on service.\nsearch_params (Optional[dict]) \u2013 Which search params to use. Defaults to\ndefault of index.\ndrop_old (Optional[bool]) \u2013 Whether to drop the current collection. Defaults\nto False.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.milvus.Milvus.html"} {"id": "177dd930667d-1", "text": "to False.\nThe connection args used for this class come in the form of a dict;\nhere are a few of the options:\naddress (str): The actual address of the Milvus instance. Example address: \u201clocalhost:19530\u201d\nuri (str): The uri of the Milvus instance. Example uri: \u201chttp://randomwebsite:19530\u201d,\n\u201ctcp:foobarsite:19530\u201d,\n\u201chttps://ok.s3.south.com:19530\u201d.\nhost (str): The host of the Milvus instance. Default at \u201clocalhost\u201d; PyMilvus will fill in the default host if only port is provided.\nport (str/int): The port of the Milvus instance. Default at 19530; PyMilvus will fill in the default port if only host is provided.\nuser (str): Use which user to connect to the Milvus instance. If user and password are provided, we will add related header in every RPC call.\npassword (str): Required when user is provided. The password corresponding to the user.\nsecure (bool): Default is false. If set to true, tls will be enabled.\nclient_key_path (str): If use tls two-way authentication, need to\nwrite the client.key path.\nclient_pem_path (str): If use tls two-way authentication, need to write the client.pem path.\nca_pem_path (str): If use tls two-way authentication, need to write the ca.pem path.\nserver_pem_path (str): If use tls one-way authentication, need to write the server.pem path.\nserver_name (str): If use tls, need to write the common name.\nExample\nfrom langchain import Milvus\nfrom langchain.embeddings import OpenAIEmbeddings\nembedding = OpenAIEmbeddings()\n# Connect to a milvus instance on localhost", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.milvus.Milvus.html"} {"id": "177dd930667d-2", "text": "embedding = OpenAIEmbeddings()\n# Connect to a milvus instance on localhost\nmilvus_store = Milvus(\n embedding_function = embedding,\n collection_name = \"LangChainCollection\",\n drop_old = True,\n)\nRaises\nValueError \u2013 If the pymilvus python package is not installed.\nInitialize the Milvus vector store.\nMethods\n__init__(embedding_function[,\u00a0...])\nInitialize the Milvus vector store.\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas,\u00a0timeout,\u00a0...])\nInsert text data into Milvus.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn 
Methods
__init__(embedding_function[, ...])
Initialize the Milvus vector store.
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas, timeout, ...])
Insert text data into Milvus.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_texts(texts, embedding[, metadatas, ...])
Create a Milvus collection, index it with HNSW, and insert data.
max_marginal_relevance_search(query[, k, ...])
Perform a search and return results that are reordered by MMR.
max_marginal_relevance_search_by_vector(...)
Perform a search and return results that are reordered by MMR.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k, param, expr, ...])
Perform a similarity search against the query string.
similarity_search_by_vector(embedding[, k, ...])
Perform a similarity search against an embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(query[, k, ...])
Perform a search on a query string and return results with score.
similarity_search_with_score_by_vector(embedding)
Perform a search on an embedding vector and return results with score.
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, timeout: Optional[int] = None, batch_size: int = 1000, **kwargs: Any) → List[str][source]¶
Insert text data into Milvus.
Inserting data when the collection has not been made yet will result in creating a new collection. The data of the first entity decides the schema of the new collection: the dim is extracted from the first embedding, and the columns are decided by the first metadata dict.
Metadata keys will need to be present for all inserted values; at the moment there is no None equivalent in Milvus.
Parameters
texts (Iterable[str]) – The texts to embed; it is assumed that they all fit in memory.
metadatas (Optional[List[dict]]) – Metadata dicts attached to each of the texts. Defaults to None.
timeout (Optional[int]) – Timeout for each batch insert. Defaults to None.
batch_size (int, optional) – Batch size to use for insertion. Defaults to 1000.
Raises
MilvusException – Failure to add texts
Returns
The resulting keys for each inserted element.
Return type
List[str]
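For example, a small sketch of inserting texts with aligned metadata dicts into the store created above; note every metadata dict carries the same keys, since the first one fixes the collection schema (the texts and metadata here are illustrative):
texts = ["Milvus is a vector database.", "LangChain wraps vector stores."]
metadatas = [
    {"source": "notes", "page": 1},  # keys must be present for every entry
    {"source": "notes", "page": 2},
]
ids = milvus_store.add_texts(texts, metadatas=metadatas, batch_size=1000)
print(ids)  # the resulting keys for each inserted element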
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful, False otherwise, None if not implemented.
Return type
Optional[bool]
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, collection_name: str = 'LangChainCollection', connection_args: dict[str, Any] = {'host': 'localhost', 'password': '', 'port': '19530', 'secure': False, 'user': ''}, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: bool = False, **kwargs: Any) → Milvus[source]¶
Create a Milvus collection, index it with HNSW, and insert data.
Parameters
texts (List[str]) – Text data.
embedding (Embeddings) – Embedding function.
metadatas (Optional[List[dict]]) – Metadata for each text if it exists. Defaults to None.
collection_name (str, optional) – Collection name to use. Defaults to “LangChainCollection”.
connection_args (dict[str, Any], optional) – Connection args to use. Defaults to DEFAULT_MILVUS_CONNECTION.
consistency_level (str, optional) – Which consistency level to use. Defaults to “Session”.
index_params (Optional[dict], optional) – Which index_params to use. Defaults to None.
search_params (Optional[dict], optional) – Which search params to use. Defaults to None.
drop_old (Optional[bool], optional) – Whether to drop the collection with that name if it exists. Defaults to False.
Returns
Milvus Vector Store
Return type
Milvus
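A minimal sketch of building a store directly from texts with this classmethod, assuming a local Milvus at the default host and port and reusing the embedding object from the earlier example:
vector_store = Milvus.from_texts(
    texts=["doc one", "doc two"],
    embedding=embedding,
    collection_name="LangChainCollection",
    connection_args={"host": "localhost", "port": "19530"},
    drop_old=True,  # drop any existing collection with this name first
)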
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) → List[Document][source]¶
Perform a search and return results that are reordered by MMR.
Parameters
query (str) – The text being searched.
k (int, optional) – How many results to give. Defaults to 4.
fetch_k (int, optional) – Total results to select k from. Defaults to 20.
lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
param (dict, optional) – The search params for the specified index. Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
timeout (int, optional) – How long to wait before timeout error. Defaults to None.
kwargs – Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
max_marginal_relevance_search_by_vector(embedding: list[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) → List[Document][source]¶
Perform a search and return results that are reordered by MMR.
Parameters
embedding (List[float]) – The embedding vector being searched.
k (int, optional) – How many results to give. Defaults to 4.
fetch_k (int, optional) – Total results to select k from. Defaults to 20.
lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
param (dict, optional) – The search params for the specified index. Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
timeout (int, optional) – How long to wait before timeout error. Defaults to None.
kwargs – Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) → List[Document][source]¶
Perform a similarity search against the query string.
Parameters
query (str) – The text to search.
k (int, optional) – How many results to return. Defaults to 4.
param (dict, optional) – The search params for the index type. Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
timeout (int, optional) – How long to wait before timeout error. Defaults to None.
kwargs – Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
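For instance, a hedged sketch of a filtered similarity search against the store built above; expr follows Milvus boolean-expression syntax, and the page field is only an illustrative metadata attribute:
docs = vector_store.similarity_search(
    "what is milvus?",
    k=4,
    expr="page > 0",  # hypothetical metadata field used as a filter
)
for doc in docs:
    print(doc.page_content)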
similarity_search_by_vector(embedding: List[float], k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) → List[Document][source]¶
Perform a similarity search against an embedding vector.
Parameters
embedding (List[float]) – The embedding vector to search.
k (int, optional) – How many results to return. Defaults to 4.
param (dict, optional) – The search params for the index type. Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
timeout (int, optional) – How long to wait before timeout error. Defaults to None.
kwargs – Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include: score_threshold: Optional, a floating point value between 0 and 1 to filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
similarity_search_with_score(query: str, k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Perform a search on a query string and return results with score.
For more information about the search parameters, take a look at the pymilvus documentation found here: https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md
Parameters
query (str) – The text being searched.
k (int, optional) – The amount of results to return. Defaults to 4.
param (dict) – The search params for the specified index. Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
timeout (int, optional) – How long to wait before timeout error. Defaults to None.
kwargs – Collection.search() keyword arguments.
Return type
List[Tuple[Document, float]]
similarity_search_with_score_by_vector(embedding: List[float], k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Perform a search on an embedding vector and return results with score.
For more information about the search parameters, take a look at the pymilvus documentation found here: https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md
Parameters
embedding (List[float]) – The embedding vector being searched.
k (int, optional) – The amount of results to return. Defaults to 4.
param (dict) – The search params for the specified index. Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
timeout (int, optional) – How long to wait before timeout error. Defaults to None.
kwargs – Collection.search() keyword arguments.
Returns
Result doc and score.
Return type
List[Tuple[Document, float]]
langchain.vectorstores.vectara.VectaraRetriever¶
class langchain.vectorstores.vectara.VectaraRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, vectorstore: Vectara, search_type: str = 'similarity', search_kwargs: dict = None)[source]¶
Bases: VectorStoreRetriever
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the retriever. Defaults to None.
This metadata will be associated with each call to this retriever, and passed as arguments to the handlers defined in callbacks.
You can use these to e.g. identify a specific instance of a retriever with its use case.
param search_kwargs: dict [Optional]¶
Search params (a usage sketch follows the parameter list below).
k: Number of Documents to return. Defaults to 5.
lambda_val: lexical match parameter for hybrid search.
filter: Dictionary of argument(s) to filter on metadata. For example, a filter can be “doc.rating > 3.0 and part.lang = ‘deu’”; see https://docs.vectara.com/docs/search-apis/sql/filter-overview for more details.
n_sentence_context: number of sentences before/after the matching segment to add
param search_type: str = 'similarity'¶
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the retriever. Defaults to None.
These tags will be associated with each call to this retriever, and passed as arguments to the handlers defined in callbacks.
You can use these to e.g. identify a specific instance of a retriever with its use case.
param vectorstore: Vectara [Required]¶
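As an illustrative sketch of the params above, constructing the retriever directly with custom search_kwargs; vectara_store is an assumed existing Vectara vector store, and the filter expression and lambda_val value are placeholders:
from langchain.vectorstores.vectara import VectaraRetriever

retriever = VectaraRetriever(
    vectorstore=vectara_store,          # an existing Vectara instance (assumed)
    search_type="similarity",
    search_kwargs={
        "k": 5,
        "lambda_val": 0.025,            # lexical match weight for hybrid search (illustrative value)
        "filter": "part.lang = 'eng'",  # metadata filter, see the Vectara filter-overview docs
        "n_sentence_context": 2,
    },
)
docs = retriever.get_relevant_documents("what is hybrid search?")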
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Add documents to vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Add documents to vectorstore.
add_texts(texts: List[str], metadatas: Optional[List[dict]] = None, doc_metadata: Optional[dict] = {}) → None[source]¶
Add text to the Vectara vectorstore.
Parameters
texts (List[str]) – The texts to add
metadatas (List[dict]) – Metadata dicts, must line up with existing store
async aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → List[Document]¶
Asynchronously get documents relevant to a query.
:param query: string to find relevant documents for
:param callbacks: Callback manager or list of callbacks
:param tags: Optional list of tags associated with the retriever. Defaults to None. These tags will be associated with each call to this retriever, and passed as arguments to the handlers defined in callbacks.
Parameters
metadata – Optional metadata associated with the retriever. Defaults to None. This metadata will be associated with each call to this retriever, and passed as arguments to the handlers defined in callbacks.
Returns
List of relevant documents
get_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → List[Document]¶
Retrieve documents relevant to a query.
:param query: string to find relevant documents for
:param callbacks: Callback manager or list of callbacks
:param tags: Optional list of tags associated with the retriever. Defaults to None. These tags will be associated with each call to this retriever, and passed as arguments to the handlers defined in callbacks.
Parameters
metadata – Optional metadata associated with the retriever. Defaults to None. This metadata will be associated with each call to this retriever, and passed as arguments to the handlers defined in callbacks.
Returns
List of relevant documents
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_search_type » all fields¶
Validate search type.
allowed_search_types: ClassVar[Collection[str]] = ('similarity', 'similarity_score_threshold', 'mmr')¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
langchain.vectorstores.sklearn.BsonSerializer¶
class langchain.vectorstores.sklearn.BsonSerializer(persist_path: str)[source]¶
Bases: BaseSerializer
Serializes data in binary json using the bson python package.
Methods
__init__(persist_path)
extension()
The file extension suggested by this serializer (without dot).
load()
Loads the data from the persist_path
save(data)
Saves the data to the persist_path
classmethod extension() → str[source]¶
The file extension suggested by this serializer (without dot).
load() → Any[source]¶
Loads the data from the persist_path
save(data: Any) → None[source]¶
Saves the data to the persist_path
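A small usage sketch of the save/load round trip, assuming the bson package is installed and using a throwaway path:
from langchain.vectorstores.sklearn import BsonSerializer

serializer = BsonSerializer("/tmp/vectors.bson")  # hypothetical persist_path
serializer.save({"texts": ["a", "b"]})            # writes binary JSON to the path
data = serializer.load()                          # reads it back
print(BsonSerializer.extension())                 # suggested file extension, without the dot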
langchain.vectorstores.clickhouse.has_mul_sub_str¶
langchain.vectorstores.clickhouse.has_mul_sub_str(s: str, *args: Any) → bool[source]¶
Check if a string contains multiple substrings.
:param s: string to check.
:param *args: substrings to check.
Returns
True if all substrings are in the string, False otherwise.
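For example:
from langchain.vectorstores.clickhouse import has_mul_sub_str

print(has_mul_sub_str("localhost:19530", "localhost", "19530"))  # True: both substrings present
print(has_mul_sub_str("localhost:19530", "localhost", "8123"))   # False: "8123" is missing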
langchain.vectorstores.pgembedding.EmbeddingStore¶
class langchain.vectorstores.pgembedding.EmbeddingStore(**kwargs)[source]¶
Bases: BaseModel
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and values in kwargs.
Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.
Methods
__init__(**kwargs)
A simple constructor that allows initialization from kwargs.
Attributes
cmetadata
collection
collection_id
custom_id
document
embedding
metadata
registry
uuid
cmetadata¶
collection¶
collection_id¶
custom_id¶
document¶
embedding¶
metadata: MetaData = MetaData()¶
registry: RegistryType¶
uuid¶
langchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearch¶
class langchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearch(embedding: Embeddings, config: AlibabaCloudOpenSearchSettings, **kwargs: Any)[source]¶
Bases: VectorStore
Alibaba Cloud OpenSearch Vector Store
Methods
__init__(embedding, config, **kwargs)
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
create_results(json_result)
create_results_with_score(json_result)
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents, embedding[, ids, ...])
Return VectorStore initialized from documents and embeddings.
from_texts(texts, embedding[, metadatas, config])
Return VectorStore initialized from texts and embeddings.
inner_embedding_query(embedding[, ...])
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k, search_filter])
Return docs most similar to query.
similarity_search_by_vector(embedding[, k, ...])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]¶
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
kwargs – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
create_results(json_result: Dict[str, Any]) → List[Document][source]¶
create_results_with_score(json_result: Dict[str, Any]) → List[Tuple[Document, float]][source]¶
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful, False otherwise, None if not implemented.
Return type
Optional[bool]
classmethod from_documents(documents: List[Document], embedding: Embeddings, ids: Optional[List[str]] = None, config: Optional[AlibabaCloudOpenSearchSettings] = None, **kwargs: Any) → AlibabaCloudOpenSearch[source]¶
Return VectorStore initialized from documents and embeddings.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, config: Optional[AlibabaCloudOpenSearchSettings] = None, **kwargs: Any) → AlibabaCloudOpenSearch[source]¶
Return VectorStore initialized from texts and embeddings.
inner_embedding_query(embedding: List[float], search_filter: Optional[Dict[str, Any]] = None, k: int = 4) → Dict[str, Any][source]¶
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, search_filter: Optional[Dict[str, Any]] = None, **kwargs: Any) → List[Document][source]¶
Return docs most similar to query.
similarity_search_by_vector(embedding: List[float], k: int = 4, search_filter: Optional[dict] = None, **kwargs: Any) → List[Document][source]¶
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, search_filter: Optional[dict] = None, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include: score_threshold: Optional, a floating point value between 0 and 1 to filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
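As a hedged sketch of the filtered search methods above: the construction of AlibabaCloudOpenSearchSettings is elided here because its fields depend on your instance, so opensearch_store is an assumed existing store and the filter field is hypothetical:
# opensearch_store: an existing AlibabaCloudOpenSearch instance (assumed)
docs = opensearch_store.similarity_search(
    "what is opensearch?",
    k=4,
    search_filter={"category": "faq"},  # hypothetical metadata field to filter on
)
docs_with_scores = opensearch_store.similarity_search_with_relevance_scores(
    "what is opensearch?",
    k=4,
)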
langchain.vectorstores.starrocks.StarRocks¶
class langchain.vectorstores.starrocks.StarRocks(embedding: Embeddings, config: Optional[StarRocksSettings] = None, **kwargs: Any)[source]¶
Bases: VectorStore
Wrapper around StarRocks vector database
You need the pymysql python package and a valid account to connect to StarRocks.
Right now StarRocks has only implemented the cosine_similarity function to compute the distance between two vectors, and there is no vector index yet, so we have to iterate over all vectors and compute the spatial distance.
For more information, please visit the [StarRocks official site](https://www.starrocks.io/) and [StarRocks github](https://github.com/StarRocks/starrocks).
StarRocks Wrapper to LangChain
embedding_function (Embeddings):
config (StarRocksSettings): Configuration to StarRocks Client
Methods
__init__(embedding[, config])
StarRocks Wrapper to LangChain
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas, batch_size, ids])
Insert more texts through the embeddings and add to the VectorStore.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
delete([ids])
Delete by vector ID or other criteria.
drop()
Helper function: Drop data
escape_str(value)
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_texts(texts, embedding[, metadatas, ...])
Create StarRocks wrapper with existing texts
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k, where_str])
Perform a similarity search with StarRocks
similarity_search_by_vector(embedding[, k, ...])
Perform a similarity search with StarRocks by vectors
similarity_search_with_relevance_scores(query)
Perform a similarity search with StarRocks
Attributes
metadata_column
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, batch_size: int = 32, ids: Optional[Iterable[str]] = None, **kwargs: Any) → List[str][source]¶
Insert more texts through the embeddings and add to the VectorStore.
Parameters
texts – Iterable of strings to add to the VectorStore.
ids – Optional list of ids to associate with the texts.
batch_size – Batch size of insertion
metadata – Optional column data to be inserted
Returns
List of ids from adding the texts into the VectorStore.
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful, False otherwise, None if not implemented.
Return type
Optional[bool]
drop() → None[source]¶
Helper function: Drop data
escape_str(value: str) → str[source]¶
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[Dict[Any, Any]]] = None, config: Optional[StarRocksSettings] = None, text_ids: Optional[Iterable[str]] = None, batch_size: int = 32, **kwargs: Any) → StarRocks[source]¶
Create StarRocks wrapper with existing texts
Parameters
embedding_function (Embeddings) – Function to extract text embedding
texts (Iterable[str]) – List or tuple of strings to be added
config (StarRocksSettings, Optional) – StarRocks configuration
text_ids (Optional[Iterable], optional) – IDs for the texts. Defaults to None.
batch_size (int, optional) – Batch size when transmitting data to StarRocks. Defaults to 32.
metadata (List[dict], optional) – metadata to texts. Defaults to None.
Returns
StarRocks Index
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
When dealing with metadatas, remember to\nuse {self.metadata_column}.attribute instead of attribute\nalone. The default name for it is metadata.\nReturns\nList of Documents\nReturn type\nList[Document]\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, where_str: Optional[str] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nPerform a similarity search with StarRocks by vectors\nParameters\nquery (str) \u2013 query string\nk (int, optional) \u2013 Top K neighbors to retrieve. Defaults to 4.\nwhere_str (Optional[str], optional) \u2013 where condition string.\nDefaults to None.\nNOTE \u2013 Please do not let end-user to fill this and always be aware\nof SQL injection. When dealing with metadatas, remember to\nuse {self.metadata_column}.attribute instead of attribute\nalone. The default name for it is metadata.\nReturns\nList of (Document, similarity)\nReturn type\nList[Document]\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any) \u2192 List[Tuple[Document, float]][source]\u00b6\nPerform a similarity search with StarRocks\nParameters\nquery (str) \u2013 query string\nk (int, optional) \u2013 Top K neighbors to retrieve. Defaults to 4.\nwhere_str (Optional[str], optional) \u2013 where condition string.\nDefaults to None.\nNOTE \u2013 Please do not let end-user to fill this and always be aware\nof SQL injection. When dealing with metadatas, remember to\nuse {self.metadata_column}.attribute instead of attribute\nalone. The default name for it is metadata.\nReturns\nList of documents\nReturn type", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.starrocks.StarRocks.html"} {"id": "a6d646a28005-7", "text": "alone. The default name for it is metadata.\nReturns\nList of documents\nReturn type\nList[Document]\nproperty metadata_column: str\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.starrocks.StarRocks.html"} {"id": "ac080611b22e-0", "text": "langchain.vectorstores.pgvector.DistanceStrategy\u00b6\nclass langchain.vectorstores.pgvector.DistanceStrategy(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]\u00b6\nBases: str, Enum\nEnumerator of the Distance strategies.\nMethods\n__init__(*args,\u00a0**kwds)\ncapitalize()\nReturn a capitalized version of the string.\ncasefold()\nReturn a version of the string suitable for caseless comparisons.\ncenter(width[,\u00a0fillchar])\nReturn a centered string of length width.\ncount(sub[,\u00a0start[,\u00a0end]])\nReturn the number of non-overlapping occurrences of substring sub in string S[start:end].\nencode([encoding,\u00a0errors])\nEncode the string using the codec registered for encoding.\nendswith(suffix[,\u00a0start[,\u00a0end]])\nReturn True if S ends with the specified suffix, False otherwise.\nexpandtabs([tabsize])\nReturn a copy where all tab characters are expanded using spaces.\nfind(sub[,\u00a0start[,\u00a0end]])\nReturn the lowest index in S where substring sub is found, such that sub is contained within S[start:end].\nformat(*args,\u00a0**kwargs)\nReturn a formatted version of S, using substitutions from args and kwargs.\nformat_map(mapping)\nReturn a formatted version of S, using substitutions from mapping.\nindex(sub[,\u00a0start[,\u00a0end]])\nReturn the lowest index in S where substring sub is found, such that sub is contained within S[start:end].\nisalnum()\nReturn True if the string is an alpha-numeric string, False 
langchain.vectorstores.pgvector.DistanceStrategy¶
class langchain.vectorstores.pgvector.DistanceStrategy(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Bases: str, Enum
Enumerator of the Distance strategies.
Methods
__init__(*args, **kwds)
capitalize()
Return a capitalized version of the string.
casefold()
Return a version of the string suitable for caseless comparisons.
center(width[, fillchar])
Return a centered string of length width.
count(sub[, start[, end]])
Return the number of non-overlapping occurrences of substring sub in string S[start:end].
encode([encoding, errors])
Encode the string using the codec registered for encoding.
endswith(suffix[, start[, end]])
Return True if S ends with the specified suffix, False otherwise.
expandtabs([tabsize])
Return a copy where all tab characters are expanded using spaces.
find(sub[, start[, end]])
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end].
format(*args, **kwargs)
Return a formatted version of S, using substitutions from args and kwargs.
format_map(mapping)
Return a formatted version of S, using substitutions from mapping.
index(sub[, start[, end]])
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end].
isalnum()
Return True if the string is an alpha-numeric string, False otherwise.
isalpha()
Return True if the string is an alphabetic string, False otherwise.
isascii()
Return True if all characters in the string are ASCII, False otherwise.
isdecimal()
Return True if the string is a decimal string, False otherwise.
isdigit()
Return True if the string is a digit string, False otherwise.
isidentifier()
Return True if the string is a valid Python identifier, False otherwise.
islower()
Return True if the string is a lowercase string, False otherwise.
isnumeric()
Return True if the string is a numeric string, False otherwise.
isprintable()
Return True if the string is printable, False otherwise.
isspace()
Return True if the string is a whitespace string, False otherwise.
istitle()
Return True if the string is a title-cased string, False otherwise.
isupper()
Return True if the string is an uppercase string, False otherwise.
join(iterable, /)
Concatenate any number of strings.
ljust(width[, fillchar])
Return a left-justified string of length width.
lower()
Return a copy of the string converted to lowercase.
lstrip([chars])
Return a copy of the string with leading whitespace removed.
maketrans
Return a translation table usable for str.translate().
partition(sep, /)
Partition the string into three parts using the given separator.
removeprefix(prefix, /)
Return a str with the given prefix string removed if present.
removesuffix(suffix, /)
Return a str with the given suffix string removed if present.
replace(old, new[, count])
Return a copy with all occurrences of substring old replaced by new.
rfind(sub[, start[, end]])
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end].
rindex(sub[, start[, end]])
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end].
rjust(width[, fillchar])
Return a right-justified string of length width.
rpartition(sep, /)
Partition the string into three parts using the given separator.
rsplit([sep, maxsplit])
Return a list of the substrings in the string, using sep as the separator string.
rstrip([chars])
Return a copy of the string with trailing whitespace removed.
split([sep, maxsplit])
Return a list of the substrings in the string, using sep as the separator string.
splitlines([keepends])
Return a list of the lines in the string, breaking at line boundaries.
startswith(prefix[, start[, end]])
Return True if S starts with the specified prefix, False otherwise.
strip([chars])
Return a copy of the string with leading and trailing whitespace removed.
swapcase()
Convert uppercase characters to lowercase and lowercase characters to uppercase.
title()
Return a version of the string where each word is titlecased.
translate(table, /)
Replace each character in the string using the given translation table.
upper()
Return a copy of the string converted to uppercase.
zfill(width, /)
Pad a numeric string with zeros on the left, to fill a field of the given width.
Attributes
EUCLIDEAN
COSINE
MAX_INNER_PRODUCT
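Because the enum subclasses str, its members behave like plain strings and can be passed wherever a string is expected; a small sketch (the literal member values are not asserted here):
from langchain.vectorstores.pgvector import DistanceStrategy

strategy = DistanceStrategy.COSINE
print(isinstance(strategy, str))   # True: members are real str instances
print(strategy == strategy.value)  # True: a str Enum member compares equal to its value
print(list(DistanceStrategy))      # EUCLIDEAN, COSINE, MAX_INNER_PRODUCT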
capitalize()¶
Return a capitalized version of the string.
More specifically, make the first character have upper case and the rest lower case.
casefold()¶
Return a version of the string suitable for caseless comparisons.
center(width, fillchar=' ', /)¶
Return a centered string of length width.
Padding is done using the specified fill character (default is a space).
count(sub[, start[, end]]) → int¶
Return the number of non-overlapping occurrences of substring sub in string S[start:end]. Optional arguments start and end are interpreted as in slice notation.
encode(encoding='utf-8', errors='strict')¶
Encode the string using the codec registered for encoding.
encoding: The encoding in which to encode the string.
errors: The error handling scheme to use for encoding errors. The default is ‘strict’ meaning that encoding errors raise a UnicodeEncodeError. Other possible values are ‘ignore’, ‘replace’ and ‘xmlcharrefreplace’ as well as any other name registered with codecs.register_error that can handle UnicodeEncodeErrors.
endswith(suffix[, start[, end]]) → bool¶
Return True if S ends with the specified suffix, False otherwise.
With optional start, test S beginning at that position. With optional end, stop comparing S at that position. suffix can also be a tuple of strings to try.
expandtabs(tabsize=8)¶
Return a copy where all tab characters are expanded using spaces.
If tabsize is not given, a tab size of 8 characters is assumed.
find(sub[, start[, end]]) → int¶
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Return -1 on failure.
format(*args, **kwargs) → str¶
Return a formatted version of S, using substitutions from args and kwargs.
The substitutions are identified by braces (‘{’ and ‘}’).
format_map(mapping) → str¶
Return a formatted version of S, using substitutions from mapping.
The substitutions are identified by braces (‘{’ and ‘}’).
index(sub[, start[, end]]) → int¶
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Raises ValueError when the substring is not found.
Optional\narguments start and end are interpreted as in slice notation.\nRaises ValueError when the substring is not found.\nisalnum()\u00b6\nReturn True if the string is an alpha-numeric string, False otherwise.\nA string is alpha-numeric if all characters in the string are alpha-numeric and\nthere is at least one character in the string.\nisalpha()\u00b6\nReturn True if the string is an alphabetic string, False otherwise.\nA string is alphabetic if all characters in the string are alphabetic and there\nis at least one character in the string.\nisascii()\u00b6\nReturn True if all characters in the string are ASCII, False otherwise.\nASCII characters have code points in the range U+0000-U+007F.\nEmpty string is ASCII too.\nisdecimal()\u00b6\nReturn True if the string is a decimal string, False otherwise.\nA string is a decimal string if all characters in the string are decimal and\nthere is at least one character in the string.\nisdigit()\u00b6\nReturn True if the string is a digit string, False otherwise.\nA string is a digit string if all characters in the string are digits and there\nis at least one character in the string.\nisidentifier()\u00b6\nReturn True if the string is a valid Python identifier, False otherwise.\nCall keyword.iskeyword(s) to test whether string s is a reserved identifier,\nsuch as \u201cdef\u201d or \u201cclass\u201d.\nislower()\u00b6\nReturn True if the string is a lowercase string, False otherwise.\nA string is lowercase if all cased characters in the string are lowercase and\nthere is at least one cased character in the string.\nisnumeric()\u00b6\nReturn True if the string is a numeric string, False otherwise.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgvector.DistanceStrategy.html"} {"id": "ac080611b22e-5", "text": "isnumeric()\u00b6\nReturn True if the string is a numeric string, False otherwise.\nA string is numeric if all characters in the string are numeric and there is at\nleast one character in the string.\nisprintable()\u00b6\nReturn True if the string is printable, False otherwise.\nA string is printable if all of its characters are considered printable in\nrepr() or if it is empty.\nisspace()\u00b6\nReturn True if the string is a whitespace string, False otherwise.\nA string is whitespace if all characters in the string are whitespace and there\nis at least one character in the string.\nistitle()\u00b6\nReturn True if the string is a title-cased string, False otherwise.\nIn a title-cased string, upper- and title-case characters may only\nfollow uncased characters and lowercase characters only cased ones.\nisupper()\u00b6\nReturn True if the string is an uppercase string, False otherwise.\nA string is uppercase if all cased characters in the string are uppercase and\nthere is at least one cased character in the string.\njoin(iterable, /)\u00b6\nConcatenate any number of strings.\nThe string whose method is called is inserted in between each given string.\nThe result is returned as a new string.\nExample: \u2018.\u2019.join([\u2018ab\u2019, \u2018pq\u2019, \u2018rs\u2019]) -> \u2018ab.pq.rs\u2019\nljust(width, fillchar=' ', /)\u00b6\nReturn a left-justified string of length width.\nPadding is done using the specified fill character (default is a space).\nlower()\u00b6\nReturn a copy of the string converted to lowercase.\nlstrip(chars=None, /)\u00b6\nReturn a copy of the string with leading whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nstatic 
maketrans()\u00b6\nReturn a translation table usable for str.translate().", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgvector.DistanceStrategy.html"} {"id": "ac080611b22e-6", "text": "static maketrans()\u00b6\nReturn a translation table usable for str.translate().\nIf there is only one argument, it must be a dictionary mapping Unicode\nordinals (integers) or characters to Unicode ordinals, strings or None.\nCharacter keys will be then converted to ordinals.\nIf there are two arguments, they must be strings of equal length, and\nin the resulting dictionary, each character in x will be mapped to the\ncharacter at the same position in y. If there is a third argument, it\nmust be a string, whose characters will be mapped to None in the result.\npartition(sep, /)\u00b6\nPartition the string into three parts using the given separator.\nThis will search for the separator in the string. If the separator is found,\nreturns a 3-tuple containing the part before the separator, the separator\nitself, and the part after it.\nIf the separator is not found, returns a 3-tuple containing the original string\nand two empty strings.\nremoveprefix(prefix, /)\u00b6\nReturn a str with the given prefix string removed if present.\nIf the string starts with the prefix string, return string[len(prefix):].\nOtherwise, return a copy of the original string.\nremovesuffix(suffix, /)\u00b6\nReturn a str with the given suffix string removed if present.\nIf the string ends with the suffix string and that suffix is not empty,\nreturn string[:-len(suffix)]. Otherwise, return a copy of the original\nstring.\nreplace(old, new, count=- 1, /)\u00b6\nReturn a copy with all occurrences of substring old replaced by new.\ncountMaximum number of occurrences to replace.\n-1 (the default value) means replace all occurrences.\nIf the optional argument count is given, only the first count occurrences are\nreplaced.\nrfind(sub[, start[, end]]) \u2192 int\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgvector.DistanceStrategy.html"} {"id": "ac080611b22e-7", "text": "replaced.\nrfind(sub[, start[, end]]) \u2192 int\u00b6\nReturn the highest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nReturn -1 on failure.\nrindex(sub[, start[, end]]) \u2192 int\u00b6\nReturn the highest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nRaises ValueError when the substring is not found.\nrjust(width, fillchar=' ', /)\u00b6\nReturn a right-justified string of length width.\nPadding is done using the specified fill character (default is a space).\nrpartition(sep, /)\u00b6\nPartition the string into three parts using the given separator.\nThis will search for the separator in the string, starting at the end. 
If\nthe separator is found, returns a 3-tuple containing the part before the\nseparator, the separator itself, and the part after it.\nIf the separator is not found, returns a 3-tuple containing two empty strings\nand the original string.\nrsplit(sep=None, maxsplit=- 1)\u00b6\nReturn a list of the substrings in the string, using sep as the separator string.\nsepThe separator used to split the string.\nWhen set to None (the default value), will split on any whitespace\ncharacter (including \\n \\r \\t \\f and spaces) and will discard\nempty strings from the result.\nmaxsplitMaximum number of splits (starting from the left).\n-1 (the default value) means no limit.\nSplitting starts at the end of the string and works to the front.\nrstrip(chars=None, /)\u00b6\nReturn a copy of the string with trailing whitespace removed.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgvector.DistanceStrategy.html"} {"id": "ac080611b22e-8", "text": "rstrip(chars=None, /)\u00b6\nReturn a copy of the string with trailing whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nsplit(sep=None, maxsplit=- 1)\u00b6\nReturn a list of the substrings in the string, using sep as the separator string.\nsepThe separator used to split the string.\nWhen set to None (the default value), will split on any whitespace\ncharacter (including \\n \\r \\t \\f and spaces) and will discard\nempty strings from the result.\nmaxsplitMaximum number of splits (starting from the left).\n-1 (the default value) means no limit.\nNote, str.split() is mainly useful for data that has been intentionally\ndelimited. With natural text that includes punctuation, consider using\nthe regular expression module.\nsplitlines(keepends=False)\u00b6\nReturn a list of the lines in the string, breaking at line boundaries.\nLine breaks are not included in the resulting list unless keepends is given and\ntrue.\nstartswith(prefix[, start[, end]]) \u2192 bool\u00b6\nReturn True if S starts with the specified prefix, False otherwise.\nWith optional start, test S beginning at that position.\nWith optional end, stop comparing S at that position.\nprefix can also be a tuple of strings to try.\nstrip(chars=None, /)\u00b6\nReturn a copy of the string with leading and trailing whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nswapcase()\u00b6\nConvert uppercase characters to lowercase and lowercase characters to uppercase.\ntitle()\u00b6\nReturn a version of the string where each word is titlecased.\nMore specifically, words start with uppercased characters and all remaining\ncased characters have lower case.\ntranslate(table, /)\u00b6\nReplace each character in the string using the given translation table.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgvector.DistanceStrategy.html"} {"id": "ac080611b22e-9", "text": "translate(table, /)\u00b6\nReplace each character in the string using the given translation table.\ntableTranslation table, which must be a mapping of Unicode ordinals to\nUnicode ordinals, strings, or None.\nThe table must implement lookup/indexing via __getitem__, for instance a\ndictionary or list. If this operation raises LookupError, the character is\nleft untouched. 
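The values above select the distance function PGVector uses when comparing embeddings. A minimal usage sketch follows; the connection string, collection name, and OpenAI credentials are illustrative assumptions, not part of this reference.
Example
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.pgvector import PGVector, DistanceStrategy

# Hypothetical Postgres DSN; replace host, credentials, and database name.
CONNECTION_STRING = "postgresql+psycopg2://user:password@localhost:5432/vectordb"

store = PGVector(
    connection_string=CONNECTION_STRING,
    embedding_function=OpenAIEmbeddings(),
    collection_name="my_collection",
    distance_strategy=DistanceStrategy.COSINE,  # or EUCLIDEAN / MAX_INNER_PRODUCT
)
docs = store.similarity_search("what is a vector index?", k=4)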
Characters mapped to None are deleted.\nupper()\u00b6\nReturn a copy of the string converted to uppercase.\nzfill(width, /)\u00b6\nPad a numeric string with zeros on the left, to fill a field of the given width.\nThe string is never truncated.\nCOSINE = 'cosine'\u00b6\nEUCLIDEAN = 'l2'\u00b6\nMAX_INNER_PRODUCT = 'inner'\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgvector.DistanceStrategy.html"} {"id": "a77e477c7316-0", "text": "langchain.vectorstores.annoy.dependable_annoy_import\u00b6\nlangchain.vectorstores.annoy.dependable_annoy_import() \u2192 Any[source]\u00b6\nImport annoy if available, otherwise raise error.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.annoy.dependable_annoy_import.html"} {"id": "900750337ebd-0", "text": "langchain.vectorstores.deeplake.DeepLake\u00b6\nclass langchain.vectorstores.deeplake.DeepLake(dataset_path: str = './deeplake/', token: Optional[str] = None, embedding_function: Optional[Embeddings] = None, read_only: bool = False, ingestion_batch_size: int = 1000, num_workers: int = 0, verbose: bool = True, exec_option: str = 'python', **kwargs: Any)[source]\u00b6\nBases: VectorStore\nWrapper around Deep Lake, a data lake for deep learning applications.\nWe integrated deeplake\u2019s similarity search and filtering for fast prototyping,\nNow, it supports Tensor Query Language (TQL) for production use cases\nover billion rows.\nWhy Deep Lake?\nNot only stores embeddings, but also the original data with version control.\nServerless, doesn\u2019t require another service and can be used with majorcloud providers (S3, GCS, etc.)\nMore than just a multi-modal vector store. You can use the datasetto fine-tune your own LLM models.\nTo use, you should have the deeplake python package installed.\nExample\nfrom langchain.vectorstores import DeepLake\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nvectorstore = DeepLake(\"langchain_store\", embeddings.embed_query)\nCreates an empty DeepLakeVectorStore or loads an existing one.\nThe DeepLakeVectorStore is located at the specified path.\nExamples\n>>> # Create a vector store with default tensors\n>>> deeplake_vectorstore = DeepLake(\n... path = ,\n... )\n>>>\n>>> # Create a vector store in the Deep Lake Managed Tensor Database\n>>> data = DeepLake(\n... path = \"hub://org_id/dataset_name\",", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.deeplake.DeepLake.html"} {"id": "900750337ebd-1", "text": "... path = \"hub://org_id/dataset_name\",\n... exec_option = \"tensor_db\",\n... )\nParameters\ndataset_path (str) \u2013 Path to existing dataset or where to create\na new one. Defaults to _LANGCHAIN_DEFAULT_DEEPLAKE_PATH.\ntoken (str, optional) \u2013 Activeloop token, for fetching credentials\nto the dataset at path if it is a Deep Lake dataset.\nTokens are normally autogenerated. Optional.\nembedding_function (str, optional) \u2013 Function to convert\neither documents or query. Optional.\nread_only (bool) \u2013 Open dataset in read-only mode. Default is False.\ningestion_batch_size (int) \u2013 During data ingestion, data is divided\ninto batches. 
Batch size is the size of each batch.\nDefault is 1000.\nnum_workers (int) \u2013 Number of workers to use during data ingestion.\nDefault is 0.\nverbose (bool) \u2013 Print dataset summary after each operation.\nDefault is True.\nexec_option (str) \u2013 DeepLakeVectorStore supports 3 ways to perform\nsearching - \u201cpython\u201d, \u201ccompute_engine\u201d, \u201ctensor_db\u201d.\nDefault is \u201cpython\u201d.\n- python - Pure-python implementation that runs on the client.\nWARNING: using this with big datasets can lead to memory\nissues. Data can be stored anywhere.\n- compute_engine - C++ implementation of the Deep Lake Compute\nEngine that runs on the client. Can be used for any data stored in\nor connected to Deep Lake. Not for in-memory or local datasets.\n- tensor_db - Hosted Managed Tensor Database that is\nresponsible for storage and query execution. Only for data stored in\nthe Deep Lake Managed Database. Use runtime = {\u201cdb_engine\u201d: True} during\ndataset creation.\n**kwargs \u2013 Other optional keyword arguments.\nRaises\nValueError \u2013 If some condition is not met.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.deeplake.DeepLake.html"} {"id": "900750337ebd-2", "text": "Raises\nValueError \u2013 If some condition is not met.\nMethods\n__init__([dataset_path,\u00a0token,\u00a0...])\nCreates an empty DeepLakeVectorStore or loads an existing one.\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas,\u00a0ids])\nRun more texts through the embeddings and add to the vectorstore.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\ndelete([ids])\nDelete the entities in the dataset.\ndelete_dataset()\nDelete the collection.\nforce_delete_by_path(path)\nForce delete dataset by path.\nfrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.deeplake.DeepLake.html"} {"id": "900750337ebd-3", "text": "Return VectorStore initialized from documents and embeddings.\nfrom_texts(texts[,\u00a0embedding,\u00a0metadatas,\u00a0...])\nCreate a Deep Lake dataset from a raw documents.\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal 
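As a hedged end-to-end sketch of the constructor parameters above: the dataset path and text are invented for illustration, and OpenAIEmbeddings assumes an API key is configured.
Example
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DeepLake

db = DeepLake(
    dataset_path="./deeplake_demo/",        # invented local path
    embedding_function=OpenAIEmbeddings(),
    exec_option="python",                   # fine for small, local experiments
)
db.add_texts(["Deep Lake stores data and embeddings together."])
docs = db.similarity_search("What does Deep Lake store?", k=1)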
Methods
__init__([dataset_path, token, ...])
Creates an empty DeepLakeVectorStore or loads an existing one.
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas, ids])
Run more texts through the embeddings and add to the vectorstore.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
delete([ids])
Delete the entities in the dataset.
delete_dataset()
Delete the collection.
force_delete_by_path(path)
Force delete dataset by path.
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_texts(texts[, embedding, metadatas, ...])
Create a Deep Lake dataset from raw documents.
max_marginal_relevance_search(query[, k, ...])
Return docs selected using maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k])
Return docs most similar to query.
similarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(query[, k])
Run similarity search with Deep Lake with distance returned.
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]
Run more texts through the embeddings and add to the vectorstore.
Examples
>>> ids = deeplake_vectorstore.add_texts(
...     texts = <list_of_texts>,
...     metadatas = <list_of_metadata_dicts>,
...     ids = <list_of_ids>,
... )
Parameters
texts (Iterable[str]) – Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) – Optional list of metadatas.
ids (Optional[List[str]], optional) – Optional list of IDs.
**kwargs – Other optional keyword arguments.
Returns
List of IDs of the added texts.
Return type
List[str]
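Filling in the placeholders above, a call might look like this sketch; the texts, metadata, and ids are made up for illustration.
Example
# Assumes `db` is the DeepLake instance from the earlier sketch.
ids = db.add_texts(
    texts=["harrison worked at kensho", "bears like to eat honey"],
    metadatas=[{"source": "bio"}, {"source": "facts"}],
    ids=["doc-1", "doc-2"],
)
# `ids` is the list of IDs of the two added texts.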
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]
Return docs most similar to query.
delete(ids: Optional[List[str]] = None, **kwargs: Any) → bool[source]
Delete the entities in the dataset.
Parameters
ids (Optional[List[str]], optional) – The document_ids to delete. Defaults to None.
**kwargs – Other keyword arguments that subclasses might use.
- filter (Optional[Dict[str, str]], optional): The filter to delete by.
- delete_all (Optional[bool], optional): Whether to drop the dataset.
Returns
Whether the delete operation was successful.
Return type
bool
delete_dataset() → None[source]
Delete the collection.
classmethod force_delete_by_path(path: str) → None[source]
Force delete a dataset by path.
Parameters
path (str) – Path of the dataset to delete.
Raises
ValueError – If deeplake is not installed.
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST
Return VectorStore initialized from documents and embeddings.
classmethod from_texts(texts: List[str], embedding: Optional[Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, dataset_path: str = './deeplake/', **kwargs: Any) → DeepLake[source]
Create a Deep Lake dataset from raw documents.
If a dataset_path is specified, the dataset will be persisted at that location; otherwise it is persisted by default at ./deeplake.
Examples
>>> # Search using an embedding
>>> vector_store = DeepLake.from_texts(
...     texts = <list_of_texts>,
...     embedding_function = <embedding_function>,
...     k = <number_of_items_to_return>,
...     exec_option = <preferred_exec_option>,
... )
Parameters
dataset_path (str) – The full path to the dataset. Can be:
- A Deep Lake cloud path of the form hub://username/dataset_name. To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use 'activeloop login' from the command line).
- An AWS S3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment.
- A Google Cloud Storage path of the form gcs://bucketname/path/to/dataset. Credentials are required in either the environment.
- A local file system path of the form ./path/to/dataset, ~/path/to/dataset, or path/to/dataset.
- An in-memory path of the form mem://path/to/dataset, which doesn't save the dataset but keeps it in memory instead. Should be used only for testing, as it does not persist.
texts (List[Document]) – List of documents to add.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None. Note that in other places it is called embedding_function.
metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
ids (Optional[List[str]]) – List of document IDs. Defaults to None.
**kwargs – Additional keyword arguments.
Returns
Deep Lake dataset.
Return type
DeepLake
Raises
ValueError – If 'embedding' is provided in kwargs. This is deprecated; please use embedding_function instead.
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, exec_option: Optional[str] = None, **kwargs: Any) → List[Document][source]
Return docs selected using maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to the query AND diversity among the selected documents.
Examples
>>> # Search using an embedding
>>> data = vector_store.max_marginal_relevance_search(
...     query = <query_to_search>,
...     embedding_function = <embedding_function>,
...     k = <number_of_items_to_return>,
...     exec_option = <preferred_exec_option>,
... )
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch for the MMR algorithm.
lambda_mult – Value between 0 and 1; 0 corresponds to maximum diversity and 1 to minimum. Defaults to 0.5.
exec_option (str) – Supports 3 ways to perform searching.
- "python" – Pure-python implementation running on the client. Can be used for data stored anywhere. WARNING: using this option with big datasets is discouraged due to potential memory issues.
- "compute_engine" – Performant C++ implementation of the Deep Lake Compute Engine. Runs on the client and can be used for any data stored in or connected to Deep Lake. It cannot be used with in-memory or local datasets.
- "tensor_db" – Performant, fully-hosted Managed Tensor Database. Responsible for storage and query execution. Only available for data stored in the Deep Lake Managed Database. To store datasets in this database, specify runtime = {"db_engine": True} during dataset creation.
**kwargs – Additional keyword arguments.
Returns
List of Documents selected by maximal marginal relevance.
Raises
ValueError – When MMR search is enabled but no embedding function is specified.
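A concrete call, filling the skeleton above with illustrative values (query and parameters are invented):
Example
docs = db.max_marginal_relevance_search(
    query="vector databases",
    k=4,              # documents to return
    fetch_k=20,       # candidate pool handed to the MMR algorithm
    lambda_mult=0.5,  # 0 = maximum diversity, 1 = minimum diversity
    exec_option="python",
)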
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, exec_option: Optional[str] = None, **kwargs: Any) → List[Document][source]
Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to the query AND diversity among the selected docs.
Examples
>>> data = vector_store.max_marginal_relevance_search_by_vector(
...     embedding = <your_embedding>,
...     fetch_k = <elements_to_fetch_before_mmr_search>,
...     k = <number_of_items_to_return>,
...     exec_option = <preferred_exec_option>,
... )
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch for the MMR algorithm.
lambda_mult – Number between 0 and 1 determining the degree of diversity. 0 corresponds to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
exec_option (str) – DeepLakeVectorStore supports 3 ways of searching: "python", "compute_engine" or "tensor_db". Defaults to "python".
- "python" – Pure-python implementation running on the client. Can be used for data stored anywhere. WARNING: using this option with big datasets is discouraged due to potential memory issues.
- "compute_engine" – Performant C++ implementation of the Deep Lake Compute Engine. Runs on the client and can be used for any data stored in or connected to Deep Lake. It cannot be used with in-memory or local datasets.
- "tensor_db" – Performant, fully-hosted Managed Tensor Database. Responsible for storage and query execution. Only available for data stored in the Deep Lake Managed Database. To store datasets in this database, specify runtime = {"db_engine": True} during dataset creation.
**kwargs – Additional keyword arguments.
Returns
List[Document] – A list of documents.
search(query: str, search_type: str, **kwargs: Any) → List[Document]
Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document][source]
Return docs most similar to query.
Examples
>>> # Search using an embedding
>>> data = vector_store.similarity_search(
...     query = <your_query>,
...     k = <number_of_items_to_return>,
...     exec_option = <preferred_exec_option>,
... )
>>> # Run a TQL search:
>>> data = vector_store.tql_search(
...     tql_query = "SELECT * WHERE id == <id>",
...     exec_option = "compute_engine",
... )
Parameters
k (int) – Number of Documents to return. Defaults to 4.
query (str) – Text to look up similar documents.
**kwargs – Additional keyword arguments include:
embedding (Callable): Embedding function to use. Defaults to None.
distance_metric (str): 'L2' for Euclidean, 'L1' for Nuclear, 'max' for L-infinity, 'cos' for cosine, 'dot' for dot product. Defaults to 'L2'.
filter (Union[Dict, Callable], optional): Additional filter applied before the embedding search.
- Dict: Key-value search on tensors of htype json (a sample must satisfy all key-value filters): Dict = {"tensor_1": {"key": value}, "tensor_2": {"key": value}}
- Function: Any function compatible with deeplake.filter.
Defaults to None.
exec_option (str): Supports 3 ways to perform searching: "python", "compute_engine", or "tensor_db". Defaults to "python".
- "python": Pure-python implementation for the client. WARNING: not recommended for big datasets.
- "compute_engine": C++ implementation of the Compute Engine for the client. Not for in-memory or local datasets.
- "tensor_db": Managed Tensor Database for storage and query. Only for data in the Deep Lake Managed Database. Use runtime = {"db_engine": True} during dataset creation.
Returns
List of Documents most similar to the query vector.
Return type
List[Document]
similarity_search_by_vector(embedding: Union[List[float], ndarray], k: int = 4, **kwargs: Any) → List[Document][source]
Return docs most similar to embedding vector.
Examples
>>> # Search using an embedding
>>> data = vector_store.similarity_search_by_vector(
...     embedding = <your_embedding>,
...     k = <number_of_items_to_return>,
...     exec_option = <preferred_exec_option>,
... )
Parameters
embedding (Union[List[float], np.ndarray]) – Embedding to find similar docs.
k (int) – Number of Documents to return. Defaults to 4.
**kwargs – Additional keyword arguments including:
filter (Union[Dict, Callable], optional): Additional filter applied before the embedding search.
- Dict: Key-value search on tensors of htype json; True if all key-value filters are satisfied: Dict = {"tensor_name_1": {"key": value}, "tensor_name_2": {"key": value}}
- Function: Any function compatible with deeplake.filter.
Defaults to None.
exec_option (str): Options for search execution include "python", "compute_engine", or "tensor_db". Defaults to "python".
- "python" – Pure-python implementation running on the client. Can be used for data stored anywhere. WARNING: using this option with big datasets is discouraged due to potential memory issues.
- "compute_engine" – Performant C++ implementation of the Deep Lake Compute Engine. Runs on the client and can be used for any data stored in or connected to Deep Lake. It cannot be used with in-memory or local datasets.
- "tensor_db" – Performant, fully-hosted Managed Tensor Database. Responsible for storage and query execution. Only available for data stored in the Deep Lake Managed Database. To store datasets in this database, specify runtime = {"db_engine": True} during dataset creation.
distance_metric (str): 'L2' for Euclidean, 'L1' for Nuclear, 'max' for L-infinity distance, 'cos' for cosine similarity, 'dot' for dot product. Defaults to 'L2'.
Returns
List of Documents most similar to the query vector.
Return type
List[Document]
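For instance, combining a metadata filter with a non-default distance metric might look like the sketch below; the filter keys and tensor name are assumptions for illustration, not guaranteed tensor names.
Example
docs = db.similarity_search(
    query="honey",
    k=2,
    distance_metric="cos",                     # cosine instead of the default 'L2'
    filter={"metadata": {"source": "facts"}},  # key-value filter on a json tensor
)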
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]
Return docs and relevance scores in the range [0, 1]. 0 is dissimilar, 1 is most similar.
Parameters
query – Input text.
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to the similarity search. Should include:
score_threshold: Optional, a floating-point value between 0 and 1 to filter the resulting set of retrieved docs.
Returns
List of tuples of (doc, similarity_score).
similarity_search_with_score(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]][source]
Run similarity search with Deep Lake with distance returned.
Examples
>>> data = vector_store.similarity_search_with_score(
...     query = <your_query>,
...     embedding = <your_embedding>,
...     k = <number_of_items_to_return>,
...     exec_option = <preferred_exec_option>,
... )
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
**kwargs – Additional keyword arguments. Some of these arguments are:
distance_metric: 'L2' for Euclidean, 'L1' for Nuclear, 'max' for L-infinity distance, 'cos' for cosine similarity, 'dot' for dot product. Defaults to 'L2'.
filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
embedding_function (Callable): Embedding function to use. Defaults to None.
exec_option (str): DeepLakeVectorStore supports 3 ways to perform searching: "python", "compute_engine" or "tensor_db". Defaults to "python".
- "python" – Pure-python implementation running on the client. Can be used for data stored anywhere. WARNING: using this option with big datasets is discouraged due to potential memory issues.
- "compute_engine" – Performant C++ implementation of the Deep Lake Compute Engine. Runs on the client and can be used for any data stored in or connected to Deep Lake. It cannot be used with in-memory or local datasets.
- "tensor_db" – Performant, fully-hosted Managed Tensor Database. Responsible for storage and query execution. Only available for data stored in the Deep Lake Managed Database. To store datasets in this database, specify runtime = {"db_engine": True} during dataset creation.
Returns
List of documents most similar to the query text, with distance as a float for each.
Return type
List[Tuple[Document, float]]
langchain.vectorstores.lancedb.LanceDB
class langchain.vectorstores.lancedb.LanceDB(connection: Any, embedding: Embeddings, vector_key: Optional[str] = 'vector', id_key: Optional[str] = 'id', text_key: Optional[str] = 'text')[source]
Bases: VectorStore
Wrapper around the LanceDB vector database.
To use, you should have the lancedb python package installed.
Example
db = lancedb.connect('./lancedb')
table = db.open_table('my_table')
vectorstore = LanceDB(table, embedding_function)
vectorstore.add_texts(['text1', 'text2'])
result = vectorstore.similarity_search('text1')
Initialize with a LanceDB connection.
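Note that open_table assumes the table already exists. A hedged sketch of creating one first; the schema (a "vector" column plus "id" and "text", matching this wrapper's defaults) and the seed row are assumptions based on common LanceDB usage, not part of this reference.
Example
import lancedb
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import LanceDB

embeddings = OpenAIEmbeddings()
db = lancedb.connect("./lancedb")
# LanceDB infers the table schema from the first batch of rows.
table = db.create_table(
    "my_table",
    data=[{"vector": embeddings.embed_query("seed"), "id": "0", "text": "seed"}],
)
store = LanceDB(connection=table, embedding=embeddings)
store.add_texts(["text1", "text2"])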
Methods
__init__(connection, embedding[, ...])
Initialize with a LanceDB connection.
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas, ids])
Turn texts into embeddings and add them to the database.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_texts(texts, embedding[, metadatas, ...])
Return VectorStore initialized from texts and embeddings.
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k])
Return documents most similar to the query.
similarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]
Turn texts into embeddings and add them to the database.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
ids – Optional list of ids to associate with the texts.
Returns
List of ids of the added texts.
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]
Return docs most similar to query.
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful, False otherwise, None if not implemented.
Return type
Optional[bool]
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST
Return VectorStore initialized from documents and embeddings.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, connection: Any = None, vector_key: Optional[str] = 'vector', id_key: Optional[str] = 'id', text_key: Optional[str] = 'text', **kwargs: Any) → LanceDB[source]
Return VectorStore initialized from texts and embeddings.
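A sketch of from_texts, reusing the table from the previous example (the texts are invented):
Example
store = LanceDB.from_texts(
    texts=["text1", "text2"],
    embedding=embeddings,
    connection=table,  # an open LanceDB table, as created above
)
docs = store.similarity_search("text1", k=1)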
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to the query AND diversity among the selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to the MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to the query AND diversity among the selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to the MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[Document]
Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document][source]
Return documents most similar to the query.
Parameters
query – String to query the vectorstore with.
k – Number of documents to return.
Returns
List of documents most similar to the query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]
Return docs and relevance scores in the range [0, 1]. 0 is dissimilar, 1 is most similar.
Parameters
query – Input text.
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to the similarity search. Should include:
score_threshold: Optional, a floating-point value between 0 and 1 to filter the resulting set of retrieved docs.
Returns
List of tuples of (doc, similarity_score).
langchain.vectorstores.docarray.in_memory.DocArrayInMemorySearch
class langchain.vectorstores.docarray.in_memory.DocArrayInMemorySearch(doc_index: BaseDocIndex, embedding: Embeddings)[source]
Bases: DocArrayIndex
Wrapper around in-memory storage for exact search.
To use it, you should have the docarray package with version >=0.32.0 installed. You can install it with pip install "langchain[docarray]".
Initialize a vector store from DocArray's DocIndex.
Methods
__init__(doc_index, embedding)
Initialize a vector store from DocArray's DocIndex.
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_params(embedding[, metric])
Initialize DocArrayInMemorySearch store.
from_texts(texts, embedding[, metadatas])
Create a DocArrayInMemorySearch store and insert data.
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k])
Return docs most similar to query.
similarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(query[, k])
Return docs most similar to query.
Attributes
doc_cls
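A quick-start sketch for this in-memory store; the toy texts and metadata are invented, and OpenAIEmbeddings assumes an API key is configured.
Example
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DocArrayInMemorySearch

store = DocArrayInMemorySearch.from_texts(
    texts=["blue whales are mammals", "sparrows are birds"],
    embedding=OpenAIEmbeddings(),
    metadatas=[{"topic": "whales"}, {"topic": "birds"}],
)
docs_and_scores = store.similarity_search_with_score("what is a whale?", k=1)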
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
Returns
List of ids from adding the texts into the vectorstore.
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]
Return docs most similar to query.
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful, False otherwise, None if not implemented.
Return type
Optional[bool]
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST
Return VectorStore initialized from documents and embeddings.
classmethod from_params(embedding: Embeddings, metric: Literal['cosine_sim', 'euclidian_dist', 'sgeuclidean_dist'] = 'cosine_sim', **kwargs: Any) → DocArrayInMemorySearch[source]
Initialize DocArrayInMemorySearch store.
Parameters
embedding (Embeddings) – Embedding function.
metric (str) – Metric for exact nearest-neighbor search. Can be one of: "cosine_sim", "euclidean_dist" and "sqeuclidean_dist". Defaults to "cosine_sim".
**kwargs – Other keyword arguments to be passed to the get_doc_cls method.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[Dict[Any, Any]]] = None, **kwargs: Any) → DocArrayInMemorySearch[source]
Create a DocArrayInMemorySearch store and insert data.
Parameters
texts (List[str]) – Text data.
embedding (Embeddings) – Embedding function.
metadatas (Optional[List[Dict[Any, Any]]]) – Metadata for each text, if it exists. Defaults to None.
metric (str) – Metric for exact nearest-neighbor search. Can be one of: "cosine_sim", "euclidean_dist" and "sqeuclidean_dist". Defaults to "cosine_sim".
Returns
DocArrayInMemorySearch Vector Store
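from_params gives you an empty store to fill incrementally, in contrast to from_texts above; a minimal sketch using the default metric:
Example
store = DocArrayInMemorySearch.from_params(
    embedding=OpenAIEmbeddings(),
    metric="cosine_sim",  # the default; exact nearest-neighbor search
)
store.add_texts(["incrementally added text"])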
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to the query AND diversity among the selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to the MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to the query AND diversity among the selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to the MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[Document]
Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]
Return docs and relevance scores in the range [0, 1]. 0 is dissimilar, 1 is most similar.
Parameters
query – Input text.
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to the similarity search. Should include:
score_threshold: Optional, a floating-point value between 0 and 1 to filter the resulting set of retrieved docs.
Returns
List of tuples of (doc, similarity_score).
similarity_search_with_score(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of documents most similar to the query text, with the cosine distance as a float for each. A lower score represents more similarity.
property doc_cls: Type[BaseDoc]
langchain.vectorstores.matching_engine.MatchingEngine
class langchain.vectorstores.matching_engine.MatchingEngine(project_id: str, index: MatchingEngineIndex, endpoint: MatchingEngineIndexEndpoint, embedding: Embeddings, gcs_client: storage.Client, gcs_bucket_name: str, credentials: Optional[Credentials] = None)[source]
Bases: VectorStore
Vertex Matching Engine implementation of the vector store.
While the embeddings are stored in the Matching Engine, the embedded documents will be stored in GCS.
An existing Index and corresponding Endpoint are preconditions for using this module.
See usage in docs/modules/indexes/vectorstores/examples/matchingengine.ipynb.
Note that this implementation is mostly meant for reading: while reading is a real-time operation, updating the index takes close to one hour.
project_id
The GCP project id.
index
The created index class. See MatchingEngine.from_components().
endpoint
The created endpoint class. See MatchingEngine.from_components().
embedding
An Embeddings instance that will be used for embedding the texts sent. If none is sent, then the multilingual TensorFlow Universal Sentence Encoder will be used.
gcs_client
The GCS client.
gcs_bucket_name
The GCS bucket name.
credentials
Created GCP credentials.
Type
Optional
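Because direct construction requires pre-created clients, the usual entry point is from_components, documented below. A hedged sketch; every identifier here (project, region, bucket, index and endpoint ids) is a placeholder, and a deployed index and endpoint must already exist.
Example
from langchain.vectorstores import MatchingEngine

engine = MatchingEngine.from_components(
    project_id="my-gcp-project",             # placeholder
    region="us-central1",                    # must match the GCS bucket's region
    gcs_bucket_name="my-embeddings-bucket",  # placeholder
    index_id="1234567890",                   # id of a pre-created index
    endpoint_id="0987654321",                # id of its deployed endpoint
)
engine.add_texts(["documents are stored in GCS; vectors live in the index"])
docs = engine.similarity_search("where do vectors live?", k=4)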
selected using the maximal marginal relevance.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nsimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores in the range [0, 1].\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.matching_engine.MatchingEngine.html"} {"id": "3db409c1c67d-3", "text": "Returns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str][source]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nkwargs \u2013 vectorstore specific parameters.\nReturns\nList of ids from adding the texts into the vectorstore.\nasync classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6\nasync asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.matching_engine.MatchingEngine.html"} {"id": "3db409c1c67d-4", "text": "Return docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 
List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 Optional[bool]\u00b6\nDelete by vector ID or other criteria.\nParameters\nids \u2013 List of ids to delete.\n**kwargs \u2013 Other keyword arguments that subclasses might use.\nReturns\nTrue if deletion is successful,\nFalse otherwise, None if not implemented.\nReturn type\nOptional[bool]\nclassmethod from_components(project_id: str, region: str, gcs_bucket_name: str, index_id: str, endpoint_id: str, credentials_path: Optional[str] = None, embedding: Optional[Embeddings] = None) \u2192 MatchingEngine[source]\u00b6\nTakes the object creation out of the constructor.\nParameters\nproject_id \u2013 The GCP project id.\nregion \u2013 The default location making the API calls. It must have\nthe same location as the GCS bucket and must be regional.\ngcs_bucket_name \u2013 The location where the vectors will be stored in\norder for the index to be created.\nindex_id \u2013 The id of the created index.\nendpoint_id \u2013 The id of the created endpoint.\ncredentials_path \u2013 (Optional) The path of the Google credentials on\nthe local file system.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.matching_engine.MatchingEngine.html"} {"id": "3db409c1c67d-5", "text": "the local file system.\nembedding \u2013 The Embeddings that will be used for\nembedding the texts.\nReturns\nA configured MatchingEngine with the texts added to the index.\nclassmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 MatchingEngine[source]\u00b6\nUse from_components instead.\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.matching_engine.MatchingEngine.html"} {"id": "3db409c1c67d-6", "text": "among selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.
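To illustrate the two MMR methods above (an editor's sketch; vector_store is any VectorStore with MMR support, such as the MatchingEngine constructed earlier):

# Fetch 20 candidates, then keep the 4 that best trade off relevance
# against diversity; lambda_mult=0 maximizes diversity, 1 maximizes
# similarity to the query.
docs = vector_store.max_marginal_relevance_search(
    "how long does an index update take?",
    k=4,
    fetch_k=20,
    lambda_mult=0.5,
)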
search(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to query.\nParameters\nquery \u2013 The string that will be used to search for similar documents.\nk \u2013 The number of neighbors that will be retrieved.\nReturns\nA list of k matching documents.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query vector.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.matching_engine.MatchingEngine.html"} {"id": "3db409c1c67d-7", "text": "**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 and 1 used to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.matching_engine.MatchingEngine.html"} {"id": "38707a2ceb26-0", "text": "langchain.vectorstores.clarifai.Clarifai\u00b6\nclass langchain.vectorstores.clarifai.Clarifai(user_id: Optional[str] = None, app_id: Optional[str] = None, pat: Optional[str] = None, number_of_docs: Optional[int] = None, api_base: Optional[str] = None)[source]\u00b6\nBases: VectorStore\nWrapper around Clarifai AI platform\u2019s vector store.\nTo use, you should have the clarifai python package installed.\nExample\nfrom langchain.vectorstores import Clarifai\nvectorstore = Clarifai(user_id=\"USER_ID\", app_id=\"APP_ID\", pat=\"PAT\")\nInitialize with Clarifai client.\nParameters\nuser_id (Optional[str], optional) \u2013 User ID. Defaults to None.\napp_id (Optional[str], optional) \u2013 App ID. Defaults to None.\npat (Optional[str], optional) \u2013 Personal access token. Defaults to None.\nnumber_of_docs (Optional[int], optional) \u2013 Number of documents to return\nduring vector search. Defaults to None.\napi_base (Optional[str], optional) \u2013 API base. 
Defaults to None.\nRaises\nValueError \u2013 If user ID, app ID or personal access token is not provided.\nMethods\n__init__([user_id,\u00a0app_id,\u00a0pat,\u00a0...])\nInitialize with Clarifai client.\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clarifai.Clarifai.html"} {"id": "38707a2ceb26-1", "text": "Run more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas,\u00a0ids])\nAdd texts to the Clarifai vectorstore.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\ndelete([ids])\nDelete by vector ID or other criteria.\nfrom_documents(documents[,\u00a0embedding,\u00a0...])\nCreate a Clarifai vectorstore from a list of documents.\nfrom_texts(texts[,\u00a0embedding,\u00a0metadatas,\u00a0...])\nCreate a Clarifai vectorstore from a list of texts.\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k])\nRun similarity search using Clarifai.\nsimilarity_search_by_vector(embedding[,\u00a0k])", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clarifai.Clarifai.html"} {"id": "38707a2ceb26-2", "text": "similarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores in the range [0, 1].\nsimilarity_search_with_score(query[,\u00a0k,\u00a0...])\nRun similarity search with score using Clarifai.\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\n(List[Document] (documents) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the 
vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) \u2192 List[str][source]\u00b6\nAdd texts to the Clarifai vectorstore. This will push the text\nto a Clarifai application.\nThe application uses a base workflow that creates and stores an embedding\nfor each text.\nMake sure you are using a base workflow that is compatible with text\n(such as Language Understanding).\nParameters\ntexts (Iterable[str]) \u2013 Texts to add to the vectorstore.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clarifai.Clarifai.html"} {"id": "38707a2ceb26-3", "text": "Parameters\ntexts (Iterable[str]) \u2013 Texts to add to the vectorstore.\nmetadatas (Optional[List[dict]], optional) \u2013 Optional list of metadatas.\nids (Optional[List[str]], optional) \u2013 Optional list of IDs.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6\nasync asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clarifai.Clarifai.html"} {"id": "38707a2ceb26-4", "text": "Return docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 Optional[bool]\u00b6\nDelete by vector ID or other criteria.\nParameters\nids \u2013 List of ids to delete.\n**kwargs \u2013 Other keyword arguments that subclasses might use.\nReturns\nTrue if deletion is successful,\nFalse otherwise, None if not implemented.\nReturn type\nOptional[bool]\nclassmethod from_documents(documents: List[Document], embedding: Optional[Embeddings] = None, user_id: Optional[str] = None, app_id: Optional[str] = None, pat: Optional[str] = None, number_of_docs: Optional[int] = None, api_base: Optional[str] = None, **kwargs: Any) \u2192 Clarifai[source]\u00b6\nCreate a Clarifai vectorstore from a list of documents.\nParameters\nuser_id (str) \u2013 User ID.
app_id (str) \u2013 App ID.\ndocuments (List[Document]) \u2013 List of documents to add.\npat (Optional[str]) \u2013 Personal access token. Defaults to None.\nnumber_of_docs (Optional[int]) \u2013 Number of documents to return\nduring vector search. Defaults to None.\napi_base (Optional[str]) \u2013 API base. Defaults to None.\nReturns\nClarifai vectorstore.\nReturn type\nClarifai", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clarifai.Clarifai.html"} {"id": "38707a2ceb26-5", "text": "Returns\nClarifai vectorstore.\nReturn type\nClarifai\nclassmethod from_texts(texts: List[str], embedding: Optional[Embeddings] = None, metadatas: Optional[List[dict]] = None, user_id: Optional[str] = None, app_id: Optional[str] = None, pat: Optional[str] = None, number_of_docs: Optional[int] = None, api_base: Optional[str] = None, **kwargs: Any) \u2192 Clarifai[source]\u00b6\nCreate a Clarifai vectorstore from a list of texts.\nParameters\nuser_id (str) \u2013 User ID.\napp_id (str) \u2013 App ID.\ntexts (List[str]) \u2013 List of texts to add.\npat (Optional[str]) \u2013 Personal access token. Defaults to None.\nnumber_of_docs (Optional[int]) \u2013 Number of documents to return\nduring vector search. Defaults to None.\napi_base (Optional[str]) \u2013 API base. Defaults to None.\nmetadatas (Optional[List[dict]]) \u2013 Optional list of metadatas. Defaults to\nNone.\nReturns\nClarifai vectorstore.\nReturn type\nClarifai
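A usage sketch for from_texts (an editor's addition; the user ID, app ID, and personal access token are placeholders for a Clarifai application whose base workflow supports text):

from langchain.vectorstores import Clarifai

clarifai_vector_db = Clarifai.from_texts(
    texts=["LangChain is a framework for building LLM applications."],
    metadatas=[{"source": "notes"}],
    user_id="USER_ID",   # placeholder
    app_id="APP_ID",     # placeholder
    pat="CLARIFAI_PAT",  # placeholder
)
docs = clarifai_vector_db.similarity_search("What is LangChain?")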
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clarifai.Clarifai.html"} {"id": "38707a2ceb26-6", "text": "of diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document][source]\u00b6\nRun similarity search using Clarifai.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clarifai.Clarifai.html"} {"id": "38707a2ceb26-7", "text": "Parameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query vector.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 and 1 used to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)
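For example (an editor's sketch; assumes the underlying store honors the score_threshold kwarg described above):

# Keep only documents whose normalized relevance score is at least 0.8.
results = clarifai_vector_db.similarity_search_with_relevance_scores(
    "What is LangChain?",
    k=4,
    score_threshold=0.8,
)
for doc, score in results:
    print(score, doc.page_content)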
similarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None, namespace: Optional[str] = None, **kwargs: Any) \u2192 List[Tuple[Document, float]][source]\u00b6\nRun similarity search with score using Clarifai.\nParameters\nquery (str) \u2013 Query text to search for.\nk (int) \u2013 Number of results to return. Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to\nNone.\nReturns\nList of documents most similar to the query text.\nReturn type\nList[Document]", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clarifai.Clarifai.html"} {"id": "64330cb4f5df-0", "text": "langchain.vectorstores.elastic_vector_search.ElasticKnnSearch\u00b6\nclass langchain.vectorstores.elastic_vector_search.ElasticKnnSearch(index_name: str, embedding: Embeddings, es_connection: Optional['Elasticsearch'] = None, es_cloud_id: Optional[str] = None, es_user: Optional[str] = None, es_password: Optional[str] = None, vector_query_field: Optional[str] = 'vector', query_field: Optional[str] = 'text')[source]\u00b6\nBases: ElasticVectorSearch\nA class for performing k-Nearest Neighbors (k-NN) search on an Elasticsearch index.\nThe class is designed for a text search scenario where documents are text strings\nand their embeddings are vector representations of those strings.\nInitializes an instance of the ElasticKnnSearch class and sets up the Elasticsearch client.\nParameters\nindex_name \u2013 The name of the Elasticsearch index.\nembedding \u2013 An instance of the Embeddings class, used to generate vector\nrepresentations of text strings.\nes_connection \u2013 An existing Elasticsearch connection.\nes_cloud_id \u2013 The Cloud ID of the Elasticsearch instance. Required if\ncreating a new connection.\nes_user \u2013 The username for the Elasticsearch instance. Required if\ncreating a new connection.\nes_password \u2013 The password for the Elasticsearch instance. Required if\ncreating a new connection.\nMethods\n__init__(index_name,\u00a0embedding[,\u00a0...])\nInitializes an instance of the ElasticKnnSearch class and sets up the\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticKnnSearch.html"} {"id": "64330cb4f5df-1", "text": "Run more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas,\u00a0...])\nRun more texts through the embeddings and add to the vectorstore.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\nclient_search(client,\u00a0index_name,\u00a0...)\ncreate_index(client,\u00a0index_name,\u00a0mapping)\ndelete([ids])\nDelete by vector IDs.\nfrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and 
embeddings.\nfrom_texts(texts,\u00a0embedding[,\u00a0metadatas,\u00a0...])\nConstruct ElasticVectorSearch wrapper from raw documents.\nknn_hybrid_search([query,\u00a0k,\u00a0query_vector,\u00a0...])\nPerforms a hybrid k-nearest neighbor (k-NN) and text-based search on the\nknn_search([query,\u00a0k,\u00a0query_vector,\u00a0...])\nPerforms a k-nearest neighbor (k-NN) search on the Elasticsearch index.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticKnnSearch.html"} {"id": "64330cb4f5df-2", "text": "Performs a k-nearest neighbor (k-NN) search on the Elasticsearch index.\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k,\u00a0filter])\nReturn docs most similar to query.\nsimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores in the range [0, 1].\nsimilarity_search_with_score(query[,\u00a0k,\u00a0filter])\nReturn docs most similar to query.\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticKnnSearch.html"} {"id": "64330cb4f5df-3", "text": "Returns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, refresh_indices: bool = True, ids: Optional[List[str]] = None, **kwargs: Any) \u2192 List[str]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nrefresh_indices \u2013 bool to refresh ElasticSearch indices\nReturns\nList of ids from adding the texts into the vectorstore.\nasync classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.
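The async variants mirror their synchronous counterparts and can be awaited from an event loop, e.g. (an editor's sketch; the index name and credentials are placeholders):

import asyncio
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.elastic_vector_search import ElasticKnnSearch

knn_store = ElasticKnnSearch(
    index_name="test_knn_index",  # placeholder
    embedding=OpenAIEmbeddings(),
    es_cloud_id="my-cloud-id",    # placeholder; or pass es_connection instead
    es_user="elastic",            # placeholder
    es_password="password",       # placeholder
)

async def main() -> None:
    # Same semantics as similarity_search, but awaitable.
    docs = await knn_store.asimilarity_search("what is a vector store?", k=4)
    print(len(docs))

asyncio.run(main())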
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6\nasync asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticKnnSearch.html"} {"id": "64330cb4f5df-4", "text": "Return docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.\nclient_search(client: Any, index_name: str, script_query: Dict, size: int) \u2192 Any\u00b6\ncreate_index(client: Any, index_name: str, mapping: Dict) \u2192 None\u00b6\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 None\u00b6\nDelete by vector IDs.\nParameters\nids \u2013 List of ids to delete.\nclassmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, elasticsearch_url: Optional[str] = None, index_name: Optional[str] = None, refresh_indices: bool = True, **kwargs: Any) \u2192 ElasticVectorSearch\u00b6\nConstruct ElasticVectorSearch wrapper from raw documents.\nThis is a user-friendly interface that:\nEmbeds documents.\nCreates a new index for the embeddings in the Elasticsearch instance.\nAdds the documents to the newly created Elasticsearch index.\nThis is intended to be a quick way to get started.\nExample\nfrom langchain import ElasticVectorSearch\nfrom langchain.embeddings import OpenAIEmbeddings", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticKnnSearch.html"} {"id": "64330cb4f5df-5", "text": "from langchain import ElasticVectorSearch\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nelastic_vector_search = ElasticVectorSearch.from_texts(\n texts,\n embeddings,\n elasticsearch_url=\"http://localhost:9200\"\n)\nknn_hybrid_search(query: Optional[str] = None, k: Optional[int] = 10, query_vector: Optional[List[float]] = None, model_id: Optional[str] = None, size: Optional[int] = 10, source: Optional[bool] = True, knn_boost: Optional[float] = 0.9, query_boost: Optional[float] = 0.1, fields: Optional[Union[List[Mapping[str, Any]], Tuple[Mapping[str, Any], ...]]] = None) \u2192 Dict[Any, Any][source]\u00b6\nPerforms a hybrid k-nearest neighbor (k-NN) and text-based search on the Elasticsearch index.\nThe search can be conducted using either a raw query vector or a model ID.\nThe method first generates\nthe body of the k-NN search query and the text-based query, which can be\ninterpreted by Elasticsearch.\nIt then performs the hybrid search on the Elasticsearch index and returns the\nresults.\nParameters\nquery \u2013 The query or queries to be 
used for the search. Required if\nquery_vector is not provided.\nk \u2013 The number of nearest neighbors to return. Defaults to 10.\nquery_vector \u2013 The query vector to be used for the search. Required if\nquery is not provided.\nmodel_id \u2013 The ID of the model to use for generating the query vector, if\nquery is provided.\nsize \u2013 The number of search hits to return. Defaults to 10.\nsource \u2013 Whether to include the source of each hit in the results.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticKnnSearch.html"} {"id": "64330cb4f5df-6", "text": "source \u2013 Whether to include the source of each hit in the results.\nknn_boost \u2013 The boost factor for the k-NN part of the search.\nquery_boost \u2013 The boost factor for the text-based part of the search.\nfields \u2013 The fields to include in the source of each hit. If None, all fields are\nincluded. Defaults to None.\nvector_query_field \u2013 Field name to use in knn search if not default \u2018vector\u2019\nquery_field \u2013 Field name to use in search if not default \u2018text\u2019\nReturns\nThe search results.\nRaises\nValueError \u2013 If neither query_vector nor model_id is provided, or if\n both are provided.\nknn_search(query: Optional[str] = None, k: Optional[int] = 10, query_vector: Optional[List[float]] = None, model_id: Optional[str] = None, size: Optional[int] = 10, source: Optional[bool] = True, fields: Optional[Union[List[Mapping[str, Any]], Tuple[Mapping[str, Any], ...]]] = None) \u2192 Dict[source]\u00b6\nPerforms a k-nearest neighbor (k-NN) search on the Elasticsearch index.\nThe search can be conducted using either a raw query vector or a model ID.\nThe method first generates\nthe body of the search query, which can be interpreted by Elasticsearch.\nIt then performs the k-NN\nsearch on the Elasticsearch index and returns the results.\nParameters\nquery \u2013 The query or queries to be used for the search. Required if\nquery_vector is not provided.\nk \u2013 The number of nearest neighbors to return. Defaults to 10.\nquery_vector \u2013 The query vector to be used for the search. Required if\nquery is not provided.\nmodel_id \u2013 The ID of the model to use for generating the query vector, if\nquery is provided.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticKnnSearch.html"} {"id": "64330cb4f5df-7", "text": "query is provided.\nsize \u2013 The number of search hits to return. Defaults to 10.\nsource \u2013 Whether to include the source of each hit in the results.\nfields \u2013 The fields to include in the source of each hit. If None, all\nfields are included.\nvector_query_field \u2013 Field name to use in knn search if not default \u2018vector\u2019\nReturns\nThe search results.\nRaises\nValueError \u2013 If neither query_vector nor model_id is provided, or if\n both are provided.
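A query sketch for the two k-NN methods (an editor's addition; "my-model-id" stands for a text embedding model already deployed in Elasticsearch, and knn_store is the ElasticKnnSearch instance from the sketch above):

# Pure k-NN search: the query text is embedded server-side via model_id.
hits = knn_store.knn_search(
    query="what is a vector store?",
    k=10,
    model_id="my-model-id",
)

# Hybrid search: blend k-NN scores and text-match scores using the
# documented default boosts (0.9 and 0.1).
hybrid_hits = knn_store.knn_hybrid_search(
    query="what is a vector store?",
    k=10,
    model_id="my-model-id",
    knn_boost=0.9,
    query_boost=0.1,
)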
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticKnnSearch.html"} {"id": "64330cb4f5df-8", "text": "k \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query vector.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 and 1 used to", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticKnnSearch.html"} {"id": "64330cb4f5df-9", "text": "score_threshold: Optional, a floating point value between 0 and 1 used to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)\nsimilarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.\n:param query: Text to look up documents similar to.\n:param k: Number of Documents to return. 
Defaults to 4.\nReturns\nList of Documents most similar to the query.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticKnnSearch.html"} {"id": "3829f5d087d5-0", "text": "langchain.vectorstores.singlestoredb.SingleStoreDBRetriever\u00b6\nclass langchain.vectorstores.singlestoredb.SingleStoreDBRetriever(*, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, vectorstore: SingleStoreDB, search_type: str = 'similarity', search_kwargs: dict = None, k: int = 4)[source]\u00b6\nBases: VectorStoreRetriever\nRetriever for SingleStoreDB vector stores.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam k: int = 4\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam search_kwargs: dict [Optional]\u00b6\nparam search_type: str = 'similarity'\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a retriever with its\nuse case.\nparam vectorstore: SingleStoreDB [Required]\u00b6\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nAdd documents to vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nAdd documents to vectorstore.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.SingleStoreDBRetriever.html"} {"id": "3829f5d087d5-1", "text": "Add documents to vectorstore.\nasync aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nAsynchronously get documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nRetrieve documents relevant to a query.\n:param query: string to find relevant documents for\n:param callbacks: Callback manager or list of callbacks\n:param tags: Optional list of tags associated with the retriever. Defaults to None\nThese tags will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nParameters\nmetadata \u2013 Optional metadata associated with the retriever. 
Defaults to None\nThis metadata will be associated with each call to this retriever,\nand passed as arguments to the handlers defined in callbacks.\nReturns\nList of relevant documents\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_search_type\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate search type.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.SingleStoreDBRetriever.html"} {"id": "3829f5d087d5-2", "text": "validator validate_search_type\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate search type.\nallowed_search_types: ClassVar[Collection[str]] = ('similarity',)\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.singlestoredb.SingleStoreDBRetriever.html"} {"id": "f737deac57cd-0", "text": "langchain.vectorstores.pgvector.CollectionStore\u00b6\nclass langchain.vectorstores.pgvector.CollectionStore(**kwargs)[source]\u00b6\nBases: BaseModel\nA simple constructor that allows initialization from kwargs.\nSets attributes on the constructed instance using the names and\nvalues in kwargs.\nOnly keys that are present as\nattributes of the instance\u2019s class are allowed. 
These could be,\nfor example, any mapped columns or relationships.\nMethods\n__init__(**kwargs)\nA simple constructor that allows initialization from kwargs.\nget_by_name(session,\u00a0name)\nget_or_create(session,\u00a0name[,\u00a0cmetadata])\nGet or create a collection.\nAttributes\ncmetadata\nembeddings\nmetadata\nname\nregistry\nuuid\nclassmethod get_by_name(session: Session, name: str) \u2192 Optional[CollectionStore][source]\u00b6\nclassmethod get_or_create(session: Session, name: str, cmetadata: Optional[dict] = None) \u2192 Tuple[CollectionStore, bool][source]\u00b6\nGet or create a collection.\nReturns [Collection, bool] where the bool is True if the collection was created.\ncmetadata\u00b6\nembeddings\u00b6\nmetadata: MetaData = MetaData()\u00b6\nname\u00b6\nregistry: RegistryType = \u00b6\nuuid\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgvector.CollectionStore.html"} {"id": "ee3a563ebd5c-0", "text": "langchain.vectorstores.elastic_vector_search.ElasticVectorSearch\u00b6\nclass langchain.vectorstores.elastic_vector_search.ElasticVectorSearch(elasticsearch_url: str, index_name: str, embedding: Embeddings, *, ssl_verify: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: VectorStore, ABC\nWrapper around Elasticsearch as a vector database.\nTo connect to an Elasticsearch instance that does not require\nlogin credentials, pass the Elasticsearch URL and index name along with the\nembedding object to the constructor.\nExample\nfrom langchain import ElasticVectorSearch\nfrom langchain.embeddings import OpenAIEmbeddings\nembedding = OpenAIEmbeddings()\nelastic_vector_search = ElasticVectorSearch(\n elasticsearch_url=\"http://localhost:9200\",\n index_name=\"test_index\",\n embedding=embedding\n)\nTo connect to an Elasticsearch instance that requires login credentials,\nincluding Elastic Cloud, use the Elasticsearch URL format\nhttps://username:password@es_host:9243. 
For example, to connect to Elastic\nCloud, create the Elasticsearch URL with the required authentication details and\npass it to the ElasticVectorSearch constructor as the named parameter\nelasticsearch_url.\nYou can obtain your Elastic Cloud URL and login credentials by logging in to the\nElastic Cloud console at https://cloud.elastic.co, selecting your deployment, and\nnavigating to the \u201cDeployments\u201d page.\nTo obtain your Elastic Cloud password for the default \u201celastic\u201d user:\nLog in to the Elastic Cloud console at https://cloud.elastic.co\nGo to \u201cSecurity\u201d > \u201cUsers\u201d\nLocate the \u201celastic\u201d user and click \u201cEdit\u201d\nClick \u201cReset password\u201d\nFollow the prompts to reset the password\nThe format for Elastic Cloud URLs is\nhttps://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.\nExample\nfrom langchain import ElasticVectorSearch", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticVectorSearch.html"} {"id": "ee3a563ebd5c-1", "text": "Example\nfrom langchain import ElasticVectorSearch\nfrom langchain.embeddings import OpenAIEmbeddings\nembedding = OpenAIEmbeddings()\nelastic_host = \"cluster_id.region_id.gcp.cloud.es.io\"\nelasticsearch_url = f\"https://username:password@{elastic_host}:9243\"\nelastic_vector_search = ElasticVectorSearch(\n elasticsearch_url=elasticsearch_url,\n index_name=\"test_index\",\n embedding=embedding\n)\nParameters\nelasticsearch_url (str) \u2013 The URL for the Elasticsearch instance.\nindex_name (str) \u2013 The name of the Elasticsearch index for the embeddings.\nembedding (Embeddings) \u2013 An object that provides the ability to embed text.\nIt should be an instance of a class that subclasses the Embeddings\nabstract base class, such as OpenAIEmbeddings()\nRaises\nValueError \u2013 If the elasticsearch python package is not installed.\nInitialize with necessary components.\nMethods\n__init__(elasticsearch_url,\u00a0index_name,\u00a0...)\nInitialize with necessary components.\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas,\u00a0...])\nRun more texts through the embeddings and add to the vectorstore.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticVectorSearch.html"} {"id": "ee3a563ebd5c-2", "text": "Return docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding 
vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\nclient_search(client,\u00a0index_name,\u00a0...)\ncreate_index(client,\u00a0index_name,\u00a0mapping)\ndelete([ids])\nDelete by vector IDs.\nfrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nfrom_texts(texts,\u00a0embedding[,\u00a0metadatas,\u00a0...])\nConstruct ElasticVectorSearch wrapper from raw documents.\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k,\u00a0filter])\nReturn docs most similar to query.\nsimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores in the range [0, 1].\nsimilarity_search_with_score(query[,\u00a0k,\u00a0filter])\nReturn docs most similar to query.\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticVectorSearch.html"} {"id": "ee3a563ebd5c-3", "text": "Run more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, refresh_indices: bool = True, ids: Optional[List[str]] = None, **kwargs: Any) \u2192 List[str][source]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nrefresh_indices \u2013 bool to refresh ElasticSearch indices\nReturns\nList of ids from adding the texts into the vectorstore.\nasync classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nasync 
amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6\nasync asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.\nclient_search(client: Any, index_name: str, script_query: Dict, size: int) \u2192 Any[source]\u00b6\ncreate_index(client: Any, index_name: str, mapping: Dict) \u2192 None[source]\u00b6\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 None[source]\u00b6\nDelete by vector IDs.\nParameters\nids \u2013 List of ids to delete.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticVectorSearch.html"} {"id": "ee3a563ebd5c-5", "text": "Delete by vector IDs.\nParameters\nids \u2013 List of ids to delete.\nclassmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, elasticsearch_url: Optional[str] = None, index_name: Optional[str] = None, refresh_indices: bool = True, **kwargs: Any) \u2192 ElasticVectorSearch[source]\u00b6\nConstruct ElasticVectorSearch wrapper from raw documents.\nThis is a user-friendly interface that:\nEmbeds documents.\nCreates a new index for the embeddings in the Elasticsearch instance.\nAdds the documents to the newly created Elasticsearch index.\nThis is intended to be a quick way to get started.\nExample\nfrom langchain import ElasticVectorSearch\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nelastic_vector_search = ElasticVectorSearch.from_texts(\n texts,\n embeddings,\n elasticsearch_url=\"http://localhost:9200\"\n)\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. 
\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticVectorSearch.html"} {"id": "ee3a563ebd5c-6", "text": "Defaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticVectorSearch.html"} {"id": "ee3a563ebd5c-7", "text": "List of Documents most similar to the query vector.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 to 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)\nsimilarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) \u2192 List[Tuple[Document, float]][source]\u00b6\nReturn docs most similar to query, along with scores.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of (Document, score) tuples most similar to the query.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elastic_vector_search.ElasticVectorSearch.html"} {"id": "e750ba3343ae-0", "text": "langchain.vectorstores.utils.maximal_marginal_relevance\u00b6\nlangchain.vectorstores.utils.maximal_marginal_relevance(query_embedding: ndarray, embedding_list: list, lambda_mult: float = 0.5, k: int = 4) \u2192 List[int][source]\u00b6\nCalculate maximal marginal relevance: return the indices of up to k embeddings from embedding_list, chosen to balance similarity to query_embedding against diversity among the selected results.
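\nA toy illustration with 2-D vectors (assuming numpy is installed; the vectors and expected behavior described in the comment are illustrative):\nExample\nimport numpy as np\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nquery = np.array([1.0, 0.0])\ncandidates = [[1.0, 0.0], [0.99, 0.01], [0.0, 1.0]]\n# Returns indices into candidates: the best match is picked first,\n# after which a diverse pick is preferred over a near-duplicate.\nidxs = maximal_marginal_relevance(query, candidates, lambda_mult=0.5, k=2)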
", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.utils.maximal_marginal_relevance.html"} {"id": "ce9af711bc0a-0", "text": "langchain.vectorstores.pgembedding.BaseModel\u00b6\nclass langchain.vectorstores.pgembedding.BaseModel(**kwargs: Any)[source]\u00b6\nBases: Base\nA simple constructor that allows initialization from kwargs.\nSets attributes on the constructed instance using the names and\nvalues in kwargs.\nOnly keys that are present as\nattributes of the instance\u2019s class are allowed. These could be,\nfor example, any mapped columns or relationships.\nMethods\n__init__(**kwargs)\nA simple constructor that allows initialization from kwargs.\nAttributes\nmetadata\nregistry\nuuid\nmetadata: MetaData = MetaData()\u00b6\nregistry: RegistryType = <sqlalchemy.orm.decl_api.registry object>\u00b6\nuuid = Column(None, UUID(), table=None, primary_key=True, nullable=False, default=CallableColumnDefault())\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgembedding.BaseModel.html"} {"id": "349d95e71ee4-0", "text": "langchain.vectorstores.myscale.has_mul_sub_str\u00b6\nlangchain.vectorstores.myscale.has_mul_sub_str(s: str, *args: Any) \u2192 bool[source]\u00b6\nCheck if a string contains multiple substrings.\nParameters\ns \u2013 string to check.\n*args \u2013 substrings to check.\nReturns\nTrue if all substrings are in the string, False otherwise.
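\nA quick illustration:\nExample\nfrom langchain.vectorstores.myscale import has_mul_sub_str\nhas_mul_sub_str(\"langchain\", \"lang\", \"chain\")  # True\nhas_mul_sub_str(\"langchain\", \"lang\", \"smith\")  # False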
", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.myscale.has_mul_sub_str.html"} {"id": "8f4bb1d88b15-0", "text": "langchain.vectorstores.base.VectorStore\u00b6\nclass langchain.vectorstores.base.VectorStore[source]\u00b6\nBases: ABC\nInterface for vector stores.\nMethods\n__init__()\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\ndelete([ids])\nDelete by vector ID or other criteria.\nfrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nfrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.base.VectorStore.html"} {"id": "8f4bb1d88b15-1", "text": "max_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nsimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores in the range [0, 1].\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str][source]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str][source]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str][source]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nabstract add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str][source]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nParameters", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.base.VectorStore.html"} {"id": "8f4bb1d88b15-2", "text": "texts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nkwargs \u2013 vectorstore specific parameters\nReturns\nList of ids from adding the texts into the vectorstore.\nasync classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST[source]\u00b6\nReturn VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST[source]\u00b6\nReturn VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever[source]\u00b6
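\nWhile as_retriever carries no docstring here, it wraps the store in the standard retriever interface; a minimal sketch (db stands for any concrete VectorStore instance, and the search parameters are illustrative):\nExample\nretriever = db.as_retriever(search_type=\"mmr\", search_kwargs={\"k\": 5})\ndocs = retriever.get_relevant_documents(\"renewable energy policy\")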
\nasync asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to query.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.base.VectorStore.html"} {"id": "8f4bb1d88b15-3", "text": "async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]][source]\u00b6\nReturn docs most similar to query.\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 Optional[bool][source]\u00b6\nDelete by vector ID or other criteria.\nParameters\nids \u2013 List of ids to delete.\n**kwargs \u2013 Other keyword arguments that subclasses might use.\nReturns\nTrue if deletion is successful,\nFalse otherwise, None if not implemented.\nReturn type\nOptional[bool]\nclassmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST[source]\u00b6\nReturn VectorStore initialized from documents and embeddings.\nabstract classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST[source]\u00b6\nReturn VectorStore initialized from texts and embeddings.\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.base.VectorStore.html"} {"id": "8f4bb1d88b15-4", "text": "lambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.
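\nFor example, fetch_k widens the candidate pool before re-ranking, while lambda_mult tunes the similarity/diversity trade-off; a sketch (db again stands for a concrete store that implements MMR search):\nExample\n# lambda_mult closer to 0 favors diversity among the returned docs\ndocs = db.max_marginal_relevance_search(\n    \"renewable energy\", k=4, fetch_k=20, lambda_mult=0.3\n)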
\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to query using specified search type.\nabstract similarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to query.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.base.VectorStore.html"} {"id": "8f4bb1d88b15-5", "text": "List of Documents most similar to the query vector.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]][source]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 to 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)
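\nA sketch of threshold-filtered retrieval (db stands for a concrete VectorStore; the threshold value is illustrative):\nExample\ndocs_and_scores = db.similarity_search_with_relevance_scores(\n    \"renewable energy\", k=4, score_threshold=0.8\n)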
", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.base.VectorStore.html"} {"id": "9b5378ddd0a0-0", "text": "langchain.vectorstores.annoy.Annoy\u00b6\nclass langchain.vectorstores.annoy.Annoy(embedding_function: Callable, index: Any, metric: str, docstore: Docstore, index_to_docstore_id: Dict[int, str])[source]\u00b6\nBases: VectorStore\nWrapper around Annoy vector database.\nTo use, you should have the annoy python package installed.\nExample\nfrom langchain import Annoy\ndb = Annoy(embedding_function, index, metric, docstore, index_to_docstore_id)\nInitialize with necessary components.\nMethods\n__init__(embedding_function,\u00a0index,\u00a0metric,\u00a0...)\nInitialize with necessary components.\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.annoy.Annoy.html"} {"id": "9b5378ddd0a0-1", "text": "asimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\ndelete([ids])\nDelete by vector ID or other criteria.\nfrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nfrom_embeddings(text_embeddings,\u00a0embedding)\nConstruct Annoy wrapper from embeddings.\nfrom_texts(texts,\u00a0embedding[,\u00a0metadatas,\u00a0...])\nConstruct Annoy wrapper from raw documents.\nload_local(folder_path,\u00a0embeddings)\nLoad Annoy index, docstore, and index_to_docstore_id from disk.\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nprocess_index_results(idxs,\u00a0dists)\nTurn Annoy results into a list of documents and scores.\nsave_local(folder_path[,\u00a0prefault])\nSave Annoy index, docstore, and index_to_docstore_id to disk.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k,\u00a0search_k])\nReturn docs most similar to query.\nsimilarity_search_by_index(docstore_index[,\u00a0...])\nReturn docs most similar to docstore_index.\nsimilarity_search_by_vector(embedding[,\u00a0k,\u00a0...])\nReturn docs most similar to embedding vector.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores in the range [0, 1].", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.annoy.Annoy.html"} {"id": "9b5378ddd0a0-2", "text": "similarity_search_with_score(query[,\u00a0k,\u00a0...])\nReturn docs most similar to query.\nsimilarity_search_with_score_by_index(...[,\u00a0...])\nReturn docs most similar to query.\nsimilarity_search_with_score_by_vector(embedding)\nReturn docs most similar to query.\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str][source]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nkwargs \u2013 vectorstore specific parameters\nReturns\nList of ids from adding the texts into the vectorstore.\nasync classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.annoy.Annoy.html"} {"id": "9b5378ddd0a0-3", "text": "Return VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6\nasync asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 Optional[bool]\u00b6\nDelete by vector ID or other criteria.\nParameters\nids \u2013 List of ids to delete.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.annoy.Annoy.html"} {"id": "9b5378ddd0a0-4", "text": "**kwargs \u2013 Other keyword arguments that subclasses might use.\nReturns\nTrue if deletion is successful,\nFalse otherwise, None if not implemented.\nReturn type\nOptional[bool]\nclassmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nclassmethod from_embeddings(text_embeddings: List[Tuple[str, List[float]]], embedding: Embeddings, metadatas: Optional[List[dict]] = None, metric: str = 'angular', trees: int = 100, n_jobs: int = -1, **kwargs: Any) \u2192 Annoy[source]\u00b6\nConstruct Annoy wrapper from embeddings.\nParameters\ntext_embeddings \u2013 List of tuples of (text, embedding)\nembedding \u2013 Embedding function to use.\nmetadatas \u2013 List of metadata dictionaries to associate with documents.\nmetric \u2013 Metric to use for indexing. Defaults to \u201cangular\u201d.\ntrees \u2013 Number of trees to use for indexing. Defaults to 100.\nn_jobs \u2013 Number of jobs to use for indexing. Defaults to -1.\nThis is a user-friendly interface that:\nCreates an in-memory docstore with the provided embeddings.\nInitializes the Annoy database.\nThis is intended to be a quick way to get started.\nExample\nfrom langchain import Annoy\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\ntext_embeddings = embeddings.embed_documents(texts)\ntext_embedding_pairs = list(zip(texts, text_embeddings))\ndb = Annoy.from_embeddings(text_embedding_pairs, embeddings)", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.annoy.Annoy.html"} {"id": "9b5378ddd0a0-5", "text": "classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, metric: str = 'angular', trees: int = 100, n_jobs: int = -1, **kwargs: Any) \u2192 Annoy[source]\u00b6\nConstruct Annoy wrapper from raw documents.\nParameters\ntexts \u2013 List of documents to index.\nembedding \u2013 Embedding function to use.\nmetadatas \u2013 List of metadata dictionaries to associate with documents.\nmetric \u2013 Metric to use for indexing. Defaults to \u201cangular\u201d.\ntrees \u2013 Number of trees to use for indexing. Defaults to 100.\nn_jobs \u2013 Number of jobs to use for indexing. Defaults to -1.\nThis is a user-friendly interface that:\nEmbeds documents.\nCreates an in-memory docstore.\nInitializes the Annoy database.\nThis is intended to be a quick way to get started.\nExample\nfrom langchain import Annoy\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nindex = Annoy.from_texts(texts, embeddings)\nclassmethod load_local(folder_path: str, embeddings: Embeddings) \u2192 Annoy[source]\u00b6\nLoad Annoy index, docstore, and index_to_docstore_id from disk.\nParameters\nfolder_path \u2013 folder path to load index, docstore,\nand index_to_docstore_id from.\nembeddings \u2013 Embeddings to use when generating queries.\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs selected using the maximal marginal relevance.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.annoy.Annoy.html"} {"id": "9b5378ddd0a0-6", "text": "Maximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nk \u2013 Number of Documents to return. Defaults to 4.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nprocess_index_results(idxs: List[int], dists: List[float]) \u2192 List[Tuple[Document, float]][source]\u00b6\nTurn Annoy results into a list of documents and scores.\nParameters\nidxs \u2013 List of indices of the documents in the index.\ndists \u2013 List of distances of the documents in the index.\nReturns\nList of Documents and scores.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.annoy.Annoy.html"} {"id": "9b5378ddd0a0-7", "text": "save_local(folder_path: str, prefault: bool = False) \u2192 None[source]\u00b6\nSave Annoy index, docstore, and index_to_docstore_id to disk.\nParameters\nfolder_path \u2013 folder path to save index, docstore,\nand index_to_docstore_id to.\nprefault \u2013 Whether to pre-load the index into memory.
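\nA round-trip sketch of save_local and load_local (the folder name is illustrative; db and embeddings are as in the examples above):\nExample\ndb.save_local(\"my_annoy_index\")\nrestored = Annoy.load_local(\"my_annoy_index\", embeddings)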
\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, search_k: int = -1, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nsearch_k \u2013 inspect up to search_k nodes, which defaults\nto n_trees * n if not provided.\nReturns\nList of Documents most similar to the query.\nsimilarity_search_by_index(docstore_index: int, k: int = 4, search_k: int = -1, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to docstore_index.\nParameters\ndocstore_index \u2013 Index of document in docstore\nk \u2013 Number of Documents to return. Defaults to 4.\nsearch_k \u2013 inspect up to search_k nodes, which defaults\nto n_trees * n if not provided.\nReturns\nList of Documents most similar to the document at docstore_index.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, search_k: int = -1, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.annoy.Annoy.html"} {"id": "9b5378ddd0a0-8", "text": "k \u2013 Number of Documents to return. Defaults to 4.\nsearch_k \u2013 inspect up to search_k nodes, which defaults\nto n_trees * n if not provided.\nReturns\nList of Documents most similar to the embedding.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 to 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)\nsimilarity_search_with_score(query: str, k: int = 4, search_k: int = -1) \u2192 List[Tuple[Document, float]][source]\u00b6\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nsearch_k \u2013 inspect up to search_k nodes, which defaults\nto n_trees * n if not provided.\nReturns\nList of Documents most similar to the query and score for each.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.annoy.Annoy.html"} {"id": "9b5378ddd0a0-9", "text": "similarity_search_with_score_by_index(docstore_index: int, k: int = 4, search_k: int = -1) \u2192 List[Tuple[Document, float]][source]\u00b6\nReturn docs most similar to the document at docstore_index.\nParameters\ndocstore_index \u2013 Index of document in docstore\nk \u2013 Number of Documents to return. Defaults to 4.\nsearch_k \u2013 inspect up to search_k nodes, which defaults\nto n_trees * n if not provided.\nReturns\nList of Documents most similar to the query and score for each.\nsimilarity_search_with_score_by_vector(embedding: List[float], k: int = 4, search_k: int = -1) \u2192 List[Tuple[Document, float]][source]\u00b6\nReturn docs most similar to the embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nsearch_k \u2013 inspect up to search_k nodes, which defaults\nto n_trees * n if not provided.\nReturns\nList of Documents most similar to the query and score for each.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.annoy.Annoy.html"} {"id": "751e2b0b1a69-0", "text": "langchain.vectorstores.docarray.base.DocArrayIndex\u00b6\nclass langchain.vectorstores.docarray.base.DocArrayIndex(doc_index: BaseDocIndex, embedding: Embeddings)[source]\u00b6\nBases: VectorStore, ABC\nInitialize a vector store from DocArray\u2019s DocIndex.\nMethods\n__init__(doc_index,\u00a0embedding)\nInitialize a vector store from DocArray's DocIndex.\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\ndelete([ids])\nDelete by vector ID or other criteria.\nfrom_documents(documents,\u00a0embedding,\u00a0**kwargs)", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.docarray.base.DocArrayIndex.html"} {"id": "751e2b0b1a69-1", "text": "Return VectorStore initialized from documents and embeddings.\nfrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nsimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores in the range [0, 1].\nsimilarity_search_with_score(query[,\u00a0k])\nReturn docs most similar to query.\nAttributes\ndoc_cls\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.docarray.base.DocArrayIndex.html"} {"id": "751e2b0b1a69-2", "text": "add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str][source]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nReturns\nList of ids from adding the texts into the vectorstore.\nasync classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6\nasync asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.docarray.base.DocArrayIndex.html"} {"id": "751e2b0b1a69-3", "text": "async asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 Optional[bool]\u00b6\nDelete by vector ID or other criteria.\nParameters\nids \u2013 List of ids to delete.\n**kwargs \u2013 Other keyword arguments that subclasses might use.\nReturns\nTrue if deletion is successful,\nFalse otherwise, None if not implemented.\nReturn type\nOptional[bool]\nclassmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nabstract classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.docarray.base.DocArrayIndex.html"} {"id": "751e2b0b1a69-4", "text": "k \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.docarray.base.DocArrayIndex.html"} {"id": "751e2b0b1a69-5", "text": "similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query vector.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 to 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)\nsimilarity_search_with_score(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]][source]\u00b6\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of documents most similar to the query text and\ncosine distance in float for each.\nLower score represents more similarity.\nproperty doc_cls: Type[BaseDoc]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.docarray.base.DocArrayIndex.html"} {"id": "2e24222410ad-0", "text": "langchain.vectorstores.pgembedding.CollectionStore\u00b6\nclass langchain.vectorstores.pgembedding.CollectionStore(**kwargs)[source]\u00b6\nBases: BaseModel\nA simple constructor that allows initialization from kwargs.\nSets attributes on the constructed instance using the names and\nvalues in kwargs.\nOnly keys that are present as\nattributes of the instance\u2019s class are allowed. These could be,\nfor example, any mapped columns or relationships.\nMethods\n__init__(**kwargs)\nA simple constructor that allows initialization from kwargs.\nget_by_name(session,\u00a0name)\nGet a collection by name.\nget_or_create(session,\u00a0name[,\u00a0cmetadata])\nGet or create a collection.\nAttributes\ncmetadata\nembeddings\nmetadata\nname\nregistry\nuuid\nclassmethod get_by_name(session: Session, name: str) \u2192 Optional[CollectionStore][source]\u00b6\nGet a collection by name, or None if it does not exist.\nclassmethod get_or_create(session: Session, name: str, cmetadata: Optional[dict] = None) \u2192 Tuple[CollectionStore, bool][source]\u00b6\nGet or create a collection.\nReturns [Collection, bool] where the bool is True if the collection was created.\ncmetadata\u00b6\nembeddings\u00b6\nmetadata: MetaData = MetaData()\u00b6\nname\u00b6\nregistry: RegistryType = <sqlalchemy.orm.decl_api.registry object>\u00b6\nuuid\u00b6
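\nA usage sketch for get_or_create (a hypothetical snippet assuming a SQLAlchemy engine named engine that is bound to the pg_embedding database; the collection name and metadata are illustrative):\nExample\nfrom sqlalchemy.orm import Session\nfrom langchain.vectorstores.pgembedding import CollectionStore\nwith Session(engine) as session:  # engine is an assumed, pre-configured SQLAlchemy engine\n    collection, created = CollectionStore.get_or_create(\n        session, \"my_collection\", cmetadata={\"owner\": \"docs\"}\n    )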
", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgembedding.CollectionStore.html"} {"id": "df1995f5661b-0", "text": "langchain.vectorstores.pinecone.Pinecone\u00b6\nclass langchain.vectorstores.pinecone.Pinecone(index: Any, embedding_function: Callable, text_key: str, namespace: Optional[str] = None)[source]\u00b6\nBases: VectorStore\nWrapper around Pinecone vector database.\nTo use, you should have the pinecone-client python package installed.\nExample\nfrom langchain.vectorstores import Pinecone\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nimport pinecone\n# The environment should be the one specified next to the API key\n# in your Pinecone console\npinecone.init(api_key=\"***\", environment=\"...\")\nindex = pinecone.Index(\"langchain-demo\")\nembeddings = OpenAIEmbeddings()\nvectorstore = Pinecone(index, embeddings.embed_query, \"text\")\nInitialize with Pinecone client.\nMethods\n__init__(index,\u00a0embedding_function,\u00a0text_key)\nInitialize with Pinecone client.\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas,\u00a0ids,\u00a0...])\nRun more texts through the embeddings and add to the vectorstore.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pinecone.Pinecone.html"} {"id": "df1995f5661b-1", "text": "Return docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\ndelete([ids,\u00a0delete_all,\u00a0namespace,\u00a0filter])\nDelete by vector IDs or filter.\nfrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nfrom_existing_index(index_name,\u00a0embedding[,\u00a0...])\nLoad Pinecone vectorstore from an existing index by name.\nfrom_texts(texts,\u00a0embedding[,\u00a0metadatas,\u00a0...])\nConstruct Pinecone wrapper from raw documents.\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k,\u00a0filter,\u00a0namespace])\nReturn pinecone documents most similar to query.\nsimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores in the range [0, 1].\nsimilarity_search_with_score(query[,\u00a0k,\u00a0...])\nReturn pinecone documents most similar to query, along with scores.\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pinecone.Pinecone.html"} {"id": "df1995f5661b-2", "text": "Run more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, namespace: Optional[str] = None, batch_size: int = 32, **kwargs: Any) \u2192 List[str][source]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nids \u2013 Optional list of ids to associate with the texts.\nnamespace \u2013 Optional pinecone namespace to add the texts to.\nbatch_size \u2013 Batch size to use when upserting the texts. Defaults to 32.\nReturns\nList of ids from adding the texts into the vectorstore.\nasync classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pinecone.Pinecone.html"} {"id": "df1995f5661b-3", "text": "async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6\nasync asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.\ndelete(ids: Optional[List[str]] = None, delete_all: Optional[bool] = None, namespace: Optional[str] = None, filter: Optional[dict] = None, **kwargs: Any) \u2192 None[source]\u00b6\nDelete by vector IDs or filter.\nParameters\nids \u2013 List of ids to delete.\nfilter \u2013 Dictionary of conditions to filter vectors to delete.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pinecone.Pinecone.html"} {"id": "df1995f5661b-4", "text": "classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nclassmethod from_existing_index(index_name: str, embedding: Embeddings, text_key: str = 'text', namespace: Optional[str] = None) \u2192 Pinecone[source]\u00b6\nLoad Pinecone vectorstore from an existing index by name.\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, batch_size: int = 32, text_key: str = 'text', index_name: Optional[str] = None, namespace: Optional[str] = None, **kwargs: Any) \u2192 Pinecone[source]\u00b6\nConstruct Pinecone wrapper from raw documents.\nThis is a user-friendly interface that:\nEmbeds documents.\nAdds the documents to a provided Pinecone index.\nThis is intended to be a quick way to get started.\nExample\nfrom langchain import Pinecone\nfrom langchain.embeddings import OpenAIEmbeddings\nimport pinecone\n# The environment should be the one specified next to the API key\n# in your Pinecone console\npinecone.init(api_key=\"***\", environment=\"...\")\nembeddings = OpenAIEmbeddings()\npinecone = Pinecone.from_texts(\n    texts,\n    embeddings,\n    index_name=\"langchain-demo\"\n)
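\nOnce an index is populated, it can be reopened and queried with a namespace and metadata filter; a sketch (the index name, namespace, and filter fields are illustrative, and the filter follows Pinecone's metadata-filter conventions):\nExample\ndocsearch = Pinecone.from_existing_index(\"langchain-demo\", embeddings)\ndocs = docsearch.similarity_search(\n    \"What did the president say?\",\n    k=4,\n    filter={\"source\": \"state-of-the-union\"},\n    namespace=\"demo\"\n)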
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, filter: Optional[dict] = None, namespace: Optional[str] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs selected using the maximal marginal relevance.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pinecone.Pinecone.html"} {"id": "df1995f5661b-5", "text": "Return docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, filter: Optional[dict] = None, namespace: Optional[str] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pinecone.Pinecone.html"} {"id": "df1995f5661b-6", "text": "Return docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, filter: Optional[dict] = None, namespace: Optional[str] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn Pinecone documents most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfilter \u2013 Dictionary of argument(s) to filter on metadata.\nnamespace \u2013 Namespace to search in. Default will search in \u2018\u2019 namespace.\nReturns\nList of Documents most similar to the query.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query vector.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 and 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)\nsimilarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None, namespace: Optional[str] = None) \u2192 List[Tuple[Document, float]][source]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pinecone.Pinecone.html"} {"id": "df1995f5661b-7", "text": "Return Pinecone documents most similar to query, along with scores.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfilter \u2013 Dictionary of argument(s) to filter on metadata.\nnamespace \u2013 Namespace to search in. Default will search in \u2018\u2019 namespace.\nReturns\nList of Documents most similar to the query, with a score for each", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pinecone.Pinecone.html"} {"id": "d963b2404543-0", "text": "langchain.vectorstores.chroma.Chroma\u00b6\nclass langchain.vectorstores.chroma.Chroma(collection_name: str = 'langchain', embedding_function: Optional[Embeddings] = None, persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, collection_metadata: Optional[Dict] = None, client: Optional[chromadb.Client] = None)[source]\u00b6\nBases: VectorStore\nWrapper around ChromaDB embeddings platform.\nTo use, you should have the chromadb python package installed.\nExample\nfrom langchain.vectorstores import Chroma\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nvectorstore = Chroma("langchain_store", embeddings)\nInitialize with Chroma client.\nMethods\n__init__([collection_name,\u00a0...])\nInitialize with Chroma client.\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas,\u00a0ids])\nRun more texts through the embeddings and add to the vectorstore.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.chroma.Chroma.html"} {"id": "d963b2404543-1", "text": "Return docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\ndelete([ids])\nDelete by vector IDs.\ndelete_collection()\nDelete the collection.\nfrom_documents(documents[,\u00a0embedding,\u00a0ids,\u00a0...])\nCreate a Chroma vectorstore from a list of documents.\nfrom_texts(texts[,\u00a0embedding,\u00a0metadatas,\u00a0...])\nCreate a Chroma vectorstore from raw documents.\nget([ids,\u00a0where,\u00a0limit,\u00a0offset,\u00a0...])\nGets the collection.\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\npersist()\nPersist the collection.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k,\u00a0filter])\nRun similarity search with Chroma.\nsimilarity_search_by_vector(embedding[,\u00a0k,\u00a0...])\nReturn docs most similar to embedding vector.\nsimilarity_search_by_vector_with_relevance_scores(...)\nReturn docs most similar to embedding vector and similarity score.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores in the range [0, 1].\nsimilarity_search_with_score(query[,\u00a0k,\u00a0filter])\nRun similarity search with Chroma with distance.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.chroma.Chroma.html"} {"id": "d963b2404543-2", "text": "Run similarity search with Chroma with distance.\nupdate_document(document_id,\u00a0document)\nUpdate a document in the collection.\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) \u2192 List[str][source]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Texts to add to the vectorstore.\nmetadatas (Optional[List[dict]], optional) \u2013 Optional list of metadatas.\nids (Optional[List[str]], optional) \u2013 Optional list of IDs.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.chroma.Chroma.html"} {"id": "d963b2404543-3", "text": "Return VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6\nasync asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 None[source]\u00b6\nDelete by vector IDs.\nParameters\nids \u2013 List of ids to delete.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.chroma.Chroma.html"} {"id": "d963b2404543-4", "text": "Delete by vector IDs.\nParameters\nids \u2013 List of ids to delete.\ndelete_collection() \u2192 None[source]\u00b6\nDelete the collection.\nclassmethod from_documents(documents: List[Document], embedding: Optional[Embeddings] = None, ids: Optional[List[str]] = None, collection_name: str = 'langchain', persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, client: Optional[chromadb.Client] = None, **kwargs: Any) \u2192 Chroma[source]\u00b6\nCreate a Chroma vectorstore from a list of documents.\nIf a persist_directory is specified, the collection will be persisted there.\nOtherwise, the data will be ephemeral in-memory.\nParameters\ncollection_name (str) \u2013 Name of the collection to create.\npersist_directory (Optional[str]) \u2013 Directory to persist the collection.\nids (Optional[List[str]]) \u2013 List of document IDs. Defaults to None.\ndocuments (List[Document]) \u2013 List of documents to add to the vectorstore.\nembedding (Optional[Embeddings]) \u2013 Embedding function. Defaults to None.\nclient_settings (Optional[chromadb.config.Settings]) \u2013 Chroma client settings\nReturns\nChroma vectorstore.\nReturn type\nChroma\nclassmethod from_texts(texts: List[str], embedding: Optional[Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, collection_name: str = 'langchain', persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, client: Optional[chromadb.Client] = None, **kwargs: Any) \u2192 Chroma[source]\u00b6\nCreate a Chroma vectorstore from raw documents.\nIf a persist_directory is specified, the collection will be persisted there.\nOtherwise, the data will be ephemeral in-memory.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.chroma.Chroma.html"} {"id": "d963b2404543-5", "text": "Otherwise, the data will be ephemeral in-memory.\nParameters\ntexts (List[str]) \u2013 List of texts to add to the collection.\ncollection_name (str) \u2013 Name of the collection to create.\npersist_directory (Optional[str]) \u2013 Directory to persist the collection.\nembedding (Optional[Embeddings]) \u2013 Embedding function. Defaults to None.\nmetadatas (Optional[List[dict]]) \u2013 List of metadatas. Defaults to None.\nids (Optional[List[str]]) \u2013 List of document IDs. Defaults to None.\nclient_settings (Optional[chromadb.config.Settings]) \u2013 Chroma client settings\nReturns\nChroma vectorstore.\nReturn type\nChroma\nget(ids: Optional[OneOrMany[ID]] = None, where: Optional[Where] = None, limit: Optional[int] = None, offset: Optional[int] = None, where_document: Optional[WhereDocument] = None, include: Optional[List[str]] = None) \u2192 Dict[str, Any][source]\u00b6\nGets the collection.\nParameters\nids \u2013 The ids of the embeddings to get. Optional.\nwhere \u2013 A Where type dict used to filter results by.\nE.g. {\u201ccolor\u201d : \u201cred\u201d, \u201cprice\u201d: 4.20}. 
Optional.\nlimit \u2013 The number of documents to return. Optional.\noffset \u2013 The offset to start returning results from.\nUseful for paging results with limit. Optional.\nwhere_document \u2013 A WhereDocument type dict used to filter by the documents.\nE.g. {$contains: {\u201ctext\u201d: \u201chello\u201d}}. Optional.\ninclude \u2013 A list of what to include in the results.\nCan contain \u201cembeddings\u201d, \u201cmetadatas\u201d, \u201cdocuments\u201d.\nIds are always included.\nDefaults to [\u201cmetadatas\u201d, \u201cdocuments\u201d]. Optional.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.chroma.Chroma.html"} {"id": "d963b2404543-6", "text": "Defaults to [\u201cmetadatas\u201d, \u201cdocuments\u201d]. Optional.\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, filter: Optional[Dict[str, str]] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, filter: Optional[Dict[str, str]] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.chroma.Chroma.html"} {"id": "d963b2404543-7", "text": "to maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nReturns\nList of Documents selected by maximal marginal relevance.\npersist() \u2192 None[source]\u00b6\nPersist the collection.\nThis can be used to explicitly persist the data to disk.\nIt will also be called automatically when the object is destroyed.\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nRun similarity search with Chroma.\nParameters\nquery (str) \u2013 Query text to search for.\nk (int) \u2013 Number of results to return. Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. 
Defaults to None.\nReturns\nList of documents most similar to the query text.\nReturn type\nList[Document]\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding (List[float]) \u2013 Embedding to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nReturns\nList of Documents most similar to the query vector.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.chroma.Chroma.html"} {"id": "d963b2404543-8", "text": "Returns\nList of Documents most similar to the query vector.\nsimilarity_search_by_vector_with_relevance_scores(embedding: List[float], k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) \u2192 List[Tuple[Document, float]][source]\u00b6\nReturn docs most similar to embedding vector and similarity score.\nParameters\nembedding (List[float]) \u2013 Embedding to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nReturns\nList of documents most similar to\nthe query text and cosine distance in float for each.\nLower score represents more similarity.\nReturn type\nList[Tuple[Document, float]]\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 and 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)\nsimilarity_search_with_score(query: str, k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) \u2192 List[Tuple[Document, float]][source]\u00b6\nRun similarity search with Chroma with distance.\nParameters\nquery (str) \u2013 Query text to search for.\nk (int) \u2013 Number of results to return. Defaults to 4.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.chroma.Chroma.html"} {"id": "d963b2404543-9", "text": "k (int) \u2013 Number of results to return. Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nReturns\nList of documents most similar to\nthe query text and cosine distance in float for each.\nLower score represents more similarity.\nReturn type\nList[Tuple[Document, float]]\nupdate_document(document_id: str, document: Document) \u2192 None[source]\u00b6\nUpdate a document in the collection.\nParameters\ndocument_id (str) \u2013 ID of the document to update.\ndocument (Document) \u2013 Document to update.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.chroma.Chroma.html"}
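To tie the Chroma methods above together, a short persistence sketch may help (not part of the original reference; the directory and collection names are illustrative):\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.vectorstores import Chroma\nembeddings = OpenAIEmbeddings()\n# Build a persisted collection from raw texts.\ndb = Chroma.from_texts(\n    ["Chroma persists to disk", "when persist_directory is set"],\n    embedding=embeddings,\n    collection_name="langchain",\n    persist_directory="./chroma_db",  # illustrative path\n)\ndb.persist()  # flush to disk; also runs automatically when the object is destroyed\n# Reopen the same collection later and query it, trading relevance for diversity via lambda_mult.\ndb = Chroma(collection_name="langchain", embedding_function=embeddings, persist_directory="./chroma_db")\ndocs = db.max_marginal_relevance_search("what does Chroma persist?", k=2, fetch_k=10, lambda_mult=0.5)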
{"id": "9dc6a2e3292e-0", "text": "langchain.vectorstores.zilliz.Zilliz\u00b6\nclass langchain.vectorstores.zilliz.Zilliz(embedding_function: Embeddings, collection_name: str = 'LangChainCollection', connection_args: Optional[dict[str, Any]] = None, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: Optional[bool] = False)[source]\u00b6\nBases: Milvus\nInitialize wrapper around the Zilliz vector database.\nIn order to use this you need to have pymilvus installed and a\nrunning Zilliz database.\nSee the following documentation for how to run a Zilliz instance:\nhttps://docs.zilliz.com/docs/create-cluster\nIF USING L2/IP metric IT IS HIGHLY SUGGESTED TO NORMALIZE YOUR DATA.\nParameters\nembedding_function (Embeddings) \u2013 Function used to embed the text.\ncollection_name (str) \u2013 Which Zilliz collection to use. Defaults to\n\u201cLangChainCollection\u201d.\nconnection_args (Optional[dict[str, any]]) \u2013 The connection args used for\nthis class come in the form of a dict.\nconsistency_level (str) \u2013 The consistency level to use for a collection.\nDefaults to \u201cSession\u201d.\nindex_params (Optional[dict]) \u2013 Which index params to use. Defaults to\nHNSW/AUTOINDEX depending on service.\nsearch_params (Optional[dict]) \u2013 Which search params to use. Defaults to\ndefault of index.\ndrop_old (Optional[bool]) \u2013 Whether to drop the current collection. Defaults\nto False.\nThe connection args used for this class come in the form of a dict;\nhere are a few of the options:\naddress (str): The actual address of the Zilliz instance. Example address: \u201clocalhost:19530\u201d", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.zilliz.Zilliz.html"} {"id": "9dc6a2e3292e-1", "text": "uri (str): The uri of the Zilliz instance. Example uri: \u201chttps://in03-ba4234asae.api.gcp-us-west1.zillizcloud.com\u201d,\nhost (str): The host of the Zilliz instance. Default at \u201clocalhost\u201d, PyMilvus will fill in the default host if only port is provided.\nport (str/int): The port of the Zilliz instance. Default at 19530, PyMilvus will fill in the default port if only host is provided.\nuser (str): Which user to use when connecting to the Zilliz instance. If user and password are provided, we will add the related header in every RPC call.\npassword (str): Required when user is provided. The password corresponding to the user.\nsecure (bool): Default is false. If set to true, TLS will be enabled.\nclient_key_path (str): If using TLS two-way authentication, you need to\nprovide the client.key path.\nclient_pem_path (str): If using TLS two-way authentication, you need to provide the client.pem path.\nca_pem_path (str): If using TLS two-way authentication, you need to provide the ca.pem path.\nserver_pem_path (str): If using TLS one-way authentication, you need to provide the server.pem path.\nserver_name (str): If using TLS, you need to provide the common name.\nExample\nfrom langchain import Zilliz\nfrom langchain.embeddings import OpenAIEmbeddings\nembedding = OpenAIEmbeddings()\n# Connect to a Zilliz instance\nvector_store = Zilliz(\n    embedding_function=embedding,\n    collection_name="LangChainCollection",\n    connection_args={\n        "uri": "https://in03-ba4234asae.api.gcp-us-west1.zillizcloud.com",\n        "user": "temp",\n        "password": "temp",", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.zilliz.Zilliz.html"} {"id": "9dc6a2e3292e-2", "text": ""user": "temp",\n        "password": "temp",\n        "secure": True,\n    },\n    drop_old=True,\n)\nRaises\nValueError \u2013 If the pymilvus python package is not installed.\nInitialize the Milvus vector store.\nMethods\n__init__(embedding_function[,\u00a0...])\nInitialize the Milvus vector store.\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas,\u00a0timeout,\u00a0...])\nInsert text data into Milvus.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\ndelete([ids])\nDelete by vector ID or other criteria.\nfrom_documents(documents,\u00a0embedding,\u00a0**kwargs)", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.zilliz.Zilliz.html"} {"id": "9dc6a2e3292e-3", "text": "from_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nfrom_texts(texts,\u00a0embedding[,\u00a0metadatas,\u00a0...])\nCreate a Zilliz collection, index it with HNSW, and insert data.\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nPerform a search and return results that are reordered by MMR.\nmax_marginal_relevance_search_by_vector(...)\nPerform a search and return results that are reordered by MMR.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k,\u00a0param,\u00a0expr,\u00a0...])\nPerform a similarity search against the query string.\nsimilarity_search_by_vector(embedding[,\u00a0k,\u00a0...])\nPerform a similarity search against the query string.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores in the range [0, 1].\nsimilarity_search_with_score(query[,\u00a0k,\u00a0...])\nPerform a search on a query string and return results with score.\nsimilarity_search_with_score_by_vector(embedding)\nPerform a search on a query string and return results with score.\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6\nRun more texts through the embeddings and add to the vectorstore.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.zilliz.Zilliz.html"} {"id": "9dc6a2e3292e-4", "text": "Run more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, timeout: Optional[int] = None, batch_size: int = 1000, **kwargs: Any) \u2192 List[str]\u00b6\nInsert text data into Milvus.\nInserting data when the collection has not been made yet will result\nin creating a new Collection. The data of the first entity decides\nthe schema of the new collection, the dim is extracted from the first\nembedding and the columns are decided by the first metadata dict.\nMetadata keys will need to be present for all inserted values. At\nthe moment there is no None equivalent in Milvus.\nParameters\ntexts (Iterable[str]) \u2013 The texts to embed; it is assumed\nthat they all fit in memory.\nmetadatas (Optional[List[dict]]) \u2013 Metadata dicts attached to each of\nthe texts. Defaults to None.\ntimeout (Optional[int]) \u2013 Timeout for each batch insert. Defaults\nto None.\nbatch_size (int, optional) \u2013 Batch size to use for insertion.\nDefaults to 1000.\nRaises\nMilvusException \u2013 Failure to add texts.\nReturns\nThe resulting keys for each inserted element.\nReturn type\nList[str]\nasync classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.zilliz.Zilliz.html"} {"id": "9dc6a2e3292e-5", "text": "Return VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6\nasync asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 Optional[bool]\u00b6\nDelete by vector ID or other criteria.\nParameters\nids \u2013 List of ids to delete.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.zilliz.Zilliz.html"} {"id": "9dc6a2e3292e-6", "text": "Delete by vector ID or other criteria.\nParameters\nids \u2013 List of ids to delete.\n**kwargs \u2013 Other keyword arguments that subclasses might use.\nReturns\nTrue if deletion is successful,\nFalse otherwise, None if not implemented.\nReturn type\nOptional[bool]\nclassmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, collection_name: str = 'LangChainCollection', connection_args: dict[str, Any] = {}, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: bool = False, **kwargs: Any) \u2192 Zilliz[source]\u00b6\nCreate a Zilliz collection, index it with HNSW, and insert data.\nParameters\ntexts (List[str]) \u2013 Text data.\nembedding (Embeddings) \u2013 Embedding function.\nmetadatas (Optional[List[dict]]) \u2013 Metadata for each text if it exists.\nDefaults to None.\ncollection_name (str, optional) \u2013 Collection name to use. Defaults to\n\u201cLangChainCollection\u201d.\nconnection_args (dict[str, Any], optional) \u2013 Connection args to use. Defaults\nto DEFAULT_MILVUS_CONNECTION.\nconsistency_level (str, optional) \u2013 Which consistency level to use. Defaults\nto \u201cSession\u201d.\nindex_params (Optional[dict], optional) \u2013 Which index_params to use.\nDefaults to None.\nsearch_params (Optional[dict], optional) \u2013 Which search params to use.\nDefaults to None.\ndrop_old (Optional[bool], optional) \u2013 Whether to drop the collection with\nthat name if it exists. Defaults to False.\nReturns", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.zilliz.Zilliz.html"} {"id": "9dc6a2e3292e-7", "text": "that name if it exists. Defaults to False.\nReturns\nZilliz Vector Store\nReturn type\nZilliz
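To round out the from_texts reference above, a brief end-to-end sketch (not from the original docs; the cluster uri and credentials are placeholders, and OpenAIEmbeddings is just one possible embedding function):\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.vectorstores import Zilliz\nembeddings = OpenAIEmbeddings()\nvector_store = Zilliz.from_texts(\n    ["Zilliz is a managed Milvus service.", "LangChain wraps it as a VectorStore."],\n    embedding=embeddings,\n    connection_args={\n        "uri": "https://<your-cluster>.zillizcloud.com",  # placeholder\n        "user": "db_admin",  # placeholder\n        "password": "********",  # placeholder\n        "secure": True,\n    },\n    drop_old=True,\n)\ndocs = vector_store.similarity_search("What does LangChain wrap?", k=2)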
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nPerform a search and return results that are reordered by MMR.\nParameters\nquery (str) \u2013 The text being searched.\nk (int, optional) \u2013 How many results to return. Defaults to 4.\nfetch_k (int, optional) \u2013 Total results to select k from.\nDefaults to 20.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nparam (dict, optional) \u2013 The search params for the specified index.\nDefaults to None.\nexpr (str, optional) \u2013 Filtering expression. Defaults to None.\ntimeout (int, optional) \u2013 How long to wait before timeout error.\nDefaults to None.\nkwargs \u2013 Collection.search() keyword arguments.\nReturns\nDocument results for search.\nReturn type\nList[Document]\nmax_marginal_relevance_search_by_vector(embedding: list[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nPerform a search and return results that are reordered by MMR.\nParameters\nembedding (List[float]) \u2013 The embedding vector being searched.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.zilliz.Zilliz.html"} {"id": "9dc6a2e3292e-8", "text": "Parameters\nembedding (List[float]) \u2013 The embedding vector being searched.\nk (int, optional) \u2013 How many results to return. Defaults to 4.\nfetch_k (int, optional) \u2013 Total results to select k from.\nDefaults to 20.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nparam (dict, optional) \u2013 The search params for the specified index.\nDefaults to None.\nexpr (str, optional) \u2013 Filtering expression. Defaults to None.\ntimeout (int, optional) \u2013 How long to wait before timeout error.\nDefaults to None.\nkwargs \u2013 Collection.search() keyword arguments.\nReturns\nDocument results for search.\nReturn type\nList[Document]\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nPerform a similarity search against the query string.\nParameters\nquery (str) \u2013 The text to search.\nk (int, optional) \u2013 How many results to return. Defaults to 4.\nparam (dict, optional) \u2013 The search params for the index type.\nDefaults to None.\nexpr (str, optional) \u2013 Filtering expression. Defaults to None.\ntimeout (int, optional) \u2013 How long to wait before timeout error.\nDefaults to None.\nkwargs \u2013 Collection.search() keyword arguments.\nReturns\nDocument results for search.\nReturn type\nList[Document]", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.zilliz.Zilliz.html"} {"id": "9dc6a2e3292e-9", "text": "Returns\nDocument results for search.\nReturn type\nList[Document]\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) \u2192 List[Document]\u00b6\nPerform a similarity search against the query string.\nParameters\nembedding (List[float]) \u2013 The embedding vector to search.\nk (int, optional) \u2013 How many results to return. Defaults to 4.\nparam (dict, optional) \u2013 The search params for the index type.\nDefaults to None.\nexpr (str, optional) \u2013 Filtering expression. Defaults to None.\ntimeout (int, optional) \u2013 How long to wait before timeout error.\nDefaults to None.\nkwargs \u2013 Collection.search() keyword arguments.\nReturns\nDocument results for search.\nReturn type\nList[Document]\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 and 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)\nsimilarity_search_with_score(query: str, k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.zilliz.Zilliz.html"} {"id": "9dc6a2e3292e-10", "text": "Perform a search on a query string and return results with score.\nFor more information about the search parameters, take a look at the pymilvus\ndocumentation found here:\nhttps://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md\nParameters\nquery (str) \u2013 The text being searched.\nk (int, optional) \u2013 The amount of results to return. Defaults to 4.\nparam (dict) \u2013 The search params for the specified index.\nDefaults to None.\nexpr (str, optional) \u2013 Filtering expression. Defaults to None.\ntimeout (int, optional) \u2013 How long to wait before timeout error.\nDefaults to None.\nkwargs \u2013 Collection.search() keyword arguments.\nReturn type\nList[Tuple[Document, float]]\nsimilarity_search_with_score_by_vector(embedding: List[float], k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nPerform a search on a query string and return results with score.\nFor more information about the search parameters, take a look at the pymilvus\ndocumentation found here:\nhttps://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md\nParameters\nembedding (List[float]) \u2013 The embedding vector being searched.\nk (int, optional) \u2013 The amount of results to return. Defaults to 4.\nparam (dict) \u2013 The search params for the specified index.\nDefaults to None.\nexpr (str, optional) \u2013 Filtering expression. Defaults to None.\ntimeout (int, optional) \u2013 How long to wait before timeout error.\nDefaults to None.\nkwargs \u2013 Collection.search() keyword arguments.\nReturns", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.zilliz.Zilliz.html"} {"id": "9dc6a2e3292e-11", "text": "Defaults to None.\nkwargs \u2013 Collection.search() keyword arguments.\nReturns\nResult doc and score.\nReturn type\nList[Tuple[Document, float]]", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.zilliz.Zilliz.html"} {"id": "866191e022ef-0", "text": "langchain.vectorstores.tigris.Tigris\u00b6\nclass langchain.vectorstores.tigris.Tigris(client: TigrisClient, embeddings: Embeddings, index_name: str)[source]\u00b6\nBases: VectorStore\nInitialize Tigris vector store\nMethods\n__init__(client,\u00a0embeddings,\u00a0index_name)\nInitialize Tigris vector store\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas,\u00a0ids])\nRun more texts through the embeddings and add to the vectorstore.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\ndelete([ids])\nDelete by vector ID or other criteria.\nfrom_documents(documents,\u00a0embedding,\u00a0**kwargs)", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.tigris.Tigris.html"} {"id": "866191e022ef-1", "text": "from_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from 
documents and embeddings.\nfrom_texts(texts,\u00a0embedding[,\u00a0metadatas,\u00a0...])\nReturn VectorStore initialized from texts and embeddings.\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k,\u00a0filter])\nReturn docs most similar to query.\nsimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores in the range [0, 1].\nsimilarity_search_with_score(query[,\u00a0k,\u00a0filter])\nRun similarity search with Tigris with distance.\nAttributes\nsearch_index\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.tigris.Tigris.html"} {"id": 
"866191e022ef-3", "text": "Return docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 Optional[bool]\u00b6\nDelete by vector ID or other criteria.\nParameters\nids \u2013 List of ids to delete.\n**kwargs \u2013 Other keyword arguments that subclasses might use.\nReturns\nTrue if deletion is successful,\nFalse otherwise, None if not implemented.\nReturn type\nOptional[bool]\nclassmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, client: Optional[TigrisClient] = None, index_name: Optional[str] = None, **kwargs: Any) \u2192 Tigris[source]\u00b6\nReturn VectorStore initialized from texts and embeddings.\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.tigris.Tigris.html"} {"id": "866191e022ef-4", "text": "Maximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. 
Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, filter: Optional[TigrisFilter] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to query.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.tigris.Tigris.html"} {"id": "866191e022ef-5", "text": "Return docs most similar to query.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query vector.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 and 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)\nsimilarity_search_with_score(query: str, k: int = 4, filter: Optional[TigrisFilter] = None) \u2192 List[Tuple[Document, float]][source]\u00b6\nRun similarity search with Tigris with distance.\nParameters\nquery (str) \u2013 Query text to search for.\nk (int) \u2013 Number of results to return. Defaults to 4.\nfilter (Optional[TigrisFilter]) \u2013 Filter by metadata. Defaults to None.\nReturns\nList of documents most similar to the query text with distance in float.\nReturn type\nList[Tuple[Document, float]]\nproperty search_index: TigrisVectorStore\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.tigris.Tigris.html"}
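As a usage sketch for the Tigris methods above (not part of the original reference; index_name is illustrative, and leaving client unset assumes the wrapper can build a TigrisClient from ambient project configuration, which is an assumption here):\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.vectorstores import Tigris\nembeddings = OpenAIEmbeddings()\n# Embed and index a few texts, then search with distances attached.\nstore = Tigris.from_texts(\n    ["Tigris indexes embeddings.", "Searches return documents with distances."],\n    embedding=embeddings,\n    index_name="my_docs",  # illustrative name\n)\ndocs_and_scores = store.similarity_search_with_score("How are searches scored?", k=2)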
{"id": "3c1fddcfd8d9-0", "text": "langchain.vectorstores.tair.Tair\u00b6\nclass langchain.vectorstores.tair.Tair(embedding_function: Embeddings, url: str, index_name: str, content_key: str = 'content', metadata_key: str = 'metadata', search_params: Optional[dict] = None, **kwargs: Any)[source]\u00b6\nBases: VectorStore\nWrapper around Tair Vector store.\nMethods\n__init__(embedding_function,\u00a0url,\u00a0index_name)\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas])\nAdd texts data to an existing index.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\ncreate_index_if_not_exist(dim,\u00a0...)\ndelete([ids])", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.tair.Tair.html"} {"id": "3c1fddcfd8d9-1", "text": "create_index_if_not_exist(dim,\u00a0...)\ndelete([ids])\nDelete by vector ID or other criteria.\ndrop_index([index_name])\nDrop an existing index.\nfrom_documents(documents,\u00a0embedding[,\u00a0...])\nReturn VectorStore initialized from documents and embeddings.\nfrom_existing_index(embedding[,\u00a0index_name,\u00a0...])\nConnect to an existing Tair index.\nfrom_texts(texts,\u00a0embedding[,\u00a0metadatas,\u00a0...])\nReturn VectorStore initialized from texts and embeddings.\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k])\nReturns the most similar indexed documents to the query text.\nsimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores in the range [0, 1].\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.tair.Tair.html"} {"id": "3c1fddcfd8d9-2", "text": "add_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str][source]\u00b6\nAdd texts data to an existing index.\nasync classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6\nasync asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.tair.Tair.html"} {"id": "3c1fddcfd8d9-3", "text": "Return docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.\ncreate_index_if_not_exist(dim: int, distance_type: str, index_type: str, data_type: str, **kwargs: Any) \u2192 bool[source]\u00b6\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 Optional[bool]\u00b6\nDelete by vector ID or other criteria.\nParameters\nids \u2013 List of ids to delete.\n**kwargs \u2013 Other keyword arguments that subclasses might use.\nReturns\nTrue if deletion is successful,\nFalse otherwise, None if not implemented.\nReturn type\nOptional[bool]\nstatic drop_index(index_name: str = 'langchain', **kwargs: Any) \u2192 bool[source]\u00b6\nDrop an existing index.\nParameters\nindex_name (str) \u2013 Name of the index to drop.\nReturns\nTrue if the index is dropped successfully.\nReturn type\nbool\nclassmethod from_documents(documents: List[Document], embedding: Embeddings, metadatas: Optional[List[dict]] = None, index_name: str = 'langchain', content_key: str = 
'content', metadata_key: str = 'metadata', **kwargs: Any) \u2192 Tair[source]\u00b6\nReturn VectorStore initialized from documents and embeddings.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.tair.Tair.html"} {"id": "3c1fddcfd8d9-4", "text": "Return VectorStore initialized from documents and embeddings.\nclassmethod from_existing_index(embedding: Embeddings, index_name: str = 'langchain', content_key: str = 'content', metadata_key: str = 'metadata', **kwargs: Any) \u2192 Tair[source]\u00b6\nConnect to an existing Tair index.\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, index_name: str = 'langchain', content_key: str = 'content', metadata_key: str = 'metadata', **kwargs: Any) \u2192 Tair[source]\u00b6\nReturn VectorStore initialized from texts and embeddings.\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.tair.Tair.html"} {"id": "3c1fddcfd8d9-5", "text": "Maximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturns the most similar indexed documents to the query text.\nParameters\nquery (str) \u2013 The query text for which to find similar documents.\nk (int) \u2013 The number of documents to return. Default is 4.\nReturns\nA list of documents that are most similar to the query text.\nReturn type\nList[Document]\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. 
Defaults to 4.\nReturns\nList of Documents most similar to the query vector.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.tair.Tair.html"} {"id": "3c1fddcfd8d9-6", "text": "0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 and 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.tair.Tair.html"}
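Example
A minimal end-to-end sketch of the Tair methods above. Hedged: it assumes a reachable Tair instance at redis://localhost:6379, passes the connection URL via a tair_url keyword argument, and uses OpenAIEmbeddings purely for illustration.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Tair

embeddings = OpenAIEmbeddings()
tair_url = "redis://localhost:6379"  # assumed local instance

# Build an index from raw texts.
store = Tair.from_texts(
    ["Tair is a Redis-compatible in-memory database"],
    embeddings,
    index_name="langchain",
    tair_url=tair_url,
)

# Reconnect to the same index later and query it.
store = Tair.from_existing_index(embeddings, index_name="langchain", tair_url=tair_url)
docs = store.similarity_search("What is Tair?", k=4)

# Drop the index when it is no longer needed.
Tair.drop_index(index_name="langchain", tair_url=tair_url)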
{"id": "15d3f6bd3ba8-0", "text": "langchain.vectorstores.clickhouse.Clickhouse\u00b6\nclass langchain.vectorstores.clickhouse.Clickhouse(embedding: Embeddings, config: Optional[ClickhouseSettings] = None, **kwargs: Any)[source]\u00b6\nBases: VectorStore\nWrapper around the ClickHouse vector database.\nYou need the clickhouse-connect python package, and a valid account\nto connect to ClickHouse.\nClickHouse can not only search with simple vector indexes;\nit also supports complex queries with multiple conditions,\nconstraints and even sub-queries.\nFor more information, please visit the [ClickHouse official site](https://clickhouse.com/clickhouse)\nClickHouse Wrapper to LangChain\nembedding_function (Embeddings):\nconfig (ClickHouseSettings): Configuration to ClickHouse Client\nOther keyword arguments will pass into\n[clickhouse-connect](https://docs.clickhouse.com/)\nMethods\n__init__(embedding[,\u00a0config])\nClickHouse Wrapper to LangChain\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas,\u00a0batch_size,\u00a0ids])\nInsert more texts through the embeddings and add to the VectorStore.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clickhouse.Clickhouse.html"} {"id": "15d3f6bd3ba8-1", "text": "Return docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\ndelete([ids])\nDelete by vector ID or other criteria.\ndrop()\nHelper function: Drop data\nescape_str(value)\nfrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nfrom_texts(texts,\u00a0embedding[,\u00a0metadatas,\u00a0...])\nCreate ClickHouse wrapper with existing texts\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k,\u00a0where_str])\nPerform a similarity search with ClickHouse\nsimilarity_search_by_vector(embedding[,\u00a0k,\u00a0...])\nPerform a similarity search with ClickHouse by vectors\nsimilarity_search_with_relevance_scores(query)\nPerform a similarity search with ClickHouse\nAttributes\nmetadata_column\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clickhouse.Clickhouse.html"} {"id": "15d3f6bd3ba8-2", "text": "Returns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, batch_size: int = 32, ids: Optional[Iterable[str]] = None, **kwargs: Any) \u2192 List[str][source]\u00b6\nInsert more texts through the embeddings and add to the VectorStore.\nParameters\ntexts \u2013 Iterable of strings to add to the VectorStore.\nids \u2013 Optional list of ids to associate with the texts.\nbatch_size \u2013 Batch size of insertion\nmetadatas \u2013 Optional column data to be inserted\nReturns\nList of ids from adding the texts into the VectorStore.\nasync classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clickhouse.Clickhouse.html"} {"id": "15d3f6bd3ba8-3", "text": "Return docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6\nasync asearch(query: 
str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 Optional[bool]\u00b6\nDelete by vector ID or other criteria.\nParameters\nids \u2013 List of ids to delete.\n**kwargs \u2013 Other keyword arguments that subclasses might use.\nReturns\nTrue if deletion is successful,\nFalse otherwise, None if not implemented.\nReturn type\nOptional[bool]\ndrop() \u2192 None[source]\u00b6\nHelper function: Drop data\nescape_str(value: str) \u2192 str[source]\u00b6\nclassmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clickhouse.Clickhouse.html"} {"id": "15d3f6bd3ba8-4", "text": "Return VectorStore initialized from documents and embeddings.\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[Dict[Any, Any]]] = None, config: Optional[ClickhouseSettings] = None, text_ids: Optional[Iterable[str]] = None, batch_size: int = 32, **kwargs: Any) \u2192 Clickhouse[source]\u00b6\nCreate ClickHouse wrapper with existing texts\nParameters\nembedding_function (Embeddings) \u2013 Function to extract text embedding\ntexts (Iterable[str]) \u2013 List or tuple of strings to be added\nconfig (ClickHouseSettings, Optional) \u2013 ClickHouse configuration\ntext_ids (Optional[Iterable], optional) \u2013 IDs for the texts.\nDefaults to None.\nbatch_size (int, optional) \u2013 Batch size when transmitting data to ClickHouse.\nDefaults to 32.\nmetadatas (List[dict], optional) \u2013 Metadata for the texts. Defaults to None.\n**kwargs \u2013 Other keyword arguments will pass into\n[clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api)\nReturns\nClickHouse Index\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. 
Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clickhouse.Clickhouse.html"} {"id": "15d3f6bd3ba8-5", "text": "Defaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nPerform a similarity search with ClickHouse\nParameters\nquery (str) \u2013 query string\nk (int, optional) \u2013 Top K neighbors to retrieve. Defaults to 4.\nwhere_str (Optional[str], optional) \u2013 where condition string.\nDefaults to None.\nNOTE \u2013 Please do not let the end user fill this, and always be aware\nof SQL injection. When dealing with metadatas, remember to\nuse {self.metadata_column}.attribute instead of attribute\nalone. The default name for it is metadata.\nReturns\nList of Documents\nReturn type\nList[Document]", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clickhouse.Clickhouse.html"} {"id": "15d3f6bd3ba8-6", "text": "Returns\nList of Documents\nReturn type\nList[Document]\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, where_str: Optional[str] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nPerform a similarity search with ClickHouse by vectors\nParameters\nquery (str) \u2013 query string\nk (int, optional) \u2013 Top K neighbors to retrieve. Defaults to 4.\nwhere_str (Optional[str], optional) \u2013 where condition string.\nDefaults to None.\nNOTE \u2013 Please do not let the end user fill this, and always be aware\nof SQL injection. When dealing with metadatas, remember to\nuse {self.metadata_column}.attribute instead of attribute\nalone. The default name for it is metadata.\nReturns\nList of (Document, similarity)\nReturn type\nList[Document]\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any) \u2192 List[Tuple[Document, float]][source]\u00b6\nPerform a similarity search with ClickHouse\nParameters\nquery (str) \u2013 query string\nk (int, optional) \u2013 Top K neighbors to retrieve. Defaults to 4.\nwhere_str (Optional[str], optional) \u2013 where condition string.\nDefaults to None.\nNOTE \u2013 Please do not let the end user fill this, and always be aware\nof SQL injection. When dealing with metadatas, remember to\nuse {self.metadata_column}.attribute instead of attribute\nalone. The default name for it is metadata.\nReturns\nList of documents\nReturn type\nList[Document]\nproperty metadata_column: str\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.clickhouse.Clickhouse.html"}
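Example
A hedged sketch of the filtered-search pattern documented above (the ClickhouseSettings host/port and the metadata key are illustrative assumptions; a running ClickHouse server is required):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Clickhouse, ClickhouseSettings

settings = ClickhouseSettings(host="localhost", port=8123)  # assumed local server
store = Clickhouse.from_texts(
    ["ClickHouse supports filtered vector search"],
    OpenAIEmbeddings(),
    metadatas=[{"year": "2023"}],
    config=settings,
)

# Build where_str server-side; never interpolate raw end-user input (SQL injection),
# and qualify attributes through the metadata column, as the NOTE above warns.
where = f"{store.metadata_column}.year = '2023'"
docs = store.similarity_search("filtered search", k=4, where_str=where)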
{"id": "22cd7646838d-0", "text": "langchain.vectorstores.analyticdb.AnalyticDB\u00b6\nclass langchain.vectorstores.analyticdb.AnalyticDB(connection_string: str, embedding_function: Embeddings, embedding_dimension: int = 1536, collection_name: str = 'langchain_document', pre_delete_collection: bool = False, logger: Optional[Logger] = None, engine_args: Optional[dict] = None)[source]\u00b6\nBases: VectorStore\nVectorStore implementation using AnalyticDB.\nAnalyticDB is a distributed, cloud-native database with full PostgreSQL syntax support.\n- connection_string is a postgres connection string.\n- embedding_function any embedding function implementing\nlangchain.embeddings.base.Embeddings interface.\ncollection_name is the name of the collection to use. (default: langchain)\nNOTE: This is not the name of the table, but the name of the collection. The tables will be created when initializing the store (if they do not exist).\nSo, make sure the user has the right permissions to create tables.\npre_delete_collection if True, will delete the collection if it exists. (default: False)\n- Useful for testing.\nMethods\n__init__(connection_string,\u00a0embedding_function)\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas,\u00a0ids,\u00a0batch_size])\nRun more texts through the embeddings and add to the vectorstore.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.analyticdb.AnalyticDB.html"} {"id": "22cd7646838d-1", "text": "afrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\nconnection_string_from_db_params(driver,\u00a0...)\nReturn connection string from database parameters.\ncreate_collection()\ncreate_table_if_not_exists()\ndelete([ids])\nDelete by vector IDs.\ndelete_collection()\nfrom_documents(documents,\u00a0embedding[,\u00a0...])\nReturn VectorStore initialized from documents and embeddings.\nfrom_texts(texts,\u00a0embedding[,\u00a0metadatas,\u00a0...])\nReturn VectorStore initialized from texts and 
embeddings.\nget_connection_string(kwargs)\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k,\u00a0filter])\nRun a similarity search with AnalyticDB using distance.\nsimilarity_search_by_vector(embedding[,\u00a0k,\u00a0...])\nReturn docs most similar to embedding vector.\nsimilarity_search_with_relevance_scores(query)", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.analyticdb.AnalyticDB.html"} {"id": "22cd7646838d-2", "text": "Return docs most similar to embedding vector.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores in the range [0, 1].\nsimilarity_search_with_score(query[,\u00a0k,\u00a0filter])\nReturn docs most similar to query.\nsimilarity_search_with_score_by_vector(embedding)\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, batch_size: int = 500, **kwargs: Any) \u2192 List[str][source]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nkwargs \u2013 vectorstore specific parameters\nReturns\nList of ids from adding the texts into the vectorstore.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.analyticdb.AnalyticDB.html"} {"id": "22cd7646838d-3", "text": "Returns\nList of ids from adding the texts into the vectorstore.\nasync classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6\nasync asearch(query: str, search_type: str, **kwargs: Any) 
\u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.analyticdb.AnalyticDB.html"} {"id": "22cd7646838d-4", "text": "Return docs most similar to query.\nclassmethod connection_string_from_db_params(driver: str, host: str, port: int, database: str, user: str, password: str) \u2192 str[source]\u00b6\nReturn connection string from database parameters.\ncreate_collection() \u2192 None[source]\u00b6\ncreate_table_if_not_exists() \u2192 None[source]\u00b6\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 Optional[bool][source]\u00b6\nDelete by vector IDs.\nParameters\nids \u2013 List of ids to delete.\ndelete_collection() \u2192 None[source]\u00b6\nclassmethod from_documents(documents: List[Document], embedding: Embeddings, embedding_dimension: int = 1536, collection_name: str = 'langchain_document', ids: Optional[List[str]] = None, pre_delete_collection: bool = False, engine_args: Optional[dict] = None, **kwargs: Any) \u2192 AnalyticDB[source]\u00b6\nReturn VectorStore initialized from documents and embeddings.\nA Postgres connection string is required.\nEither pass it as a parameter\nor set the PG_CONNECTION_STRING environment variable.\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, embedding_dimension: int = 1536, collection_name: str = 'langchain_document', ids: Optional[List[str]] = None, pre_delete_collection: bool = False, engine_args: Optional[dict] = None, **kwargs: Any) \u2192 AnalyticDB[source]\u00b6\nReturn VectorStore initialized from texts and embeddings.\nA Postgres connection string is required.\nEither pass it as a parameter\nor set the PG_CONNECTION_STRING environment variable.\nclassmethod get_connection_string(kwargs: Dict[str, Any]) \u2192 str[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.analyticdb.AnalyticDB.html"} {"id": "22cd7646838d-5", "text": "classmethod get_connection_string(kwargs: Dict[str, Any]) \u2192 str[source]\u00b6\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. 
Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.analyticdb.AnalyticDB.html"} {"id": "22cd7646838d-6", "text": "Return docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nRun a similarity search with AnalyticDB using distance.\nParameters\nquery (str) \u2013 Query text to search for.\nk (int) \u2013 Number of results to return. Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nReturns\nList of Documents most similar to the query.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, filter: Optional[dict] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nReturns\nList of Documents most similar to the query vector.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 and 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.analyticdb.AnalyticDB.html"} {"id": "22cd7646838d-7", "text": "Returns\nList of Tuples of (doc, similarity_score)\nsimilarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None) \u2192 List[Tuple[Document, float]][source]\u00b6\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nReturns\nList of Documents most similar to the query, and the score for each.\nsimilarity_search_with_score_by_vector(embedding: List[float], k: int = 4, filter: Optional[dict] = None) \u2192 List[Tuple[Document, float]][source]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.analyticdb.AnalyticDB.html"}
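Example
A hedged sketch tying together connection_string_from_db_params, from_texts, and the filtered searches above (host, credentials, and the driver name are placeholders, not API defaults):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import AnalyticDB

conn_str = AnalyticDB.connection_string_from_db_params(
    driver="psycopg2cffi",  # placeholder; use the driver installed in your environment
    host="localhost",
    port=5432,
    database="langchain",
    user="postgres",
    password="postgres",
)

# connection_string can be passed here, or via the PG_CONNECTION_STRING env var.
store = AnalyticDB.from_texts(
    ["AnalyticDB speaks PostgreSQL"],
    OpenAIEmbeddings(),
    metadatas=[{"topic": "databases"}],
    connection_string=conn_str,
)
docs_and_scores = store.similarity_search_with_score(
    "PostgreSQL-compatible vector store", k=4, filter={"topic": "databases"}
)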
{"id": "9aa59c7e365e-0", "text": "langchain.vectorstores.starrocks.debug_output\u00b6\nlangchain.vectorstores.starrocks.debug_output(s: Any) \u2192 None[source]\u00b6\nPrint a debug message if DEBUG is True.\n:param s: The message to print", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.starrocks.debug_output.html"} {"id": "d93f2019f6a2-0", "text": "langchain.vectorstores.myscale.MyScale\u00b6\nclass langchain.vectorstores.myscale.MyScale(embedding: Embeddings, config: Optional[MyScaleSettings] = None, **kwargs: Any)[source]\u00b6\nBases: VectorStore\nWrapper around the MyScale vector database.\nYou need the clickhouse-connect python package, and a valid account\nto connect to MyScale.\nMyScale can not only search with simple vector indexes;\nit also supports complex queries with multiple conditions,\nconstraints and even sub-queries.\nFor more information, please visit the [MyScale official site](https://docs.myscale.com/en/overview/)\nMyScale Wrapper to LangChain\nembedding_function (Embeddings):\nconfig (MyScaleSettings): Configuration to MyScale Client\nOther keyword arguments will pass into\n[clickhouse-connect](https://docs.myscale.com/)\nMethods\n__init__(embedding[,\u00a0config])\nMyScale Wrapper to LangChain\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas,\u00a0batch_size,\u00a0ids])\nRun more texts through the embeddings and add to the vectorstore.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.myscale.MyScale.html"} {"id": "d93f2019f6a2-1", "text": "Return docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\ndelete([ids])\nDelete by vector ID or other criteria.\ndrop()\nHelper function: Drop data\nescape_str(value)\nfrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nfrom_texts(texts,\u00a0embedding[,\u00a0metadatas,\u00a0...])\nCreate MyScale wrapper with existing 
texts\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k,\u00a0where_str])\nPerform a similarity search with MyScale\nsimilarity_search_by_vector(embedding[,\u00a0k,\u00a0...])\nPerform a similarity search with MyScale by vectors\nsimilarity_search_with_relevance_scores(query)\nPerform a similarity search with MyScale\nAttributes\nmetadata_column\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.myscale.MyScale.html"} {"id": "d93f2019f6a2-2", "text": "Returns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, batch_size: int = 32, ids: Optional[Iterable[str]] = None, **kwargs: Any) \u2192 List[str][source]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nids \u2013 Optional list of ids to associate with the texts.\nbatch_size \u2013 Batch size of insertion\nmetadatas \u2013 Optional column data to be inserted\nReturns\nList of ids from adding the texts into the vectorstore.\nasync classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.myscale.MyScale.html"} {"id": "d93f2019f6a2-3", "text": "Return docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6\nasync asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync 
asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 Optional[bool]\u00b6\nDelete by vector ID or other criteria.\nParameters\nids \u2013 List of ids to delete.\n**kwargs \u2013 Other keyword arguments that subclasses might use.\nReturns\nTrue if deletion is successful,\nFalse otherwise, None if not implemented.\nReturn type\nOptional[bool]\ndrop() \u2192 None[source]\u00b6\nHelper function: Drop data\nescape_str(value: str) \u2192 str[source]\u00b6\nclassmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.myscale.MyScale.html"} {"id": "d93f2019f6a2-4", "text": "Return VectorStore initialized from documents and embeddings.\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[Dict[Any, Any]]] = None, config: Optional[MyScaleSettings] = None, text_ids: Optional[Iterable[str]] = None, batch_size: int = 32, **kwargs: Any) \u2192 MyScale[source]\u00b6\nCreate MyScale wrapper with existing texts\nParameters\nembedding_function (Embeddings) \u2013 Function to extract text embedding\ntexts (Iterable[str]) \u2013 List or tuple of strings to be added\nconfig (MyScaleSettings, Optional) \u2013 MyScale configuration\ntext_ids (Optional[Iterable], optional) \u2013 IDs for the texts.\nDefaults to None.\nbatch_size (int, optional) \u2013 Batch size when transmitting data to MyScale.\nDefaults to 32.\nmetadatas (List[dict], optional) \u2013 Metadata for the texts. Defaults to None.\n**kwargs \u2013 Other keyword arguments will pass into\n[clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api)\nReturns\nMyScale Index\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.myscale.MyScale.html"} {"id": "d93f2019f6a2-5", "text": "Defaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. 
Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nPerform a similarity search with MyScale\nParameters\nquery (str) \u2013 query string\nk (int, optional) \u2013 Top K neighbors to retrieve. Defaults to 4.\nwhere_str (Optional[str], optional) \u2013 where condition string.\nDefaults to None.\nNOTE \u2013 Please do not let the end user fill this, and always be aware\nof SQL injection. When dealing with metadatas, remember to\nuse {self.metadata_column}.attribute instead of attribute\nalone. The default name for it is metadata.\nReturns\nList of Documents\nReturn type\nList[Document]", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.myscale.MyScale.html"} {"id": "d93f2019f6a2-6", "text": "Returns\nList of Documents\nReturn type\nList[Document]\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, where_str: Optional[str] = None, **kwargs: Any) \u2192 List[Document][source]\u00b6\nPerform a similarity search with MyScale by vectors\nParameters\nquery (str) \u2013 query string\nk (int, optional) \u2013 Top K neighbors to retrieve. Defaults to 4.\nwhere_str (Optional[str], optional) \u2013 where condition string.\nDefaults to None.\nNOTE \u2013 Please do not let the end user fill this, and always be aware\nof SQL injection. When dealing with metadatas, remember to\nuse {self.metadata_column}.attribute instead of attribute\nalone. The default name for it is metadata.\nReturns\nList of (Document, similarity)\nReturn type\nList[Document]\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any) \u2192 List[Tuple[Document, float]][source]\u00b6\nPerform a similarity search with MyScale\nParameters\nquery (str) \u2013 query string\nk (int, optional) \u2013 Top K neighbors to retrieve. Defaults to 4.\nwhere_str (Optional[str], optional) \u2013 where condition string.\nDefaults to None.\nNOTE \u2013 Please do not let the end user fill this, and always be aware\nof SQL injection. When dealing with metadatas, remember to\nuse {self.metadata_column}.attribute instead of attribute\nalone. The default name for it is metadata.\nReturns\nList of documents most similar to the query text\nand cosine distance in float for each.\nLower score represents more similarity.\nReturn type\nList[Document]\nproperty metadata_column: str\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.myscale.MyScale.html"}
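Example
Usage mirrors the ClickHouse wrapper. A hedged sketch of a filtered search (the MyScaleSettings credentials and the metadata attribute are illustrative assumptions, not API defaults):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import MyScale, MyScaleSettings

config = MyScaleSettings(host="msc-host", port=443, username="user", password="passwd")
store = MyScale.from_texts(
    ["MyScale supports filtered vector search"],
    OpenAIEmbeddings(),
    metadatas=[{"source": "docs"}],
    config=config,
)

# Build where_str server-side; never interpolate raw end-user input (SQL injection).
where = f"{store.metadata_column}.source = 'docs'"
docs = store.similarity_search("filtered search", k=4, where_str=where)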
{"id": "d926a68ee3bc-0", "text": "langchain.vectorstores.sklearn.ParquetSerializer\u00b6\nclass langchain.vectorstores.sklearn.ParquetSerializer(persist_path: str)[source]\u00b6\nBases: BaseSerializer\nSerializes data in Apache Parquet format using the pyarrow package.\nMethods\n__init__(persist_path)\nextension()\nThe file extension suggested by this serializer (without dot).\nload()\nLoads the data from the persist_path\nsave(data)\nSaves the data to the persist_path\nclassmethod extension() \u2192 str[source]\u00b6\nThe file extension suggested by this serializer (without dot).\nload() \u2192 Any[source]\u00b6\nLoads the data from the persist_path\nsave(data: Any) \u2192 None[source]\u00b6\nSaves the data to the persist_path", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.sklearn.ParquetSerializer.html"}
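Example
A hedged round-trip sketch for the serializer above (the path is a placeholder, and the payload shape — a dict of equal-length lists that pyarrow can turn into a table — is an assumption; pyarrow must be installed):
from langchain.vectorstores.sklearn import ParquetSerializer

serializer = ParquetSerializer(persist_path="/tmp/vectorstore.parquet")
print(ParquetSerializer.extension())  # -> "parquet"

serializer.save({"ids": ["1", "2"], "texts": ["first", "second"]})  # assumed payload shape
data = serializer.load()  # reads the same payload back from persist_path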
{"id": "34b05c949737-0", "text": "langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch\u00b6\nclass langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch(opensearch_url: str, index_name: str, embedding_function: Embeddings, **kwargs: Any)[source]\u00b6\nBases: VectorStore\nWrapper around OpenSearch as a vector database.\nExample\nfrom langchain import OpenSearchVectorSearch\nopensearch_vector_search = OpenSearchVectorSearch(\n \"http://localhost:9200\",\n \"embeddings\",\n embedding_function\n)\nInitialize with necessary components.\nMethods\n__init__(opensearch_url,\u00a0index_name,\u00a0...)\nInitialize with necessary components.\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas,\u00a0ids,\u00a0bulk_size])\nRun more texts through the embeddings and add to the vectorstore.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch.html"} {"id": "34b05c949737-1", "text": "asimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\ndelete([ids])\nDelete by vector ID or other criteria.\nfrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nfrom_texts(texts,\u00a0embedding[,\u00a0metadatas,\u00a0...])\nConstruct OpenSearchVectorSearch wrapper from raw documents.\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nsimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores in the range [0, 1].\nsimilarity_search_with_score(query[,\u00a0k])\nReturn docs and its scores most similar to query.\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch.html"} {"id": "34b05c949737-2", "text": "Run more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, bulk_size: int = 500, **kwargs: Any) \u2192 List[str][source]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts \u2013 Iterable of strings to add to the vectorstore.\nmetadatas \u2013 Optional list of metadatas associated with the texts.\nids \u2013 Optional list of ids to associate with the texts.\nbulk_size \u2013 Bulk API request count; Default: 500\nReturns\nList of ids from adding the texts into the vectorstore.\nOptional Args:\nvector_field: Document field embeddings are stored in. Defaults to\n\u201cvector_field\u201d.\ntext_field: Document field the text of the document is stored in. 
Defaults\nto \u201ctext\u201d.\nasync classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch.html"} {"id": "34b05c949737-3", "text": "Return docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6\nasync asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs most similar to query.\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 Optional[bool]\u00b6\nDelete by vector ID or other criteria.\nParameters\nids \u2013 List of ids to delete.\n**kwargs \u2013 Other keyword arguments that subclasses might use.\nReturns\nTrue if deletion is successful,\nFalse otherwise, None if not implemented.\nReturn type\nOptional[bool]\nclassmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch.html"} {"id": "34b05c949737-4", "text": "Return VectorStore initialized from documents and embeddings.\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, bulk_size: int = 500, **kwargs: Any) \u2192 OpenSearchVectorSearch[source]\u00b6\nConstruct OpenSearchVectorSearch wrapper from raw documents.\nExample\nfrom langchain import OpenSearchVectorSearch\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nopensearch_vector_search = OpenSearchVectorSearch.from_texts(\n texts,\n embeddings,\n opensearch_url=\"http://localhost:9200\"\n)\nBy default, OpenSearch supports Approximate Search powered by the nmslib, faiss\nand lucene engines, which are recommended for large datasets. It also supports brute-force\nsearch through Script Scoring and Painless Scripting.\nOptional Args:\nvector_field: Document field embeddings are stored in. Defaults to\n\u201cvector_field\u201d.\ntext_field: Document field the text of the document is stored in. 
Defaults\nto \u201ctext\u201d.\nOptional Keyword Args for Approximate Search:\nengine: \u201cnmslib\u201d, \u201cfaiss\u201d, \u201clucene\u201d; default: \u201cnmslib\u201d\nspace_type: \u201cl2\u201d, \u201cl1\u201d, \u201ccosinesimil\u201d, \u201clinf\u201d, \u201cinnerproduct\u201d; default: \u201cl2\u201d\nef_search: Size of the dynamic list used during k-NN searches. Higher values\nlead to more accurate but slower searches; default: 512\nef_construction: Size of the dynamic list used during k-NN graph creation.\nHigher values lead to more accurate graph but slower indexing speed;\ndefault: 512\nm: Number of bidirectional links created for each new element. Large impact\non memory consumption. Between 2 and 100; default: 16", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch.html"} {"id": "34b05c949737-5", "text": "on memory consumption. Between 2 and 100; default: 16\nKeyword Args for Script Scoring or Painless Scripting:\nis_appx_search: False\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 list[langchain.schema.document.Document][source]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nDefaults to 20.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch.html"} {"id": "34b05c949737-6", "text": "Defaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nsearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nsimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to query.\nBy default, supports Approximate Search.\nAlso supports Script Scoring and Painless Scripting.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query.\nOptional Args:\nvector_field: Document field embeddings are stored in. 
Defaults to\n\u201cvector_field\u201d.\ntext_field: Document field the text of the document is stored in. Defaults\nto \u201ctext\u201d.\nmetadata_field: Document field that metadata is stored in. Defaults to\n\u201cmetadata\u201d.\nCan be set to a special value \u201c*\u201d to include the entire document.\nOptional Args for Approximate Search:\nsearch_type: \u201capproximate_search\u201d; default: \u201capproximate_search\u201d\nboolean_filter: A Boolean filter consists of a Boolean query that\ncontains a k-NN query and a filter.\nsubquery_clause: Query clause on the knn vector field; default: \u201cmust\u201d\nlucene_filter: the Lucene algorithm decides whether to perform an exact\nk-NN search with pre-filtering or an approximate search with modified\npost-filtering.\nOptional Args for Script Scoring Search:\nsearch_type: \u201cscript_scoring\u201d; default: \u201capproximate_search\u201d\nspace_type: \u201cl2\u201d, \u201cl1\u201d, \u201clinf\u201d, \u201ccosinesimil\u201d, \u201cinnerproduct\u201d,\n\u201chammingbit\u201d; default: \u201cl2\u201d", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch.html"} {"id": "34b05c949737-7", "text": "\u201chammingbit\u201d; default: \u201cl2\u201d\npre_filter: script_score query to pre-filter documents before identifying\nnearest neighbors; default: {\u201cmatch_all\u201d: {}}\nOptional Args for Painless Scripting Search:\nsearch_type: \u201cpainless_scripting\u201d; default: \u201capproximate_search\u201d\nspace_type: \u201cl2Squared\u201d, \u201cl1Norm\u201d, \u201ccosineSimilarity\u201d; default: \u201cl2Squared\u201d\npre_filter: script_score query to pre-filter documents before identifying\nnearest neighbors; default: {\u201cmatch_all\u201d: {}}\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query vector.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 input text\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 and 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)\nsimilarity_search_with_score(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]][source]\u00b6\nReturn docs and its scores most similar to query.\nBy default, supports Approximate Search.\nAlso supports Script Scoring and Painless Scripting.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch.html"} {"id": "34b05c949737-8", "text": "Also supports Script Scoring and Painless Scripting.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents along with its scores most similar to the query.\nOptional Args:\nSame as similarity_search", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch.html"}
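Example
A hedged sketch of the search-time options above, switching from the default approximate search to Script Scoring with a metadata pre-filter (the URL, index name, and filter body are placeholders; the keyword arguments follow the Optional Args documented above):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch

store = OpenSearchVectorSearch(
    "http://localhost:9200",
    "embeddings",
    OpenAIEmbeddings(),
)

# Default: approximate k-NN search.
docs = store.similarity_search("query", k=4)

# Script Scoring: brute-force scoring with a pre-filter
# (the documented default pre_filter is {"match_all": {}}).
docs_with_scores = store.similarity_search_with_score(
    "query",
    k=4,
    search_type="script_scoring",
    space_type="cosinesimil",
    pre_filter={"bool": {"filter": {"term": {"metadata.topic": "search"}}}},
)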
{"id": "f1b95e4e17fd-0", "text": "langchain.vectorstores.sklearn.SKLearnVectorStoreException\u00b6\nclass langchain.vectorstores.sklearn.SKLearnVectorStoreException[source]\u00b6\nBases: RuntimeError\nException raised by SKLearnVectorStore.\nadd_note()\u00b6\nException.add_note(note) \u2013\nadd a note to the exception\nwith_traceback()\u00b6\nException.with_traceback(tb) \u2013\nset self.__traceback__ to tb and return self.\nargs\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.sklearn.SKLearnVectorStoreException.html"} {"id": "bc7503ab31e9-0", "text": "langchain.vectorstores.weaviate.Weaviate\u00b6\nclass langchain.vectorstores.weaviate.Weaviate(client: ~typing.Any, index_name: str, text_key: str, embedding: ~typing.Optional[~langchain.embeddings.base.Embeddings] = None, attributes: ~typing.Optional[~typing.List[str]] = None, relevance_score_fn: ~typing.Optional[~typing.Callable[[float], float]] = , by_text: bool = True)[source]\u00b6\nBases: VectorStore\nWrapper around Weaviate vector database.\nTo use, you should have the weaviate-client python package installed.\nExample\nimport weaviate\nfrom langchain.vectorstores import Weaviate\nclient = weaviate.Client(url=os.environ[\"WEAVIATE_URL\"], ...)\nweaviate = Weaviate(client, index_name, text_key)\nInitialize with Weaviate client.\nMethods\n__init__(client,\u00a0index_name,\u00a0text_key[,\u00a0...])\nInitialize with Weaviate client.\naadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\naadd_texts(texts[,\u00a0metadatas])\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents,\u00a0**kwargs)\nRun more documents through the embeddings and add to the vectorstore.\nadd_texts(texts[,\u00a0metadatas])\nUpload texts with metadata (properties) to Weaviate.\nafrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nafrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nReturn VectorStore initialized from texts and embeddings.\namax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs selected using the maximal marginal relevance.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.weaviate.Weaviate.html"} {"id": "bc7503ab31e9-1", "text": "Return docs selected using the maximal marginal relevance.\namax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs)\nasearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nasimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nasimilarity_search_by_vector(embedding[,\u00a0k])\nReturn docs most similar to embedding vector.\nasimilarity_search_with_relevance_scores(query)\nReturn docs most similar to query.\ndelete([ids])\nDelete by vector IDs.\nfrom_documents(documents,\u00a0embedding,\u00a0**kwargs)\nReturn VectorStore initialized from documents and embeddings.\nfrom_texts(texts,\u00a0embedding[,\u00a0metadatas])\nConstruct Weaviate wrapper from raw documents.\nmax_marginal_relevance_search(query[,\u00a0k,\u00a0...])\nReturn docs 
selected using the maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(...)\nReturn docs selected using the maximal marginal relevance.\nsearch(query,\u00a0search_type,\u00a0**kwargs)\nReturn docs most similar to query using specified search type.\nsimilarity_search(query[,\u00a0k])\nReturn docs most similar to query.\nsimilarity_search_by_text(query[,\u00a0k])\nReturn docs most similar to query.\nsimilarity_search_by_vector(embedding[,\u00a0k])\nLook up similar documents by embedding vector in Weaviate.\nsimilarity_search_with_relevance_scores(query)\nReturn docs and relevance scores in the range [0, 1].\nsimilarity_search_with_score(query[,\u00a0k])\nReturn list of documents most similar to the query text and cosine distance in float for each.\nasync aadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.weaviate.Weaviate.html"} {"id": "bc7503ab31e9-2", "text": "Run more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str]\u00b6\nRun more texts through the embeddings and add to the vectorstore.\nadd_documents(documents: List[Document], **kwargs: Any) \u2192 List[str]\u00b6\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[Document]) \u2013 Documents to add to the vectorstore.\nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 List[str][source]\u00b6\nUpload texts with metadata (properties) to Weaviate.\nasync classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nasync classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from texts and embeddings.\nasync amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.weaviate.Weaviate.html"} {"id": "bc7503ab31e9-3", "text": "Return docs selected using the maximal marginal relevance.\nasync amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs selected using the maximal marginal relevance.\nas_retriever(**kwargs: Any) \u2192 VectorStoreRetriever\u00b6\nasync asearch(query: str, search_type: str, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query using specified search type.\nasync asimilarity_search(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to query.\nasync asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document]\u00b6\nReturn docs most similar to embedding vector.\nasync asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, 
float]]\u00b6\nReturn docs most similar to query.\ndelete(ids: Optional[List[str]] = None, **kwargs: Any) \u2192 None[source]\u00b6\nDelete by vector IDs.\nParameters\nids \u2013 List of ids to delete.\nclassmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) \u2192 VST\u00b6\nReturn VectorStore initialized from documents and embeddings.\nclassmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) \u2192 Weaviate[source]\u00b6\nConstruct Weaviate wrapper from raw documents.\nThis is a user-friendly interface that:\nEmbeds documents.\nCreates a new index for the embeddings in the Weaviate instance.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.weaviate.Weaviate.html"} {"id": "bc7503ab31e9-4", "text": "Embeds documents.\nCreates a new index for the embeddings in the Weaviate instance.\nAdds the documents to the newly created Weaviate index.\nThis is intended to be a quick way to get started.\nExample\nfrom langchain.vectorstores.weaviate import Weaviate\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nweaviate = Weaviate.from_texts(\n    texts,\n    embeddings,\n    weaviate_url=\"http://localhost:8080\"\n)\nmax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nfetch_k \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nmax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.weaviate.Weaviate.html"} {"id": "bc7503ab31e9-5", "text": "among selected documents.\nParameters\nembedding \u2013 Embedding to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query.\nsimilarity_search_by_text(query: str, k: int = 4, **kwargs: Any) \u2192 List[Document][source]\u00b6\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query.\nsimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) \u2192 List[Document][source]\u00b6\nLook up similar documents by embedding vector in Weaviate.\nsimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]]\u00b6\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.weaviate.Weaviate.html"}
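For the MMR methods above, a short usage sketch (illustrative; it assumes the weaviate store built in the from_texts example, whose embedding is required for MMR):

docs = weaviate.max_marginal_relevance_search(
    "What did the president say?",
    k=4,         # documents to return
    fetch_k=20,  # candidates handed to the MMR algorithm
    lambda_mult=0.5,  # 0 = maximum diversity, 1 = minimum diversity
)
for doc in docs:
    print(doc.page_content[:80])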
{"id": "bc7503ab31e9-6", "text": "0 is dissimilar, 1 is most similar.\nParameters\nquery \u2013 Input text.\nk \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 and 1 to\nfilter the resulting set of retrieved docs\nReturns\nList of Tuples of (doc, similarity_score)\nsimilarity_search_with_score(query: str, k: int = 4, **kwargs: Any) \u2192 List[Tuple[Document, float]][source]\u00b6\nReturn a list of documents most similar to the query\ntext, with a cosine distance (float) for each.\nA lower score represents more similarity.", "source": "https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.weaviate.Weaviate.html"} {"id": "d7fd6664fecd-0", "text": "langchain.formatting.StrictFormatter\u00b6\nclass langchain.formatting.StrictFormatter[source]\u00b6\nBases: Formatter\nA subclass of formatter that checks for extra keys.\nMethods\n__init__()\ncheck_unused_args(used_args,\u00a0args,\u00a0kwargs)\nCheck to see if extra parameters are passed.\nconvert_field(value,\u00a0conversion)\nformat(format_string,\u00a0/,\u00a0*args,\u00a0**kwargs)\nformat_field(value,\u00a0format_spec)\nget_field(field_name,\u00a0args,\u00a0kwargs)\nget_value(key,\u00a0args,\u00a0kwargs)\nparse(format_string)\nvalidate_input_variables(format_string,\u00a0...)\nvformat(format_string,\u00a0args,\u00a0kwargs)\nCheck that no arguments are provided.\ncheck_unused_args(used_args: Sequence[Union[int, str]], args: Sequence, kwargs: Mapping[str, Any]) \u2192 None[source]\u00b6\nCheck to see if extra parameters are passed.\nconvert_field(value, conversion)\u00b6\nformat(format_string, /, *args, **kwargs)\u00b6\nformat_field(value, format_spec)\u00b6\nget_field(field_name, args, kwargs)\u00b6\nget_value(key, args, kwargs)\u00b6\nparse(format_string)\u00b6\nvalidate_input_variables(format_string: str, input_variables: List[str]) \u2192 None[source]\u00b6\nvformat(format_string: str, args: Sequence, kwargs: Mapping[str, Any]) \u2192 str[source]\u00b6\nCheck that no positional arguments are provided; everything should be passed as keyword arguments.", "source": "https://api.python.langchain.com/en/latest/formatting/langchain.formatting.StrictFormatter.html"}
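A small sketch of the extra-key check described above (illustrative; the template and values are arbitrary):

from langchain.formatting import StrictFormatter

formatter = StrictFormatter()
print(formatter.format("Hello, {name}!", name="Ada"))  # Hello, Ada!

# A keyword the template never uses trips check_unused_args and raises ValueError.
try:
    formatter.format("Hello, {name}!", name="Ada", extra="unused")
except ValueError as err:
    print(f"rejected: {err}")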
{"id": "fd0172f9e28c-0", "text": "langchain.callbacks.tracers.schemas.ToolRun\u00b6\nclass langchain.callbacks.tracers.schemas.ToolRun(*, uuid: str, parent_uuid: Optional[str] = None, start_time: datetime = None, end_time: datetime = None, extra: Optional[Dict[str, Any]] = None, execution_order: int, child_execution_order: int, serialized: Dict[str, Any], session_id: int, error: Optional[str] = None, tool_input: str, output: Optional[str] = None, action: str, child_llm_runs: List[LLMRun] = None, child_chain_runs: List[ChainRun] = None, child_tool_runs: List[ToolRun] = None)[source]\u00b6\nBases: BaseRun\nClass for ToolRun.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam action: str [Required]\u00b6\nparam child_chain_runs: List[langchain.callbacks.tracers.schemas.ChainRun] [Optional]\u00b6\nparam child_execution_order: int [Required]\u00b6\nparam child_llm_runs: List[langchain.callbacks.tracers.schemas.LLMRun] [Optional]\u00b6\nparam child_tool_runs: List[langchain.callbacks.tracers.schemas.ToolRun] [Optional]\u00b6\nparam end_time: datetime.datetime [Optional]\u00b6\nparam error: Optional[str] = None\u00b6\nparam execution_order: int [Required]\u00b6\nparam extra: Optional[Dict[str, Any]] = None\u00b6\nparam output: Optional[str] = None\u00b6\nparam parent_uuid: Optional[str] = None\u00b6\nparam serialized: Dict[str, Any] [Required]\u00b6\nparam session_id: int [Required]\u00b6\nparam start_time: datetime.datetime [Optional]\u00b6\nparam tool_input: str [Required]\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.schemas.ToolRun.html"} {"id": "fd0172f9e28c-1", "text": "param tool_input: str [Required]\u00b6\nparam uuid: str [Required]\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.schemas.ToolRun.html"} {"id": "44153d35b88b-0", "text": "langchain.callbacks.manager.AsyncCallbackManagerForRetrieverRun\u00b6\nclass langchain.callbacks.manager.AsyncCallbackManagerForRetrieverRun(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: AsyncParentRunManager, RetrieverManagerMixin\nAsync callback manager for retriever run.\nInitialize the run manager.\nParameters\nrun_id (UUID) \u2013 The ID of the run.\nhandlers (List[BaseCallbackHandler]) \u2013 The list of handlers.\ninheritable_handlers (List[BaseCallbackHandler]) \u2013 The list of inheritable handlers.\nparent_run_id (UUID, optional) \u2013 The ID of the parent run.\nDefaults to None.\ntags (Optional[List[str]]) \u2013 The list of tags.\ninheritable_tags (Optional[List[str]]) \u2013 The list of inheritable tags.\nmetadata (Optional[Dict[str, Any]]) \u2013 The metadata.\ninheritable_metadata (Optional[Dict[str, Any]]) \u2013 The inheritable metadata.\nMethods\n__init__(*,\u00a0run_id,\u00a0handlers,\u00a0...[,\u00a0...])\nInitialize the run manager.\nget_child([tag])\nGet a child callback manager.\nget_noop_manager()\nReturn a manager that doesn't perform any operations.\non_retriever_end(documents,\u00a0**kwargs)\nRun when retriever ends running.\non_retriever_error(error,\u00a0**kwargs)\nRun when retriever errors.\non_text(text,\u00a0**kwargs)\nRun when text is received.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.AsyncCallbackManagerForRetrieverRun.html"} {"id": "44153d35b88b-1", "text": "on_text(text,\u00a0**kwargs)\nRun when text is received.\nget_child(tag: Optional[str] = None) \u2192 AsyncCallbackManager\u00b6\nGet a child callback manager.\nParameters\ntag (str, optional) \u2013 
The tag for the child callback manager.\nDefaults to None.\nReturns\nThe child callback manager.\nReturn type\nAsyncCallbackManager\nclassmethod get_noop_manager() \u2192 BRM\u00b6\nReturn a manager that doesn\u2019t perform any operations.\nReturns\nThe noop manager.\nReturn type\nBaseRunManager\nasync on_retriever_end(documents: Sequence[Document], **kwargs: Any) \u2192 None[source]\u00b6\nRun when retriever ends running.\nasync on_retriever_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when retriever errors.\nasync on_text(text: str, **kwargs: Any) \u2192 Any\u00b6\nRun when text is received.\nParameters\ntext (str) \u2013 The received text.\nReturns\nThe result of the callback.\nReturn type\nAny", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.AsyncCallbackManagerForRetrieverRun.html"} {"id": "c4fcc9767715-0", "text": "langchain.callbacks.mlflow_callback.import_mlflow\u00b6\nlangchain.callbacks.mlflow_callback.import_mlflow() \u2192 Any[source]\u00b6\nImport the mlflow python package and raise an error if it is not installed.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.mlflow_callback.import_mlflow.html"} {"id": "71f09f8fd407-0", "text": "langchain.callbacks.infino_callback.import_infino\u00b6\nlangchain.callbacks.infino_callback.import_infino() \u2192 Any[source]\u00b6\nImport the infino client.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.infino_callback.import_infino.html"} {"id": "38e740b0ecd1-0", "text": "langchain.callbacks.tracers.schemas.ChainRun\u00b6\nclass langchain.callbacks.tracers.schemas.ChainRun(*, uuid: str, parent_uuid: Optional[str] = None, start_time: datetime = None, end_time: datetime = None, extra: Optional[Dict[str, Any]] = None, execution_order: int, child_execution_order: int, serialized: Dict[str, Any], session_id: int, error: Optional[str] = None, inputs: Dict[str, Any], outputs: Optional[Dict[str, Any]] = None, child_llm_runs: List[LLMRun] = None, child_chain_runs: List[ChainRun] = None, child_tool_runs: List[ToolRun] = None)[source]\u00b6\nBases: BaseRun\nClass for ChainRun.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam child_chain_runs: List[langchain.callbacks.tracers.schemas.ChainRun] [Optional]\u00b6\nparam child_execution_order: int [Required]\u00b6\nparam child_llm_runs: List[langchain.callbacks.tracers.schemas.LLMRun] [Optional]\u00b6\nparam child_tool_runs: List[langchain.callbacks.tracers.schemas.ToolRun] [Optional]\u00b6\nparam end_time: datetime.datetime [Optional]\u00b6\nparam error: Optional[str] = None\u00b6\nparam execution_order: int [Required]\u00b6\nparam extra: Optional[Dict[str, Any]] = None\u00b6\nparam inputs: Dict[str, Any] [Required]\u00b6\nparam outputs: Optional[Dict[str, Any]] = None\u00b6\nparam parent_uuid: Optional[str] = None\u00b6\nparam serialized: Dict[str, Any] [Required]\u00b6\nparam session_id: int [Required]\u00b6\nparam start_time: datetime.datetime [Optional]\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.schemas.ChainRun.html"} {"id": "38e740b0ecd1-1", "text": "param start_time: datetime.datetime [Optional]\u00b6\nparam uuid: str [Required]\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.schemas.ChainRun.html"} 
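For orientation, a sketch constructing the ChainRun schema documented above (illustrative; the IDs and payloads are hypothetical, and in practice tracers create these records for you):

from datetime import datetime
from langchain.callbacks.tracers.schemas import ChainRun

run = ChainRun(
    uuid="run-001",                   # hypothetical run ID
    execution_order=1,
    child_execution_order=1,
    serialized={"name": "my_chain"},  # hypothetical serialized chain
    session_id=1,
    inputs={"question": "What is LangChain?"},
    start_time=datetime.utcnow(),
)
print(run.inputs)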
{"id": "783dae2c156c-0", "text": "langchain.callbacks.aim_callback.import_aim\u00b6\nlangchain.callbacks.aim_callback.import_aim() \u2192 Any[source]\u00b6\nImport the aim python package and raise an error if it is not installed.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.aim_callback.import_aim.html"} {"id": "267400d85fd7-0", "text": "langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler\u00b6\nclass langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler[source]\u00b6\nBases: AsyncCallbackHandler\nCallback handler that returns an async iterator.\nMethods\n__init__()\naiter()\non_agent_action(action,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent action.\non_agent_finish(finish,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent end.\non_chain_end(outputs,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when chain ends running.\non_chain_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when chain errors.\non_chain_start(serialized,\u00a0inputs,\u00a0*,\u00a0run_id)\nRun when chain starts running.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0**kwargs)\nRun when LLM ends running.\non_llm_error(error,\u00a0**kwargs)\nRun when LLM errors.\non_llm_new_token(token,\u00a0**kwargs)\nRun on new LLM token.\non_llm_start(serialized,\u00a0prompts,\u00a0**kwargs)\nRun when LLM starts running.\non_retriever_end(documents,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on retriever end.\non_retriever_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on retriever error.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun on retriever start.\non_text(text,\u00a0*,\u00a0run_id[,\u00a0parent_run_id,\u00a0tags])\nRun on arbitrary text.\non_tool_end(output,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when tool ends running.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.html"} {"id": "267400d85fd7-1", "text": "Run when tool ends running.\non_tool_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when tool errors.\non_tool_start(serialized,\u00a0input_str,\u00a0*,\u00a0run_id)\nRun when tool starts running.\nAttributes\nalways_verbose\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\nqueue\ndone\nasync aiter() \u2192 AsyncIterator[str][source]\u00b6\nasync on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None\u00b6\nRun on agent action.\nasync on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None\u00b6\nRun on agent end.\nasync on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None\u00b6\nRun when chain ends running.\nasync on_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None\u00b6\nRun when chain errors.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.html"} 
{"id": "267400d85fd7-2", "text": "Run when chain errors.\nasync on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nRun when chain starts running.\nasync on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\nasync on_llm_end(response: LLMResult, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM ends running.\nasync on_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM errors.\nasync on_llm_new_token(token: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun on new LLM token. Only available when streaming is enabled.\nasync on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM starts running.\nasync on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None\u00b6\nRun on retriever end.\nasync on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.html"} {"id": "267400d85fd7-3", "text": "Run on retriever error.\nasync on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nRun on retriever start.\nasync on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None\u00b6\nRun on arbitrary text.\nasync on_tool_end(output: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None\u00b6\nRun when tool ends running.\nasync on_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None\u00b6\nRun when tool errors.\nasync on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nRun when tool starts running.\nproperty always_verbose: bool\u00b6\ndone: asyncio.locks.Event\u00b6\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.html"} {"id": "267400d85fd7-4", "text": "property ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nqueue: asyncio.queues.Queue[str]\u00b6\nraise_error: bool 
= False\u00b6\nrun_inline: bool = False\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.html"} {"id": "375f4fde1d3e-0", "text": "langchain.callbacks.tracers.schemas.TracerSessionV1Base\u00b6\nclass langchain.callbacks.tracers.schemas.TracerSessionV1Base(*, start_time: datetime = None, name: Optional[str] = None, extra: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: BaseModel\nBase class for TracerSessionV1.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam extra: Optional[Dict[str, Any]] = None\u00b6\nparam name: Optional[str] = None\u00b6\nparam start_time: datetime.datetime [Optional]\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.schemas.TracerSessionV1Base.html"} {"id": "5c2be8014c2d-0", "text": "langchain.callbacks.base.AsyncCallbackHandler\u00b6\nclass langchain.callbacks.base.AsyncCallbackHandler[source]\u00b6\nBases: BaseCallbackHandler\nAsync callback handler that can be used to handle callbacks from langchain.\nMethods\n__init__()\non_agent_action(action,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent action.\non_agent_finish(finish,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent end.\non_chain_end(outputs,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when chain ends running.\non_chain_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when chain errors.\non_chain_start(serialized,\u00a0inputs,\u00a0*,\u00a0run_id)\nRun when chain starts running.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when LLM ends running.\non_llm_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when LLM errors.\non_llm_new_token(token,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on new LLM token.\non_llm_start(serialized,\u00a0prompts,\u00a0*,\u00a0run_id)\nRun when LLM starts running.\non_retriever_end(documents,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on retriever end.\non_retriever_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on retriever error.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun on retriever start.\non_text(text,\u00a0*,\u00a0run_id[,\u00a0parent_run_id,\u00a0tags])\nRun on arbitrary text.\non_tool_end(output,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when tool ends running.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.base.AsyncCallbackHandler.html"} {"id": "5c2be8014c2d-1", "text": "Run when tool ends running.\non_tool_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when tool errors.\non_tool_start(serialized,\u00a0input_str,\u00a0*,\u00a0run_id)\nRun when tool starts running.\nAttributes\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\nasync on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None[source]\u00b6\nRun on agent action.\nasync on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None[source]\u00b6\nRun on agent 
end.\nasync on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain ends running.\nasync on_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain errors.\nasync on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.base.AsyncCallbackHandler.html"} {"id": "5c2be8014c2d-2", "text": "Run when chain starts running.\nasync on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any[source]\u00b6\nRun when a chat model starts running.\nasync on_llm_end(response: LLMResult, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM ends running.\nasync on_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM errors.\nasync on_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None[source]\u00b6\nRun on new LLM token. Only available when streaming is enabled.\nasync on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM starts running.\nasync on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None[source]\u00b6\nRun on retriever end.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.base.AsyncCallbackHandler.html"} {"id": "5c2be8014c2d-3", "text": "Run on retriever end.\nasync on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None[source]\u00b6\nRun on retriever error.\nasync on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None[source]\u00b6\nRun on retriever start.\nasync on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None[source]\u00b6\nRun on arbitrary text.\nasync on_tool_end(output: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool ends running.\nasync on_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool errors.\nasync 
on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool starts running.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.base.AsyncCallbackHandler.html"} {"id": "5c2be8014c2d-4", "text": "property ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.base.AsyncCallbackHandler.html"} {"id": "4876a8313efc-0", "text": "langchain.callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler\u00b6\nclass langchain.callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler(parent_container: DeltaGenerator, *, max_thought_containers: int = 4, expand_new_thoughts: bool = True, collapse_completed_thoughts: bool = True, thought_labeler: Optional[LLMThoughtLabeler] = None)[source]\u00b6\nBases: BaseCallbackHandler\nA callback handler that writes to a Streamlit app.\nCreate a StreamlitCallbackHandler instance.\nParameters\nparent_container \u2013 The st.container that will contain all the Streamlit elements that the\nHandler creates.\nmax_thought_containers \u2013 The max number of completed LLM thought containers to show at once. When\nthis threshold is reached, a new thought will cause the oldest thoughts to\nbe collapsed into a \u201cHistory\u201d expander. Defaults to 4.\nexpand_new_thoughts \u2013 Each LLM \u201cthought\u201d gets its own st.expander. This param controls whether\nthat expander is expanded by default. Defaults to True.\ncollapse_completed_thoughts \u2013 If True, LLM thought expanders will be collapsed when completed.\nDefaults to True.\nthought_labeler \u2013 An optional custom LLMThoughtLabeler instance. If unspecified, the handler\nwill use the default thought labeling logic. 
Defaults to None.\nMethods\n__init__(parent_container,\u00a0*[,\u00a0...])\nCreate a StreamlitCallbackHandler instance.\non_agent_action(action[,\u00a0color])\nRun on agent action.\non_agent_finish(finish[,\u00a0color])\nRun on agent end.\non_chain_end(outputs,\u00a0**kwargs)\nRun when chain ends running.\non_chain_error(error,\u00a0**kwargs)\nRun when chain errors.\non_chain_start(serialized,\u00a0inputs,\u00a0**kwargs)\nRun when chain starts running.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler.html"} {"id": "4876a8313efc-1", "text": "Run when chain starts running.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0**kwargs)\nRun when LLM ends running.\non_llm_error(error,\u00a0**kwargs)\nRun when LLM errors.\non_llm_new_token(token,\u00a0**kwargs)\nRun on new LLM token.\non_llm_start(serialized,\u00a0prompts,\u00a0**kwargs)\nRun when LLM starts running.\non_retriever_end(documents,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_text(text[,\u00a0color,\u00a0end])\nRun on arbitrary text.\non_tool_end(output[,\u00a0color,\u00a0...])\nRun when tool ends running.\non_tool_error(error,\u00a0**kwargs)\nRun when tool errors.\non_tool_start(serialized,\u00a0input_str,\u00a0**kwargs)\nRun when tool starts running.\nAttributes\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\non_agent_action(action: AgentAction, color: Optional[str] = None, **kwargs: Any) \u2192 Any[source]\u00b6\nRun on agent action.\non_agent_finish(finish: AgentFinish, color: Optional[str] = None, **kwargs: Any) \u2192 None[source]\u00b6\nRun on agent end.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler.html"} {"id": "4876a8313efc-2", "text": "Run on agent end.\non_chain_end(outputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain ends running.\non_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain errors.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain starts running.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\non_llm_end(response: LLMResult, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM ends running.\non_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM errors.\non_llm_new_token(token: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun on new LLM token. 
Only available when streaming is enabled.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM starts running.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever errors.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler.html"} {"id": "4876a8313efc-3", "text": "Run when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever starts running.\non_text(text: str, color: Optional[str] = None, end: str = '', **kwargs: Any) \u2192 None[source]\u00b6\nRun on arbitrary text.\non_tool_end(output: str, color: Optional[str] = None, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool ends running.\non_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool errors.\non_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool starts running.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler.html"}
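A minimal wiring sketch for this handler (illustrative; it assumes a Streamlit chat app and an agent built elsewhere named agent, both stand-ins for your own code):

import streamlit as st
from langchain.callbacks import StreamlitCallbackHandler

# Render the agent's thoughts live into this container as it runs.
st_callback = StreamlitCallbackHandler(st.container())

if prompt := st.chat_input():
    st.chat_message("user").write(prompt)
    with st.chat_message("assistant"):
        response = agent.run(prompt, callbacks=[st_callback])
        st.write(response)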
{"id": "77feeb4d8631-0", "text": "langchain.callbacks.utils.hash_string\u00b6\nlangchain.callbacks.utils.hash_string(s: str) \u2192 str[source]\u00b6\nHash a string using sha1.\nParameters\ns (str) \u2013 The string to hash.\nReturns\nThe hashed string.\nReturn type\n(str)", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.utils.hash_string.html"} {"id": "4f1b0e42c8e8-0", "text": "langchain.callbacks.human.HumanApprovalCallbackHandler\u00b6\nclass langchain.callbacks.human.HumanApprovalCallbackHandler(approve: ~typing.Callable[[~typing.Any], bool] = , should_check: ~typing.Callable[[~typing.Dict[str, ~typing.Any]], bool] = )[source]\u00b6\nBases: BaseCallbackHandler\nCallback for manually validating values.\nMethods\n__init__([approve,\u00a0should_check])\non_agent_action(action,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent action.\non_agent_finish(finish,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent end.\non_chain_end(outputs,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when chain ends running.\non_chain_error(error,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when chain errors.\non_chain_start(serialized,\u00a0inputs,\u00a0*,\u00a0run_id)\nRun when chain starts running.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when LLM ends running.\non_llm_error(error,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when LLM errors.\non_llm_new_token(token,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on new LLM token.\non_llm_start(serialized,\u00a0prompts,\u00a0*,\u00a0run_id)\nRun when LLM starts running.\non_retriever_end(documents,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever errors.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.human.HumanApprovalCallbackHandler.html"} {"id": "4f1b0e42c8e8-1", "text": "Run when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_text(text,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun on arbitrary text.\non_tool_end(output,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when tool ends running.\non_tool_error(error,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when tool errors.\non_tool_start(serialized,\u00a0input_str,\u00a0*,\u00a0run_id)\nRun when tool starts running.\nAttributes\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\non_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on agent action.\non_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on agent end.\non_chain_end(outputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when chain ends running.\non_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when chain errors.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.human.HumanApprovalCallbackHandler.html"} {"id": "4f1b0e42c8e8-2", "text": "Run when chain errors.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when chain starts running.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\non_llm_end(response: LLMResult, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when LLM ends running.\non_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when LLM errors.\non_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on new LLM token. 
Only available when streaming is enabled.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when LLM starts running.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.human.HumanApprovalCallbackHandler.html"} {"id": "4f1b0e42c8e8-3", "text": "Run when LLM starts running.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever starts running.\non_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on arbitrary text.\non_tool_end(output: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when tool ends running.\non_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when tool errors.\non_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any[source]\u00b6\nRun when tool starts running.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.human.HumanApprovalCallbackHandler.html"} {"id": "4f1b0e42c8e8-4", "text": "Whether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nraise_error: bool = True\u00b6\nrun_inline: bool = False\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.human.HumanApprovalCallbackHandler.html"}
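An illustrative sketch of gating a tool behind manual approval (assumptions: ShellTool from langchain.tools and a custom _approve policy function, both stand-ins for your own setup):

from langchain.callbacks.human import HumanApprovalCallbackHandler
from langchain.tools import ShellTool

def _approve(tool_input: str) -> bool:
    # Hypothetical policy: ask on stdin before every shell command.
    answer = input(f"Approve this input? {tool_input!r} (y/N): ")
    return answer.lower() in ("y", "yes")

tool = ShellTool(callbacks=[HumanApprovalCallbackHandler(approve=_approve)])
# If approval is denied, the handler raises and the tool does not run.
print(tool.run("echo hello"))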
{"id": "04a1b348517f-0", "text": "langchain.callbacks.streamlit.mutable_expander.ChildType\u00b6\nclass langchain.callbacks.streamlit.mutable_expander.ChildType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]\u00b6\nBases: Enum\nThe enumerator of the child type.\nAttributes\nMARKDOWN\nEXCEPTION\nEXCEPTION = 'EXCEPTION'\u00b6\nMARKDOWN = 'MARKDOWN'\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streamlit.mutable_expander.ChildType.html"} {"id": "4233d2bccbf2-0", "text": "langchain.callbacks.utils.import_spacy\u00b6\nlangchain.callbacks.utils.import_spacy() \u2192 Any[source]\u00b6\nImport the spacy python package and raise an error if it is not installed.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.utils.import_spacy.html"} {"id": "3e33abec9650-0", "text": "langchain.callbacks.tracers.stdout.try_json_stringify\u00b6\nlangchain.callbacks.tracers.stdout.try_json_stringify(obj: Any, fallback: str) \u2192 str[source]\u00b6\nTry to stringify an object to JSON.\nParameters\nobj \u2013 Object to stringify.\nfallback \u2013 Fallback string to return if the object cannot be stringified.\nReturns\nA JSON string if the object can be stringified, otherwise the fallback string.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.stdout.try_json_stringify.html"} {"id": "72278052a5fc-0", "text": "langchain.callbacks.streaming_stdout_final_only.FinalStreamingStdOutCallbackHandler\u00b6\nclass langchain.callbacks.streaming_stdout_final_only.FinalStreamingStdOutCallbackHandler(*, answer_prefix_tokens: Optional[List[str]] = None, strip_tokens: bool = True, stream_prefix: bool = False)[source]\u00b6\nBases: StreamingStdOutCallbackHandler\nCallback handler for streaming in agents.\nOnly works with agents using LLMs that support streaming.\nOnly the final output of the agent will be streamed.\nInstantiate FinalStreamingStdOutCallbackHandler.\nParameters\nanswer_prefix_tokens \u2013 Token sequence that prefixes the answer.\nDefault is [\u201cFinal\u201d, \u201cAnswer\u201d, \u201c:\u201d]\nstrip_tokens \u2013 Whether to ignore white space and new lines when comparing\nanswer_prefix_tokens to the most recent tokens (to determine whether\nthe answer has been reached).\nstream_prefix \u2013 Whether the answer prefix itself should also be streamed.\nMethods\n__init__(*[,\u00a0answer_prefix_tokens,\u00a0...])\nInstantiate FinalStreamingStdOutCallbackHandler.\nappend_to_last_tokens(token)\ncheck_if_answer_reached()\non_agent_action(action,\u00a0**kwargs)\nRun on agent action.\non_agent_finish(finish,\u00a0**kwargs)\nRun on agent end.\non_chain_end(outputs,\u00a0**kwargs)\nRun when chain ends running.\non_chain_error(error,\u00a0**kwargs)\nRun when chain errors.\non_chain_start(serialized,\u00a0inputs,\u00a0**kwargs)\nRun when chain starts running.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0**kwargs)\nRun when LLM ends running.\non_llm_error(error,\u00a0**kwargs)\nRun when LLM errors.\non_llm_new_token(token,\u00a0**kwargs)\nRun on new LLM token.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_stdout_final_only.FinalStreamingStdOutCallbackHandler.html"} {"id": "72278052a5fc-1", "text": "on_llm_new_token(token,\u00a0**kwargs)\nRun on new LLM token.\non_llm_start(serialized,\u00a0prompts,\u00a0**kwargs)\nRun when LLM starts running.\non_retriever_end(documents,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_text(text,\u00a0**kwargs)\nRun on arbitrary text.\non_tool_end(output,\u00a0**kwargs)\nRun when tool ends running.\non_tool_error(error,\u00a0**kwargs)\nRun when tool errors.\non_tool_start(serialized,\u00a0input_str,\u00a0**kwargs)\nRun when tool starts running.\nAttributes\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\nappend_to_last_tokens(token: str) \u2192 None[source]\u00b6\ncheck_if_answer_reached() \u2192 bool[source]\u00b6\non_agent_action(action: AgentAction, **kwargs: Any) \u2192 Any\u00b6\nRun on agent 
action.\non_agent_finish(finish: AgentFinish, **kwargs: Any) \u2192 None\u00b6\nRun on agent end.\non_chain_end(outputs: Dict[str, Any], **kwargs: Any) \u2192 None\u00b6\nRun when chain ends running.\non_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None\u00b6\nRun when chain errors.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_stdout_final_only.FinalStreamingStdOutCallbackHandler.html"} {"id": "72278052a5fc-2", "text": "Run when chain errors.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) \u2192 None\u00b6\nRun when chain starts running.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\non_llm_end(response: LLMResult, **kwargs: Any) \u2192 None\u00b6\nRun when LLM ends running.\non_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None\u00b6\nRun when LLM errors.\non_llm_new_token(token: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun on new LLM token. Only available when streaming is enabled.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM starts running.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_stdout_final_only.FinalStreamingStdOutCallbackHandler.html"} {"id": "72278052a5fc-3", "text": "Run when Retriever starts running.\non_text(text: str, **kwargs: Any) \u2192 None\u00b6\nRun on arbitrary text.\non_tool_end(output: str, **kwargs: Any) \u2192 None\u00b6\nRun when tool ends running.\non_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None\u00b6\nRun when tool errors.\non_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) \u2192 None\u00b6\nRun when tool starts running.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_stdout_final_only.FinalStreamingStdOutCallbackHandler.html"}
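A usage sketch for the handler above (illustrative; it assumes an OpenAI API key in the environment): only the tokens after the final-answer prefix reach stdout.

from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks.streaming_stdout_final_only import (
    FinalStreamingStdOutCallbackHandler,
)
from langchain.llms import OpenAI

llm = OpenAI(
    streaming=True,
    callbacks=[FinalStreamingStdOutCallbackHandler()],
    temperature=0,
)
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
agent.run("What is 2 to the 10th power?")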
{"id": "292df7c0569a-0", "text": "langchain.callbacks.wandb_callback.load_json_to_dict\u00b6\nlangchain.callbacks.wandb_callback.load_json_to_dict(json_path: Union[str, Path]) \u2192 dict[source]\u00b6\nLoad a json file to a dictionary.\nParameters\njson_path (str) \u2013 The path to the json file.\nReturns\nThe dictionary representation of the json file.\nReturn type\n(dict)", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.wandb_callback.load_json_to_dict.html"} {"id": "83b1f57a24ec-0", "text": "langchain.callbacks.tracers.wandb.WandbRunArgs\u00b6\nclass langchain.callbacks.tracers.wandb.WandbRunArgs[source]\u00b6\nBases: TypedDict\nArguments for the WandbTracer.\nMethods\n__init__(*args,\u00a0**kwargs)\nclear()\ncopy()\nfromkeys([value])\nCreate a new dictionary with keys from iterable and values set to value.\nget(key[,\u00a0default])\nReturn the value for key if key is in the dictionary, else default.\nitems()\nkeys()\npop(k[,d])\nIf the key is not found, return the default if given; otherwise, raise a KeyError.\npopitem()\nRemove and return a (key, value) pair as a 2-tuple.\nsetdefault(key[,\u00a0default])\nInsert key with a value of default if key is not in the dictionary.\nupdate([E,\u00a0]**F)\nIf E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]\nvalues()\nAttributes\njob_type\ndir\nconfig\nproject\nentity\nreinit\ntags\ngroup\nname\nnotes\nmagic\nconfig_exclude_keys\nconfig_include_keys\nanonymous\nmode\nallow_val_change\nresume\nforce\ntensorboard\nsync_tensorboard\nmonitor_gym\nsave_code\nid\nsettings\nclear() \u2192 None.\u00a0 Remove all items from D.\u00b6\ncopy() \u2192 a shallow copy of D\u00b6\nfromkeys(value=None, /)\u00b6\nCreate a new dictionary with keys from iterable and values set to value.\nget(key, default=None, /)\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.wandb.WandbRunArgs.html"} {"id": "83b1f57a24ec-1", "text": "get(key, default=None, /)\u00b6\nReturn the value for key if key is in the dictionary, else default.\nitems() \u2192 a set-like object providing a view on D's items\u00b6\nkeys() \u2192 a set-like object providing a view on D's keys\u00b6\npop(k[, d]) \u2192 v, remove specified key and return the corresponding value.\u00b6\nIf the key is not found, return the default if given; otherwise,\nraise a KeyError.\npopitem()\u00b6\nRemove and return a (key, value) pair as a 2-tuple.\nPairs are returned in LIFO (last-in, first-out) order.\nRaises KeyError if the dict is empty.\nsetdefault(key, default=None, /)\u00b6\nInsert key with a value of default if key is not in the dictionary.\nReturn the value for key if key is in the dictionary, else default.\nupdate([E, ]**F) \u2192 None.\u00a0 Update D from dict/iterable E and F.\u00b6\nIf E is present and has a .keys() method, then does: for k in E: D[k] = E[k]\nIf E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v\nIn either case, this is followed by: for k in F: D[k] = F[k]\nvalues() \u2192 an object providing a view on D's values\u00b6\nallow_val_change: Optional[bool]\u00b6\nanonymous: Optional[str]\u00b6\nconfig: Union[Dict, str, None]\u00b6\nconfig_exclude_keys: Optional[List[str]]\u00b6\nconfig_include_keys: Optional[List[str]]\u00b6\ndir: Optional[StrPath]\u00b6\nentity: Optional[str]\u00b6\nforce: Optional[bool]\u00b6\ngroup: Optional[str]\u00b6\nid: Optional[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.wandb.WandbRunArgs.html"} {"id": "83b1f57a24ec-2", "text": "group: Optional[str]\u00b6\nid: Optional[str]\u00b6\njob_type: Optional[str]\u00b6\nmagic: Optional[Union[dict, str, 
bool]]\u00b6\nmode: Optional[str]\u00b6\nmonitor_gym: Optional[bool]\u00b6\nname: Optional[str]\u00b6\nnotes: Optional[str]\u00b6\nproject: Optional[str]\u00b6\nreinit: Optional[bool]\u00b6\nresume: Optional[Union[bool, str]]\u00b6\nsave_code: Optional[bool]\u00b6\nsettings: Union[WBSettings, Dict[str, Any], None]\u00b6\nsync_tensorboard: Optional[bool]\u00b6\ntags: Optional[Sequence]\u00b6\ntensorboard: Optional[bool]\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.wandb.WandbRunArgs.html"} {"id": "27cb72bff9d4-0", "text": "langchain.callbacks.tracers.stdout.elapsed\u00b6\nlangchain.callbacks.tracers.stdout.elapsed(run: Any) \u2192 str[source]\u00b6\nGet the elapsed time of a run.\nParameters\nrun \u2013 any object with a start_time and end_time attribute.\nReturns\nA string with the elapsed time in seconds, or in milliseconds if the elapsed time is less than a second.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.stdout.elapsed.html"} {"id": "62c3bb007937-0", "text": "langchain.callbacks.manager.AsyncParentRunManager\u00b6\nclass langchain.callbacks.manager.AsyncParentRunManager(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: AsyncRunManager\nAsync Parent Run Manager.\nInitialize the run manager.\nParameters\nrun_id (UUID) \u2013 The ID of the run.\nhandlers (List[BaseCallbackHandler]) \u2013 The list of handlers.\ninheritable_handlers (List[BaseCallbackHandler]) \u2013 The list of inheritable handlers.\nparent_run_id (UUID, optional) \u2013 The ID of the parent run.\nDefaults to None.\ntags (Optional[List[str]]) \u2013 The list of tags.\ninheritable_tags (Optional[List[str]]) \u2013 The list of inheritable tags.\nmetadata (Optional[Dict[str, Any]]) \u2013 The metadata.\ninheritable_metadata (Optional[Dict[str, Any]]) \u2013 The inheritable metadata.\nMethods\n__init__(*,\u00a0run_id,\u00a0handlers,\u00a0...[,\u00a0...])\nInitialize the run manager.\nget_child([tag])\nGet a child callback manager.\nget_noop_manager()\nReturn a manager that doesn't perform any operations.\non_text(text,\u00a0**kwargs)\nRun when text is received.\nget_child(tag: Optional[str] = None) \u2192 AsyncCallbackManager[source]\u00b6\nGet a child callback manager.\nParameters\ntag (str, optional) \u2013 The tag for the child callback manager.\nDefaults to None.\nReturns\nThe child callback manager.\nReturn type\nAsyncCallbackManager", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.AsyncParentRunManager.html"} {"id": "62c3bb007937-1", "text": "Defaults to None.\nReturns\nThe child callback manager.\nReturn type\nAsyncCallbackManager\nclassmethod get_noop_manager() \u2192 BRM\u00b6\nReturn a manager that doesn\u2019t perform any operations.\nReturns\nThe noop manager.\nReturn type\nBaseRunManager\nasync on_text(text: str, **kwargs: Any) \u2192 Any\u00b6\nRun when text is received.\nParameters\ntext (str) \u2013 The received text.\nReturns\nThe result of the callback.\nReturn type\nAny", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.AsyncParentRunManager.html"} {"id": "e157791c759c-0", "text": 
"langchain.callbacks.wandb_callback.analyze_text\u00b6\nlangchain.callbacks.wandb_callback.analyze_text(text: str, complexity_metrics: bool = True, visualize: bool = True, nlp: Any = None, output_dir: Optional[Union[str, Path]] = None) \u2192 dict[source]\u00b6\nAnalyze text using textstat and spacy.\nParameters\ntext (str) \u2013 The text to analyze.\ncomplexity_metrics (bool) \u2013 Whether to compute complexity metrics.\nvisualize (bool) \u2013 Whether to visualize the text.\nnlp (spacy.lang) \u2013 The spacy language model to use for visualization.\noutput_dir (str) \u2013 The directory to save the visualization files to.\nReturns\nA dictionary containing the complexity metrics and visualization files serialized in a wandb.Html element.\nReturn type\n(dict)", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.wandb_callback.analyze_text.html"} {"id": "0a74cf52da46-0", "text": "langchain.callbacks.utils.import_pandas\u00b6\nlangchain.callbacks.utils.import_pandas() \u2192 Any[source]\u00b6\nImport the pandas python package and raise an error if it is not installed.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.utils.import_pandas.html"} {"id": "a382c816675a-0", "text": "langchain.callbacks.mlflow_callback.MlflowCallbackHandler\u00b6\nclass langchain.callbacks.mlflow_callback.MlflowCallbackHandler(name: Optional[str] = 'langchainrun-%', experiment: Optional[str] = 'langchain', tags: Optional[Dict] = {}, tracking_uri: Optional[str] = None)[source]\u00b6\nBases: BaseMetadataCallbackHandler, BaseCallbackHandler\nCallback Handler that logs metrics and artifacts to mlflow server.\nParameters\nname (str) \u2013 Name of the run.\nexperiment (str) \u2013 Name of the experiment.\ntags (dict) \u2013 Tags to be attached for the run.\ntracking_uri (str) \u2013 MLflow tracking server uri.\nThis handler utilizes the associated callback method, formats\nthe input of each callback function with metadata regarding the state of the LLM run,\nand adds the response to the list of records for both the {method}_records and\naction_records. 
It then logs the response to mlflow server.\nInitialize callback handler.\nMethods\n__init__([name,\u00a0experiment,\u00a0tags,\u00a0tracking_uri])\nInitialize callback handler.\nflush_tracker([langchain_asset,\u00a0finish])\nget_custom_callback_meta()\non_agent_action(action,\u00a0**kwargs)\nRun on agent action.\non_agent_finish(finish,\u00a0**kwargs)\nRun when agent ends running.\non_chain_end(outputs,\u00a0**kwargs)\nRun when chain ends running.\non_chain_error(error,\u00a0**kwargs)\nRun when chain errors.\non_chain_start(serialized,\u00a0inputs,\u00a0**kwargs)\nRun when chain starts running.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0**kwargs)\nRun when LLM ends running.\non_llm_error(error,\u00a0**kwargs)\nRun when LLM errors.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.mlflow_callback.MlflowCallbackHandler.html"} {"id": "a382c816675a-1", "text": "on_llm_error(error,\u00a0**kwargs)\nRun when LLM errors.\non_llm_new_token(token,\u00a0**kwargs)\nRun when LLM generates a new token.\non_llm_start(serialized,\u00a0prompts,\u00a0**kwargs)\nRun when LLM starts.\non_retriever_end(documents,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_text(text,\u00a0**kwargs)\nRun when agent is ending.\non_tool_end(output,\u00a0**kwargs)\nRun when tool ends running.\non_tool_error(error,\u00a0**kwargs)\nRun when tool errors.\non_tool_start(serialized,\u00a0input_str,\u00a0**kwargs)\nRun when tool starts running.\nreset_callback_meta()\nReset the callback metadata.\nAttributes\nalways_verbose\nWhether to call verbose callbacks even if verbose is False.\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\nflush_tracker(langchain_asset: Any = None, finish: bool = False) \u2192 None[source]\u00b6\nget_custom_callback_meta() \u2192 Dict[str, Any]\u00b6\non_agent_action(action: AgentAction, **kwargs: Any) \u2192 Any[source]\u00b6\nRun on agent action.\non_agent_finish(finish: AgentFinish, **kwargs: Any) \u2192 None[source]\u00b6\nRun when agent ends running.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.mlflow_callback.MlflowCallbackHandler.html"} {"id": "a382c816675a-2", "text": "Run when agent ends running.\non_chain_end(outputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain ends running.\non_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain errors.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain starts running.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\non_llm_end(response: LLMResult, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM ends running.\non_llm_error(error: Union[Exception, 
KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM errors.\non_llm_new_token(token: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM generates a new token.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM starts.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever errors.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.mlflow_callback.MlflowCallbackHandler.html"} {"id": "a382c816675a-3", "text": "Run when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever starts running.\non_text(text: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when agent is ending.\non_tool_end(output: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool ends running.\non_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool errors.\non_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool starts running.\nreset_callback_meta() \u2192 None\u00b6\nReset the callback metadata.\nproperty always_verbose: bool\u00b6\nWhether to call verbose callbacks even if verbose is False.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.mlflow_callback.MlflowCallbackHandler.html"} {"id": "56b9b4a4d6ae-0", "text": "langchain.callbacks.tracers.schemas.BaseRun\u00b6\nclass langchain.callbacks.tracers.schemas.BaseRun(*, uuid: str, parent_uuid: Optional[str] = None, start_time: datetime = None, end_time: datetime = None, extra: Optional[Dict[str, Any]] = None, execution_order: int, child_execution_order: int, serialized: Dict[str, Any], session_id: int, error: Optional[str] = None)[source]\u00b6\nBases: BaseModel\nBase class for Run.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam child_execution_order: int [Required]\u00b6\nparam end_time: datetime.datetime [Optional]\u00b6\nparam error: Optional[str] = None\u00b6\nparam execution_order: int [Required]\u00b6\nparam extra: Optional[Dict[str, Any]] = None\u00b6\nparam parent_uuid: Optional[str] = None\u00b6\nparam serialized: Dict[str, Any] [Required]\u00b6\nparam session_id: int [Required]\u00b6\nparam start_time: datetime.datetime [Optional]\u00b6\nparam uuid: str [Required]\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.schemas.BaseRun.html"} {"id": "285afbea671b-0", "text": 
"langchain.callbacks.streamlit.mutable_expander.ChildRecord\u00b6\nclass langchain.callbacks.streamlit.mutable_expander.ChildRecord(type: ChildType, kwargs: Dict[str, Any], dg: DeltaGenerator)[source]\u00b6\nBases: NamedTuple\nThe child record as a NamedTuple.\nCreate new instance of ChildRecord(type, kwargs, dg)\nMethods\n__init__()\ncount(value,\u00a0/)\nReturn number of occurrences of value.\nindex(value[,\u00a0start,\u00a0stop])\nReturn first index of value.\nAttributes\ndg\nAlias for field number 2\nkwargs\nAlias for field number 1\ntype\nAlias for field number 0\ncount(value, /)\u00b6\nReturn number of occurrences of value.\nindex(value, start=0, stop=9223372036854775807, /)\u00b6\nReturn first index of value.\nRaises ValueError if the value is not present.\ndg: DeltaGenerator\u00b6\nAlias for field number 2\nkwargs: Dict[str, Any]\u00b6\nAlias for field number 1\ntype: ChildType\u00b6\nAlias for field number 0", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streamlit.mutable_expander.ChildRecord.html"} {"id": "91b86a9018d1-0", "text": "langchain.callbacks.utils.import_textstat\u00b6\nlangchain.callbacks.utils.import_textstat() \u2192 Any[source]\u00b6\nImport the textstat python package and raise an error if it is not installed.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.utils.import_textstat.html"} {"id": "f6df7545db92-0", "text": "langchain.callbacks.wandb_callback.import_wandb\u00b6\nlangchain.callbacks.wandb_callback.import_wandb() \u2192 Any[source]\u00b6\nImport the wandb python package and raise an error if it is not installed.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.wandb_callback.import_wandb.html"} {"id": "7cee1f7b4e95-0", "text": "langchain.callbacks.arthur_callback.ArthurCallbackHandler\u00b6\nclass langchain.callbacks.arthur_callback.ArthurCallbackHandler(arthur_model: ArthurModel)[source]\u00b6\nBases: BaseCallbackHandler\nCallback Handler that logs to Arthur platform.\nArthur helps enterprise teams optimize model operations\nand performance at scale. The Arthur API tracks model\nperformance, explainability, and fairness across tabular,\nNLP, and CV models. 
Our API is model- and platform-agnostic,\nand continuously scales with complex and dynamic enterprise needs.\nTo learn more about Arthur, visit our website at\nhttps://www.arthur.ai/ or read the Arthur docs at\nhttps://docs.arthur.ai/\nInitialize callback handler.\nMethods\n__init__(arthur_model)\nInitialize callback handler.\nfrom_credentials(model_id[,\u00a0arthur_url,\u00a0...])\nInitialize callback handler from Arthur credentials.\non_agent_action(action,\u00a0**kwargs)\nDo nothing when agent takes a specific action.\non_agent_finish(finish,\u00a0**kwargs)\nDo nothing\non_chain_end(outputs,\u00a0**kwargs)\nOn chain end, do nothing.\non_chain_error(error,\u00a0**kwargs)\nDo nothing when LLM chain outputs an error.\non_chain_start(serialized,\u00a0inputs,\u00a0**kwargs)\nOn chain start, do nothing.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0**kwargs)\nOn LLM end, send data to Arthur.\non_llm_error(error,\u00a0**kwargs)\nDo nothing when LLM outputs an error.\non_llm_new_token(token,\u00a0**kwargs)\nOn new token, pass.\non_llm_start(serialized,\u00a0prompts,\u00a0**kwargs)\nOn LLM start, save the input prompts", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.arthur_callback.ArthurCallbackHandler.html"} {"id": "7cee1f7b4e95-1", "text": "On LLM start, save the input prompts\non_retriever_end(documents,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_text(text,\u00a0**kwargs)\nDo nothing\non_tool_end(output[,\u00a0observation_prefix,\u00a0...])\nDo nothing when tool ends.\non_tool_error(error,\u00a0**kwargs)\nDo nothing when tool outputs an error.\non_tool_start(serialized,\u00a0input_str,\u00a0**kwargs)\nDo nothing when tool starts.\nAttributes\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\nclassmethod from_credentials(model_id: str, arthur_url: Optional[str] = 'https://app.arthur.ai', arthur_login: Optional[str] = None, arthur_password: Optional[str] = None) \u2192 ArthurCallbackHandler[source]\u00b6\nInitialize callback handler from Arthur credentials.\nParameters\nmodel_id (str) \u2013 The ID of the arthur model to log to.\narthur_url (str, optional) \u2013 The URL of the Arthur instance to log to.\nDefaults to \u201chttps://app.arthur.ai\u201d.\narthur_login (str, optional) \u2013 The login to use to connect to Arthur.\nDefaults to None.\narthur_password (str, optional) \u2013 The password to use to connect to\nArthur. Defaults to None.\nReturns\nThe initialized callback handler.\nReturn type", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.arthur_callback.ArthurCallbackHandler.html"} {"id": "7cee1f7b4e95-2", "text": "Arthur. 
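A construction sketch for from_credentials (illustrative only, not from the source docs; the model ID, login, and password below are placeholders):\n>>> from langchain.callbacks.arthur_callback import ArthurCallbackHandler\n>>> from langchain.chat_models import ChatOpenAI\n>>> arthur_cb = ArthurCallbackHandler.from_credentials(\n...     model_id='placeholder-model-id',\n...     arthur_login='user@example.com',\n...     arthur_password='placeholder-password',\n... )  # placeholder credentials; substitute your own Arthur model and login\n>>> chat = ChatOpenAI(callbacks=[arthur_cb])\n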
Defaults to None.\nReturns\nThe initialized callback handler.\nReturn type\nArthurCallbackHandler\non_agent_action(action: AgentAction, **kwargs: Any) \u2192 Any[source]\u00b6\nDo nothing when agent takes a specific action.\non_agent_finish(finish: AgentFinish, **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing\non_chain_end(outputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nOn chain end, do nothing.\non_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing when LLM chain outputs an error.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nOn chain start, do nothing.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\non_llm_end(response: LLMResult, **kwargs: Any) \u2192 None[source]\u00b6\nOn LLM end, send data to Arthur.\non_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing when LLM outputs an error.\non_llm_new_token(token: str, **kwargs: Any) \u2192 None[source]\u00b6\nOn new token, pass.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) \u2192 None[source]\u00b6\nOn LLM start, save the input prompts", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.arthur_callback.ArthurCallbackHandler.html"} {"id": "7cee1f7b4e95-3", "text": "On LLM start, save the input prompts\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever starts running.\non_text(text: str, **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing\non_tool_end(output: str, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing when tool ends.\non_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing when tool outputs an error.\non_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing when tool starts.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.arthur_callback.ArthurCallbackHandler.html"} {"id": "8d0f9476b763-0", "text": "langchain.callbacks.promptlayer_callback.PromptLayerCallbackHandler\u00b6\nclass 
langchain.callbacks.promptlayer_callback.PromptLayerCallbackHandler(pl_id_callback: Optional[Callable[[...], Any]] = None, pl_tags: Optional[List[str]] = [])[source]\u00b6\nBases: BaseCallbackHandler\nCallback handler for promptlayer.\nInitialize the PromptLayerCallbackHandler.\nMethods\n__init__([pl_id_callback,\u00a0pl_tags])\nInitialize the PromptLayerCallbackHandler.\non_agent_action(action,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent action.\non_agent_finish(finish,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent end.\non_chain_end(outputs,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when chain ends running.\non_chain_error(error,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when chain errors.\non_chain_start(serialized,\u00a0inputs,\u00a0*,\u00a0run_id)\nRun when chain starts running.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when LLM ends running.\non_llm_error(error,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when LLM errors.\non_llm_new_token(token,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on new LLM token.\non_llm_start(serialized,\u00a0prompts,\u00a0*,\u00a0run_id)\nRun when LLM starts running.\non_retriever_end(documents,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever errors.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.promptlayer_callback.PromptLayerCallbackHandler.html"} {"id": "8d0f9476b763-1", "text": "Run when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_text(text,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun on arbitrary text.\non_tool_end(output,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when tool ends running.\non_tool_error(error,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when tool errors.\non_tool_start(serialized,\u00a0input_str,\u00a0*,\u00a0run_id)\nRun when tool starts running.\nAttributes\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\non_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on agent action.\non_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on agent end.\non_chain_end(outputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when chain ends running.\non_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when chain errors.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.promptlayer_callback.PromptLayerCallbackHandler.html"} {"id": "8d0f9476b763-2", "text": "Run when chain errors.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when chain starts running.\non_chat_model_start(serialized: Dict[str, Any], messages: 
List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 Any[source]\u00b6\nRun when a chat model starts running.\non_llm_end(response: LLMResult, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM ends running.\non_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when LLM errors.\non_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on new LLM token. Only available when streaming is enabled.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 Any[source]\u00b6\nRun when LLM starts running.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever ends running.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.promptlayer_callback.PromptLayerCallbackHandler.html"} {"id": "8d0f9476b763-3", "text": "Run when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever starts running.\non_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on arbitrary text.\non_tool_end(output: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when tool ends running.\non_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when tool errors.\non_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when tool starts running.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.promptlayer_callback.PromptLayerCallbackHandler.html"} {"id": "8d0f9476b763-4", "text": "property ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.promptlayer_callback.PromptLayerCallbackHandler.html"} {"id": "133386afc47b-0", "text": "langchain.callbacks.tracers.langchain_v1.get_headers\u00b6\nlangchain.callbacks.tracers.langchain_v1.get_headers() \u2192 Dict[str, Any][source]\u00b6\nGet the headers for the LangChain API.", "source": 
"https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.langchain_v1.get_headers.html"} {"id": "3fd27d64cfae-0", "text": "langchain.callbacks.manager.tracing_v2_enabled\u00b6\nlangchain.callbacks.manager.tracing_v2_enabled(project_name: Optional[str] = None, *, example_id: Optional[Union[UUID, str]] = None, tags: Optional[List[str]] = None) \u2192 Generator[None, None, None][source]\u00b6\nInstruct LangChain to log all runs in context to LangSmith.\nParameters\nproject_name (str, optional) \u2013 The name of the project.\nDefaults to \u201cdefault\u201d.\nexample_id (str or UUID, optional) \u2013 The ID of the example.\nDefaults to None.\ntags (List[str], optional) \u2013 The tags to add to the run.\nDefaults to None.\nReturns\nNone\nExample\n>>> with tracing_v2_enabled():\n... # LangChain code will automatically be traced", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.tracing_v2_enabled.html"} {"id": "f0f0f1d3080d-0", "text": "langchain.callbacks.streamlit.__init__.StreamlitCallbackHandler\u00b6\nlangchain.callbacks.streamlit.__init__.StreamlitCallbackHandler(parent_container: DeltaGenerator, *, max_thought_containers: int = 4, expand_new_thoughts: bool = True, collapse_completed_thoughts: bool = True, thought_labeler: Optional[LLMThoughtLabeler] = None) \u2192 BaseCallbackHandler[source]\u00b6\nConstruct a new StreamlitCallbackHandler. This CallbackHandler is geared towards\nuse with a LangChain Agent; it displays the Agent\u2019s LLM and tool-usage \u201cthoughts\u201d\ninside a series of Streamlit expanders.\nParameters\nparent_container \u2013 The st.container that will contain all the Streamlit elements that the\nHandler creates.\nmax_thought_containers \u2013 The max number of completed LLM thought containers to show at once. When this\nthreshold is reached, a new thought will cause the oldest thoughts to be\ncollapsed into a \u201cHistory\u201d expander. Defaults to 4.\nexpand_new_thoughts \u2013 Each LLM \u201cthought\u201d gets its own st.expander. This param controls whether that\nexpander is expanded by default. Defaults to True.\ncollapse_completed_thoughts \u2013 If True, LLM thought expanders will be collapsed when completed.\nDefaults to True.\nthought_labeler \u2013 An optional custom LLMThoughtLabeler instance. If unspecified, the handler\nwill use the default thought labeling logic. 
Defaults to None.\nReturns\nA new StreamlitCallbackHandler instance.\nNote that this is an \u201cauto-updating\u201d API: if the installed version of Streamlit\nhas a more recent StreamlitCallbackHandler implementation, an instance of that class\nwill be used.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streamlit.__init__.StreamlitCallbackHandler.html"} {"id": "118544f9c308-0", "text": "langchain.callbacks.manager.CallbackManager\u00b6\nclass langchain.callbacks.manager.CallbackManager(handlers: List[BaseCallbackHandler], inheritable_handlers: Optional[List[BaseCallbackHandler]] = None, parent_run_id: Optional[UUID] = None, *, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: BaseCallbackManager\nCallback manager that can be used to handle callbacks from langchain.\nInitialize callback manager.\nMethods\n__init__(handlers[,\u00a0inheritable_handlers,\u00a0...])\nInitialize callback manager.\nadd_handler(handler[,\u00a0inherit])\nAdd a handler to the callback manager.\nadd_metadata(metadata[,\u00a0inherit])\nadd_tags(tags[,\u00a0inherit])\nconfigure([inheritable_callbacks,\u00a0...])\nConfigure the callback manager.\non_chain_start(serialized,\u00a0inputs[,\u00a0run_id])\nRun when chain starts running.\non_chat_model_start(serialized,\u00a0messages,\u00a0...)\nRun when a chat model starts running.\non_llm_start(serialized,\u00a0prompts,\u00a0**kwargs)\nRun when LLM starts running.\non_retriever_start(serialized,\u00a0query[,\u00a0...])\nRun when retriever starts running.\non_tool_start(serialized,\u00a0input_str[,\u00a0...])\nRun when tool starts running.\nremove_handler(handler)\nRemove a handler from the callback manager.\nremove_metadata(keys)\nremove_tags(tags)\nset_handler(handler[,\u00a0inherit])\nSet handler as the only handler on the callback manager.\nset_handlers(handlers[,\u00a0inherit])\nSet handlers as the only handlers on the callback manager.\nAttributes\nis_async\nWhether the callback manager is async.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.CallbackManager.html"} {"id": "118544f9c308-1", "text": "Attributes\nis_async\nWhether the callback manager is async.\nadd_handler(handler: BaseCallbackHandler, inherit: bool = True) \u2192 None\u00b6\nAdd a handler to the callback manager.\nadd_metadata(metadata: Dict[str, Any], inherit: bool = True) \u2192 None\u00b6\nadd_tags(tags: List[str], inherit: bool = True) \u2192 None\u00b6\nclassmethod configure(inheritable_callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, local_callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, verbose: bool = False, inheritable_tags: Optional[List[str]] = None, local_tags: Optional[List[str]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None, local_metadata: Optional[Dict[str, Any]] = None) \u2192 CallbackManager[source]\u00b6\nConfigure the callback manager.\nParameters\ninheritable_callbacks (Optional[Callbacks], optional) \u2013 The inheritable\ncallbacks. Defaults to None.\nlocal_callbacks (Optional[Callbacks], optional) \u2013 The local callbacks.\nDefaults to None.\nverbose (bool, optional) \u2013 Whether to enable verbose mode. 
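A usage sketch for the StreamlitCallbackHandler documented above, before the CallbackManager parameter list resumes (illustrative only; assumes streamlit is installed and that agent is an already-initialized LangChain agent executor):\n>>> import streamlit as st\n>>> from langchain.callbacks import StreamlitCallbackHandler\n>>> st_cb = StreamlitCallbackHandler(st.container(), max_thought_containers=4)\n>>> # agent is assumed to be built elsewhere, e.g. with initialize_agent.\n>>> response = agent.run('What is 2 to the 10th power?', callbacks=[st_cb])\n>>> st.write(response)\n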
Defaults to False.\ninheritable_tags (Optional[List[str]], optional) \u2013 The inheritable tags.\nDefaults to None.\nlocal_tags (Optional[List[str]], optional) \u2013 The local tags.\nDefaults to None.\ninheritable_metadata (Optional[Dict[str, Any]], optional) \u2013 The inheritable\nmetadata. Defaults to None.\nlocal_metadata (Optional[Dict[str, Any]], optional) \u2013 The local metadata.\nDefaults to None.\nReturns\nThe configured callback manager.\nReturn type\nCallbackManager\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], run_id: Optional[UUID] = None, **kwargs: Any) \u2192 CallbackManagerForChainRun[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.CallbackManager.html"} {"id": "118544f9c308-2", "text": "Run when chain starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 The serialized chain.\ninputs (Dict[str, Any]) \u2013 The inputs to the chain.\nrun_id (UUID, optional) \u2013 The ID of the run. Defaults to None.\nReturns\nThe callback manager for the chain run.\nReturn type\nCallbackManagerForChainRun\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs: Any) \u2192 List[CallbackManagerForLLMRun][source]\u00b6\nRun when a chat model starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 The serialized LLM.\nmessages (List[List[BaseMessage]]) \u2013 The list of messages.\nrun_id (UUID, optional) \u2013 The ID of the run. Defaults to None.\nReturns\nA callback manager for each list of messages as an LLM run.\nReturn type\nList[CallbackManagerForLLMRun]\non_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) \u2192 List[CallbackManagerForLLMRun][source]\u00b6\nRun when LLM starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 The serialized LLM.\nprompts (List[str]) \u2013 The list of prompts.\nrun_id (UUID, optional) \u2013 The ID of the run. Defaults to None.\nReturns\nA callback manager for each prompt as an LLM run.\nReturn type\nList[CallbackManagerForLLMRun]\non_retriever_start(serialized: Dict[str, Any], query: str, run_id: Optional[UUID] = None, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 CallbackManagerForRetrieverRun[source]\u00b6\nRun when retriever starts running.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.CallbackManager.html"} {"id": "118544f9c308-3", "text": "Run when retriever starts running.\non_tool_start(serialized: Dict[str, Any], input_str: str, run_id: Optional[UUID] = None, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 CallbackManagerForToolRun[source]\u00b6\nRun when tool starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 The serialized tool.\ninput_str (str) \u2013 The input to the tool.\nrun_id (UUID, optional) \u2013 The ID of the run. Defaults to None.\nparent_run_id (UUID, optional) \u2013 The ID of the parent run. 
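A simplified end-to-end sketch of the CallbackManager flow documented here (an illustrative assumption about the call sequence, not a complete implementation; the serialized dict and the hand-built LLMResult are placeholders):\n>>> from langchain.callbacks.manager import CallbackManager\n>>> from langchain.callbacks.stdout import StdOutCallbackHandler\n>>> from langchain.schema import Generation, LLMResult\n>>> manager = CallbackManager.configure(local_callbacks=[StdOutCallbackHandler()], verbose=True)\n>>> run_managers = manager.on_llm_start({'name': 'fake-llm'}, ['Hello, world'])\n>>> # on_llm_start returns one CallbackManagerForLLMRun per prompt; the model call itself is elided here.\n>>> run_managers[0].on_llm_end(LLMResult(generations=[[Generation(text='Hi!')]]))\n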
Defaults to None.\nReturns\nThe callback manager for the tool run.\nReturn type\nCallbackManagerForToolRun\nremove_handler(handler: BaseCallbackHandler) \u2192 None\u00b6\nRemove a handler from the callback manager.\nremove_metadata(keys: List[str]) \u2192 None\u00b6\nremove_tags(tags: List[str]) \u2192 None\u00b6\nset_handler(handler: BaseCallbackHandler, inherit: bool = True) \u2192 None\u00b6\nSet handler as the only handler on the callback manager.\nset_handlers(handlers: List[BaseCallbackHandler], inherit: bool = True) \u2192 None\u00b6\nSet handlers as the only handlers on the callback manager.\nproperty is_async: bool\u00b6\nWhether the callback manager is async.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.CallbackManager.html"} {"id": "2d3385680529-0", "text": "langchain.callbacks.manager.ParentRunManager\u00b6\nclass langchain.callbacks.manager.ParentRunManager(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: RunManager\nSync Parent Run Manager.\nInitialize the run manager.\nParameters\nrun_id (UUID) \u2013 The ID of the run.\nhandlers (List[BaseCallbackHandler]) \u2013 The list of handlers.\ninheritable_handlers (List[BaseCallbackHandler]) \u2013 The list of inheritable handlers.\nparent_run_id (UUID, optional) \u2013 The ID of the parent run.\nDefaults to None.\ntags (Optional[List[str]]) \u2013 The list of tags.\ninheritable_tags (Optional[List[str]]) \u2013 The list of inheritable tags.\nmetadata (Optional[Dict[str, Any]]) \u2013 The metadata.\ninheritable_metadata (Optional[Dict[str, Any]]) \u2013 The inheritable metadata.\nMethods\n__init__(*,\u00a0run_id,\u00a0handlers,\u00a0...[,\u00a0...])\nInitialize the run manager.\nget_child([tag])\nGet a child callback manager.\nget_noop_manager()\nReturn a manager that doesn't perform any operations.\non_text(text,\u00a0**kwargs)\nRun when text is received.\nget_child(tag: Optional[str] = None) \u2192 CallbackManager[source]\u00b6\nGet a child callback manager.\nParameters\ntag (str, optional) \u2013 The tag for the child callback manager.\nDefaults to None.\nReturns\nThe child callback manager.\nReturn type\nCallbackManager\nclassmethod get_noop_manager() \u2192 BRM\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.ParentRunManager.html"} {"id": "2d3385680529-1", "text": "Return type\nCallbackManager\nclassmethod get_noop_manager() \u2192 BRM\u00b6\nReturn a manager that doesn\u2019t perform any operations.\nReturns\nThe noop manager.\nReturn type\nBaseRunManager\non_text(text: str, **kwargs: Any) \u2192 Any\u00b6\nRun when text is received.\nParameters\ntext (str) \u2013 The received text.\nReturns\nThe result of the callback.\nReturn type\nAny", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.ParentRunManager.html"} {"id": "d9f5d8164037-0", "text": "langchain.callbacks.base.BaseCallbackManager\u00b6\nclass langchain.callbacks.base.BaseCallbackManager(handlers: List[BaseCallbackHandler], inheritable_handlers: Optional[List[BaseCallbackHandler]] = None, parent_run_id: Optional[UUID] = None, *, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, 
Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: CallbackManagerMixin\nBase callback manager that can be used to handle callbacks from LangChain.\nInitialize callback manager.\nMethods\n__init__(handlers[,\u00a0inheritable_handlers,\u00a0...])\nInitialize callback manager.\nadd_handler(handler[,\u00a0inherit])\nAdd a handler to the callback manager.\nadd_metadata(metadata[,\u00a0inherit])\nadd_tags(tags[,\u00a0inherit])\non_chain_start(serialized,\u00a0inputs,\u00a0*,\u00a0run_id)\nRun when chain starts running.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_start(serialized,\u00a0prompts,\u00a0*,\u00a0run_id)\nRun when LLM starts running.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_tool_start(serialized,\u00a0input_str,\u00a0*,\u00a0run_id)\nRun when tool starts running.\nremove_handler(handler)\nRemove a handler from the callback manager.\nremove_metadata(keys)\nremove_tags(tags)\nset_handler(handler[,\u00a0inherit])\nSet handler as the only handler on the callback manager.\nset_handlers(handlers[,\u00a0inherit])\nSet handlers as the only handlers on the callback manager.\nAttributes\nis_async\nWhether the callback manager is async.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.base.BaseCallbackManager.html"} {"id": "d9f5d8164037-1", "text": "Attributes\nis_async\nWhether the callback manager is async.\nadd_handler(handler: BaseCallbackHandler, inherit: bool = True) \u2192 None[source]\u00b6\nAdd a handler to the callback manager.\nadd_metadata(metadata: Dict[str, Any], inherit: bool = True) \u2192 None[source]\u00b6\nadd_tags(tags: List[str], inherit: bool = True) \u2192 None[source]\u00b6\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when chain starts running.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when LLM starts running.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever starts running.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.base.BaseCallbackManager.html"} {"id": "d9f5d8164037-2", "text": "Run when Retriever starts running.\non_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when tool starts running.\nremove_handler(handler: BaseCallbackHandler) \u2192 None[source]\u00b6\nRemove a handler from the callback manager.\nremove_metadata(keys: List[str]) \u2192 None[source]\u00b6\nremove_tags(tags: List[str]) \u2192 
None[source]\u00b6\nset_handler(handler: BaseCallbackHandler, inherit: bool = True) \u2192 None[source]\u00b6\nSet handler as the only handler on the callback manager.\nset_handlers(handlers: List[BaseCallbackHandler], inherit: bool = True) \u2192 None[source]\u00b6\nSet handlers as the only handlers on the callback manager.\nproperty is_async: bool\u00b6\nWhether the callback manager is async.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.base.BaseCallbackManager.html"} {"id": "804878e1b75f-0", "text": "langchain.callbacks.comet_ml_callback.import_comet_ml\u00b6\nlangchain.callbacks.comet_ml_callback.import_comet_ml() \u2192 Any[source]\u00b6\nImport comet_ml and raise an error if it is not installed.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.comet_ml_callback.import_comet_ml.html"} {"id": "c73954108d58-0", "text": "langchain.callbacks.arize_callback.ArizeCallbackHandler\u00b6\nclass langchain.callbacks.arize_callback.ArizeCallbackHandler(model_id: Optional[str] = None, model_version: Optional[str] = None, SPACE_KEY: Optional[str] = None, API_KEY: Optional[str] = None)[source]\u00b6\nBases: BaseCallbackHandler\nCallback Handler that logs to Arize.\nInitialize callback handler.\nMethods\n__init__([model_id,\u00a0model_version,\u00a0...])\nInitialize callback handler.\non_agent_action(action,\u00a0**kwargs)\nDo nothing.\non_agent_finish(finish,\u00a0**kwargs)\nRun on agent end.\non_chain_end(outputs,\u00a0**kwargs)\nDo nothing.\non_chain_error(error,\u00a0**kwargs)\nDo nothing.\non_chain_start(serialized,\u00a0inputs,\u00a0**kwargs)\nRun when chain starts running.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0**kwargs)\nRun when LLM ends running.\non_llm_error(error,\u00a0**kwargs)\nDo nothing.\non_llm_new_token(token,\u00a0**kwargs)\nDo nothing.\non_llm_start(serialized,\u00a0prompts,\u00a0**kwargs)\nRun when LLM starts running.\non_retriever_end(documents,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_text(text,\u00a0**kwargs)\nRun on arbitrary text.\non_tool_end(output[,\u00a0observation_prefix,\u00a0...])\nRun when tool ends running.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.arize_callback.ArizeCallbackHandler.html"} {"id": "c73954108d58-1", "text": "on_tool_end(output[,\u00a0observation_prefix,\u00a0...])\nRun when tool ends running.\non_tool_error(error,\u00a0**kwargs)\nRun when tool errors.\non_tool_start(serialized,\u00a0input_str,\u00a0**kwargs)\nRun when tool starts running.\nAttributes\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\non_agent_action(action: AgentAction, **kwargs: Any) \u2192 Any[source]\u00b6\nDo nothing.\non_agent_finish(finish: AgentFinish, **kwargs: Any) \u2192 None[source]\u00b6\nRun on agent end.\non_chain_end(outputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing.\non_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nDo 
nothing.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain starts running.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\non_llm_end(response: LLMResult, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM ends running.\non_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.arize_callback.ArizeCallbackHandler.html"} {"id": "c73954108d58-2", "text": "Do nothing.\non_llm_new_token(token: str, **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM starts running.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever starts running.\non_text(text: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun on arbitrary text.\non_tool_end(output: str, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool ends running.\non_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool errors.\non_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool starts running.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.arize_callback.ArizeCallbackHandler.html"} {"id": "c73954108d58-3", "text": "Whether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.arize_callback.ArizeCallbackHandler.html"} {"id": "79b9d2ee2597-0", "text": "langchain.callbacks.stdout.StdOutCallbackHandler\u00b6\nclass langchain.callbacks.stdout.StdOutCallbackHandler(color: Optional[str] = None)[source]\u00b6\nBases: BaseCallbackHandler\nCallback Handler that prints to std out.\nInitialize callback handler.\nMethods\n__init__([color])\nInitialize callback handler.\non_agent_action(action[,\u00a0color])\nRun on agent action.\non_agent_finish(finish[,\u00a0color])\nRun on agent end.\non_chain_end(outputs,\u00a0**kwargs)\nPrint out that we finished a chain.\non_chain_error(error,\u00a0**kwargs)\nDo 
nothing.\non_chain_start(serialized,\u00a0inputs,\u00a0**kwargs)\nPrint out that we are entering a chain.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0**kwargs)\nDo nothing.\non_llm_error(error,\u00a0**kwargs)\nDo nothing.\non_llm_new_token(token,\u00a0**kwargs)\nDo nothing.\non_llm_start(serialized,\u00a0prompts,\u00a0**kwargs)\nPrint out the prompts.\non_retriever_end(documents,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_text(text[,\u00a0color,\u00a0end])\nRun when agent ends.\non_tool_end(output[,\u00a0color,\u00a0...])\nIf not the final action, print out observation.\non_tool_error(error,\u00a0**kwargs)\nDo nothing.\non_tool_start(serialized,\u00a0input_str,\u00a0**kwargs)\nDo nothing.\nAttributes", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.stdout.StdOutCallbackHandler.html"} {"id": "79b9d2ee2597-1", "text": "Do nothing.\nAttributes\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\non_agent_action(action: AgentAction, color: Optional[str] = None, **kwargs: Any) \u2192 Any[source]\u00b6\nRun on agent action.\non_agent_finish(finish: AgentFinish, color: Optional[str] = None, **kwargs: Any) \u2192 None[source]\u00b6\nRun on agent end.\non_chain_end(outputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nPrint out that we finished a chain.\non_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nPrint out that we are entering a chain.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\non_llm_end(response: LLMResult, **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing.\non_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing.\non_llm_new_token(token: str, **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.stdout.StdOutCallbackHandler.html"} {"id": "79b9d2ee2597-2", "text": "Do nothing.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) \u2192 None[source]\u00b6\nPrint out the prompts.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 
Any\u00b6\nRun when Retriever starts running.\non_text(text: str, color: Optional[str] = None, end: str = '', **kwargs: Any) \u2192 None[source]\u00b6\nRun when agent ends.\non_tool_end(output: str, color: Optional[str] = None, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) \u2192 None[source]\u00b6\nIf not the final action, print out observation.\non_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing.\non_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.stdout.StdOutCallbackHandler.html"} {"id": "79b9d2ee2597-3", "text": "property ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.stdout.StdOutCallbackHandler.html"} {"id": "60e9965bf93d-0", "text": "langchain.callbacks.flyte_callback.import_flytekit\u00b6\nlangchain.callbacks.flyte_callback.import_flytekit() \u2192 Tuple[flytekit, renderer][source]\u00b6\nImport flytekit and flytekitplugins-deck-standard.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.flyte_callback.import_flytekit.html"} {"id": "2598d9795214-0", "text": "langchain.callbacks.manager.get_openai_callback\u00b6\nlangchain.callbacks.manager.get_openai_callback() \u2192 Generator[OpenAICallbackHandler, None, None][source]\u00b6\nGet the OpenAI callback handler in a context manager,\nwhich conveniently exposes token and cost information.\nReturns\nThe OpenAI callback handler.\nReturn type\nOpenAICallbackHandler\nExample\n>>> with get_openai_callback() as cb:\n... 
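# a sketch completing this stub (assumes the OpenAI integration is configured)\n...     from langchain.llms import OpenAI\n...     llm = OpenAI(temperature=0)\n...     llm('Tell me a joke')\n...     # total_tokens and total_cost accumulate across the calls made in this block\n...     print(cb.total_tokens, cb.total_cost)\n... 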
# Use the OpenAI callback handler", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.get_openai_callback.html"} {"id": "00f8859070be-0", "text": "langchain.callbacks.tracers.schemas.Run\u00b6\nclass langchain.callbacks.tracers.schemas.Run(*, id: UUID, name: str, start_time: datetime, run_type: str, end_time: Optional[datetime] = None, extra: Optional[dict] = None, error: Optional[str] = None, serialized: Optional[dict] = None, events: Optional[List[Dict]] = None, inputs: dict, outputs: Optional[dict] = None, reference_example_id: Optional[UUID] = None, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, execution_order: int, child_execution_order: int, child_runs: List[Run] = None)[source]\u00b6\nBases: RunBase\nRun schema for the V2 API in the Tracer.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam child_execution_order: int [Required]\u00b6\nparam child_runs: List[langchain.callbacks.tracers.schemas.Run] [Optional]\u00b6\nparam end_time: Optional[datetime.datetime] = None\u00b6\nparam error: Optional[str] = None\u00b6\nparam events: Optional[List[Dict]] = None\u00b6\nparam execution_order: int [Required]\u00b6\nparam extra: Optional[dict] = None\u00b6\nparam id: uuid.UUID [Required]\u00b6\nparam inputs: dict [Required]\u00b6\nparam name: str [Required]\u00b6\nparam outputs: Optional[dict] = None\u00b6\nparam parent_run_id: Optional[uuid.UUID] = None\u00b6\nparam reference_example_id: Optional[uuid.UUID] = None\u00b6\nparam run_type: str [Required]\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.schemas.Run.html"} {"id": "00f8859070be-1", "text": "param run_type: str [Required]\u00b6\nparam serialized: Optional[dict] = None\u00b6\nparam start_time: datetime.datetime [Required]\u00b6\nparam tags: Optional[List[str]] [Optional]\u00b6\nvalidator assign_name\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nAssign name to the run.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.schemas.Run.html"} {"id": "66489488fc2b-0", "text": "langchain.callbacks.flyte_callback.FlyteCallbackHandler\u00b6\nclass langchain.callbacks.flyte_callback.FlyteCallbackHandler[source]\u00b6\nBases: BaseMetadataCallbackHandler, BaseCallbackHandler\nThis callback handler is designed specifically for usage within a Flyte task.\nInitialize callback handler.\nMethods\n__init__()\nInitialize callback handler.\nget_custom_callback_meta()\non_agent_action(action,\u00a0**kwargs)\nRun on agent action.\non_agent_finish(finish,\u00a0**kwargs)\nRun when agent ends running.\non_chain_end(outputs,\u00a0**kwargs)\nRun when chain ends running.\non_chain_error(error,\u00a0**kwargs)\nRun when chain errors.\non_chain_start(serialized,\u00a0inputs,\u00a0**kwargs)\nRun when chain starts running.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0**kwargs)\nRun when LLM ends running.\non_llm_error(error,\u00a0**kwargs)\nRun when LLM errors.\non_llm_new_token(token,\u00a0**kwargs)\nRun when LLM generates a new token.\non_llm_start(serialized,\u00a0prompts,\u00a0**kwargs)\nRun when LLM starts.\non_retriever_end(documents,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever 
errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_text(text,\u00a0**kwargs)\nRun when agent is ending.\non_tool_end(output,\u00a0**kwargs)\nRun when tool ends running.\non_tool_error(error,\u00a0**kwargs)", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.flyte_callback.FlyteCallbackHandler.html"} {"id": "66489488fc2b-1", "text": "Run when tool ends running.\non_tool_error(error,\u00a0**kwargs)\nRun when tool errors.\non_tool_start(serialized,\u00a0input_str,\u00a0**kwargs)\nRun when tool starts running.\nreset_callback_meta()\nReset the callback metadata.\nAttributes\nalways_verbose\nWhether to call verbose callbacks even if verbose is False.\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\nget_custom_callback_meta() \u2192 Dict[str, Any]\u00b6\non_agent_action(action: AgentAction, **kwargs: Any) \u2192 Any[source]\u00b6\nRun on agent action.\non_agent_finish(finish: AgentFinish, **kwargs: Any) \u2192 None[source]\u00b6\nRun when agent ends running.\non_chain_end(outputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain ends running.\non_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain errors.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain starts running.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\non_llm_end(response: LLMResult, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM ends running.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.flyte_callback.FlyteCallbackHandler.html"} {"id": "66489488fc2b-2", "text": "Run when LLM ends running.\non_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM errors.\non_llm_new_token(token: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM generates a new token.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM starts.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever starts running.\non_text(text: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when agent is ending.\non_tool_end(output: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool ends running.\non_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool 
errors.\non_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool starts running.\nreset_callback_meta() \u2192 None\u00b6\nReset the callback metadata.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.flyte_callback.FlyteCallbackHandler.html"} {"id": "66489488fc2b-3", "text": "reset_callback_meta() \u2192 None\u00b6\nReset the callback metadata.\nproperty always_verbose: bool\u00b6\nWhether to call verbose callbacks even if verbose is False.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.flyte_callback.FlyteCallbackHandler.html"} {"id": "696e7d456144-0", "text": "langchain.callbacks.manager.AsyncRunManager\u00b6\nclass langchain.callbacks.manager.AsyncRunManager(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: BaseRunManager\nAsync Run Manager.\nInitialize the run manager.\nParameters\nrun_id (UUID) \u2013 The ID of the run.\nhandlers (List[BaseCallbackHandler]) \u2013 The list of handlers.\ninheritable_handlers (List[BaseCallbackHandler]) \u2013 The list of inheritable handlers.\nparent_run_id (UUID, optional) \u2013 The ID of the parent run.\nDefaults to None.\ntags (Optional[List[str]]) \u2013 The list of tags.\ninheritable_tags (Optional[List[str]]) \u2013 The list of inheritable tags.\nmetadata (Optional[Dict[str, Any]]) \u2013 The metadata.\ninheritable_metadata (Optional[Dict[str, Any]]) \u2013 The inheritable metadata.\nMethods\n__init__(*,\u00a0run_id,\u00a0handlers,\u00a0...[,\u00a0...])\nInitialize the run manager.\nget_noop_manager()\nReturn a manager that doesn't perform any operations.\non_text(text,\u00a0**kwargs)\nRun when text is received.\nclassmethod get_noop_manager() \u2192 BRM\u00b6\nReturn a manager that doesn\u2019t perform any operations.\nReturns\nThe noop manager.\nReturn type\nBaseRunManager\nasync on_text(text: str, **kwargs: Any) \u2192 Any[source]\u00b6\nRun when text is received.\nParameters\ntext (str) \u2013 The received text.\nReturns\nThe result of the callback.\nReturn type\nAny", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.AsyncRunManager.html"} {"id": "e4eba9e03ce3-0", "text": "langchain.callbacks.manager.trace_as_chain_group\u00b6\nlangchain.callbacks.manager.trace_as_chain_group(group_name: str, *, project_name: Optional[str] = None, example_id: Optional[Union[UUID, str]] = None, tags: Optional[List[str]] = None) \u2192 Generator[CallbackManager, None, None][source]\u00b6\nGet a callback manager for a chain group in a context manager.\nUseful for grouping different calls together as a single run even if\nthey aren\u2019t composed in a single chain.\nParameters\ngroup_name (str) \u2013 The name of the chain group.\nproject_name (str, optional) \u2013 The name 
of the project.\nDefaults to None.\nexample_id (str or UUID, optional) \u2013 The ID of the example.\nDefaults to None.\ntags (List[str], optional) \u2013 The inheritable tags to apply to all runs.\nDefaults to None.\nReturns\nThe callback manager for the chain group.\nReturn type\nCallbackManager\nExample\n>>> with trace_as_chain_group(\"group_name\") as manager:\n... # Use the callback manager for the chain group\n... llm.predict(\"Foo\", callbacks=manager)", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.trace_as_chain_group.html"} {"id": "498e76ca40eb-0", "text": "langchain.callbacks.manager.AsyncCallbackManager\u00b6\nclass langchain.callbacks.manager.AsyncCallbackManager(handlers: List[BaseCallbackHandler], inheritable_handlers: Optional[List[BaseCallbackHandler]] = None, parent_run_id: Optional[UUID] = None, *, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: BaseCallbackManager\nAsync callback manager that can be used to handle callbacks from LangChain.\nInitialize callback manager.\nMethods\n__init__(handlers[,\u00a0inheritable_handlers,\u00a0...])\nInitialize callback manager.\nadd_handler(handler[,\u00a0inherit])\nAdd a handler to the callback manager.\nadd_metadata(metadata[,\u00a0inherit])\nadd_tags(tags[,\u00a0inherit])\nconfigure([inheritable_callbacks,\u00a0...])\nConfigure the async callback manager.\non_chain_start(serialized,\u00a0inputs[,\u00a0run_id])\nRun when chain starts running.\non_chat_model_start(serialized,\u00a0messages,\u00a0...)\nRun when LLM starts running.\non_llm_start(serialized,\u00a0prompts,\u00a0**kwargs)\nRun when LLM starts running.\non_retriever_start(serialized,\u00a0query[,\u00a0...])\nRun when retriever starts running.\non_tool_start(serialized,\u00a0input_str[,\u00a0...])\nRun when tool starts running.\nremove_handler(handler)\nRemove a handler from the callback manager.\nremove_metadata(keys)\nremove_tags(tags)\nset_handler(handler[,\u00a0inherit])\nSet handler as the only handler on the callback manager.\nset_handlers(handlers[,\u00a0inherit])\nSet handlers as the only handlers on the callback manager.\nAttributes\nis_async\nReturn whether the handler is async.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.AsyncCallbackManager.html"} {"id": "498e76ca40eb-1", "text": "Attributes\nis_async\nReturn whether the handler is async.\nadd_handler(handler: BaseCallbackHandler, inherit: bool = True) \u2192 None\u00b6\nAdd a handler to the callback manager.\nadd_metadata(metadata: Dict[str, Any], inherit: bool = True) \u2192 None\u00b6\nadd_tags(tags: List[str], inherit: bool = True) \u2192 None\u00b6\nclassmethod configure(inheritable_callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, local_callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, verbose: bool = False, inheritable_tags: Optional[List[str]] = None, local_tags: Optional[List[str]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None, local_metadata: Optional[Dict[str, Any]] = None) \u2192 AsyncCallbackManager[source]\u00b6\nConfigure the async callback manager.\nParameters\ninheritable_callbacks (Optional[Callbacks], optional) \u2013 The inheritable\ncallbacks. 
Defaults to None.\nlocal_callbacks (Optional[Callbacks], optional) \u2013 The local callbacks.\nDefaults to None.\nverbose (bool, optional) \u2013 Whether to enable verbose mode. Defaults to False.\ninheritable_tags (Optional[List[str]], optional) \u2013 The inheritable tags.\nDefaults to None.\nlocal_tags (Optional[List[str]], optional) \u2013 The local tags.\nDefaults to None.\ninheritable_metadata (Optional[Dict[str, Any]], optional) \u2013 The inheritable\nmetadata. Defaults to None.\nlocal_metadata (Optional[Dict[str, Any]], optional) \u2013 The local metadata.\nDefaults to None.\nReturns\nThe configured async callback manager.\nReturn type\nAsyncCallbackManager\nasync on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], run_id: Optional[UUID] = None, **kwargs: Any) \u2192 AsyncCallbackManagerForChainRun[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.AsyncCallbackManager.html"} {"id": "498e76ca40eb-2", "text": "Run when chain starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 The serialized chain.\ninputs (Dict[str, Any]) \u2013 The inputs to the chain.\nrun_id (UUID, optional) \u2013 The ID of the run. Defaults to None.\nReturns\nThe async callback manager for the chain run.\nReturn type\nAsyncCallbackManagerForChainRun\nasync on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs: Any) \u2192 Any[source]\u00b6\nRun when LLM starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 The serialized LLM.\nmessages (List[List[BaseMessage]]) \u2013 The list of messages.\nrun_id (UUID, optional) \u2013 The ID of the run. Defaults to None.\nReturns\nThe list of async callback managers, one for each LLM Run\ncorresponding to each inner message list.\nReturn type\nList[AsyncCallbackManagerForLLMRun]\nasync on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) \u2192 List[AsyncCallbackManagerForLLMRun][source]\u00b6\nRun when LLM starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 The serialized LLM.\nprompts (List[str]) \u2013 The list of prompts.\nrun_id (UUID, optional) \u2013 The ID of the run. Defaults to None.\nReturns\nThe list of async callback managers, one for each LLM Run corresponding\nto each prompt.\nReturn type\nList[AsyncCallbackManagerForLLMRun]\nasync on_retriever_start(serialized: Dict[str, Any], query: str, run_id: Optional[UUID] = None, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 AsyncCallbackManagerForRetrieverRun[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.AsyncCallbackManager.html"} {"id": "498e76ca40eb-3", "text": "Run when retriever starts running.\nasync on_tool_start(serialized: Dict[str, Any], input_str: str, run_id: Optional[UUID] = None, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 AsyncCallbackManagerForToolRun[source]\u00b6\nRun when tool starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 The serialized tool.\ninput_str (str) \u2013 The input to the tool.\nrun_id (UUID, optional) \u2013 The ID of the run. 
Defaults to None.\nparent_run_id (UUID, optional) \u2013 The ID of the parent run.\nDefaults to None.\nReturns\nThe async callback manager for the tool run.\nReturn type\nAsyncCallbackManagerForToolRun\nremove_handler(handler: BaseCallbackHandler) \u2192 None\u00b6\nRemove a handler from the callback manager.\nremove_metadata(keys: List[str]) \u2192 None\u00b6\nremove_tags(tags: List[str]) \u2192 None\u00b6\nset_handler(handler: BaseCallbackHandler, inherit: bool = True) \u2192 None\u00b6\nSet handler as the only handler on the callback manager.\nset_handlers(handlers: List[BaseCallbackHandler], inherit: bool = True) \u2192 None\u00b6\nSet handlers as the only handlers on the callback manager.\nproperty is_async: bool\u00b6\nReturn whether the handler is async.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.AsyncCallbackManager.html"} {"id": "96c62a5ad9b8-0", "text": "langchain.callbacks.manager.AsyncCallbackManagerForChainRun\u00b6\nclass langchain.callbacks.manager.AsyncCallbackManagerForChainRun(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: AsyncParentRunManager, ChainManagerMixin\nAsync callback manager for chain run.\nInitialize the run manager.\nParameters\nrun_id (UUID) \u2013 The ID of the run.\nhandlers (List[BaseCallbackHandler]) \u2013 The list of handlers.\ninheritable_handlers (List[BaseCallbackHandler]) \u2013 The list of inheritable handlers.\nparent_run_id (UUID, optional) \u2013 The ID of the parent run.\nDefaults to None.\ntags (Optional[List[str]]) \u2013 The list of tags.\ninheritable_tags (Optional[List[str]]) \u2013 The list of inheritable tags.\nmetadata (Optional[Dict[str, Any]]) \u2013 The metadata.\ninheritable_metadata (Optional[Dict[str, Any]]) \u2013 The inheritable metadata.\nMethods\n__init__(*,\u00a0run_id,\u00a0handlers,\u00a0...[,\u00a0...])\nInitialize the run manager.\nget_child([tag])\nGet a child callback manager.\nget_noop_manager()\nReturn a manager that doesn't perform any operations.\non_agent_action(action,\u00a0**kwargs)\nRun when agent action is received.\non_agent_finish(finish,\u00a0**kwargs)\nRun when agent finish is received.\non_chain_end(outputs,\u00a0**kwargs)\nRun when chain ends running.\non_chain_error(error,\u00a0**kwargs)\nRun when chain errors.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.AsyncCallbackManagerForChainRun.html"} {"id": "96c62a5ad9b8-1", "text": "on_chain_error(error,\u00a0**kwargs)\nRun when chain errors.\non_text(text,\u00a0**kwargs)\nRun when text is received.\nget_child(tag: Optional[str] = None) \u2192 AsyncCallbackManager\u00b6\nGet a child callback manager.\nParameters\ntag (str, optional) \u2013 The tag for the child callback manager.\nDefaults to None.\nReturns\nThe child callback manager.\nReturn type\nAsyncCallbackManager\nclassmethod get_noop_manager() \u2192 BRM\u00b6\nReturn a manager that doesn\u2019t perform any operations.\nReturns\nThe noop manager.\nReturn type\nBaseRunManager\nasync on_agent_action(action: AgentAction, **kwargs: Any) \u2192 Any[source]\u00b6\nRun when agent action is received.\nParameters\naction (AgentAction) \u2013 The agent action.\nReturns\nThe result of the callback.\nReturn type\nAny\nasync 
on_agent_finish(finish: AgentFinish, **kwargs: Any) \u2192 Any[source]\u00b6\nRun when agent finish is received.\nParameters\nfinish (AgentFinish) \u2013 The agent finish.\nReturns\nThe result of the callback.\nReturn type\nAny\nasync on_chain_end(outputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain ends running.\nParameters\noutputs (Dict[str, Any]) \u2013 The outputs of the chain.\nasync on_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain errors.\nParameters\nerror (Exception or KeyboardInterrupt) \u2013 The error.\nasync on_text(text: str, **kwargs: Any) \u2192 Any\u00b6\nRun when text is received.\nParameters\ntext (str) \u2013 The received text.\nReturns\nThe result of the callback.\nReturn type\nAny", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.AsyncCallbackManagerForChainRun.html"} {"id": "2555de9b70b0-0", "text": "langchain.callbacks.infino_callback.InfinoCallbackHandler\u00b6\nclass langchain.callbacks.infino_callback.InfinoCallbackHandler(model_id: Optional[str] = None, model_version: Optional[str] = None, verbose: bool = False)[source]\u00b6\nBases: BaseCallbackHandler\nCallback Handler that logs to Infino.\nMethods\n__init__([model_id,\u00a0model_version,\u00a0verbose])\non_agent_action(action,\u00a0**kwargs)\nDo nothing when agent takes a specific action.\non_agent_finish(finish,\u00a0**kwargs)\nDo nothing.\non_chain_end(outputs,\u00a0**kwargs)\nDo nothing when LLM chain ends.\non_chain_error(error,\u00a0**kwargs)\nNeed to log the error.\non_chain_start(serialized,\u00a0inputs,\u00a0**kwargs)\nDo nothing when LLM chain starts.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0**kwargs)\nLog the latency, error, token usage, and response to Infino.\non_llm_error(error,\u00a0**kwargs)\nSet the error flag.\non_llm_new_token(token,\u00a0**kwargs)\nDo nothing when a new token is generated.\non_llm_start(serialized,\u00a0prompts,\u00a0**kwargs)\nLog the prompts to Infino, and set start time and error flag.\non_retriever_end(documents,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_text(text,\u00a0**kwargs)\nDo nothing.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.infino_callback.InfinoCallbackHandler.html"} {"id": "2555de9b70b0-1", "text": "on_text(text,\u00a0**kwargs)\nDo nothing.\non_tool_end(output[,\u00a0observation_prefix,\u00a0...])\nDo nothing when tool ends.\non_tool_error(error,\u00a0**kwargs)\nDo nothing when tool outputs an error.\non_tool_start(serialized,\u00a0input_str,\u00a0**kwargs)\nDo nothing when tool starts.\nAttributes\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\non_agent_action(action: AgentAction, **kwargs: Any) \u2192 Any[source]\u00b6\nDo nothing when agent takes a specific action.\non_agent_finish(finish: AgentFinish, **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing.\non_chain_end(outputs: Dict[str, Any], **kwargs: Any) \u2192 
None[source]\u00b6\nDo nothing when LLM chain ends.\non_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nNeed to log the error.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing when LLM chain starts.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\non_llm_end(response: LLMResult, **kwargs: Any) \u2192 None[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.infino_callback.InfinoCallbackHandler.html"} {"id": "2555de9b70b0-2", "text": "Log the latency, error, token usage, and response to Infino.\non_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nSet the error flag.\non_llm_new_token(token: str, **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing when a new token is generated.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) \u2192 None[source]\u00b6\nLog the prompts to Infino, and set start time and error flag.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever starts running.\non_text(text: str, **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing.\non_tool_end(output: str, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing when tool ends.\non_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing when tool outputs an error.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.infino_callback.InfinoCallbackHandler.html"} {"id": "2555de9b70b0-3", "text": "Do nothing when tool outputs an error.\non_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing when tool starts.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.infino_callback.InfinoCallbackHandler.html"} {"id": "d7231a202568-0", "text": "langchain.callbacks.tracers.langchain.wait_for_all_tracers\u00b6\nlangchain.callbacks.tracers.langchain.wait_for_all_tracers() \u2192 None[source]\u00b6\nWait for all tracers to finish.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.langchain.wait_for_all_tracers.html"} 
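The preceding entry documents wait_for_all_tracers, which blocks until pending trace submissions finish, so it is typically called once before a script exits. A minimal usage sketch (the llm object and the tracing setup are assumptions for illustration, not part of the generated reference):
>>> from langchain.callbacks.tracers.langchain import wait_for_all_tracers
>>> # Assumes tracing is enabled, e.g. LANGCHAIN_TRACING_V2=true, and that
>>> # `llm` is any configured LangChain LLM.
>>> llm.predict("Hello")
>>> wait_for_all_tracers()  # block until queued trace uploads finish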
{"id": "18fdc792f901-0", "text": "langchain.callbacks.context_callback.import_context\u00b6\nlangchain.callbacks.context_callback.import_context() \u2192 Any[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.context_callback.import_context.html"} {"id": "67aed01dca44-0", "text": "langchain.callbacks.tracers.langchain_v1.LangChainTracerV1\u00b6\nclass langchain.callbacks.tracers.langchain_v1.LangChainTracerV1(**kwargs: Any)[source]\u00b6\nBases: BaseTracer\nAn implementation of the SharedTracer that POSTS to the langchain endpoint.\nInitialize the LangChain tracer.\nMethods\n__init__(**kwargs)\nInitialize the LangChain tracer.\nload_default_session()\nLoad the default tracing session and set it as the Tracer's session.\nload_session(session_name)\nLoad a session with the given name from the tracer.\non_agent_action(action,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent action.\non_agent_finish(finish,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent end.\non_chain_end(outputs,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nEnd a trace for a chain run.\non_chain_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nHandle an error for a chain run.\non_chain_start(serialized,\u00a0inputs,\u00a0*,\u00a0run_id)\nStart a trace for a chain run.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nEnd a trace for an LLM run.\non_llm_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nHandle an error for an LLM run.\non_llm_new_token(token,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on new LLM token.\non_llm_start(serialized,\u00a0prompts,\u00a0*,\u00a0run_id)\nStart a trace for an LLM run.\non_retriever_end(documents,\u00a0*,\u00a0run_id,\u00a0**kwargs)", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.langchain_v1.LangChainTracerV1.html"} {"id": "67aed01dca44-1", "text": "on_retriever_end(documents,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nRun when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_text(text,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun on arbitrary text.\non_tool_end(output,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nEnd a trace for a tool run.\non_tool_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nHandle an error for a tool run.\non_tool_start(serialized,\u00a0input_str,\u00a0*,\u00a0run_id)\nStart a trace for a tool run.\nAttributes\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\nload_default_session() \u2192 Union[TracerSessionV1, TracerSession][source]\u00b6\nLoad the default tracing session and set it as the Tracer\u2019s session.\nload_session(session_name: str) \u2192 Union[TracerSessionV1, TracerSession][source]\u00b6\nLoad a session with the given name from the tracer.\non_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on agent action.\non_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on agent end.", "source": 
"https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.langchain_v1.LangChainTracerV1.html"} {"id": "67aed01dca44-2", "text": "Run on agent end.\non_chain_end(outputs: Dict[str, Any], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nEnd a trace for a chain run.\non_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nHandle an error for a chain run.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nStart a trace for a chain run.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\non_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nEnd a trace for an LLM run.\non_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nHandle an error for an LLM run.\non_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 None\u00b6\nRun on new LLM token. Only available when streaming is enabled.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.langchain_v1.LangChainTracerV1.html"} {"id": "67aed01dca44-3", "text": "Run on new LLM token. Only available when streaming is enabled.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nStart a trace for an LLM run.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nRun when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nRun when Retriever starts running.\non_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on arbitrary text.\non_tool_end(output: str, *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nEnd a trace for a tool run.\non_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nHandle an error for a tool run.\non_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.langchain_v1.LangChainTracerV1.html"} {"id": "67aed01dca44-4", "text": "Start a trace for a tool run.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to 
ignore retriever callbacks.\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6\nrun_map: Dict[str, langchain.callbacks.tracers.schemas.Run]\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.langchain_v1.LangChainTracerV1.html"} {"id": "554bf4a81bb3-0", "text": "langchain.callbacks.tracers.base.TracerException\u00b6\nclass langchain.callbacks.tracers.base.TracerException[source]\u00b6\nBases: Exception\nBase class for exceptions in tracers module.\nadd_note()\u00b6\nException.add_note(note) \u2013\nadd a note to the exception\nwith_traceback()\u00b6\nException.with_traceback(tb) \u2013\nset self.__traceback__ to tb and return self.\nargs\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.base.TracerException.html"} {"id": "2e200a553536-0", "text": "langchain.callbacks.base.BaseCallbackHandler\u00b6\nclass langchain.callbacks.base.BaseCallbackHandler[source]\u00b6\nBases: LLMManagerMixin, ChainManagerMixin, ToolManagerMixin, RetrieverManagerMixin, CallbackManagerMixin, RunManagerMixin\nBase callback handler that can be used to handle callbacks from langchain.\nMethods\n__init__()\non_agent_action(action,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent action.\non_agent_finish(finish,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent end.\non_chain_end(outputs,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when chain ends running.\non_chain_error(error,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when chain errors.\non_chain_start(serialized,\u00a0inputs,\u00a0*,\u00a0run_id)\nRun when chain starts running.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when LLM ends running.\non_llm_error(error,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when LLM errors.\non_llm_new_token(token,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on new LLM token.\non_llm_start(serialized,\u00a0prompts,\u00a0*,\u00a0run_id)\nRun when LLM starts running.\non_retriever_end(documents,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.base.BaseCallbackHandler.html"} {"id": "2e200a553536-1", "text": "Run when Retriever starts running.\non_text(text,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun on arbitrary text.\non_tool_end(output,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when tool ends running.\non_tool_error(error,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when tool errors.\non_tool_start(serialized,\u00a0input_str,\u00a0*,\u00a0run_id)\nRun when tool starts running.\nAttributes\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\non_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on agent action.\non_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on agent end.\non_chain_end(outputs: 
Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when chain ends running.\non_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when chain errors.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.base.BaseCallbackHandler.html"} {"id": "2e200a553536-2", "text": "Run when chain starts running.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\non_llm_end(response: LLMResult, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when LLM ends running.\non_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when LLM errors.\non_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on new LLM token. Only available when streaming is enabled.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when LLM starts running.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever errors.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.base.BaseCallbackHandler.html"} {"id": "2e200a553536-3", "text": "Run when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever starts running.\non_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on arbitrary text.\non_tool_end(output: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when tool ends running.\non_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when tool errors.\non_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when tool starts running.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty 
ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.base.BaseCallbackHandler.html"} {"id": "c08c03329994-0", "text": "langchain.callbacks.manager.CallbackManagerForToolRun\u00b6\nclass langchain.callbacks.manager.CallbackManagerForToolRun(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: ParentRunManager, ToolManagerMixin\nCallback manager for tool run.\nInitialize the run manager.\nParameters\nrun_id (UUID) \u2013 The ID of the run.\nhandlers (List[BaseCallbackHandler]) \u2013 The list of handlers.\ninheritable_handlers (List[BaseCallbackHandler]) \u2013 The list of inheritable handlers.\nparent_run_id (UUID, optional) \u2013 The ID of the parent run.\nDefaults to None.\ntags (Optional[List[str]]) \u2013 The list of tags.\ninheritable_tags (Optional[List[str]]) \u2013 The list of inheritable tags.\nmetadata (Optional[Dict[str, Any]]) \u2013 The metadata.\ninheritable_metadata (Optional[Dict[str, Any]]) \u2013 The inheritable metadata.\nMethods\n__init__(*,\u00a0run_id,\u00a0handlers,\u00a0...[,\u00a0...])\nInitialize the run manager.\nget_child([tag])\nGet a child callback manager.\nget_noop_manager()\nReturn a manager that doesn't perform any operations.\non_text(text,\u00a0**kwargs)\nRun when text is received.\non_tool_end(output,\u00a0**kwargs)\nRun when tool ends running.\non_tool_error(error,\u00a0**kwargs)\nRun when tool errors.\nget_child(tag: Optional[str] = None) \u2192 CallbackManager\u00b6\nGet a child callback manager.\nParameters", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.CallbackManagerForToolRun.html"} {"id": "c08c03329994-1", "text": "Get a child callback manager.\nParameters\ntag (str, optional) \u2013 The tag for the child callback manager.\nDefaults to None.\nReturns\nThe child callback manager.\nReturn type\nCallbackManager\nclassmethod get_noop_manager() \u2192 BRM\u00b6\nReturn a manager that doesn\u2019t perform any operations.\nReturns\nThe noop manager.\nReturn type\nBaseRunManager\non_text(text: str, **kwargs: Any) \u2192 Any\u00b6\nRun when text is received.\nParameters\ntext (str) \u2013 The received text.\nReturns\nThe result of the callback.\nReturn type\nAny\non_tool_end(output: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool ends running.\nParameters\noutput (str) \u2013 The output of the tool.\non_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool errors.\nParameters\nerror (Exception or KeyboardInterrupt) \u2013 The error.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.CallbackManagerForToolRun.html"} {"id": "2c4a37c67069-0", "text": "langchain.callbacks.whylabs_callback.WhyLabsCallbackHandler\u00b6\nclass langchain.callbacks.whylabs_callback.WhyLabsCallbackHandler(logger: Logger)[source]\u00b6\nBases: BaseCallbackHandler\nCallback Handler for logging to WhyLabs. This callback handler utilizes\nlangkit to extract features from the prompts & responses when interacting with\nan LLM. 
These features can be used to guardrail, evaluate, and observe interactions\nover time to detect issues relating to hallucinations, prompt engineering,\nor output validation. LangKit is an LLM monitoring toolkit developed by WhyLabs.\nHere are some examples of what can be monitored with LangKit:\nText Quality\n- readability score\n- complexity and grade scores\nText Relevance\n- Similarity scores between prompt/responses\n- Similarity scores against user-defined themes\n- Topic classification\nSecurity and Privacy\n- patterns - count of strings matching a user-defined regex pattern group\n- jailbreaks - similarity scores with respect to known jailbreak attempts\n- prompt injection - similarity scores with respect to known prompt attacks\n- refusals - similarity scores with respect to known LLM refusal responses\nSentiment and Toxicity\n- sentiment analysis\n- toxicity analysis\nFor more information, see https://docs.whylabs.ai/docs/language-model-monitoring\nor check out the LangKit repo here: https://github.com/whylabs/langkit\nParameters\napi_key (Optional[str]) \u2013 WhyLabs API key. Optional because the preferred\nway to specify the API key is with environment variable\nWHYLABS_API_KEY.\norg_id (Optional[str]) \u2013 WhyLabs organization id to write profiles to.\nOptional because the preferred way to specify the organization id is\nwith environment variable WHYLABS_DEFAULT_ORG_ID.\ndataset_id (Optional[str]) \u2013 WhyLabs dataset id to write profiles to.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.whylabs_callback.WhyLabsCallbackHandler.html"} {"id": "2c4a37c67069-1", "text": "dataset_id (Optional[str]) \u2013 WhyLabs dataset id to write profiles to.\nOptional because the preferred way to specify the dataset id is\nwith environment variable WHYLABS_DEFAULT_DATASET_ID.\nsentiment (bool) \u2013 Whether to enable sentiment analysis. Defaults to False.\ntoxicity (bool) \u2013 Whether to enable toxicity analysis. Defaults to False.\nthemes (bool) \u2013 Whether to enable theme analysis. 
Defaults to False.\nInitiate the rolling logger\nMethods\n__init__(logger)\nInitiate the rolling logger\nclose()\nflush()\nfrom_params(*[,\u00a0api_key,\u00a0org_id,\u00a0...])\nInstantiate whylogs Logger from params.\non_agent_action(action[,\u00a0color])\nDo nothing.\non_agent_finish(finish[,\u00a0color])\nRun on agent end.\non_chain_end(outputs,\u00a0**kwargs)\nDo nothing.\non_chain_error(error,\u00a0**kwargs)\nDo nothing.\non_chain_start(serialized,\u00a0inputs,\u00a0**kwargs)\nDo nothing.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0**kwargs)\nPass the generated response to the logger.\non_llm_error(error,\u00a0**kwargs)\nDo nothing.\non_llm_new_token(token,\u00a0**kwargs)\nDo nothing.\non_llm_start(serialized,\u00a0prompts,\u00a0**kwargs)\nPass the input prompts to the logger\non_retriever_end(documents,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.whylabs_callback.WhyLabsCallbackHandler.html"} {"id": "2c4a37c67069-2", "text": "Run when Retriever starts running.\non_text(text,\u00a0**kwargs)\nDo nothing.\non_tool_end(output[,\u00a0color,\u00a0...])\nDo nothing.\non_tool_error(error,\u00a0**kwargs)\nDo nothing.\non_tool_start(serialized,\u00a0input_str,\u00a0**kwargs)\nDo nothing.\nAttributes\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\nclose() \u2192 None[source]\u00b6\nflush() \u2192 None[source]\u00b6\nclassmethod from_params(*, api_key: Optional[str] = None, org_id: Optional[str] = None, dataset_id: Optional[str] = None, sentiment: bool = False, toxicity: bool = False, themes: bool = False) \u2192 Logger[source]\u00b6\nInstantiate whylogs Logger from params.\nParameters\napi_key (Optional[str]) \u2013 WhyLabs API key. Optional because the preferred\nway to specify the API key is with environment variable\nWHYLABS_API_KEY.\norg_id (Optional[str]) \u2013 WhyLabs organization id to write profiles to.\nIf not set must be specified in environment variable\nWHYLABS_DEFAULT_ORG_ID.\ndataset_id (Optional[str]) \u2013 The model or dataset this callback is gathering\ntelemetry for. If not set must be specified in environment variable\nWHYLABS_DEFAULT_DATASET_ID.\nsentiment (bool) \u2013 If True will initialize a model to perform\nsentiment analysis compound score. Defaults to False and will not gather\nthis metric.\ntoxicity (bool) \u2013 If True will initialize a model to score\ntoxicity. Defaults to False and will not gather this metric.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.whylabs_callback.WhyLabsCallbackHandler.html"} {"id": "2c4a37c67069-3", "text": "toxicity. Defaults to False and will not gather this metric.\nthemes (bool) \u2013 If True will initialize a model to calculate\ndistance to configured themes. 
Defaults to False and will not gather this\nmetric.\non_agent_action(action: AgentAction, color: Optional[str] = None, **kwargs: Any) \u2192 Any[source]\u00b6\nDo nothing.\non_agent_finish(finish: AgentFinish, color: Optional[str] = None, **kwargs: Any) \u2192 None[source]\u00b6\nRun on agent end.\non_chain_end(outputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing.\non_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\non_llm_end(response: LLMResult, **kwargs: Any) \u2192 None[source]\u00b6\nPass the generated response to the logger.\non_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing.\non_llm_new_token(token: str, **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) \u2192 None[source]\u00b6\nPass the input prompts to the logger", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.whylabs_callback.WhyLabsCallbackHandler.html"} {"id": "2c4a37c67069-4", "text": "Pass the input prompts to the logger\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever starts running.\non_text(text: str, **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing.\non_tool_end(output: str, color: Optional[str] = None, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing.\non_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing.\non_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.whylabs_callback.WhyLabsCallbackHandler.html"} {"id": "eb79e003b847-0", "text": "langchain.callbacks.tracers.schemas.TracerSessionBase\u00b6\nclass langchain.callbacks.tracers.schemas.TracerSessionBase(*, start_time: datetime = None, name: Optional[str] = None, extra: Optional[Dict[str, Any]] = None, tenant_id: UUID)[source]\u00b6\nBases: TracerSessionV1Base\nA 
creation class for TracerSession.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam extra: Optional[Dict[str, Any]] = None\u00b6\nparam name: Optional[str] = None\u00b6\nparam start_time: datetime.datetime [Optional]\u00b6\nparam tenant_id: uuid.UUID [Required]\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.schemas.TracerSessionBase.html"} {"id": "da0e865891d9-0", "text": "langchain.callbacks.manager.AsyncCallbackManagerForLLMRun\u00b6\nclass langchain.callbacks.manager.AsyncCallbackManagerForLLMRun(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: AsyncRunManager, LLMManagerMixin\nAsync callback manager for LLM run.\nInitialize the run manager.\nParameters\nrun_id (UUID) \u2013 The ID of the run.\nhandlers (List[BaseCallbackHandler]) \u2013 The list of handlers.\ninheritable_handlers (List[BaseCallbackHandler]) \u2013 The list of inheritable handlers.\nparent_run_id (UUID, optional) \u2013 The ID of the parent run.\nDefaults to None.\ntags (Optional[List[str]]) \u2013 The list of tags.\ninheritable_tags (Optional[List[str]]) \u2013 The list of inheritable tags.\nmetadata (Optional[Dict[str, Any]]) \u2013 The metadata.\ninheritable_metadata (Optional[Dict[str, Any]]) \u2013 The inheritable metadata.\nMethods\n__init__(*,\u00a0run_id,\u00a0handlers,\u00a0...[,\u00a0...])\nInitialize the run manager.\nget_noop_manager()\nReturn a manager that doesn't perform any operations.\non_llm_end(response,\u00a0**kwargs)\nRun when LLM ends running.\non_llm_error(error,\u00a0**kwargs)\nRun when LLM errors.\non_llm_new_token(token,\u00a0**kwargs)\nRun when LLM generates a new token.\non_text(text,\u00a0**kwargs)\nRun when text is received.\nclassmethod get_noop_manager() \u2192 BRM\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.AsyncCallbackManagerForLLMRun.html"} {"id": "da0e865891d9-1", "text": "Run when text is received.\nclassmethod get_noop_manager() \u2192 BRM\u00b6\nReturn a manager that doesn\u2019t perform any operations.\nReturns\nThe noop manager.\nReturn type\nBaseRunManager\nasync on_llm_end(response: LLMResult, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM ends running.\nParameters\nresponse (LLMResult) \u2013 The LLM result.\nasync on_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM errors.\nParameters\nerror (Exception or KeyboardInterrupt) \u2013 The error.\nasync on_llm_new_token(token: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM generates a new token.\nParameters\ntoken (str) \u2013 The new token.\nasync on_text(text: str, **kwargs: Any) \u2192 Any\u00b6\nRun when text is received.\nParameters\ntext (str) \u2013 The received text.\nReturns\nThe result of the callback.\nReturn type\nAny", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.AsyncCallbackManagerForLLMRun.html"} {"id": "8698761de59c-0", "text": "langchain.callbacks.tracers.evaluation.EvaluatorCallbackHandler\u00b6\nclass 
langchain.callbacks.tracers.evaluation.EvaluatorCallbackHandler(evaluators: Sequence[RunEvaluator], max_workers: Optional[int] = None, client: Optional[LangChainPlusClient] = None, example_id: Optional[Union[UUID, str]] = None, skip_unfinished: bool = True, project_name: Optional[str] = None, **kwargs: Any)[source]\u00b6\nBases: BaseTracer\nA tracer that runs a run evaluator whenever a run is persisted.\nParameters\nevaluators (Sequence[RunEvaluator]) \u2013 The run evaluators to apply to all top level runs.\nmax_workers (int, optional) \u2013 The maximum number of worker threads to use for running the evaluators.\nIf not specified, it will default to the number of evaluators.\nclient (LangChainPlusClient, optional) \u2013 The LangChainPlusClient instance to use for evaluating the runs.\nIf not specified, a new instance will be created.\nexample_id (Union[UUID, str], optional) \u2013 The example ID to be associated with the runs.\nproject_name (str, optional) \u2013 The LangSmith project name to organize eval chain runs under.\nexample_id\u00b6\nThe example ID associated with the runs.\nType\nUnion[UUID, None]\nclient\u00b6\nThe LangChainPlusClient instance used for evaluating the runs.\nType\nLangChainPlusClient\nevaluators\u00b6\nThe sequence of run evaluators to be executed.\nType\nSequence[RunEvaluator]\nexecutor\u00b6\nThe thread pool executor used for running the evaluators.\nType\nThreadPoolExecutor\nfutures\u00b6\nThe set of futures representing the running evaluators.\nType\nSet[Future]\nskip_unfinished\u00b6\nWhether to skip runs that are not finished or raised\nan error.\nType\nbool\nproject_name\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.evaluation.EvaluatorCallbackHandler.html"} {"id": "8698761de59c-1", "text": "an error.\nType\nbool\nproject_name\u00b6\nThe LangSmith project name to organize eval chain runs under.\nType\nOptional[str]\nMethods\n__init__(evaluators[,\u00a0max_workers,\u00a0client,\u00a0...])\non_agent_action(action,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent action.\non_agent_finish(finish,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent end.\non_chain_end(outputs,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nEnd a trace for a chain run.\non_chain_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nHandle an error for a chain run.\non_chain_start(serialized,\u00a0inputs,\u00a0*,\u00a0run_id)\nStart a trace for a chain run.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nEnd a trace for an LLM run.\non_llm_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nHandle an error for an LLM run.\non_llm_new_token(token,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on new LLM token.\non_llm_start(serialized,\u00a0prompts,\u00a0*,\u00a0run_id)\nStart a trace for an LLM run.\non_retriever_end(documents,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nRun when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_text(text,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.evaluation.EvaluatorCallbackHandler.html"} {"id": "8698761de59c-2", "text": "on_text(text,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun on arbitrary 
text.\non_tool_end(output,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nEnd a trace for a tool run.\non_tool_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nHandle an error for a tool run.\non_tool_start(serialized,\u00a0input_str,\u00a0*,\u00a0run_id)\nStart a trace for a tool run.\nwait_for_futures()\nWait for all futures to complete.\nAttributes\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nname\nraise_error\nrun_inline\non_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on agent action.\non_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on agent end.\non_chain_end(outputs: Dict[str, Any], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nEnd a trace for a chain run.\non_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nHandle an error for a chain run.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.evaluation.EvaluatorCallbackHandler.html"} {"id": "8698761de59c-3", "text": "Start a trace for a chain run.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\non_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nEnd a trace for an LLM run.\non_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nHandle an error for an LLM run.\non_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 None\u00b6\nRun on new LLM token. 
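For orientation, a minimal usage sketch of this tracer (an illustration under stated assumptions, not from the reference itself: my_evaluator stands in for any RunEvaluator implementation, an OpenAI API key is assumed to be configured, and the project name is arbitrary):
>>> from langchain.llms import OpenAI
>>> from langchain.callbacks.tracers.evaluation import EvaluatorCallbackHandler
>>> eval_handler = EvaluatorCallbackHandler(
...     evaluators=[my_evaluator],  # my_evaluator: a hypothetical RunEvaluator
...     project_name="eval-runs",   # LangSmith project to organize eval runs under
... )
>>> llm = OpenAI(temperature=0, callbacks=[eval_handler])
>>> llm("What is 2 + 2?")
>>> eval_handler.wait_for_futures()  # block until every queued evaluator has finished
Because each evaluator runs on the thread pool executor, wait_for_futures() is the documented way to make sure all evaluations complete before the process exits.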
Only available when streaming is enabled.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nStart a trace for an LLM run.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nRun when Retriever errors.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.evaluation.EvaluatorCallbackHandler.html"} {"id": "8698761de59c-4", "text": "Run when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nRun when Retriever starts running.\non_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on arbitrary text.\non_tool_end(output: str, *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nEnd a trace for a tool run.\non_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nHandle an error for a tool run.\non_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nStart a trace for a tool run.\nwait_for_futures() \u2192 None[source]\u00b6\nWait for all futures to complete.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.evaluation.EvaluatorCallbackHandler.html"} {"id": "fcfe5ebe347c-0", "text": "langchain.callbacks.openai_info.standardize_model_name\u00b6\nlangchain.callbacks.openai_info.standardize_model_name(model_name: str, is_completion: bool = False) \u2192 str[source]\u00b6\nStandardize the model name to a format that can be used in the OpenAI API.\nParameters\nmodel_name \u2013 Model name to standardize.\nis_completion \u2013 Whether the model is used for completion or not.\nDefaults to False.\nReturns\nStandardized model name.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.standardize_model_name.html"} {"id": "5fec41e19bbf-0", "text": "langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler\u00b6\nclass langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler(*, answer_prefix_tokens: Optional[List[str]] = None, strip_tokens: bool = True, stream_prefix: bool = False)[source]\u00b6\nBases: AsyncIteratorCallbackHandler\nCallback handler that returns an async iterator.\nOnly the final output of the agent will be iterated.\nInstantiate AsyncFinalIteratorCallbackHandler.\nParameters\nanswer_prefix_tokens \u2013 Token sequence that prefixes the 
answer.\nDefault is [\u201cFinal\u201d, \u201cAnswer\u201d, \u201c:\u201d]\nstrip_tokens \u2013 Whether to ignore white spaces and new lines when comparing\nanswer_prefix_tokens to the last tokens (to determine whether the answer has been\nreached).\nstream_prefix \u2013 Whether the answer prefix itself should also be streamed.\nMethods\n__init__(*[,\u00a0answer_prefix_tokens,\u00a0...])\nInstantiate AsyncFinalIteratorCallbackHandler.\naiter()\nappend_to_last_tokens(token)\ncheck_if_answer_reached()\non_agent_action(action,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent action.\non_agent_finish(finish,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent end.\non_chain_end(outputs,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when chain ends running.\non_chain_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when chain errors.\non_chain_start(serialized,\u00a0inputs,\u00a0*,\u00a0run_id)\nRun when chain starts running.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0**kwargs)\nRun when LLM ends running.\non_llm_error(error,\u00a0**kwargs)\nRun when LLM errors.\non_llm_new_token(token,\u00a0**kwargs)", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler.html"} {"id": "5fec41e19bbf-1", "text": "Run when LLM errors.\non_llm_new_token(token,\u00a0**kwargs)\nRun on new LLM token.\non_llm_start(serialized,\u00a0prompts,\u00a0**kwargs)\nRun when LLM starts running.\non_retriever_end(documents,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on retriever end.\non_retriever_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on retriever error.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun on retriever start.\non_text(text,\u00a0*,\u00a0run_id[,\u00a0parent_run_id,\u00a0tags])\nRun on arbitrary text.\non_tool_end(output,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when tool ends running.\non_tool_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when tool errors.\non_tool_start(serialized,\u00a0input_str,\u00a0*,\u00a0run_id)\nRun when tool starts running.\nAttributes\nalways_verbose\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\nasync aiter() \u2192 AsyncIterator[str]\u00b6\nappend_to_last_tokens(token: str) \u2192 None[source]\u00b6\ncheck_if_answer_reached() \u2192 bool[source]\u00b6\nasync on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None\u00b6\nRun on agent action.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler.html"} {"id": "5fec41e19bbf-2", "text": "Run on agent action.\nasync on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None\u00b6\nRun on agent end.\nasync on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None\u00b6\nRun when chain ends running.\nasync on_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 
None\u00b6\nRun when chain errors.\nasync on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nRun when chain starts running.\nasync on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\nasync on_llm_end(response: LLMResult, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM ends running.\nasync on_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None\u00b6\nRun when LLM errors.\nasync on_llm_new_token(token: str, **kwargs: Any) \u2192 None[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler.html"} {"id": "5fec41e19bbf-3", "text": "Run on new LLM token. Only available when streaming is enabled.\nasync on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM starts running.\nasync on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None\u00b6\nRun on retriever end.\nasync on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None\u00b6\nRun on retriever error.\nasync on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nRun on retriever start.\nasync on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None\u00b6\nRun on arbitrary text.\nasync on_tool_end(output: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None\u00b6\nRun when tool ends running.\nasync on_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) \u2192 None\u00b6\nRun when tool errors.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler.html"} {"id": "5fec41e19bbf-4", "text": "Run when tool errors.\nasync on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nRun when tool starts running.\nproperty always_verbose: bool\u00b6\ndone: asyncio.Event\u00b6\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nqueue: asyncio.Queue[str]\u00b6\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6", "source": 
"https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler.html"} {"id": "5e9ae8dc2fe8-0", "text": "langchain.callbacks.whylabs_callback.import_langkit\u00b6\nlangchain.callbacks.whylabs_callback.import_langkit(sentiment: bool = False, toxicity: bool = False, themes: bool = False) \u2192 Any[source]\u00b6\nImport the langkit python package and raise an error if it is not installed.\nParameters\nsentiment \u2013 Whether to import the langkit.sentiment module. Defaults to False.\ntoxicity \u2013 Whether to import the langkit.toxicity module. Defaults to False.\nthemes \u2013 Whether to import the langkit.themes module. Defaults to False.\nReturns\nThe imported langkit module.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.whylabs_callback.import_langkit.html"} {"id": "7aed9bc29d20-0", "text": "langchain.callbacks.tracers.base.BaseTracer\u00b6\nclass langchain.callbacks.tracers.base.BaseTracer(**kwargs: Any)[source]\u00b6\nBases: BaseCallbackHandler, ABC\nBase interface for tracers.\nMethods\n__init__(**kwargs)\non_agent_action(action,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent action.\non_agent_finish(finish,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent end.\non_chain_end(outputs,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nEnd a trace for a chain run.\non_chain_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nHandle an error for a chain run.\non_chain_start(serialized,\u00a0inputs,\u00a0*,\u00a0run_id)\nStart a trace for a chain run.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nEnd a trace for an LLM run.\non_llm_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nHandle an error for an LLM run.\non_llm_new_token(token,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on new LLM token.\non_llm_start(serialized,\u00a0prompts,\u00a0*,\u00a0run_id)\nStart a trace for an LLM run.\non_retriever_end(documents,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nRun when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_text(text,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.base.BaseTracer.html"} {"id": "7aed9bc29d20-1", "text": "on_text(text,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun on arbitrary text.\non_tool_end(output,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nEnd a trace for a tool run.\non_tool_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nHandle an error for a tool run.\non_tool_start(serialized,\u00a0input_str,\u00a0*,\u00a0run_id)\nStart a trace for a tool run.\nAttributes\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\non_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on agent action.\non_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on agent end.\non_chain_end(outputs: Dict[str, Any], *, run_id: UUID, **kwargs: Any) \u2192 
None[source]\u00b6\nEnd a trace for a chain run.\non_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None[source]\u00b6\nHandle an error for a chain run.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None[source]\u00b6\nStart a trace for a chain run.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.base.BaseTracer.html"} {"id": "7aed9bc29d20-2", "text": "Start a trace for a chain run.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\non_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) \u2192 None[source]\u00b6\nEnd a trace for an LLM run.\non_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None[source]\u00b6\nHandle an error for an LLM run.\non_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 None[source]\u00b6\nRun on new LLM token. Only available when streaming is enabled.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None[source]\u00b6\nStart a trace for an LLM run.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) \u2192 None[source]\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None[source]\u00b6\nRun when Retriever errors.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.base.BaseTracer.html"} {"id": "7aed9bc29d20-3", "text": "Run when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None[source]\u00b6\nRun when Retriever starts running.\non_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on arbitrary text.\non_tool_end(output: str, *, run_id: UUID, **kwargs: Any) \u2192 None[source]\u00b6\nEnd a trace for a tool run.\non_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None[source]\u00b6\nHandle an error for a tool run.\non_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None[source]\u00b6\nStart a trace for a tool run.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6", "source": 
"https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.base.BaseTracer.html"} {"id": "c1d2f761c541-0", "text": "langchain.callbacks.manager.wandb_tracing_enabled\u00b6\nlangchain.callbacks.manager.wandb_tracing_enabled(session_name: str = 'default') \u2192 Generator[None, None, None][source]\u00b6\nGet the WandbTracer in a context manager.\nParameters\nsession_name (str, optional) \u2013 The name of the session.\nDefaults to \u201cdefault\u201d.\nReturns\nNone\nExample\n>>> with wandb_tracing_enabled() as session:\n... # Use the WandbTracer session", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.wandb_tracing_enabled.html"} {"id": "0e439736fedc-0", "text": "langchain.callbacks.wandb_callback.WandbCallbackHandler\u00b6\nclass langchain.callbacks.wandb_callback.WandbCallbackHandler(job_type: Optional[str] = None, project: Optional[str] = 'langchain_callback_demo', entity: Optional[str] = None, tags: Optional[Sequence] = None, group: Optional[str] = None, name: Optional[str] = None, notes: Optional[str] = None, visualize: bool = False, complexity_metrics: bool = False, stream_logs: bool = False)[source]\u00b6\nBases: BaseMetadataCallbackHandler, BaseCallbackHandler\nCallback Handler that logs to Weights and Biases.\nParameters\njob_type (str) \u2013 The type of job.\nproject (str) \u2013 The project to log to.\nentity (str) \u2013 The entity to log to.\ntags (list) \u2013 The tags to log.\ngroup (str) \u2013 The group to log to.\nname (str) \u2013 The name of the run.\nnotes (str) \u2013 The notes to log.\nvisualize (bool) \u2013 Whether to visualize the run.\ncomplexity_metrics (bool) \u2013 Whether to log complexity metrics.\nstream_logs (bool) \u2013 Whether to stream callback actions to W&B\nThis handler will utilize the associated callback method called and formats\nthe input of each callback function with metadata regarding the state of LLM run,\nand adds the response to the list of records for both the {method}_records and\naction. 
It then logs the response using the run.log() method to Weights and Biases.\nInitialize callback handler.\nMethods\n__init__([job_type,\u00a0project,\u00a0entity,\u00a0tags,\u00a0...])\nInitialize callback handler.\nflush_tracker([langchain_asset,\u00a0reset,\u00a0...])\nFlush the tracker and reset the session.\nget_custom_callback_meta()", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.wandb_callback.WandbCallbackHandler.html"} {"id": "0e439736fedc-1", "text": "Flush the tracker and reset the session.\nget_custom_callback_meta()\non_agent_action(action,\u00a0**kwargs)\nRun on agent action.\non_agent_finish(finish,\u00a0**kwargs)\nRun when agent ends running.\non_chain_end(outputs,\u00a0**kwargs)\nRun when chain ends running.\non_chain_error(error,\u00a0**kwargs)\nRun when chain errors.\non_chain_start(serialized,\u00a0inputs,\u00a0**kwargs)\nRun when chain starts running.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0**kwargs)\nRun when LLM ends running.\non_llm_error(error,\u00a0**kwargs)\nRun when LLM errors.\non_llm_new_token(token,\u00a0**kwargs)\nRun when LLM generates a new token.\non_llm_start(serialized,\u00a0prompts,\u00a0**kwargs)\nRun when LLM starts.\non_retriever_end(documents,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_text(text,\u00a0**kwargs)\nRun when agent is ending.\non_tool_end(output,\u00a0**kwargs)\nRun when tool ends running.\non_tool_error(error,\u00a0**kwargs)\nRun when tool errors.\non_tool_start(serialized,\u00a0input_str,\u00a0**kwargs)\nRun when tool starts running.\nreset_callback_meta()\nReset the callback metadata.\nAttributes\nalways_verbose\nWhether to call verbose callbacks even if verbose is False.\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.wandb_callback.WandbCallbackHandler.html"} {"id": "0e439736fedc-2", "text": "ignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\nflush_tracker(langchain_asset: Any = None, reset: bool = True, finish: bool = False, job_type: Optional[str] = None, project: Optional[str] = None, entity: Optional[str] = None, tags: Optional[Sequence] = None, group: Optional[str] = None, name: Optional[str] = None, notes: Optional[str] = None, visualize: Optional[bool] = None, complexity_metrics: Optional[bool] = None) \u2192 None[source]\u00b6\nFlush the tracker and reset the session.\nParameters\nlangchain_asset \u2013 The langchain asset to save.\nreset \u2013 Whether to reset the session.\nfinish \u2013 Whether to finish the run.\njob_type \u2013 The job type.\nproject \u2013 The project.\nentity \u2013 The entity.\ntags \u2013 The tags.\ngroup \u2013 The group.\nname \u2013 The name.\nnotes \u2013 The notes.\nvisualize \u2013 Whether to visualize.\ncomplexity_metrics \u2013 Whether to compute complexity metrics.\nReturns \u2013 None\nget_custom_callback_meta() \u2192 Dict[str, Any]\u00b6\non_agent_action(action: AgentAction, **kwargs: Any) \u2192 
Any[source]\u00b6\nRun on agent action.\non_agent_finish(finish: AgentFinish, **kwargs: Any) \u2192 None[source]\u00b6\nRun when agent ends running.\non_chain_end(outputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain ends running.\non_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain errors.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.wandb_callback.WandbCallbackHandler.html"} {"id": "0e439736fedc-3", "text": "Run when chain errors.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain starts running.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\non_llm_end(response: LLMResult, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM ends running.\non_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM errors.\non_llm_new_token(token: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM generates a new token.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM starts.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.wandb_callback.WandbCallbackHandler.html"} {"id": "0e439736fedc-4", "text": "Run when Retriever starts running.\non_text(text: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when agent is ending.\non_tool_end(output: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool ends running.\non_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool errors.\non_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool starts running.\nreset_callback_meta() \u2192 None\u00b6\nReset the callback metadata.\nproperty always_verbose: bool\u00b6\nWhether to call verbose callbacks even if verbose is False.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.wandb_callback.WandbCallbackHandler.html"} {"id": "50eee91ad0ea-0", "text": "langchain.callbacks.human.HumanRejectedException\u00b6\nclass 
langchain.callbacks.human.HumanRejectedException[source]\u00b6\nBases: Exception\nException to raise when a person manually reviews and rejects a value.\nadd_note()\u00b6\nException.add_note(note) \u2013\nadd a note to the exception\nwith_traceback()\u00b6\nException.with_traceback(tb) \u2013\nset self.__traceback__ to tb and return self.\nargs\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.human.HumanRejectedException.html"} {"id": "88b67526cbdc-0", "text": "langchain.callbacks.mlflow_callback.analyze_text\u00b6\nlangchain.callbacks.mlflow_callback.analyze_text(text: str, nlp: Any = None) \u2192 dict[source]\u00b6\nAnalyze text using textstat and spacy.\nParameters\ntext (str) \u2013 The text to analyze.\nnlp (spacy.lang) \u2013 The spacy language model to use for visualization.\nReturns\nA dictionary containing the complexity metrics and visualization files serialized to an HTML string.\nReturn type\n(dict)", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.mlflow_callback.analyze_text.html"} {"id": "80d94a3634fe-0", "text": "langchain.callbacks.tracers.schemas.TracerSessionV1\u00b6\nclass langchain.callbacks.tracers.schemas.TracerSessionV1(*, start_time: datetime = None, name: Optional[str] = None, extra: Optional[Dict[str, Any]] = None, id: int)[source]\u00b6\nBases: TracerSessionV1Base\nTracerSessionV1 schema.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam extra: Optional[Dict[str, Any]] = None\u00b6\nparam id: int [Required]\u00b6\nparam name: Optional[str] = None\u00b6\nparam start_time: datetime.datetime [Optional]\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.schemas.TracerSessionV1.html"} {"id": "ea37dcc8fd1d-0", "text": "langchain.callbacks.comet_ml_callback.CometCallbackHandler\u00b6\nclass langchain.callbacks.comet_ml_callback.CometCallbackHandler(task_type: Optional[str] = 'inference', workspace: Optional[str] = None, project_name: Optional[str] = None, tags: Optional[Sequence] = None, name: Optional[str] = None, visualizations: Optional[List[str]] = None, complexity_metrics: bool = False, custom_metrics: Optional[Callable] = None, stream_logs: bool = True)[source]\u00b6\nBases: BaseMetadataCallbackHandler, BaseCallbackHandler\nCallback Handler that logs to Comet.\nParameters\njob_type (str) \u2013 The type of comet_ml task such as \u201cinference\u201d,\n\u201ctesting\u201d or \u201cqc\u201d\nproject_name (str) \u2013 The comet_ml project name\ntags (list) \u2013 Tags to add to the task\ntask_name (str) \u2013 Name of the comet_ml task\nvisualize (bool) \u2013 Whether to visualize the run.\ncomplexity_metrics (bool) \u2013 Whether to log complexity metrics\nstream_logs (bool) \u2013 Whether to stream callback actions to Comet\nThis handler utilizes the associated callback method, formats\nthe input of each callback function with metadata regarding the state of the LLM run,\nand adds the response to the list of records for both the {method}_records and\naction. 
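Analogously to the W&B sketch above, a minimal usage sketch for CometCallbackHandler (an illustration only, assuming comet_ml is installed and a Comet API key is configured; the project name, tags, and prompt are arbitrary):
>>> from langchain.llms import OpenAI
>>> from langchain.callbacks import CometCallbackHandler
>>> comet_callback = CometCallbackHandler(
...     project_name="comet-langchain-demo",  # Comet project to log to
...     tags=["llm"],                          # tags to add to the task
...     complexity_metrics=True,               # also log complexity metrics
...     stream_logs=True,                      # stream callback actions to Comet
... )
>>> llm = OpenAI(temperature=0.9, callbacks=[comet_callback])
>>> llm("Tell me a joke")
>>> comet_callback.flush_tracker(llm, finish=True)  # flush buffered records and end the Comet task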
It then logs the response to Comet.\nInitialize callback handler.\nMethods\n__init__([task_type,\u00a0workspace,\u00a0...])\nInitialize callback handler.\nflush_tracker([langchain_asset,\u00a0task_type,\u00a0...])\nFlush the tracker and set up the session.\nget_custom_callback_meta()\non_agent_action(action,\u00a0**kwargs)\nRun on agent action.\non_agent_finish(finish,\u00a0**kwargs)\nRun when agent ends running.\non_chain_end(outputs,\u00a0**kwargs)\nRun when chain ends running.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.comet_ml_callback.CometCallbackHandler.html"} {"id": "ea37dcc8fd1d-1", "text": "on_chain_end(outputs,\u00a0**kwargs)\nRun when chain ends running.\non_chain_error(error,\u00a0**kwargs)\nRun when chain errors.\non_chain_start(serialized,\u00a0inputs,\u00a0**kwargs)\nRun when chain starts running.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0**kwargs)\nRun when LLM ends running.\non_llm_error(error,\u00a0**kwargs)\nRun when LLM errors.\non_llm_new_token(token,\u00a0**kwargs)\nRun when LLM generates a new token.\non_llm_start(serialized,\u00a0prompts,\u00a0**kwargs)\nRun when LLM starts.\non_retriever_end(documents,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_text(text,\u00a0**kwargs)\nRun when agent is ending.\non_tool_end(output,\u00a0**kwargs)\nRun when tool ends running.\non_tool_error(error,\u00a0**kwargs)\nRun when tool errors.\non_tool_start(serialized,\u00a0input_str,\u00a0**kwargs)\nRun when tool starts running.\nreset_callback_meta()\nReset the callback metadata.\nAttributes\nalways_verbose\nWhether to call verbose callbacks even if verbose is False.\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.comet_ml_callback.CometCallbackHandler.html"} {"id": "ea37dcc8fd1d-2", "text": "ignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\nflush_tracker(langchain_asset: Any = None, task_type: Optional[str] = 'inference', workspace: Optional[str] = None, project_name: Optional[str] = 'comet-langchain-demo', tags: Optional[Sequence] = None, name: Optional[str] = None, visualizations: Optional[List[str]] = None, complexity_metrics: bool = False, custom_metrics: Optional[Callable] = None, finish: bool = False, reset: bool = False) \u2192 None[source]\u00b6\nFlush the tracker and set up the session.\nEverything after this will be a new table.\nParameters\nname \u2013 Name of the session performed so far, so that it is identifiable\nlangchain_asset \u2013 The langchain asset to save.\nfinish \u2013 Whether to finish the run.\nReturns \u2013 None\nget_custom_callback_meta() \u2192 Dict[str, Any]\u00b6\non_agent_action(action: AgentAction, **kwargs: Any) \u2192 
running.\non_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain errors.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain starts running.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.comet_ml_callback.CometCallbackHandler.html"} {"id": "ea37dcc8fd1d-3", "text": "Run when chain starts running.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\non_llm_end(response: LLMResult, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM ends running.\non_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM errors.\non_llm_new_token(token: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM generates a new token.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM starts.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever starts running.\non_text(text: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when agent is ending.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.comet_ml_callback.CometCallbackHandler.html"} {"id": "ea37dcc8fd1d-4", "text": "Run when agent is ending.\non_tool_end(output: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool ends running.\non_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool errors.\non_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool starts running.\nreset_callback_meta() \u2192 None\u00b6\nReset the callback metadata.\nproperty always_verbose: bool\u00b6\nWhether to call verbose callbacks even if verbose is False.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.comet_ml_callback.CometCallbackHandler.html"} {"id": "c3130bc63e4f-0", "text": "langchain.callbacks.tracers.wandb.WandbTracer\u00b6\nclass langchain.callbacks.tracers.wandb.WandbTracer(run_args: Optional[WandbRunArgs] = None, **kwargs: Any)[source]\u00b6\nBases: BaseTracer\nCallback Handler that logs to Weights and Biases.\nThis handler will log the model architecture and run traces to Weights and 
Biases.\nThis will ensure that all LangChain activity is logged to W&B.\nInitializes the WandbTracer.\nParameters\nrun_args \u2013 (dict, optional) Arguments to pass to wandb.init(). If not\nprovided, wandb.init() will be called with no arguments. Please\nrefer to the wandb.init for more details.\nTo use W&B to monitor all LangChain activity, add this tracer like any other\nLangChain callback:\n```\nfrom wandb.integration.langchain import WandbTracer\ntracer = WandbTracer()\nchain = LLMChain(llm, callbacks=[tracer])\n# \u2026end of notebook / script:\ntracer.finish()\n```\nMethods\n__init__([run_args])\nInitializes the WandbTracer.\nfinish()\nWaits for all asynchronous processes to finish and data to upload.\non_agent_action(action,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent action.\non_agent_finish(finish,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent end.\non_chain_end(outputs,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nEnd a trace for a chain run.\non_chain_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nHandle an error for a chain run.\non_chain_start(serialized,\u00a0inputs,\u00a0*,\u00a0run_id)\nStart a trace for a chain run.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.wandb.WandbTracer.html"} {"id": "c3130bc63e4f-1", "text": "on_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nEnd a trace for an LLM run.\non_llm_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nHandle an error for an LLM run.\non_llm_new_token(token,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on new LLM token.\non_llm_start(serialized,\u00a0prompts,\u00a0*,\u00a0run_id)\nStart a trace for an LLM run.\non_retriever_end(documents,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nRun when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_text(text,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun on arbitrary text.\non_tool_end(output,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nEnd a trace for a tool run.\non_tool_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nHandle an error for a tool run.\non_tool_start(serialized,\u00a0input_str,\u00a0*,\u00a0run_id)\nStart a trace for a tool run.\nAttributes\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\nfinish() \u2192 None[source]\u00b6\nWaits for all asynchronous processes to finish and data to upload.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.wandb.WandbTracer.html"} {"id": "c3130bc63e4f-2", "text": "Waits for all asynchronous processes to finish and data to upload.\nProxy for wandb.finish().\non_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on agent action.\non_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on agent end.\non_chain_end(outputs: Dict[str, Any], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nEnd a trace for a chain run.\non_chain_error(error: 
Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nHandle an error for a chain run.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nStart a trace for a chain run.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\non_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nEnd a trace for an LLM run.\non_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nHandle an error for an LLM run.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.wandb.WandbTracer.html"} {"id": "c3130bc63e4f-3", "text": "Handle an error for an LLM run.\non_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 None\u00b6\nRun on new LLM token. Only available when streaming is enabled.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nStart a trace for an LLM run.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nRun when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nRun when Retriever starts running.\non_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on arbitrary text.\non_tool_end(output: str, *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nEnd a trace for a tool run.\non_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nHandle an error for a tool run.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.wandb.WandbTracer.html"} {"id": "c3130bc63e4f-4", "text": "Handle an error for a tool run.\non_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nStart a trace for a tool run.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6\nrun_map: Dict[str, Run]\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.wandb.WandbTracer.html"} {"id": "5abbedb19513-0", "text": 
"langchain.callbacks.argilla_callback.ArgillaCallbackHandler\u00b6\nclass langchain.callbacks.argilla_callback.ArgillaCallbackHandler(dataset_name: str, workspace_name: Optional[str] = None, api_url: Optional[str] = None, api_key: Optional[str] = None)[source]\u00b6\nBases: BaseCallbackHandler\nCallback Handler that logs into Argilla.\nParameters\ndataset_name \u2013 name of the FeedbackDataset in Argilla. Note that it must\nexist in advance. If you need help on how to create a FeedbackDataset in\nArgilla, please visit\nhttps://docs.argilla.io/en/latest/guides/llms/practical_guides/use_argilla_callback_in_langchain.html.\nworkspace_name \u2013 name of the workspace in Argilla where the specified\nFeedbackDataset lives in. Defaults to None, which means that the\ndefault workspace will be used.\napi_url \u2013 URL of the Argilla Server that we want to use, and where the\nFeedbackDataset lives in. Defaults to None, which means that either\nARGILLA_API_URL environment variable or the default http://localhost:6900\nwill be used.\napi_key \u2013 API Key to connect to the Argilla Server. Defaults to None, which\nmeans that either ARGILLA_API_KEY environment variable or the default\nargilla.apikey will be used.\nRaises\nImportError \u2013 if the argilla package is not installed.\nConnectionError \u2013 if the connection to Argilla fails.\nFileNotFoundError \u2013 if the FeedbackDataset retrieval from Argilla fails.\nExamples\n>>> from langchain.llms import OpenAI\n>>> from langchain.callbacks import ArgillaCallbackHandler\n>>> argilla_callback = ArgillaCallbackHandler(\n... dataset_name=\"my-dataset\",\n... workspace_name=\"my-workspace\",\n... api_url=\"http://localhost:6900\",\n... api_key=\"argilla.apikey\",\n... )", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.argilla_callback.ArgillaCallbackHandler.html"} {"id": "5abbedb19513-1", "text": "... api_key=\"argilla.apikey\",\n... )\n>>> llm = OpenAI(\n... temperature=0,\n... callbacks=[argilla_callback],\n... verbose=True,\n... openai_api_key=\"API_KEY_HERE\",\n... )\n>>> llm.generate([\n... \"What is the best NLP-annotation tool out there? (no bias at all)\",\n... ])\n\"Argilla, no doubt about it.\"\nInitializes the ArgillaCallbackHandler.\nParameters\ndataset_name \u2013 name of the FeedbackDataset in Argilla. Note that it must\nexist in advance. If you need help on how to create a FeedbackDataset\nin Argilla, please visit\nhttps://docs.argilla.io/en/latest/guides/llms/practical_guides/use_argilla_callback_in_langchain.html.\nworkspace_name \u2013 name of the workspace in Argilla where the specified\nFeedbackDataset lives in. Defaults to None, which means that the\ndefault workspace will be used.\napi_url \u2013 URL of the Argilla Server that we want to use, and where the\nFeedbackDataset lives in. Defaults to None, which means that either\nARGILLA_API_URL environment variable or the default\nhttp://localhost:6900 will be used.\napi_key \u2013 API Key to connect to the Argilla Server. 
Defaults to None, which\nmeans that either ARGILLA_API_KEY environment variable or the default\nargilla.apikey will be used.\nRaises\nImportError \u2013 if the argilla package is not installed.\nConnectionError \u2013 if the connection to Argilla fails.\nFileNotFoundError \u2013 if the FeedbackDataset retrieval from Argilla fails.\nMethods\n__init__(dataset_name[,\u00a0workspace_name,\u00a0...])\nInitializes the ArgillaCallbackHandler.\non_agent_action(action,\u00a0**kwargs)\nDo nothing when agent takes a specific action.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.argilla_callback.ArgillaCallbackHandler.html"} {"id": "5abbedb19513-2", "text": "on_agent_action(action,\u00a0**kwargs)\nDo nothing when agent takes a specific action.\non_agent_finish(finish,\u00a0**kwargs)\nDo nothing\non_chain_end(outputs,\u00a0**kwargs)\nIf either the parent_run_id or the run_id is in self.prompts, then log the outputs to Argilla, and pop the run from self.prompts.\non_chain_error(error,\u00a0**kwargs)\nDo nothing when LLM chain outputs an error.\non_chain_start(serialized,\u00a0inputs,\u00a0**kwargs)\nIf the key input is in inputs, then save it in self.prompts using either the parent_run_id or the run_id as the key.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0**kwargs)\nLog records to Argilla when an LLM ends.\non_llm_error(error,\u00a0**kwargs)\nDo nothing when LLM outputs an error.\non_llm_new_token(token,\u00a0**kwargs)\nDo nothing when a new token is generated.\non_llm_start(serialized,\u00a0prompts,\u00a0**kwargs)\nSave the prompts in memory when an LLM starts.\non_retriever_end(documents,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_text(text,\u00a0**kwargs)\nDo nothing\non_tool_end(output[,\u00a0observation_prefix,\u00a0...])\nDo nothing when tool ends.\non_tool_error(error,\u00a0**kwargs)\nDo nothing when tool outputs an error.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.argilla_callback.ArgillaCallbackHandler.html"} {"id": "5abbedb19513-3", "text": "on_tool_error(error,\u00a0**kwargs)\nDo nothing when tool outputs an error.\non_tool_start(serialized,\u00a0input_str,\u00a0**kwargs)\nDo nothing when tool starts.\nAttributes\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\non_agent_action(action: AgentAction, **kwargs: Any) \u2192 Any[source]\u00b6\nDo nothing when agent takes a specific action.\non_agent_finish(finish: AgentFinish, **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing\non_chain_end(outputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nIf either the parent_run_id or the run_id is in self.prompts, then\nlog the outputs to Argilla, and pop the run from self.prompts. 
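Because on_chain_start and on_chain_end also capture chain-level prompts and outputs, a short chain-usage sketch may help (an illustration only, reusing the llm and argilla_callback objects from the Examples above; the prompt template is arbitrary):
>>> from langchain.chains import LLMChain
>>> from langchain.prompts import PromptTemplate
>>> prompt = PromptTemplate.from_template("Summarize this: {text}")
>>> chain = LLMChain(llm=llm, prompt=prompt, callbacks=[argilla_callback])
>>> chain.run(text="Argilla can collect human feedback on LLM outputs.")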
The behavior\ndiffers if the output is a list or not.\non_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing when LLM chain outputs an error.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nIf the key input is in inputs, then save it in self.prompts using\neither the parent_run_id or the run_id as the key. This is done so that\nwe don\u2019t log the same input prompt twice, once when the LLM starts and once\nwhen the chain starts.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.argilla_callback.ArgillaCallbackHandler.html"} {"id": "5abbedb19513-4", "text": "when the chain starts.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\non_llm_end(response: LLMResult, **kwargs: Any) \u2192 None[source]\u00b6\nLog records to Argilla when an LLM ends.\non_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing when LLM outputs an error.\non_llm_new_token(token: str, **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing when a new token is generated.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) \u2192 None[source]\u00b6\nSave the prompts in memory when an LLM starts.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever starts running.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.argilla_callback.ArgillaCallbackHandler.html"} {"id": "5abbedb19513-5", "text": "Run when Retriever starts running.\non_text(text: str, **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing\non_tool_end(output: str, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing when tool ends.\non_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing when tool outputs an error.\non_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) \u2192 None[source]\u00b6\nDo nothing when tool starts.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.argilla_callback.ArgillaCallbackHandler.html"} {"id": "8a4e3f367828-0", "text": 
"langchain.callbacks.wandb_callback.construct_html_from_prompt_and_generation\u00b6\nlangchain.callbacks.wandb_callback.construct_html_from_prompt_and_generation(prompt: str, generation: str) \u2192 Any[source]\u00b6\nConstruct an html element from a prompt and a generation.\nParameters\nprompt (str) \u2013 The prompt.\ngeneration (str) \u2013 The generation.\nReturns\nThe html element.\nReturn type\n(wandb.Html)", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.wandb_callback.construct_html_from_prompt_and_generation.html"} {"id": "c6fdf726a96e-0", "text": "langchain.callbacks.manager.CallbackManagerForRetrieverRun\u00b6\nclass langchain.callbacks.manager.CallbackManagerForRetrieverRun(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: ParentRunManager, RetrieverManagerMixin\nCallback manager for retriever run.\nInitialize the run manager.\nParameters\nrun_id (UUID) \u2013 The ID of the run.\nhandlers (List[BaseCallbackHandler]) \u2013 The list of handlers.\ninheritable_handlers (List[BaseCallbackHandler]) \u2013 The list of inheritable handlers.\nparent_run_id (UUID, optional) \u2013 The ID of the parent run.\nDefaults to None.\ntags (Optional[List[str]]) \u2013 The list of tags.\ninheritable_tags (Optional[List[str]]) \u2013 The list of inheritable tags.\nmetadata (Optional[Dict[str, Any]]) \u2013 The metadata.\ninheritable_metadata (Optional[Dict[str, Any]]) \u2013 The inheritable metadata.\nMethods\n__init__(*,\u00a0run_id,\u00a0handlers,\u00a0...[,\u00a0...])\nInitialize the run manager.\nget_child([tag])\nGet a child callback manager.\nget_noop_manager()\nReturn a manager that doesn't perform any operations.\non_retriever_end(documents,\u00a0**kwargs)\nRun when retriever ends running.\non_retriever_error(error,\u00a0**kwargs)\nRun when retriever errors.\non_text(text,\u00a0**kwargs)\nRun when text is received.\nget_child(tag: Optional[str] = None) \u2192 CallbackManager\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.CallbackManagerForRetrieverRun.html"} {"id": "c6fdf726a96e-1", "text": "get_child(tag: Optional[str] = None) \u2192 CallbackManager\u00b6\nGet a child callback manager.\nParameters\ntag (str, optional) \u2013 The tag for the child callback manager.\nDefaults to None.\nReturns\nThe child callback manager.\nReturn type\nCallbackManager\nclassmethod get_noop_manager() \u2192 BRM\u00b6\nReturn a manager that doesn\u2019t perform any operations.\nReturns\nThe noop manager.\nReturn type\nBaseRunManager\non_retriever_end(documents: Sequence[Document], **kwargs: Any) \u2192 None[source]\u00b6\nRun when retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when retriever errors.\non_text(text: str, **kwargs: Any) \u2192 Any\u00b6\nRun when text is received.\nParameters\ntext (str) \u2013 The received text.\nReturns\nThe result of the callback.\nReturn type\nAny", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.CallbackManagerForRetrieverRun.html"} {"id": "76ddc464eb57-0", "text": "langchain.callbacks.tracers.langchain.LangChainTracer\u00b6\nclass 
langchain.callbacks.tracers.langchain.LangChainTracer(example_id: Optional[Union[UUID, str]] = None, project_name: Optional[str] = None, client: Optional[LangChainPlusClient] = None, tags: Optional[List[str]] = None, **kwargs: Any)[source]\u00b6\nBases: BaseTracer\nAn implementation of the SharedTracer that POSTS to the langchain endpoint.\nInitialize the LangChain tracer.\nMethods\n__init__([example_id,\u00a0project_name,\u00a0client,\u00a0...])\nInitialize the LangChain tracer.\non_agent_action(action,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent action.\non_agent_finish(finish,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent end.\non_chain_end(outputs,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nEnd a trace for a chain run.\non_chain_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nHandle an error for a chain run.\non_chain_start(serialized,\u00a0inputs,\u00a0*,\u00a0run_id)\nStart a trace for a chain run.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nStart a trace for an LLM run.\non_llm_end(response,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nEnd a trace for an LLM run.\non_llm_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nHandle an error for an LLM run.\non_llm_new_token(token,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on new LLM token.\non_llm_start(serialized,\u00a0prompts,\u00a0*,\u00a0run_id)\nStart a trace for an LLM run.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.langchain.LangChainTracer.html"} {"id": "76ddc464eb57-1", "text": "Start a trace for an LLM run.\non_retriever_end(documents,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nRun when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_text(text,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun on arbitrary text.\non_tool_end(output,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nEnd a trace for a tool run.\non_tool_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nHandle an error for a tool run.\non_tool_start(serialized,\u00a0input_str,\u00a0*,\u00a0run_id)\nStart a trace for a tool run.\nwait_for_futures()\nWait for the given futures to complete.\nAttributes\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\non_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on agent action.\non_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on agent end.\non_chain_end(outputs: Dict[str, Any], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nEnd a trace for a chain run.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.langchain.LangChainTracer.html"} {"id": "76ddc464eb57-2", "text": "End a trace for a chain run.\non_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nHandle an error for a chain run.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nStart a trace 
for a chain run.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None[source]\u00b6\nStart a trace for an LLM run.\non_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nEnd a trace for an LLM run.\non_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nHandle an error for an LLM run.\non_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 None\u00b6\nRun on new LLM token. Only available when streaming is enabled.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nStart a trace for an LLM run.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.langchain.LangChainTracer.html"} {"id": "76ddc464eb57-3", "text": "Start a trace for an LLM run.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nRun when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nRun when Retriever starts running.\non_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on arbitrary text.\non_tool_end(output: str, *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nEnd a trace for a tool run.\non_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nHandle an error for a tool run.\non_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nStart a trace for a tool run.\nwait_for_futures() \u2192 None[source]\u00b6\nWait for the given futures to complete.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.langchain.LangChainTracer.html"} {"id": "76ddc464eb57-4", "text": "Whether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6\nrun_map: Dict[str, langchain.callbacks.tracers.schemas.Run]\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.langchain.LangChainTracer.html"} {"id": "507ba8173559-0", "text": "langchain.callbacks.context_callback.ContextCallbackHandler\u00b6\nclass langchain.callbacks.context_callback.ContextCallbackHandler(token: str = '', verbose: bool = False, **kwargs: Any)[source]\u00b6\nBases: BaseCallbackHandler\nCallback 
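A hedged usage sketch for the tracer documented above; it assumes the LangChain endpoint and credentials are already configured in the environment, and the project name is an illustrative placeholder:
>>> from langchain.callbacks.tracers.langchain import LangChainTracer
>>> from langchain.llms import OpenAI
>>> tracer = LangChainTracer(project_name="my-project")
>>> llm = OpenAI(temperature=0, callbacks=[tracer])
>>> llm("What is 2 + 2?")
>>> tracer.wait_for_futures()  # runs are POSTed on background futures; block until flushed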
Handler that records transcripts to Context (https://getcontext.ai).\nKeyword Arguments\ntoken (optional) \u2013 The token with which to authenticate requests to Context.\nVisit https://go.getcontext.ai/settings to generate a token.\nIf not provided, the value of the CONTEXT_TOKEN environment\nvariable will be used.\nRaises\nImportError \u2013 if the context-python package is not installed.\nChat Example:\n>>> from langchain.chat_models import ChatOpenAI\n>>> from langchain.schema import SystemMessage, HumanMessage\n>>> from langchain.callbacks import ContextCallbackHandler\n>>> context_callback = ContextCallbackHandler(\n... token=\"\",\n... )\n>>> chat = ChatOpenAI(\n... temperature=0,\n... headers={\"user_id\": \"123\"},\n... callbacks=[context_callback],\n... openai_api_key=\"API_KEY_HERE\",\n... )\n>>> messages = [\n... SystemMessage(content=\"You translate English to French.\"),\n... HumanMessage(content=\"I love programming with LangChain.\"),\n... ]\n>>> chat(messages)\nChain Example:\n>>> from langchain import LLMChain\n>>> from langchain.chat_models import ChatOpenAI\n>>> from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate, PromptTemplate\n>>> from langchain.callbacks import ContextCallbackHandler\n>>> context_callback = ContextCallbackHandler(\n... token=\"\",\n... )\n>>> human_message_prompt = HumanMessagePromptTemplate(\n... prompt=PromptTemplate(\n... template=\"What is a good name for a company that makes {product}?\",\n... input_variables=[\"product\"],\n... ),\n... )\n>>> chat_prompt_template = ChatPromptTemplate.from_messages(\n... [human_message_prompt]\n... )\n>>> callback = ContextCallbackHandler(token)", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.context_callback.ContextCallbackHandler.html"} {"id": "507ba8173559-1", "text": "... [human_message_prompt]\n... )\n>>> callback = ContextCallbackHandler(token)\n>>> # Note: the same callback object must be shared between the\n... # LLM and the chain.\n>>> chat = ChatOpenAI(temperature=0.9, callbacks=[callback])\n>>> chain = LLMChain(\n... llm=chat,\n... prompt=chat_prompt_template,\n... callbacks=[callback]\n... 
)\n>>> chain.run(\"colorful socks\")\nMethods\n__init__([token,\u00a0verbose])\non_agent_action(action,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent action.\non_agent_finish(finish,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent end.\non_chain_end(outputs,\u00a0**kwargs)\nRun when chain ends.\non_chain_error(error,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when chain errors.\non_chain_start(serialized,\u00a0inputs,\u00a0**kwargs)\nRun when chain starts.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when the chat model is started.\non_llm_end(response,\u00a0**kwargs)\nRun when LLM ends.\non_llm_error(error,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when LLM errors.\non_llm_new_token(token,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on new LLM token.\non_llm_start(serialized,\u00a0prompts,\u00a0*,\u00a0run_id)\nRun when LLM starts running.\non_retriever_end(documents,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever errors.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.context_callback.ContextCallbackHandler.html"} {"id": "507ba8173559-2", "text": "Run when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_text(text,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun on arbitrary text.\non_tool_end(output,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when tool ends running.\non_tool_error(error,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when tool errors.\non_tool_start(serialized,\u00a0input_str,\u00a0*,\u00a0run_id)\nRun when tool starts running.\nAttributes\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\non_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on agent action.\non_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on agent end.\non_chain_end(outputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain ends.\non_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when chain errors.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain starts.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.context_callback.ContextCallbackHandler.html"} {"id": "507ba8173559-3", "text": "Run when chain starts.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, **kwargs: Any) \u2192 Any[source]\u00b6\nRun when the chat model is started.\non_llm_end(response: LLMResult, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM ends.\non_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when LLM errors.\non_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on new LLM token. 
Only available when streaming is enabled.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when LLM starts running.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever errors.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.context_callback.ContextCallbackHandler.html"} {"id": "507ba8173559-4", "text": "Run when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever starts running.\non_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on arbitrary text.\non_tool_end(output: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when tool ends running.\non_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when tool errors.\non_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when tool starts running.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.context_callback.ContextCallbackHandler.html"} {"id": "1db2e506a73a-0", "text": "langchain.callbacks.utils.load_json\u00b6\nlangchain.callbacks.utils.load_json(json_path: Union[str, Path]) \u2192 str[source]\u00b6\nLoad json file to a string.\nParameters\njson_path (str) \u2013 The path to the json file.\nReturns\nThe string representation of the json file.\nReturn type\n(str)", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.utils.load_json.html"} {"id": "9f18712aec37-0", "text": "langchain.callbacks.tracers.schemas.TracerSessionV1Create\u00b6\nclass langchain.callbacks.tracers.schemas.TracerSessionV1Create(*, start_time: datetime = None, name: Optional[str] = None, extra: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: TracerSessionV1Base\nCreate class for TracerSessionV1.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam extra: Optional[Dict[str, Any]] = None\u00b6\nparam name: Optional[str] = None\u00b6\nparam start_time: datetime.datetime [Optional]\u00b6", "source": 
"https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.schemas.TracerSessionV1Create.html"} {"id": "32a59e23b93f-0", "text": "langchain.callbacks.manager.AsyncCallbackManagerForToolRun\u00b6\nclass langchain.callbacks.manager.AsyncCallbackManagerForToolRun(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: AsyncParentRunManager, ToolManagerMixin\nAsync callback manager for tool run.\nInitialize the run manager.\nParameters\nrun_id (UUID) \u2013 The ID of the run.\nhandlers (List[BaseCallbackHandler]) \u2013 The list of handlers.\ninheritable_handlers (List[BaseCallbackHandler]) \u2013 The list of inheritable handlers.\nparent_run_id (UUID, optional) \u2013 The ID of the parent run.\nDefaults to None.\ntags (Optional[List[str]]) \u2013 The list of tags.\ninheritable_tags (Optional[List[str]]) \u2013 The list of inheritable tags.\nmetadata (Optional[Dict[str, Any]]) \u2013 The metadata.\ninheritable_metadata (Optional[Dict[str, Any]]) \u2013 The inheritable metadata.\nMethods\n__init__(*,\u00a0run_id,\u00a0handlers,\u00a0...[,\u00a0...])\nInitialize the run manager.\nget_child([tag])\nGet a child callback manager.\nget_noop_manager()\nReturn a manager that doesn't perform any operations.\non_text(text,\u00a0**kwargs)\nRun when text is received.\non_tool_end(output,\u00a0**kwargs)\nRun when tool ends running.\non_tool_error(error,\u00a0**kwargs)\nRun when tool errors.\nget_child(tag: Optional[str] = None) \u2192 AsyncCallbackManager\u00b6\nGet a child callback manager.\nParameters", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.AsyncCallbackManagerForToolRun.html"} {"id": "32a59e23b93f-1", "text": "Get a child callback manager.\nParameters\ntag (str, optional) \u2013 The tag for the child callback manager.\nDefaults to None.\nReturns\nThe child callback manager.\nReturn type\nAsyncCallbackManager\nclassmethod get_noop_manager() \u2192 BRM\u00b6\nReturn a manager that doesn\u2019t perform any operations.\nReturns\nThe noop manager.\nReturn type\nBaseRunManager\nasync on_text(text: str, **kwargs: Any) \u2192 Any\u00b6\nRun when text is received.\nParameters\ntext (str) \u2013 The received text.\nReturns\nThe result of the callback.\nReturn type\nAny\nasync on_tool_end(output: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool ends running.\nParameters\noutput (str) \u2013 The output of the tool.\nasync on_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool errors.\nParameters\nerror (Exception or KeyboardInterrupt) \u2013 The error.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.AsyncCallbackManagerForToolRun.html"} {"id": "7730f077eacb-0", "text": "langchain.callbacks.manager.CallbackManagerForLLMRun\u00b6\nclass langchain.callbacks.manager.CallbackManagerForLLMRun(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: RunManager, 
LLMManagerMixin\nCallback manager for LLM run.\nInitialize the run manager.\nParameters\nrun_id (UUID) \u2013 The ID of the run.\nhandlers (List[BaseCallbackHandler]) \u2013 The list of handlers.\ninheritable_handlers (List[BaseCallbackHandler]) \u2013 The list of inheritable handlers.\nparent_run_id (UUID, optional) \u2013 The ID of the parent run.\nDefaults to None.\ntags (Optional[List[str]]) \u2013 The list of tags.\ninheritable_tags (Optional[List[str]]) \u2013 The list of inheritable tags.\nmetadata (Optional[Dict[str, Any]]) \u2013 The metadata.\ninheritable_metadata (Optional[Dict[str, Any]]) \u2013 The inheritable metadata.\nMethods\n__init__(*,\u00a0run_id,\u00a0handlers,\u00a0...[,\u00a0...])\nInitialize the run manager.\nget_noop_manager()\nReturn a manager that doesn't perform any operations.\non_llm_end(response,\u00a0**kwargs)\nRun when LLM ends running.\non_llm_error(error,\u00a0**kwargs)\nRun when LLM errors.\non_llm_new_token(token,\u00a0**kwargs)\nRun when LLM generates a new token.\non_text(text,\u00a0**kwargs)\nRun when text is received.\nclassmethod get_noop_manager() \u2192 BRM\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.CallbackManagerForLLMRun.html"} {"id": "7730f077eacb-1", "text": "Run when text is received.\nclassmethod get_noop_manager() \u2192 BRM\u00b6\nReturn a manager that doesn\u2019t perform any operations.\nReturns\nThe noop manager.\nReturn type\nBaseRunManager\non_llm_end(response: LLMResult, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM ends running.\nParameters\nresponse (LLMResult) \u2013 The LLM result.\non_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM errors.\nParameters\nerror (Exception or KeyboardInterrupt) \u2013 The error.\non_llm_new_token(token: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM generates a new token.\nParameters\ntoken (str) \u2013 The new token.\non_text(text: str, **kwargs: Any) \u2192 Any\u00b6\nRun when text is received.\nParameters\ntext (str) \u2013 The received text.\nReturns\nThe result of the callback.\nReturn type\nAny", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.CallbackManagerForLLMRun.html"} {"id": "87e78e362cc4-0", "text": "langchain.callbacks.file.FileCallbackHandler\u00b6\nclass langchain.callbacks.file.FileCallbackHandler(filename: str, mode: str = 'a', color: Optional[str] = None)[source]\u00b6\nBases: BaseCallbackHandler\nCallback Handler that writes to a file.\nInitialize callback handler.\nMethods\n__init__(filename[,\u00a0mode,\u00a0color])\nInitialize callback handler.\non_agent_action(action[,\u00a0color])\nRun on agent action.\non_agent_finish(finish[,\u00a0color])\nRun on agent end.\non_chain_end(outputs,\u00a0**kwargs)\nPrint out that we finished a chain.\non_chain_error(error,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when chain errors.\non_chain_start(serialized,\u00a0inputs,\u00a0**kwargs)\nPrint out that we are entering a chain.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when LLM ends running.\non_llm_error(error,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when LLM errors.\non_llm_new_token(token,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on new LLM token.\non_llm_start(serialized,\u00a0prompts,\u00a0*,\u00a0run_id)\nRun when LLM starts 
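For orientation, a sketch of how a custom LLM implementation might stream tokens through this manager; fake_generate is hypothetical, and the no-op manager replaces the one a real run would provide:
>>> from langchain.callbacks.manager import CallbackManagerForLLMRun
>>> def fake_generate(prompt: str, run_manager: CallbackManagerForLLMRun) -> str:
...     text = "hello world"
...     for token in text.split():
...         run_manager.on_llm_new_token(token)  # notify handlers of each token
...     return text
>>> fake_generate("hi", CallbackManagerForLLMRun.get_noop_manager())
'hello world'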
running.\non_retriever_end(documents,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_text(text[,\u00a0color,\u00a0end])\nRun when agent ends.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.file.FileCallbackHandler.html"} {"id": "87e78e362cc4-1", "text": "on_text(text[,\u00a0color,\u00a0end])\nRun when agent ends.\non_tool_end(output[,\u00a0color,\u00a0...])\nIf not the final action, print out observation.\non_tool_error(error,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when tool errors.\non_tool_start(serialized,\u00a0input_str,\u00a0*,\u00a0run_id)\nRun when tool starts running.\nAttributes\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\non_agent_action(action: AgentAction, color: Optional[str] = None, **kwargs: Any) \u2192 Any[source]\u00b6\nRun on agent action.\non_agent_finish(finish: AgentFinish, color: Optional[str] = None, **kwargs: Any) \u2192 None[source]\u00b6\nRun on agent end.\non_chain_end(outputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nPrint out that we finished a chain.\non_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when chain errors.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nPrint out that we are entering a chain.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.file.FileCallbackHandler.html"} {"id": "87e78e362cc4-2", "text": "Run when a chat model starts running.\non_llm_end(response: LLMResult, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when LLM ends running.\non_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when LLM errors.\non_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on new LLM token. 
Only available when streaming is enabled.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when LLM starts running.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever starts running.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.file.FileCallbackHandler.html"} {"id": "87e78e362cc4-3", "text": "Run when Retriever starts running.\non_text(text: str, color: Optional[str] = None, end: str = '', **kwargs: Any) \u2192 None[source]\u00b6\nRun when agent ends.\non_tool_end(output: str, color: Optional[str] = None, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) \u2192 None[source]\u00b6\nIf not the final action, print out observation.\non_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when tool errors.\non_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when tool starts running.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.file.FileCallbackHandler.html"} {"id": "f2f34fcb3a80-0", "text": "langchain.callbacks.aim_callback.AimCallbackHandler\u00b6\nclass langchain.callbacks.aim_callback.AimCallbackHandler(repo: Optional[str] = None, experiment_name: Optional[str] = None, system_tracking_interval: Optional[int] = 10, log_system_params: bool = True)[source]\u00b6\nBases: BaseMetadataCallbackHandler, BaseCallbackHandler\nCallback Handler that logs to Aim.\nParameters\nrepo (str, optional) \u2013 Aim repository path or Repo object to which\nRun object is bound. If skipped, default Repo is used.\nexperiment_name (str, optional) \u2013 Sets Run\u2019s experiment property.\n\u2018default\u2019 if not specified. Can be used later to query runs/sequences.\nsystem_tracking_interval (int, optional) \u2013 Sets the tracking interval\nin seconds for system usage metrics (CPU, Memory, etc.). 
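A minimal sketch of the file handler described above; output.log is an illustrative path:
>>> from langchain.callbacks import FileCallbackHandler
>>> from langchain.llms import OpenAI
>>> handler = FileCallbackHandler("output.log")  # mode defaults to 'a' (append)
>>> llm = OpenAI(callbacks=[handler])
>>> llm("Tell me a joke")  # run events are written to output.log instead of stdout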
Set to None\nto disable system metrics tracking.\nlog_system_params (bool, optional) \u2013 Enable/Disable logging of system\nparams such as installed packages, git info, environment variables, etc.\nThis handler utilizes the associated callback method, formats\nthe input of each callback function with metadata regarding the state of the LLM run,\nand then logs the response to Aim.\nInitialize callback handler.\nMethods\n__init__([repo,\u00a0experiment_name,\u00a0...])\nInitialize callback handler.\nflush_tracker([repo,\u00a0experiment_name,\u00a0...])\nFlush the tracker and reset the session.\nget_custom_callback_meta()\non_agent_action(action,\u00a0**kwargs)\nRun on agent action.\non_agent_finish(finish,\u00a0**kwargs)\nRun when agent ends running.\non_chain_end(outputs,\u00a0**kwargs)\nRun when chain ends running.\non_chain_error(error,\u00a0**kwargs)\nRun when chain errors.\non_chain_start(serialized,\u00a0inputs,\u00a0**kwargs)\nRun when chain starts running.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.aim_callback.AimCallbackHandler.html"} {"id": "f2f34fcb3a80-1", "text": "Run when chain starts running.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0**kwargs)\nRun when LLM ends running.\non_llm_error(error,\u00a0**kwargs)\nRun when LLM errors.\non_llm_new_token(token,\u00a0**kwargs)\nRun when LLM generates a new token.\non_llm_start(serialized,\u00a0prompts,\u00a0**kwargs)\nRun when LLM starts.\non_retriever_end(documents,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_text(text,\u00a0**kwargs)\nRun when agent is ending.\non_tool_end(output,\u00a0**kwargs)\nRun when tool ends running.\non_tool_error(error,\u00a0**kwargs)\nRun when tool errors.\non_tool_start(serialized,\u00a0input_str,\u00a0**kwargs)\nRun when tool starts running.\nreset_callback_meta()\nReset the callback metadata.\nsetup(**kwargs)\nAttributes\nalways_verbose\nWhether to call verbose callbacks even if verbose is False.\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.aim_callback.AimCallbackHandler.html"} {"id": "f2f34fcb3a80-2", "text": "ignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\nflush_tracker(repo: Optional[str] = None, experiment_name: Optional[str] = None, system_tracking_interval: Optional[int] = 10, log_system_params: bool = True, langchain_asset: Any = None, reset: bool = True, finish: bool = False) \u2192 None[source]\u00b6\nFlush the tracker and reset the session.\nParameters\nrepo (str, optional) \u2013 Aim repository path or Repo object to which\nRun object is bound. If skipped, default Repo is used.\nexperiment_name (str, optional) \u2013 Sets Run\u2019s experiment property.\n\u2018default\u2019 if not specified. Can be used later to query runs/sequences.\nsystem_tracking_interval (int, optional) \u2013 Sets the tracking interval\nin seconds for system usage metrics (CPU, Memory, etc.). 
Set to None\nto disable system metrics tracking.\nlog_system_params (bool, optional) \u2013 Enable/Disable logging of system\nparams such as installed packages, git info, environment variables, etc.\nlangchain_asset \u2013 The langchain asset to save.\nreset \u2013 Whether to reset the session.\nfinish \u2013 Whether to finish the run.\nReturns\nNone\nget_custom_callback_meta() \u2192 Dict[str, Any]\u00b6\non_agent_action(action: AgentAction, **kwargs: Any) \u2192 Any[source]\u00b6\nRun on agent action.\non_agent_finish(finish: AgentFinish, **kwargs: Any) \u2192 None[source]\u00b6\nRun when agent ends running.\non_chain_end(outputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain ends running.\non_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain errors.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.aim_callback.AimCallbackHandler.html"} {"id": "f2f34fcb3a80-3", "text": "Run when chain errors.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain starts running.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\non_llm_end(response: LLMResult, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM ends running.\non_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM errors.\non_llm_new_token(token: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM generates a new token.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM starts.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.aim_callback.AimCallbackHandler.html"} {"id": "f2f34fcb3a80-4", "text": "Run when Retriever starts running.\non_text(text: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when agent is ending.\non_tool_end(output: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool ends running.\non_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool errors.\non_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool starts running.\nreset_callback_meta() \u2192 None\u00b6\nReset the callback metadata.\nsetup(**kwargs: Any) \u2192 None[source]\u00b6\nproperty always_verbose: bool\u00b6\nWhether to call verbose callbacks even if verbose is False.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat 
model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.aim_callback.AimCallbackHandler.html"} {"id": "f5481089f110-0", "text": "langchain.callbacks.manager.tracing_enabled\u00b6\nlangchain.callbacks.manager.tracing_enabled(session_name: str = 'default') \u2192 Generator[TracerSessionV1, None, None][source]\u00b6\nGet the Deprecated LangChainTracer in a context manager.\nParameters\nsession_name (str, optional) \u2013 The name of the session.\nDefaults to \u201cdefault\u201d.\nReturns\nThe LangChainTracer session.\nReturn type\nTracerSessionV1\nExample\n>>> with tracing_enabled() as session:\n... # Use the LangChainTracer session", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.tracing_enabled.html"} {"id": "7bc81db7722f-0", "text": "langchain.callbacks.mlflow_callback.construct_html_from_prompt_and_generation\u00b6\nlangchain.callbacks.mlflow_callback.construct_html_from_prompt_and_generation(prompt: str, generation: str) \u2192 Any[source]\u00b6\nConstruct an html element from a prompt and a generation.\nParameters\nprompt (str) \u2013 The prompt.\ngeneration (str) \u2013 The generation.\nReturns\nThe html string.\nReturn type\n(str)", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.mlflow_callback.construct_html_from_prompt_and_generation.html"} {"id": "d9d202752871-0", "text": "langchain.callbacks.tracers.schemas.TracerSession\u00b6\nclass langchain.callbacks.tracers.schemas.TracerSession(*, start_time: datetime = None, name: Optional[str] = None, extra: Optional[Dict[str, Any]] = None, tenant_id: UUID, id: UUID)[source]\u00b6\nBases: TracerSessionBase\nTracerSessionV1 schema for the V2 API.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam extra: Optional[Dict[str, Any]] = None\u00b6\nparam id: uuid.UUID [Required]\u00b6\nparam name: Optional[str] = None\u00b6\nparam start_time: datetime.datetime [Optional]\u00b6\nparam tenant_id: uuid.UUID [Required]\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.schemas.TracerSession.html"} {"id": "41b463f56a25-0", "text": "langchain.callbacks.openai_info.get_openai_token_cost_for_model\u00b6\nlangchain.callbacks.openai_info.get_openai_token_cost_for_model(model_name: str, num_tokens: int, is_completion: bool = False) \u2192 float[source]\u00b6\nGet the cost in USD for a given model and number of tokens.\nParameters\nmodel_name \u2013 Name of the model\nnum_tokens \u2013 Number of tokens.\nis_completion \u2013 Whether the model is used for completion or not.\nDefaults to False.\nReturns\nCost in USD.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.get_openai_token_cost_for_model.html"} {"id": "6b362db15138-0", "text": "langchain.callbacks.utils.flatten_dict\u00b6\nlangchain.callbacks.utils.flatten_dict(nested_dict: Dict[str, Any], parent_key: str = '', sep: str = '_') \u2192 Dict[str, Any][source]\u00b6\nFlattens a nested dictionary into a flat dictionary.\nParameters\nnested_dict (dict) \u2013 The nested dictionary to flatten.\nparent_key (str) \u2013 The prefix to prepend to the keys of the 
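A usage sketch for the Aim handler, following the flush_tracker contract described above; the repo path and experiment name are illustrative:
>>> from langchain.callbacks import AimCallbackHandler
>>> from langchain.llms import OpenAI
>>> aim_callback = AimCallbackHandler(repo=".", experiment_name="openai-llm-demo")
>>> llm = OpenAI(temperature=0, callbacks=[aim_callback])
>>> llm("Tell me a joke")
>>> aim_callback.flush_tracker(langchain_asset=llm, finish=True)  # save the asset, end the run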
flattened dict.\nsep (str) \u2013 The separator to use between the parent key and the key of the\nflattened dictionary.\nReturns\nA flat dictionary.\nReturn type\n(dict)", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.utils.flatten_dict.html"} {"id": "15cbef53aa64-0", "text": "langchain.callbacks.clearml_callback.ClearMLCallbackHandler\u00b6\nclass langchain.callbacks.clearml_callback.ClearMLCallbackHandler(task_type: Optional[str] = 'inference', project_name: Optional[str] = 'langchain_callback_demo', tags: Optional[Sequence] = None, task_name: Optional[str] = None, visualize: bool = False, complexity_metrics: bool = False, stream_logs: bool = False)[source]\u00b6\nBases: BaseMetadataCallbackHandler, BaseCallbackHandler\nCallback Handler that logs to ClearML.\nParameters\ntask_type (str) \u2013 The type of clearml task such as \u201cinference\u201d, \u201ctesting\u201d or \u201cqc\u201d\nproject_name (str) \u2013 The clearml project name\ntags (list) \u2013 Tags to add to the task\ntask_name (str) \u2013 Name of the clearml task\nvisualize (bool) \u2013 Whether to visualize the run.\ncomplexity_metrics (bool) \u2013 Whether to log complexity metrics\nstream_logs (bool) \u2013 Whether to stream callback actions to ClearML\nThis handler utilizes the associated callback method, formats\nthe input of each callback function with metadata regarding the state of the LLM run,\nand adds the response to the list of records for both the {method}_records and\naction. It then logs the response to the ClearML console.\nInitialize callback handler.\nMethods\n__init__([task_type,\u00a0project_name,\u00a0tags,\u00a0...])\nInitialize callback handler.\nanalyze_text(text)\nAnalyze text using textstat and spacy.\nflush_tracker([name,\u00a0langchain_asset,\u00a0finish])\nFlush the tracker and set up the session.\nget_custom_callback_meta()\non_agent_action(action,\u00a0**kwargs)\nRun on agent action.\non_agent_finish(finish,\u00a0**kwargs)\nRun when agent ends running.\non_chain_end(outputs,\u00a0**kwargs)", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.clearml_callback.ClearMLCallbackHandler.html"} {"id": "15cbef53aa64-1", "text": "Run when agent ends running.\non_chain_end(outputs,\u00a0**kwargs)\nRun when chain ends running.\non_chain_error(error,\u00a0**kwargs)\nRun when chain errors.\non_chain_start(serialized,\u00a0inputs,\u00a0**kwargs)\nRun when chain starts running.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0**kwargs)\nRun when LLM ends running.\non_llm_error(error,\u00a0**kwargs)\nRun when LLM errors.\non_llm_new_token(token,\u00a0**kwargs)\nRun when LLM generates a new token.\non_llm_start(serialized,\u00a0prompts,\u00a0**kwargs)\nRun when LLM starts.\non_retriever_end(documents,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_text(text,\u00a0**kwargs)\nRun when agent is ending.\non_tool_end(output,\u00a0**kwargs)\nRun when tool ends running.\non_tool_error(error,\u00a0**kwargs)\nRun when tool errors.\non_tool_start(serialized,\u00a0input_str,\u00a0**kwargs)\nRun when tool starts running.\nreset_callback_meta()\nReset the callback metadata.\nAttributes\nalways_verbose\nWhether to call verbose callbacks even if verbose is 
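A worked example of flatten_dict as specified above, using the default '_' separator:
>>> from langchain.callbacks.utils import flatten_dict
>>> flatten_dict({"a": {"b": 1, "c": {"d": 2}}})
{'a_b': 1, 'a_c_d': 2}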
False.\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.clearml_callback.ClearMLCallbackHandler.html"} {"id": "15cbef53aa64-2", "text": "ignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\nanalyze_text(text: str) \u2192 dict[source]\u00b6\nAnalyze text using textstat and spacy.\nParameters\ntext (str) \u2013 The text to analyze.\nReturns\nA dictionary containing the complexity metrics.\nReturn type\n(dict)\nflush_tracker(name: Optional[str] = None, langchain_asset: Any = None, finish: bool = False) \u2192 None[source]\u00b6\nFlush the tracker and set up the session.\nEverything after this will be a new table.\nParameters\nname \u2013 Name of the performed session so far, so it is identifiable\nlangchain_asset \u2013 The langchain asset to save.\nfinish \u2013 Whether to finish the run.\nReturns\nNone\nget_custom_callback_meta() \u2192 Dict[str, Any]\u00b6\non_agent_action(action: AgentAction, **kwargs: Any) \u2192 Any[source]\u00b6\nRun on agent action.\non_agent_finish(finish: AgentFinish, **kwargs: Any) \u2192 None[source]\u00b6\nRun when agent ends running.\non_chain_end(outputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain ends running.\non_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain errors.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain starts running.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.clearml_callback.ClearMLCallbackHandler.html"} {"id": "15cbef53aa64-3", "text": "Run when a chat model starts running.\non_llm_end(response: LLMResult, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM ends running.\non_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM errors.\non_llm_new_token(token: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM generates a new token.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM starts.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever starts running.\non_text(text: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when agent is ending.\non_tool_end(output: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool ends running.\non_tool_error(error: Union[Exception, 
KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool errors.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.clearml_callback.ClearMLCallbackHandler.html"} {"id": "15cbef53aa64-4", "text": "Run when tool errors.\non_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool starts running.\nreset_callback_meta() \u2192 None\u00b6\nReset the callback metadata.\nproperty always_verbose: bool\u00b6\nWhether to call verbose callbacks even if verbose is False.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.clearml_callback.ClearMLCallbackHandler.html"} {"id": "4d9493ea8f1f-0", "text": "langchain.callbacks.openai_info.OpenAICallbackHandler\u00b6\nclass langchain.callbacks.openai_info.OpenAICallbackHandler[source]\u00b6\nBases: BaseCallbackHandler\nCallback Handler that tracks OpenAI info.\nMethods\n__init__()\non_agent_action(action,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent action.\non_agent_finish(finish,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent end.\non_chain_end(outputs,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when chain ends running.\non_chain_error(error,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when chain errors.\non_chain_start(serialized,\u00a0inputs,\u00a0*,\u00a0run_id)\nRun when chain starts running.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0**kwargs)\nCollect token usage.\non_llm_error(error,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when LLM errors.\non_llm_new_token(token,\u00a0**kwargs)\nPrint out the token.\non_llm_start(serialized,\u00a0prompts,\u00a0**kwargs)\nPrint out the prompts.\non_retriever_end(documents,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_text(text,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun on arbitrary text.\non_tool_end(output,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when tool ends running.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html"} {"id": "4d9493ea8f1f-1", "text": "Run when tool ends running.\non_tool_error(error,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun when tool errors.\non_tool_start(serialized,\u00a0input_str,\u00a0*,\u00a0run_id)\nRun when tool starts running.\nAttributes\nalways_verbose\nWhether to call verbose callbacks even if verbose is False.\ncompletion_tokens\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nprompt_tokens\nraise_error\nrun_inline\nsuccessful_requests\ntotal_cost\ntotal_tokens\non_agent_action(action: 
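A usage sketch for the ClearML handler documented above, mirroring its constructor defaults; it assumes the clearml package is installed and configured:
>>> from langchain.callbacks import ClearMLCallbackHandler
>>> from langchain.llms import OpenAI
>>> clearml_callback = ClearMLCallbackHandler(
...     task_type="inference",
...     project_name="langchain_callback_demo",
...     task_name="llm",
...     complexity_metrics=True,
...     stream_logs=True,
... )
>>> llm = OpenAI(temperature=0, callbacks=[clearml_callback])
>>> llm("Tell me a joke")
>>> clearml_callback.flush_tracker(langchain_asset=llm, name="first_run")  # next logs start a new table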
AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on agent action.\non_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on agent end.\non_chain_end(outputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when chain ends running.\non_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when chain errors.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when chain starts running.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html"} {"id": "4d9493ea8f1f-2", "text": "Run when chain starts running.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\non_llm_end(response: LLMResult, **kwargs: Any) \u2192 None[source]\u00b6\nCollect token usage.\non_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when LLM errors.\non_llm_new_token(token: str, **kwargs: Any) \u2192 None[source]\u00b6\nPrint out the token.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) \u2192 None[source]\u00b6\nPrint out the prompts.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever starts running.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html"} {"id": "4d9493ea8f1f-3", "text": "Run when Retriever starts running.\non_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on arbitrary text.\non_tool_end(output: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when tool ends running.\non_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when tool errors.\non_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when tool starts running.\nproperty always_verbose: bool\u00b6\nWhether to call verbose callbacks even if verbose is False.\ncompletion_tokens: int = 0\u00b6\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: 
bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nprompt_tokens: int = 0\u00b6\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6\nsuccessful_requests: int = 0\u00b6\ntotal_cost: float = 0.0\u00b6\ntotal_tokens: int = 0\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html"}
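Example (a minimal sketch, not part of the generated reference: the usual way to use this handler is through the get_openai_callback context manager in langchain.callbacks, which attaches a fresh OpenAICallbackHandler for the duration of the block; assumes the openai package is installed and OPENAI_API_KEY is set):
.. code-block:: python

    from langchain.callbacks import get_openai_callback
    from langchain.llms import OpenAI

    llm = OpenAI(temperature=0)
    with get_openai_callback() as cb:
        llm("Tell me a joke")
        # usage is accumulated across every call made inside the block
        print(cb.total_tokens, cb.prompt_tokens, cb.completion_tokens)
        print(cb.total_cost)  # estimated USD cost of the successful requests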
{"id": "432b4d79ee9d-0", "text": "langchain.callbacks.manager.BaseRunManager\u00b6\nclass langchain.callbacks.manager.BaseRunManager(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: RunManagerMixin\nBase class for run manager (a bound callback manager).\nInitialize the run manager.\nParameters\nrun_id (UUID) \u2013 The ID of the run.\nhandlers (List[BaseCallbackHandler]) \u2013 The list of handlers.\ninheritable_handlers (List[BaseCallbackHandler]) \u2013 The list of inheritable handlers.\nparent_run_id (UUID, optional) \u2013 The ID of the parent run.\nDefaults to None.\ntags (Optional[List[str]]) \u2013 The list of tags.\ninheritable_tags (Optional[List[str]]) \u2013 The list of inheritable tags.\nmetadata (Optional[Dict[str, Any]]) \u2013 The metadata.\ninheritable_metadata (Optional[Dict[str, Any]]) \u2013 The inheritable metadata.\nMethods\n__init__(*,\u00a0run_id,\u00a0handlers,\u00a0...[,\u00a0...])\nInitialize the run manager.\nget_noop_manager()\nReturn a manager that doesn't perform any operations.\non_text(text,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun on arbitrary text.\nclassmethod get_noop_manager() \u2192 BRM[source]\u00b6\nReturn a manager that doesn\u2019t perform any operations.\nReturns\nThe noop manager.\nReturn type\nBaseRunManager\non_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.BaseRunManager.html"} {"id": "432b4d79ee9d-1", "text": "Run on arbitrary text.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.BaseRunManager.html"} {"id": "99f68c299f8d-0", "text": "langchain.callbacks.clearml_callback.import_clearml\u00b6\nlangchain.callbacks.clearml_callback.import_clearml() \u2192 Any[source]\u00b6\nImport the clearml python package and raise an error if it is not installed.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.clearml_callback.import_clearml.html"} {"id": "fea0035d0d9d-0", "text": "langchain.callbacks.streaming_stdout.StreamingStdOutCallbackHandler\u00b6\nclass langchain.callbacks.streaming_stdout.StreamingStdOutCallbackHandler[source]\u00b6\nBases: BaseCallbackHandler\nCallback handler for streaming. Only works with LLMs that support streaming.\nMethods\n__init__()\non_agent_action(action,\u00a0**kwargs)\nRun on agent action.\non_agent_finish(finish,\u00a0**kwargs)\nRun on agent end.\non_chain_end(outputs,\u00a0**kwargs)\nRun when chain ends running.\non_chain_error(error,\u00a0**kwargs)\nRun when chain errors.\non_chain_start(serialized,\u00a0inputs,\u00a0**kwargs)\nRun when chain starts running.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0**kwargs)\nRun when LLM ends running.\non_llm_error(error,\u00a0**kwargs)\nRun when LLM errors.\non_llm_new_token(token,\u00a0**kwargs)\nRun on new LLM token.\non_llm_start(serialized,\u00a0prompts,\u00a0**kwargs)\nRun when LLM starts running.\non_retriever_end(documents,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id[,\u00a0...])\nRun when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_text(text,\u00a0**kwargs)\nRun on arbitrary text.\non_tool_end(output,\u00a0**kwargs)\nRun when tool ends running.\non_tool_error(error,\u00a0**kwargs)\nRun when tool errors.\non_tool_start(serialized,\u00a0input_str,\u00a0**kwargs)\nRun when tool starts running.\nAttributes", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_stdout.StreamingStdOutCallbackHandler.html"} {"id": "fea0035d0d9d-1", "text": "Run when tool starts running.\nAttributes\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nraise_error\nrun_inline\non_agent_action(action: AgentAction, **kwargs: Any) \u2192 Any[source]\u00b6\nRun on agent action.\non_agent_finish(finish: AgentFinish, **kwargs: Any) \u2192 None[source]\u00b6\nRun on agent end.\non_chain_end(outputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain ends running.\non_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain errors.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain starts running.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\non_llm_end(response: LLMResult, **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM ends running.\non_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM errors.\non_llm_new_token(token: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun on new LLM token. Only available when streaming is enabled.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_stdout.StreamingStdOutCallbackHandler.html"} {"id": "fea0035d0d9d-2", "text": "Run on new LLM token. 
Only available when streaming is enabled.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) \u2192 None[source]\u00b6\nRun when LLM starts running.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when Retriever starts running.\non_text(text: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun on arbitrary text.\non_tool_end(output: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool ends running.\non_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool errors.\non_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) \u2192 None[source]\u00b6\nRun when tool starts running.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_stdout.StreamingStdOutCallbackHandler.html"} {"id": "fea0035d0d9d-3", "text": "Whether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_stdout.StreamingStdOutCallbackHandler.html"}
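Example (a minimal sketch of attaching this handler so tokens are written to stdout as they are generated; assumes OPENAI_API_KEY is set and that the chosen LLM supports streaming):
.. code-block:: python

    from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
    from langchain.llms import OpenAI

    llm = OpenAI(
        streaming=True,  # the LLM must run in streaming mode
        callbacks=[StreamingStdOutCallbackHandler()],
        temperature=0,
    )
    # each new token triggers on_llm_new_token, which prints it to stdout
    llm("Write a haiku about autumn.")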
{"id": "91c07a1df55b-0", "text": "langchain.callbacks.tracers.run_collector.RunCollectorCallbackHandler\u00b6\nclass langchain.callbacks.tracers.run_collector.RunCollectorCallbackHandler(example_id: Optional[Union[UUID, str]] = None, **kwargs: Any)[source]\u00b6\nBases: BaseTracer\nA tracer that collects all nested runs in a list.\nThis tracer is useful for inspection and evaluation purposes.\nParameters\nexample_id (Optional[Union[UUID, str]], default=None) \u2013 The ID of the example being traced. It can be either a UUID or a string.\nInitialize the RunCollectorCallbackHandler.\nParameters\nexample_id (Optional[Union[UUID, str]], default=None) \u2013 The ID of the example being traced. It can be either a UUID or a string.\nMethods\n__init__([example_id])\nInitialize the RunCollectorCallbackHandler.\non_agent_action(action,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent action.\non_agent_finish(finish,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent end.\non_chain_end(outputs,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nEnd a trace for a chain run.\non_chain_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nHandle an error for a chain run.\non_chain_start(serialized,\u00a0inputs,\u00a0*,\u00a0run_id)\nStart a trace for a chain run.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nEnd a trace for an LLM run.\non_llm_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nHandle an error for an LLM run.\non_llm_new_token(token,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on new LLM token.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.run_collector.RunCollectorCallbackHandler.html"} {"id": "91c07a1df55b-1", "text": "Run on new LLM token.\non_llm_start(serialized,\u00a0prompts,\u00a0*,\u00a0run_id)\nStart a trace for an LLM run.\non_retriever_end(documents,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nRun when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.\non_text(text,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun on arbitrary text.\non_tool_end(output,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nEnd a trace for a tool run.\non_tool_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nHandle an error for a tool run.\non_tool_start(serialized,\u00a0input_str,\u00a0*,\u00a0run_id)\nStart a trace for a tool run.\nAttributes\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nname\nraise_error\nrun_inline\non_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on agent action.\non_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on agent end.\non_chain_end(outputs: Dict[str, Any], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nEnd a trace for a chain run.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.run_collector.RunCollectorCallbackHandler.html"} {"id": "91c07a1df55b-2", "text": "End a trace for a chain run.\non_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nHandle an error for a chain run.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nStart a trace for a chain run.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\non_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nEnd 
a trace for an LLM run.\non_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nHandle an error for an LLM run.\non_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 None\u00b6\nRun on new LLM token. Only available when streaming is enabled.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nStart a trace for an LLM run.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.run_collector.RunCollectorCallbackHandler.html"} {"id": "91c07a1df55b-3", "text": "Start a trace for an LLM run.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nRun when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nRun when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nRun when Retriever starts running.\non_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on arbitrary text.\non_tool_end(output: str, *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nEnd a trace for a tool run.\non_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nHandle an error for a tool run.\non_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nStart a trace for a tool run.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.run_collector.RunCollectorCallbackHandler.html"} {"id": "91c07a1df55b-4", "text": "property ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nname = 'run-collector_callback_handler'\u00b6\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6\nrun_map: Dict[str, Run]\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.run_collector.RunCollectorCallbackHandler.html"}
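Example (a minimal sketch of collecting runs for inspection with the RunCollectorCallbackHandler above; the collected Run objects are assumed here to be exposed on the handler's traced_runs attribute, and OPENAI_API_KEY is assumed to be set):
.. code-block:: python

    from langchain.callbacks.tracers.run_collector import RunCollectorCallbackHandler
    from langchain.llms import OpenAI

    collector = RunCollectorCallbackHandler()
    llm = OpenAI(temperature=0)
    llm("What is 2 + 2?", callbacks=[collector])
    # every run traced during the call is now available for inspection
    for run in collector.traced_runs:  # traced_runs is an assumption of this sketch
        print(run.name, run.run_type)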
"langchain.callbacks.manager.CallbackManagerForChainRun\u00b6\nclass langchain.callbacks.manager.CallbackManagerForChainRun(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: ParentRunManager, ChainManagerMixin\nCallback manager for chain run.\nInitialize the run manager.\nParameters\nrun_id (UUID) \u2013 The ID of the run.\nhandlers (List[BaseCallbackHandler]) \u2013 The list of handlers.\ninheritable_handlers (List[BaseCallbackHandler]) \u2013 The list of inheritable handlers.\nparent_run_id (UUID, optional) \u2013 The ID of the parent run.\nDefaults to None.\ntags (Optional[List[str]]) \u2013 The list of tags.\ninheritable_tags (Optional[List[str]]) \u2013 The list of inheritable tags.\nmetadata (Optional[Dict[str, Any]]) \u2013 The metadata.\ninheritable_metadata (Optional[Dict[str, Any]]) \u2013 The inheritable metadata.\nMethods\n__init__(*,\u00a0run_id,\u00a0handlers,\u00a0...[,\u00a0...])\nInitialize the run manager.\nget_child([tag])\nGet a child callback manager.\nget_noop_manager()\nReturn a manager that doesn't perform any operations.\non_agent_action(action,\u00a0**kwargs)\nRun when agent action is received.\non_agent_finish(finish,\u00a0**kwargs)\nRun when agent finish is received.\non_chain_end(outputs,\u00a0**kwargs)\nRun when chain ends running.\non_chain_error(error,\u00a0**kwargs)\nRun when chain errors.\non_text(text,\u00a0**kwargs)", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.CallbackManagerForChainRun.html"} {"id": "c98578b0196c-1", "text": "Run when chain errors.\non_text(text,\u00a0**kwargs)\nRun when text is received.\nget_child(tag: Optional[str] = None) \u2192 CallbackManager\u00b6\nGet a child callback manager.\nParameters\ntag (str, optional) \u2013 The tag for the child callback manager.\nDefaults to None.\nReturns\nThe child callback manager.\nReturn type\nCallbackManager\nclassmethod get_noop_manager() \u2192 BRM\u00b6\nReturn a manager that doesn\u2019t perform any operations.\nReturns\nThe noop manager.\nReturn type\nBaseRunManager\non_agent_action(action: AgentAction, **kwargs: Any) \u2192 Any[source]\u00b6\nRun when agent action is received.\nParameters\naction (AgentAction) \u2013 The agent action.\nReturns\nThe result of the callback.\nReturn type\nAny\non_agent_finish(finish: AgentFinish, **kwargs: Any) \u2192 Any[source]\u00b6\nRun when agent finish is received.\nParameters\nfinish (AgentFinish) \u2013 The agent finish.\nReturns\nThe result of the callback.\nReturn type\nAny\non_chain_end(outputs: Dict[str, Any], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain ends running.\nParameters\noutputs (Dict[str, Any]) \u2013 The outputs of the chain.\non_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) \u2192 None[source]\u00b6\nRun when chain errors.\nParameters\nerror (Exception or KeyboardInterrupt) \u2013 The error.\non_text(text: str, **kwargs: Any) \u2192 Any\u00b6\nRun when text is received.\nParameters\ntext (str) \u2013 The received text.\nReturns\nThe result of the callback.\nReturn type\nAny", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.CallbackManagerForChainRun.html"} {"id": "d967b60e665a-0", "text": 
"langchain.callbacks.tracers.langchain.log_error_once\u00b6\nlangchain.callbacks.tracers.langchain.log_error_once(method: str, exception: Exception) \u2192 None[source]\u00b6\nLog an error once.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.langchain.log_error_once.html"} {"id": "e17f253ab591-0", "text": "langchain.callbacks.streamlit.streamlit_callback_handler.LLMThoughtState\u00b6\nclass langchain.callbacks.streamlit.streamlit_callback_handler.LLMThoughtState(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]\u00b6\nBases: Enum\nEnumerator of the LLMThought state.\nAttributes\nTHINKING\nRUNNING_TOOL\nCOMPLETE\nCOMPLETE = 'COMPLETE'\u00b6\nRUNNING_TOOL = 'RUNNING_TOOL'\u00b6\nTHINKING = 'THINKING'\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streamlit.streamlit_callback_handler.LLMThoughtState.html"} {"id": "cba63a723c35-0", "text": "langchain.callbacks.manager.env_var_is_set\u00b6\nlangchain.callbacks.manager.env_var_is_set(env_var: str) \u2192 bool[source]\u00b6\nCheck if an environment variable is set.\nParameters\nenv_var (str) \u2013 The name of the environment variable.\nReturns\nTrue if the environment variable is set, False otherwise.\nReturn type\nbool", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.env_var_is_set.html"} {"id": "9ad438b40760-0", "text": "langchain.callbacks.streamlit.streamlit_callback_handler.ToolRecord\u00b6\nclass langchain.callbacks.streamlit.streamlit_callback_handler.ToolRecord(name: str, input_str: str)[source]\u00b6\nBases: NamedTuple\nThe tool record as a NamedTuple.\nCreate new instance of ToolRecord(name, input_str)\nMethods\n__init__()\ncount(value,\u00a0/)\nReturn number of occurrences of value.\nindex(value[,\u00a0start,\u00a0stop])\nReturn first index of value.\nAttributes\ninput_str\nAlias for field number 1\nname\nAlias for field number 0\ncount(value, /)\u00b6\nReturn number of occurrences of value.\nindex(value, start=0, stop=9223372036854775807, /)\u00b6\nReturn first index of value.\nRaises ValueError if the value is not present.\ninput_str: str\u00b6\nAlias for field number 1\nname: str\u00b6\nAlias for field number 0", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streamlit.streamlit_callback_handler.ToolRecord.html"} {"id": "cccb3825d3ea-0", "text": "langchain.callbacks.manager.RunManager\u00b6\nclass langchain.callbacks.manager.RunManager(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: BaseRunManager\nSync Run Manager.\nInitialize the run manager.\nParameters\nrun_id (UUID) \u2013 The ID of the run.\nhandlers (List[BaseCallbackHandler]) \u2013 The list of handlers.\ninheritable_handlers (List[BaseCallbackHandler]) \u2013 The list of inheritable handlers.\nparent_run_id (UUID, optional) \u2013 The ID of the parent run.\nDefaults to None.\ntags (Optional[List[str]]) \u2013 The list of tags.\ninheritable_tags (Optional[List[str]]) \u2013 The list of inheritable tags.\nmetadata (Optional[Dict[str, Any]]) \u2013 The metadata.\ninheritable_metadata (Optional[Dict[str, Any]]) \u2013 The inheritable 
metadata.\nMethods\n__init__(*,\u00a0run_id,\u00a0handlers,\u00a0...[,\u00a0...])\nInitialize the run manager.\nget_noop_manager()\nReturn a manager that doesn't perform any operations.\non_text(text,\u00a0**kwargs)\nRun when text is received.\nclassmethod get_noop_manager() \u2192 BRM\u00b6\nReturn a manager that doesn\u2019t perform any operations.\nReturns\nThe noop manager.\nReturn type\nBaseRunManager\non_text(text: str, **kwargs: Any) \u2192 Any[source]\u00b6\nRun when text is received.\nParameters\ntext (str) \u2013 The received text.\nReturns\nThe result of the callback.\nReturn type\nAny", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.RunManager.html"} {"id": "f69a323a8218-0", "text": "langchain.callbacks.tracers.stdout.ConsoleCallbackHandler\u00b6\nclass langchain.callbacks.tracers.stdout.ConsoleCallbackHandler(**kwargs: Any)[source]\u00b6\nBases: BaseTracer\nTracer that prints to the console.\nMethods\n__init__(**kwargs)\nget_breadcrumbs(run)\nget_parents(run)\non_agent_action(action,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent action.\non_agent_finish(finish,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on agent end.\non_chain_end(outputs,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nEnd a trace for a chain run.\non_chain_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nHandle an error for a chain run.\non_chain_start(serialized,\u00a0inputs,\u00a0*,\u00a0run_id)\nStart a trace for a chain run.\non_chat_model_start(serialized,\u00a0messages,\u00a0*,\u00a0...)\nRun when a chat model starts running.\non_llm_end(response,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nEnd a trace for an LLM run.\non_llm_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nHandle an error for an LLM run.\non_llm_new_token(token,\u00a0*,\u00a0run_id[,\u00a0...])\nRun on new LLM token.\non_llm_start(serialized,\u00a0prompts,\u00a0*,\u00a0run_id)\nStart a trace for an LLM run.\non_retriever_end(documents,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nRun when Retriever ends running.\non_retriever_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nRun when Retriever errors.\non_retriever_start(serialized,\u00a0query,\u00a0*,\u00a0run_id)\nRun when Retriever starts running.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.stdout.ConsoleCallbackHandler.html"} {"id": "f69a323a8218-1", "text": "Run when Retriever starts running.\non_text(text,\u00a0*,\u00a0run_id[,\u00a0parent_run_id])\nRun on arbitrary text.\non_tool_end(output,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nEnd a trace for a tool run.\non_tool_error(error,\u00a0*,\u00a0run_id,\u00a0**kwargs)\nHandle an error for a tool run.\non_tool_start(serialized,\u00a0input_str,\u00a0*,\u00a0run_id)\nStart a trace for a tool run.\nAttributes\nignore_agent\nWhether to ignore agent callbacks.\nignore_chain\nWhether to ignore chain callbacks.\nignore_chat_model\nWhether to ignore chat model callbacks.\nignore_llm\nWhether to ignore LLM callbacks.\nignore_retriever\nWhether to ignore retriever callbacks.\nname\nraise_error\nrun_inline\nget_breadcrumbs(run: Run) \u2192 str[source]\u00b6\nget_parents(run: Run) \u2192 List[Run][source]\u00b6\non_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on agent action.\non_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on agent end.\non_chain_end(outputs: Dict[str, Any], *, run_id: UUID, **kwargs: Any) \u2192 
None\u00b6\nEnd a trace for a chain run.\non_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nHandle an error for a chain run.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.stdout.ConsoleCallbackHandler.html"} {"id": "f69a323a8218-2", "text": "Handle an error for a chain run.\non_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nStart a trace for a chain run.\non_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun when a chat model starts running.\non_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nEnd a trace for an LLM run.\non_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nHandle an error for an LLM run.\non_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 None\u00b6\nRun on new LLM token. Only available when streaming is enabled.\non_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nStart a trace for an LLM run.\non_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nRun when Retriever ends running.", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.stdout.ConsoleCallbackHandler.html"} {"id": "f69a323a8218-3", "text": "Run when Retriever ends running.\non_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nRun when Retriever errors.\non_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nRun when Retriever starts running.\non_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) \u2192 Any\u00b6\nRun on arbitrary text.\non_tool_end(output: str, *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nEnd a trace for a tool run.\non_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) \u2192 None\u00b6\nHandle an error for a tool run.\non_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 None\u00b6\nStart a trace for a tool run.\nproperty ignore_agent: bool\u00b6\nWhether to ignore agent callbacks.\nproperty ignore_chain: bool\u00b6\nWhether to ignore chain callbacks.\nproperty ignore_chat_model: bool\u00b6\nWhether to ignore chat model callbacks.\nproperty ignore_llm: bool\u00b6\nWhether to ignore LLM callbacks.\nproperty ignore_retriever: bool\u00b6\nWhether to ignore retriever callbacks.\nname = 'console_callback_handler'\u00b6\nraise_error: bool = False\u00b6\nrun_inline: bool = False\u00b6\nrun_map: Dict[str, Run]\u00b6", "source": 
"https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.stdout.ConsoleCallbackHandler.html"} {"id": "70193e3bfda8-0", "text": "langchain.callbacks.tracers.schemas.LLMRun\u00b6\nclass langchain.callbacks.tracers.schemas.LLMRun(*, uuid: str, parent_uuid: Optional[str] = None, start_time: datetime = None, end_time: datetime = None, extra: Optional[Dict[str, Any]] = None, execution_order: int, child_execution_order: int, serialized: Dict[str, Any], session_id: int, error: Optional[str] = None, prompts: List[str], response: Optional[LLMResult] = None)[source]\u00b6\nBases: BaseRun\nClass for LLMRun.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam child_execution_order: int [Required]\u00b6\nparam end_time: datetime.datetime [Optional]\u00b6\nparam error: Optional[str] = None\u00b6\nparam execution_order: int [Required]\u00b6\nparam extra: Optional[Dict[str, Any]] = None\u00b6\nparam parent_uuid: Optional[str] = None\u00b6\nparam prompts: List[str] [Required]\u00b6\nparam response: Optional[langchain.schema.output.LLMResult] = None\u00b6\nparam serialized: Dict[str, Any] [Required]\u00b6\nparam session_id: int [Required]\u00b6\nparam start_time: datetime.datetime [Optional]\u00b6\nparam uuid: str [Required]\u00b6", "source": "https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.schemas.LLMRun.html"} {"id": "b5f218eee9b0-0", "text": "langchain.prompts.chat.ChatPromptValue\u00b6\nclass langchain.prompts.chat.ChatPromptValue(*, messages: List[BaseMessage])[source]\u00b6\nBases: PromptValue\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam messages: List[langchain.schema.messages.BaseMessage] [Required]\u00b6\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nto_messages() \u2192 List[BaseMessage][source]\u00b6\nReturn prompt as messages.\nto_string() \u2192 str[source]\u00b6\nReturn prompt as string.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}property lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatPromptValue.html"}
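Example (a minimal sketch of how a ChatPromptValue is produced and consumed, built here with ChatPromptTemplate and HumanMessagePromptTemplate from the same langchain.prompts.chat module; the template text is illustrative only):
.. code-block:: python

    from langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate

    prompt = ChatPromptTemplate.from_messages(
        [HumanMessagePromptTemplate.from_template("Summarize in one line: {text}")]
    )
    value = prompt.format_prompt(text="LangChain ships prompt templates for chat models.")
    # a ChatPromptValue can be rendered either as messages or as a single string
    print(value.to_messages())
    print(value.to_string())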
{"id": "a2de0b09e231-0", "text": "langchain.prompts.chat.HumanMessagePromptTemplate\u00b6\nclass langchain.prompts.chat.HumanMessagePromptTemplate(*, prompt: StringPromptTemplate, additional_kwargs: dict = None)[source]\u00b6\nBases: BaseStringMessagePromptTemplate\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam additional_kwargs: dict [Optional]\u00b6\nparam prompt: langchain.prompts.base.StringPromptTemplate [Required]\u00b6\nformat(**kwargs: Any) \u2192 BaseMessage[source]\u00b6\nTo a BaseMessage.\nformat_messages(**kwargs: Any) \u2192 List[BaseMessage]\u00b6\nTo messages.\nclassmethod from_template(template: str, template_format: str = 'f-string', **kwargs: Any) \u2192 MessagePromptTemplateT\u00b6\nclassmethod from_template_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) \u2192 MessagePromptTemplateT\u00b6\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty input_variables: List[str]\u00b6\nInput variables for this prompt template.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}property lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.HumanMessagePromptTemplate.html"} {"id": "daa702be2630-0", "text": "langchain.prompts.base.check_valid_template\u00b6\nlangchain.prompts.base.check_valid_template(template: str, template_format: str, input_variables: List[str]) \u2192 None[source]\u00b6\nCheck that template string is valid.", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.base.check_valid_template.html"} {"id": "abb3feb8f18c-0", "text": "langchain.prompts.chat.BaseChatPromptTemplate\u00b6\nclass langchain.prompts.chat.BaseChatPromptTemplate(*, input_variables: List[str], output_parser: Optional[BaseOutputParser] = None, partial_variables: Mapping[str, Union[str, Callable[[], str]]] = None)[source]\u00b6\nBases: BasePromptTemplate, ABC\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam input_variables: List[str] [Required]\u00b6\nA list of the names of the variables the prompt template expects.\nparam output_parser: Optional[BaseOutputParser] = None\u00b6\nHow to parse the output of calling an LLM on this formatted prompt.\nparam partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of prompt.\nformat(**kwargs: Any) \u2192 str[source]\u00b6\nFormat the prompt with the inputs.\nParameters\nkwargs \u2013 Any arguments to be passed to the prompt template.\nReturns\nA formatted string.\nExample:\nprompt.format(variable1="foo")\nabstract format_messages(**kwargs: Any) \u2192 List[BaseMessage][source]\u00b6\nFormat kwargs into a list of messages.\nformat_prompt(**kwargs: Any) \u2192 PromptValue[source]\u00b6\nCreate Chat Messages.\npartial(**kwargs: Union[str, Callable[[], str]]) \u2192 BasePromptTemplate\u00b6\nReturn a partial of the prompt template.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the prompt.\nParameters\nfile_path \u2013 Path to directory to save prompt to.\nExample:\n.. code-block:: python\nprompt.save(file_path=\u201dpath/prompt.yaml\u201d)\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseChatPromptTemplate.html"} {"id": "abb3feb8f18c-1", "text": "to_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_variable_names\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate variable names do not include restricted names.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}property lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseChatPromptTemplate.html"} {"id": "6cf0daf5a05f-0", "text": "langchain.prompts.example_selector.length_based.LengthBasedExampleSelector\u00b6\nclass langchain.prompts.example_selector.length_based.LengthBasedExampleSelector(*, examples: ~typing.List[dict], example_prompt: ~langchain.prompts.prompt.PromptTemplate, get_text_length: ~typing.Callable[[str], int] = , max_length: int = 2048, example_text_lengths: ~typing.List[int] = [])[source]\u00b6\nBases: BaseExampleSelector, BaseModel\nSelect examples based on length.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam example_prompt: langchain.prompts.prompt.PromptTemplate [Required]\u00b6\nPrompt template used to format the examples.\nparam examples: List[dict] [Required]\u00b6\nA list of the examples that the prompt template expects.\nparam get_text_length: Callable[[str], int] = \u00b6\nFunction to measure prompt length. Defaults to word count.\nparam max_length: int = 2048\u00b6\nMax length for the prompt, beyond which examples are cut.\nadd_example(example: Dict[str, str]) \u2192 None[source]\u00b6\nAdd new example to list.\nvalidator calculate_example_text_lengths\u00a0 \u00bb\u00a0 example_text_lengths[source]\u00b6\nCalculate text lengths if they don\u2019t exist.\nselect_examples(input_variables: Dict[str, str]) \u2192 List[dict][source]\u00b6\nSelect which examples to use based on the input lengths.", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.length_based.LengthBasedExampleSelector.html"}
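Example (a minimal sketch of length-based example selection; the example data and variable names are illustrative only):
.. code-block:: python

    from langchain.prompts import PromptTemplate
    from langchain.prompts.example_selector import LengthBasedExampleSelector

    examples = [
        {"input": "happy", "output": "sad"},
        {"input": "tall", "output": "short"},
        {"input": "energetic", "output": "lethargic"},
    ]
    example_prompt = PromptTemplate(
        input_variables=["input", "output"],
        template="Input: {input}\nOutput: {output}",
    )
    selector = LengthBasedExampleSelector(
        examples=examples,
        example_prompt=example_prompt,
        max_length=25,  # measured with get_text_length, word count by default
    )
    # a longer input leaves less room, so fewer examples are returned
    print(selector.select_examples({"input": "big"}))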
{"id": "0074066c352a-0", "text": "langchain.prompts.loading.load_prompt\u00b6\nlangchain.prompts.loading.load_prompt(path: Union[str, Path]) \u2192 BasePromptTemplate[source]\u00b6\nUnified method for loading a prompt from LangChainHub or local fs.", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.loading.load_prompt.html"} {"id": "19e882992dc4-0", "text": "langchain.prompts.chat.BaseMessagePromptTemplate\u00b6\nclass langchain.prompts.chat.BaseMessagePromptTemplate[source]\u00b6\nBases: Serializable, ABC\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nabstract format_messages(**kwargs: Any) \u2192 List[BaseMessage][source]\u00b6\nTo messages.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nabstract property input_variables: List[str]\u00b6\nInput variables for this prompt template.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}property lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseMessagePromptTemplate.html"} {"id": "00fd7b7951f2-0", "text": "langchain.prompts.chat.BaseStringMessagePromptTemplate\u00b6\nclass langchain.prompts.chat.BaseStringMessagePromptTemplate(*, prompt: StringPromptTemplate, additional_kwargs: dict = None)[source]\u00b6\nBases: BaseMessagePromptTemplate, ABC\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam additional_kwargs: dict [Optional]\u00b6\nparam prompt: langchain.prompts.base.StringPromptTemplate [Required]\u00b6\nabstract format(**kwargs: Any) \u2192 BaseMessage[source]\u00b6\nTo a BaseMessage.\nformat_messages(**kwargs: Any) \u2192 List[BaseMessage][source]\u00b6\nTo messages.\nclassmethod from_template(template: str, template_format: str = 'f-string', **kwargs: Any) \u2192 MessagePromptTemplateT[source]\u00b6\nclassmethod from_template_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) \u2192 MessagePromptTemplateT[source]\u00b6\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty input_variables: List[str]\u00b6\nInput variables for this prompt template.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseStringMessagePromptTemplate.html"} {"id": "00fd7b7951f2-1", "text": "Return whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseStringMessagePromptTemplate.html"} {"id": "44ef1579da0a-0", "text": "langchain.prompts.chat.AIMessagePromptTemplate\u00b6\nclass langchain.prompts.chat.AIMessagePromptTemplate(*, prompt: StringPromptTemplate, additional_kwargs: dict = None)[source]\u00b6\nBases: BaseStringMessagePromptTemplate\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam additional_kwargs: dict [Optional]\u00b6\nparam prompt: langchain.prompts.base.StringPromptTemplate [Required]\u00b6\nformat(**kwargs: Any) \u2192 BaseMessage[source]\u00b6\nTo a BaseMessage.\nformat_messages(**kwargs: Any) \u2192 List[BaseMessage]\u00b6\nTo messages.\nclassmethod from_template(template: str, template_format: str = 'f-string', **kwargs: Any) \u2192 MessagePromptTemplateT\u00b6\nclassmethod from_template_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) \u2192 MessagePromptTemplateT\u00b6\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty input_variables: List[str]\u00b6\nInput variables for this prompt template.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.AIMessagePromptTemplate.html"} {"id": "88c642911995-0", "text": "langchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector\u00b6\nclass langchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector(*, vectorstore: VectorStore, k: int = 4, example_keys: Optional[List[str]] = None, input_keys: Optional[List[str]] = None, fetch_k: int = 20)[source]\u00b6\nBases: SemanticSimilarityExampleSelector\nExampleSelector that selects examples based on Max Marginal Relevance.\nThis was shown to improve performance in this paper:\nhttps://arxiv.org/pdf/2211.13892.pdf\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam example_keys: Optional[List[str]] = None\u00b6\nOptional keys to filter examples to.\nparam fetch_k: int = 20\u00b6\nNumber of examples to fetch to rerank.\nparam input_keys: Optional[List[str]] = None\u00b6\nOptional keys to filter input to. 
If provided, the search is based on\nthe input variables instead of all variables.\nparam k: int = 4\u00b6\nNumber of examples to select.\nparam vectorstore: langchain.vectorstores.base.VectorStore [Required]\u00b6\nVectorStore that contains information about examples.\nadd_example(example: Dict[str, str]) \u2192 str\u00b6\nAdd new example to vectorstore.\nclassmethod from_examples(examples: List[dict], embeddings: Embeddings, vectorstore_cls: Type[VectorStore], k: int = 4, input_keys: Optional[List[str]] = None, fetch_k: int = 20, **vectorstore_cls_kwargs: Any) \u2192 MaxMarginalRelevanceExampleSelector[source]\u00b6\nCreate k-shot example selector using example list and embeddings.\nReshuffles examples dynamically based on query similarity.\nParameters", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector.html"} {"id": "88c642911995-1", "text": "Reshuffles examples dynamically based on query similarity.\nParameters\nexamples \u2013 List of examples to use in the prompt.\nembeddings \u2013 An initialized embedding API interface, e.g. OpenAIEmbeddings().\nvectorstore_cls \u2013 A vector store DB interface class, e.g. FAISS.\nk \u2013 Number of examples to select\ninput_keys \u2013 If provided, the search is based on the input variables\ninstead of all variables.\nvectorstore_cls_kwargs \u2013 optional kwargs containing url for vector store\nReturns\nThe ExampleSelector instantiated, backed by a vector store.\nselect_examples(input_variables: Dict[str, str]) \u2192 List[dict][source]\u00b6\nSelect which examples to use based on semantic similarity.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector.html"} {"id": "fc3ece4e8e7e-0", "text": "langchain.prompts.few_shot_with_templates.FewShotPromptWithTemplates\u00b6\nclass langchain.prompts.few_shot_with_templates.FewShotPromptWithTemplates(*, input_variables: List[str], output_parser: Optional[BaseOutputParser] = None, partial_variables: Mapping[str, Union[str, Callable[[], str]]] = None, examples: Optional[List[dict]] = None, example_selector: Optional[BaseExampleSelector] = None, example_prompt: PromptTemplate, suffix: StringPromptTemplate, example_separator: str = '\\n\\n', prefix: Optional[StringPromptTemplate] = None, template_format: str = 'f-string', validate_template: bool = True)[source]\u00b6\nBases: StringPromptTemplate\nPrompt template that contains few shot examples.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam example_prompt: langchain.prompts.prompt.PromptTemplate [Required]\u00b6\nPromptTemplate used to format an individual example.\nparam example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = None\u00b6\nExampleSelector to choose the examples to format into the prompt.\nEither this or examples should be provided.\nparam example_separator: str = '\\n\\n'\u00b6\nString separator used to join the prefix, the examples, and suffix.\nparam examples: Optional[List[dict]] = None\u00b6\nExamples to format into the prompt.\nEither this or example_selector should be provided.\nparam input_variables: List[str] [Required]\u00b6\nA list of the 
names of the variables the prompt template expects.\nparam output_parser: Optional[BaseOutputParser] = None\u00b6\nHow to parse the output of calling an LLM on this formatted prompt.\nparam partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]\u00b6", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot_with_templates.FewShotPromptWithTemplates.html"} {"id": "fc3ece4e8e7e-1", "text": "param prefix: Optional[langchain.prompts.base.StringPromptTemplate] = None\u00b6\nA PromptTemplate to put before the examples.\nparam suffix: langchain.prompts.base.StringPromptTemplate [Required]\u00b6\nA PromptTemplate to put after the examples.\nparam template_format: str = 'f-string'\u00b6\nThe format of the prompt template. Options are: \u2018f-string\u2019, \u2018jinja2\u2019.\nparam validate_template: bool = True\u00b6\nWhether or not to try validating the template.\nvalidator check_examples_and_selector\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nCheck that one and only one of examples/example_selector are provided.\ndict(**kwargs: Any) \u2192 Dict[source]\u00b6\nReturn a dictionary of the prompt.\nformat(**kwargs: Any) \u2192 str[source]\u00b6\nFormat the prompt with the inputs.\nParameters\nkwargs \u2013 Any arguments to be passed to the prompt template.\nReturns\nA formatted string.\nExample:\nprompt.format(variable1=\"foo\")\nformat_prompt(**kwargs: Any) \u2192 PromptValue\u00b6\nCreate Chat Messages.\npartial(**kwargs: Union[str, Callable[[], str]]) \u2192 BasePromptTemplate\u00b6\nReturn a partial of the prompt template.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the prompt.\nParameters\nfile_path \u2013 Path to directory to save prompt to.\nExample:\n.. code-block:: python\nprompt.save(file_path=\u201dpath/prompt.yaml\u201d)\nvalidator template_is_valid\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nCheck that prefix, suffix and input variables are consistent.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_variable_names\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate variable names do not include restricted names.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot_with_templates.FewShotPromptWithTemplates.html"} {"id": "fc3ece4e8e7e-2", "text": "property lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}property lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot_with_templates.FewShotPromptWithTemplates.html"} {"id": "18d41eba8401-0", "text": "langchain.prompts.base.validate_jinja2\u00b6\nlangchain.prompts.base.validate_jinja2(template: str, input_variables: List[str]) \u2192 None[source]\u00b6\nValidate that the input variables are valid for the template.\nIssues a warning if missing or extra variables are found.\nParameters\ntemplate \u2013 The template string.\ninput_variables \u2013 The input variables.", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.base.validate_jinja2.html"} {"id": "dc4d666580ef-0", "text": "langchain.prompts.example_selector.ngram_overlap.ngram_overlap_score\u00b6\nlangchain.prompts.example_selector.ngram_overlap.ngram_overlap_score(source: List[str], example: List[str]) \u2192 float[source]\u00b6\nCompute ngram overlap score of source and example as sentence_bleu score.\nUse sentence_bleu with method1 smoothing function and auto reweighting.\nReturn float value between 0.0 and 1.0 inclusive.\nhttps://www.nltk.org/_modules/nltk/translate/bleu_score.html\nhttps://aclanthology.org/P02-1040.pdf", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.ngram_overlap.ngram_overlap_score.html"} {"id": "2e784b2c8daa-0", "text": "langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector\u00b6\nclass langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector(*, vectorstore: VectorStore, k: int = 4, example_keys: Optional[List[str]] = None, input_keys: Optional[List[str]] = None)[source]\u00b6\nBases: BaseExampleSelector, BaseModel\nExample selector that selects examples based on SemanticSimilarity.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam example_keys: Optional[List[str]] = None\u00b6\nOptional keys to filter examples to.\nparam input_keys: Optional[List[str]] = None\u00b6\nOptional keys to filter input to. If provided, the search is based on\nthe input variables instead of all variables.\nparam k: int = 4\u00b6\nNumber of examples to select.\nparam vectorstore: langchain.vectorstores.base.VectorStore [Required]\u00b6\nVectorStore that contains information about examples.\nadd_example(example: Dict[str, str]) \u2192 str[source]\u00b6\nAdd new example to vectorstore.\nclassmethod from_examples(examples: List[dict], embeddings: Embeddings, vectorstore_cls: Type[VectorStore], k: int = 4, input_keys: Optional[List[str]] = None, **vectorstore_cls_kwargs: Any) \u2192 SemanticSimilarityExampleSelector[source]\u00b6\nCreate k-shot example selector using example list and embeddings.\nReshuffles examples dynamically based on query similarity.\nParameters\nexamples \u2013 List of examples to use in the prompt.\nembeddings \u2013 An initialized embedding API interface, e.g. OpenAIEmbeddings().\nvectorstore_cls \u2013 A vector store DB interface class, e.g. 
{"id": "2e784b2c8daa-1", "text": "input_keys \u2013 If provided, the search is based on the input variables\ninstead of all variables.\nvectorstore_cls_kwargs \u2013 optional kwargs containing url for vector store\nReturns\nThe ExampleSelector instantiated, backed by a vector store.\nselect_examples(input_variables: Dict[str, str]) \u2192 List[dict][source]\u00b6\nSelect which examples to use based on semantic similarity.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector.html"} {"id": "7d8f461ab871-0", "text": "langchain.prompts.chat.SystemMessagePromptTemplate\u00b6\nclass langchain.prompts.chat.SystemMessagePromptTemplate(*, prompt: StringPromptTemplate, additional_kwargs: dict = None)[source]\u00b6\nBases: BaseStringMessagePromptTemplate\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam additional_kwargs: dict [Optional]\u00b6\nparam prompt: langchain.prompts.base.StringPromptTemplate [Required]\u00b6\nformat(**kwargs: Any) \u2192 BaseMessage[source]\u00b6\nFormat the prompt template into a BaseMessage.\nformat_messages(**kwargs: Any) \u2192 List[BaseMessage]\u00b6\nFormat the prompt template into a list of messages.\nclassmethod from_template(template: str, template_format: str = 'f-string', **kwargs: Any) \u2192 MessagePromptTemplateT\u00b6\nclassmethod from_template_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) \u2192 MessagePromptTemplateT\u00b6\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty input_variables: List[str]\u00b6\nInput variables for this prompt template.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.SystemMessagePromptTemplate.html"} {"id": "b87ab95ca9b3-0", "text": "langchain.prompts.chat.ChatMessagePromptTemplate\u00b6\nclass langchain.prompts.chat.ChatMessagePromptTemplate(*, prompt: StringPromptTemplate, additional_kwargs: dict = None, role: str)[source]\u00b6\nBases: BaseStringMessagePromptTemplate\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam additional_kwargs: dict [Optional]\u00b6\nparam prompt: langchain.prompts.base.StringPromptTemplate [Required]\u00b6\nparam role: str [Required]\u00b6\nformat(**kwargs: Any) \u2192 BaseMessage[source]\u00b6\nFormat the prompt template into a BaseMessage.\nformat_messages(**kwargs: Any) \u2192 List[BaseMessage]\u00b6\nFormat the prompt template into a list of messages.\nclassmethod from_template(template: str, template_format: str = 'f-string', **kwargs: Any) \u2192 MessagePromptTemplateT\u00b6\nclassmethod from_template_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) \u2192 MessagePromptTemplateT\u00b6\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty input_variables: List[str]\u00b6\nInput variables for this prompt template.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatMessagePromptTemplate.html"} {"id": "cbe5d582df7a-0", "text": "langchain.prompts.base.StringPromptValue\u00b6\nclass langchain.prompts.base.StringPromptValue(*, text: str)[source]\u00b6\nBases: PromptValue\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam text: str [Required]\u00b6\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nto_messages() \u2192 List[BaseMessage][source]\u00b6\nReturn prompt as messages.\nto_string() \u2192 str[source]\u00b6\nReturn prompt as string.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. 
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.base.StringPromptValue.html"} {"id": "3456f33da278-0", "text": "langchain.prompts.prompt.PromptTemplate\u00b6\nclass langchain.prompts.prompt.PromptTemplate(*, input_variables: List[str], output_parser: Optional[BaseOutputParser] = None, partial_variables: Mapping[str, Union[str, Callable[[], str]]] = None, template: str, template_format: str = 'f-string', validate_template: bool = True)[source]\u00b6\nBases: StringPromptTemplate\nSchema to represent a prompt for an LLM.\nExample\nfrom langchain import PromptTemplate\nprompt = PromptTemplate(input_variables=[\"foo\"], template=\"Say {foo}\")\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam input_variables: List[str] [Required]\u00b6\nA list of the names of the variables the prompt template expects.\nparam output_parser: Optional[BaseOutputParser] = None\u00b6\nHow to parse the output of calling an LLM on this formatted prompt.\nparam partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]\u00b6\nparam template: str [Required]\u00b6\nThe prompt template.\nparam template_format: str = 'f-string'\u00b6\nThe format of the prompt template. Options are: \u2018f-string\u2019, \u2018jinja2\u2019.\nparam validate_template: bool = True\u00b6\nWhether or not to try validating the template.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of prompt.\nformat(**kwargs: Any) \u2192 str[source]\u00b6\nFormat the prompt with the inputs.\nParameters\nkwargs \u2013 Any arguments to be passed to the prompt template.\nReturns\nA formatted string.\nExample:\nprompt.format(variable1=\"foo\")\nformat_prompt(**kwargs: Any) \u2192 PromptValue\u00b6\nCreate Chat Messages.", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.prompt.PromptTemplate.html"}
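To make the two template_format options concrete, here is the same prompt written in each syntax (the template strings are illustrative):

.. code-block:: python

    from langchain.prompts import PromptTemplate

    f_string_prompt = PromptTemplate(
        input_variables=["product"],
        template="Suggest a name for a company that makes {product}.",
    )
    jinja2_prompt = PromptTemplate(
        input_variables=["product"],
        template="Suggest a name for a company that makes {{ product }}.",
        template_format="jinja2",  # requires the jinja2 package
    )
    assert f_string_prompt.format(product="socks") == jinja2_prompt.format(product="socks")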
{"id": "3456f33da278-1", "text": "classmethod from_examples(examples: List[str], suffix: str, input_variables: List[str], example_separator: str = '\\n\\n', prefix: str = '', **kwargs: Any) \u2192 PromptTemplate[source]\u00b6\nTake examples in list format with prefix and suffix to create a prompt.\nIntended to be used as a way to dynamically create a prompt from examples.\nParameters\nexamples \u2013 List of examples to use in the prompt.\nsuffix \u2013 String to go after the list of examples. Should generally\nset up the user\u2019s input.\ninput_variables \u2013 A list of variable names the final prompt template\nwill expect.\nexample_separator \u2013 The separator to use in between examples. Defaults\nto two newline characters.\nprefix \u2013 String that should go before any examples. Generally includes\ninstructions. Defaults to an empty string.\nReturns\nThe final prompt generated.\nclassmethod from_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) \u2192 PromptTemplate[source]\u00b6\nLoad a prompt from a file.\nParameters\ntemplate_file \u2013 The path to the file containing the prompt template.\ninput_variables \u2013 A list of variable names the final prompt template\nwill expect.\nReturns\nThe prompt loaded from the file.\nclassmethod from_template(template: str, **kwargs: Any) \u2192 PromptTemplate[source]\u00b6\nLoad a prompt template from a template string.\npartial(**kwargs: Union[str, Callable[[], str]]) \u2192 BasePromptTemplate\u00b6\nReturn a partial of the prompt template.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the prompt.\nParameters\nfile_path \u2013 Path to directory to save prompt to.\nExample:\n.. code-block:: python\nprompt.save(file_path=\"path/prompt.yaml\")\nvalidator template_is_valid\u00a0 \u00bb\u00a0 all fields[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.prompt.PromptTemplate.html"} {"id": "3456f33da278-2", "text": "Check that template and input variables are consistent.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_variable_names\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate variable names do not include restricted names.\nproperty lc_attributes: Dict[str, Any]\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.prompt.PromptTemplate.html"} {"id": "a3ee4c056f2c-0", "text": "langchain.prompts.example_selector.semantic_similarity.sorted_values\u00b6\nlangchain.prompts.example_selector.semantic_similarity.sorted_values(values: Dict[str, str]) \u2192 List[Any][source]\u00b6\nReturn a list of values in dict sorted by key.", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.semantic_similarity.sorted_values.html"} {"id": "adf4fb53d6c6-0", "text": "langchain.prompts.example_selector.base.BaseExampleSelector\u00b6\nclass langchain.prompts.example_selector.base.BaseExampleSelector[source]\u00b6\nBases: ABC\nInterface for selecting examples to include in prompts.\nMethods\n__init__()\nadd_example(example)\nAdd new example to store for a key.\nselect_examples(input_variables)\nSelect which examples to use based on the inputs.\nabstract add_example(example: Dict[str, str]) \u2192 Any[source]\u00b6\nAdd new example to store for a key.\nabstract select_examples(input_variables: Dict[str, str]) \u2192 List[dict][source]\u00b6\nSelect which examples to use based on the inputs.", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.base.BaseExampleSelector.html"} {"id": "9e15c21eae64-0", "text": "langchain.prompts.few_shot.FewShotPromptTemplate\u00b6\nclass langchain.prompts.few_shot.FewShotPromptTemplate(*, input_variables: List[str], output_parser: Optional[BaseOutputParser] = None, partial_variables: Mapping[str, Union[str, Callable[[], str]]] = None, examples: Optional[List[dict]] = None, example_selector: Optional[BaseExampleSelector] = None, example_prompt: PromptTemplate, suffix: str, example_separator: str = '\\n\\n', prefix: str = '', template_format: str = 'f-string', validate_template: bool = True)[source]\u00b6\nBases: StringPromptTemplate\nPrompt template that contains few shot examples.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam example_prompt: langchain.prompts.prompt.PromptTemplate [Required]\u00b6\nPromptTemplate used to format an individual example.\nparam example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = None\u00b6\nExampleSelector to choose the examples to format into the prompt.\nEither this or examples should be provided.\nparam example_separator: str = '\\n\\n'\u00b6\nString separator used to join the prefix, the examples, and suffix.\nparam examples: Optional[List[dict]] = None\u00b6\nExamples to format into the prompt.\nEither this or example_selector should be provided.\nparam input_variables: List[str] [Required]\u00b6\nA list of the names of the variables the prompt template expects.\nparam output_parser: Optional[BaseOutputParser] = None\u00b6\nHow to parse the output of calling an LLM on this formatted prompt.\nparam partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]\u00b6\nparam prefix: str = ''\u00b6\nA prompt template string to put before the examples.", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot.FewShotPromptTemplate.html"} 
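A minimal sketch showing how the pieces of a FewShotPromptTemplate fit together (the example data is illustrative):

.. code-block:: python

    from langchain.prompts import FewShotPromptTemplate, PromptTemplate

    example_prompt = PromptTemplate(
        input_variables=["word", "antonym"],
        template="Word: {word}\nAntonym: {antonym}",
    )
    prompt = FewShotPromptTemplate(
        examples=[
            {"word": "happy", "antonym": "sad"},
            {"word": "tall", "antonym": "short"},
        ],
        example_prompt=example_prompt,
        prefix="Give the antonym of every input.",
        suffix="Word: {input}\nAntonym:",
        input_variables=["input"],
    )
    print(prompt.format(input="big"))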
{"id": "9e15c21eae64-1", "text": "param prefix: str = ''\u00b6\nA prompt template string to put before the examples.\nparam suffix: str [Required]\u00b6\nA prompt template string to put after the examples.\nparam template_format: str = 'f-string'\u00b6\nThe format of the prompt template. Options are: \u2018f-string\u2019, \u2018jinja2\u2019.\nparam validate_template: bool = True\u00b6\nWhether or not to try validating the template.\nvalidator check_examples_and_selector\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nCheck that one and only one of examples/example_selector are provided.\ndict(**kwargs: Any) \u2192 Dict[source]\u00b6\nReturn a dictionary of the prompt.\nformat(**kwargs: Any) \u2192 str[source]\u00b6\nFormat the prompt with the inputs.\nParameters\nkwargs \u2013 Any arguments to be passed to the prompt template.\nReturns\nA formatted string.\nExample:\nprompt.format(variable1=\"foo\")\nformat_prompt(**kwargs: Any) \u2192 PromptValue\u00b6\nCreate Chat Messages.\npartial(**kwargs: Union[str, Callable[[], str]]) \u2192 BasePromptTemplate\u00b6\nReturn a partial of the prompt template.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the prompt.\nParameters\nfile_path \u2013 Path to directory to save prompt to.\nExample:\n.. code-block:: python\nprompt.save(file_path=\u201dpath/prompt.yaml\u201d)\nvalidator template_is_valid\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nCheck that prefix, suffix and input variables are consistent.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_variable_names\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate variable names do not include restricted names.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot.FewShotPromptTemplate.html"} {"id": "9e15c21eae64-2", "text": "serialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot.FewShotPromptTemplate.html"} {"id": "8dc675df0ebe-0", "text": "langchain.prompts.base.jinja2_formatter\u00b6\nlangchain.prompts.base.jinja2_formatter(template: str, **kwargs: Any) \u2192 str[source]\u00b6\nFormat a template using jinja2.", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.base.jinja2_formatter.html"} {"id": "e8c9d761825e-0", "text": "langchain.prompts.base.StringPromptTemplate\u00b6\nclass langchain.prompts.base.StringPromptTemplate(*, input_variables: List[str], output_parser: Optional[BaseOutputParser] = None, partial_variables: Mapping[str, Union[str, Callable[[], str]]] = None)[source]\u00b6\nBases: BasePromptTemplate, ABC\nString prompt that exposes the format method, returning a prompt.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam input_variables: List[str] [Required]\u00b6\nA list of the names of the variables the prompt template expects.\nparam output_parser: Optional[BaseOutputParser] = None\u00b6\nHow to parse the output of calling an LLM on this formatted prompt.\nparam partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of prompt.\nabstract format(**kwargs: Any) \u2192 str\u00b6\nFormat the prompt with the inputs.\nParameters\nkwargs \u2013 Any arguments to be passed to the prompt template.\nReturns\nA formatted string.\nExample:\nprompt.format(variable1=\"foo\")\nformat_prompt(**kwargs: Any) \u2192 PromptValue[source]\u00b6\nCreate Chat Messages.\npartial(**kwargs: Union[str, Callable[[], str]]) \u2192 BasePromptTemplate\u00b6\nReturn a partial of the prompt template.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the prompt.\nParameters\nfile_path \u2013 Path to directory to save prompt to.\nExample:\n.. code-block:: python\nprompt.save(file_path=\"path/prompt.yaml\")\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_variable_names\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate variable names do not include restricted names.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.base.StringPromptTemplate.html"} {"id": "525a280f69c8-0", "text": "langchain.prompts.chat.ChatPromptTemplate\u00b6\nclass langchain.prompts.chat.ChatPromptTemplate(*, input_variables: List[str], output_parser: Optional[BaseOutputParser] = None, partial_variables: Mapping[str, Union[str, Callable[[], str]]] = None, messages: List[Union[BaseMessagePromptTemplate, BaseMessage]])[source]\u00b6\nBases: BaseChatPromptTemplate, ABC\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam input_variables: List[str] [Required]\u00b6\nA list of the names of the variables the prompt template expects.\nparam messages: List[Union[BaseMessagePromptTemplate, BaseMessage]] [Required]\u00b6\nparam output_parser: Optional[BaseOutputParser] = None\u00b6\nHow to parse the output of calling an LLM on this formatted prompt.\nparam partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of prompt.\nformat(**kwargs: Any) \u2192 str[source]\u00b6\nFormat the prompt with the inputs.\nParameters\nkwargs \u2013 Any arguments to be passed to the prompt template.\nReturns\nA formatted string.\nExample:\nprompt.format(variable1=\"foo\")\nformat_messages(**kwargs: Any) \u2192 List[BaseMessage][source]\u00b6\nFormat kwargs into a list of messages.\nformat_prompt(**kwargs: Any) \u2192 PromptValue\u00b6\nCreate Chat Messages.\nclassmethod from_messages(messages: Sequence[Union[BaseMessagePromptTemplate, BaseMessage]]) \u2192 ChatPromptTemplate[source]\u00b6\nclassmethod from_role_strings(string_messages: List[Tuple[str, str]]) \u2192 ChatPromptTemplate[source]\u00b6\nclassmethod from_strings(string_messages: List[Tuple[Type[BaseMessagePromptTemplate], str]]) \u2192 ChatPromptTemplate[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatPromptTemplate.html"} {"id": "525a280f69c8-1", "text": "classmethod from_template(template: str, **kwargs: Any) \u2192 ChatPromptTemplate[source]\u00b6\npartial(**kwargs: Union[str, Callable[[], str]]) \u2192 BasePromptTemplate[source]\u00b6\nReturn a partial of the prompt template.\nsave(file_path: Union[Path, str]) \u2192 None[source]\u00b6\nSave the prompt.\nParameters\nfile_path \u2013 Path to directory to save prompt to.\nExample:\n.. code-block:: python\nprompt.save(file_path=\"path/prompt.yaml\")\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_input_variables\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nvalidator validate_variable_names\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate variable names do not include restricted names.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. 
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatPromptTemplate.html"}
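A typical from_messages sketch (the message templates are illustrative):

.. code-block:: python

    from langchain.prompts import (
        ChatPromptTemplate,
        HumanMessagePromptTemplate,
        SystemMessagePromptTemplate,
    )

    chat_prompt = ChatPromptTemplate.from_messages([
        SystemMessagePromptTemplate.from_template(
            "You are a helpful assistant that translates {input_language} to {output_language}."
        ),
        HumanMessagePromptTemplate.from_template("{text}"),
    ])
    messages = chat_prompt.format_messages(
        input_language="English", output_language="French", text="I love programming."
    )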
{"id": "646979e35355-0", "text": "langchain.prompts.loading.load_prompt_from_config\u00b6\nlangchain.prompts.loading.load_prompt_from_config(config: dict) \u2192 BasePromptTemplate[source]\u00b6\nLoad a prompt from a config dict.", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.loading.load_prompt_from_config.html"} {"id": "5e56b7e220f9-0", "text": "langchain.prompts.pipeline.PipelinePromptTemplate\u00b6\nclass langchain.prompts.pipeline.PipelinePromptTemplate(*, input_variables: List[str], output_parser: Optional[BaseOutputParser] = None, partial_variables: Mapping[str, Union[str, Callable[[], str]]] = None, final_prompt: BasePromptTemplate, pipeline_prompts: List[Tuple[str, BasePromptTemplate]])[source]\u00b6\nBases: BasePromptTemplate\nA prompt template for composing multiple prompts together.\nThis can be useful when you want to reuse parts of prompts.\nA PipelinePrompt consists of two main parts:\nfinal_prompt: This is the final prompt that is returned.\npipeline_prompts: This is a list of tuples, consisting of a string (name) and a PromptTemplate.\nEach PromptTemplate will be formatted and then passed\nto future prompt templates as a variable with the same name as name.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam final_prompt: langchain.schema.prompt_template.BasePromptTemplate [Required]\u00b6\nparam input_variables: List[str] [Required]\u00b6\nA list of the names of the variables the prompt template expects.\nparam output_parser: Optional[BaseOutputParser] = None\u00b6\nHow to parse the output of calling an LLM on this formatted prompt.\nparam partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]\u00b6\nparam pipeline_prompts: List[Tuple[str, langchain.schema.prompt_template.BasePromptTemplate]] [Required]\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of prompt.\nformat(**kwargs: Any) \u2192 str[source]\u00b6\nFormat the prompt with the inputs.\nParameters\nkwargs \u2013 Any arguments to be passed to the prompt template.\nReturns\nA formatted string.\nExample:\nprompt.format(variable1=\"foo\")", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.pipeline.PipelinePromptTemplate.html"} {"id": "5e56b7e220f9-1", "text": "format_prompt(**kwargs: Any) \u2192 PromptValue[source]\u00b6\nCreate Chat Messages.\nvalidator get_input_variables\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nGet input variables.\npartial(**kwargs: Union[str, Callable[[], str]]) \u2192 BasePromptTemplate\u00b6\nReturn a partial of the prompt template.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the prompt.\nParameters\nfile_path \u2013 Path to directory to save prompt to.\nExample:\n.. code-block:: python\nprompt.save(file_path=\"path/prompt.yaml\")\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_variable_names\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate variable names do not include restricted names.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.pipeline.PipelinePromptTemplate.html"}
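A small sketch of the composition described above (the prompt names are illustrative): each named pipeline prompt is formatted first, and its output is passed to final_prompt as a variable with the same name.

.. code-block:: python

    from langchain.prompts import PromptTemplate
    from langchain.prompts.pipeline import PipelinePromptTemplate

    final_prompt = PromptTemplate.from_template("{introduction}\n\n{start}")
    introduction = PromptTemplate.from_template("You are impersonating {person}.")
    start = PromptTemplate.from_template("Q: {question}\nA:")

    pipeline_prompt = PipelinePromptTemplate(
        final_prompt=final_prompt,
        pipeline_prompts=[("introduction", introduction), ("start", start)],
    )
    print(pipeline_prompt.format(person="Ada Lovelace", question="What is your favorite machine?"))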
{"id": "3f63a7d7ba4f-0", "text": "langchain.prompts.chat.MessagesPlaceholder\u00b6\nclass langchain.prompts.chat.MessagesPlaceholder(*, variable_name: str)[source]\u00b6\nBases: BaseMessagePromptTemplate\nPrompt template that assumes the variable is already a list of messages.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam variable_name: str [Required]\u00b6\nformat_messages(**kwargs: Any) \u2192 List[BaseMessage][source]\u00b6\nFormat the variable into a list of messages.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty input_variables: List[str]\u00b6\nInput variables for this prompt template.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.MessagesPlaceholder.html"} {"id": "a32c2595ede9-0", "text": "langchain.prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector\u00b6\nclass langchain.prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector(*, examples: List[dict], example_prompt: PromptTemplate, threshold: float = -1.0)[source]\u00b6\nBases: BaseExampleSelector, BaseModel\nSelect and order examples based on ngram overlap score (sentence_bleu score).\nhttps://www.nltk.org/_modules/nltk/translate/bleu_score.html\nhttps://aclanthology.org/P02-1040.pdf\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam example_prompt: langchain.prompts.prompt.PromptTemplate [Required]\u00b6\nPrompt template used to format the examples.\nparam examples: List[dict] [Required]\u00b6\nA list of the examples that the prompt template expects.\nparam threshold: float = -1.0\u00b6\nThreshold at which the algorithm stops. Set to -1.0 by default.\nFor a negative threshold:\nselect_examples sorts examples by ngram_overlap_score, but excludes none.\nFor a threshold greater than 1.0:\nselect_examples excludes all examples, and returns an empty list.\nFor a threshold equal to 0.0:\nselect_examples sorts examples by ngram_overlap_score,\nand excludes examples with no ngram overlap with the input.\nadd_example(example: Dict[str, str]) \u2192 None[source]\u00b6\nAdd new example to list.\nvalidator check_dependencies\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nCheck that valid dependencies exist.\nselect_examples(input_variables: Dict[str, str]) \u2192 List[dict][source]\u00b6\nReturn list of examples sorted by ngram_overlap_score with input.\nDescending order.\nExcludes any examples with ngram_overlap_score less than or equal to threshold.", "source": "https://api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector.html"}
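A usage sketch (NGramOverlapExampleSelector requires the nltk and numpy packages, per check_dependencies; the translation examples are illustrative):

.. code-block:: python

    from langchain.prompts import PromptTemplate
    from langchain.prompts.example_selector.ngram_overlap import NGramOverlapExampleSelector

    example_prompt = PromptTemplate(
        input_variables=["input", "output"],
        template="Input: {input}\nOutput: {output}",
    )
    selector = NGramOverlapExampleSelector(
        examples=[
            {"input": "See Spot run.", "output": "Ve a Spot correr."},
            {"input": "My dog barks.", "output": "Mi perro ladra."},
        ],
        example_prompt=example_prompt,
        threshold=0.0,  # drop examples with no ngram overlap with the input
    )
    selector.select_examples({"input": "Spot can run fast."})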
{"id": "7a7fc39b2a20-0", "text": "langchain.utilities.loading.try_load_from_hub\u00b6\nlangchain.utilities.loading.try_load_from_hub(path: Union[str, Path], loader: Callable[[str], T], valid_prefix: str, valid_suffixes: Set[str], **kwargs: Any) \u2192 Optional[T][source]\u00b6\nLoad configuration from hub. Returns None if path is not a hub path.", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.loading.try_load_from_hub.html"} {"id": "976898d272f9-0", "text": "langchain.utilities.searx_search.SearxResults\u00b6\nclass langchain.utilities.searx_search.SearxResults(data: str)[source]\u00b6\nBases: dict\nDict-like wrapper around search API results.\nTake a raw result from Searx and make it into a dict-like object.\nMethods\n__init__(data)\nTake a raw result from Searx and make it into a dict-like object.\nclear()\ncopy()\nfromkeys([value])\nCreate a new dictionary with keys from iterable and values set to value.\nget(key[,\u00a0default])\nReturn the value for key if key is in the dictionary, else default.\nitems()\nkeys()\npop(k[,d])\nIf the key is not found, return the default if given; otherwise, raise a KeyError.\npopitem()\nRemove and return a (key, value) pair as a 2-tuple.\nsetdefault(key[,\u00a0default])\nInsert key with a value of default if key is not in the dictionary.\nupdate([E,\u00a0]**F)\nIf E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]\nvalues()\nAttributes\nanswers\nHelper accessor on the json result.\nresults\nSilence mypy for accessing this field.\nclear() \u2192 None.\u00a0 Remove all items from D.\u00b6\ncopy() \u2192 a shallow copy of D\u00b6\nfromkeys(value=None, /)\u00b6\nCreate a new dictionary with keys from iterable and values set to value.\nget(key, default=None, /)\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.searx_search.SearxResults.html"} {"id": "976898d272f9-1", "text": "get(key, default=None, /)\u00b6\nReturn the value for key if key is in the dictionary, else default.\nitems() \u2192 a set-like object providing a view on D's items\u00b6\nkeys() \u2192 a set-like object providing a view on D's keys\u00b6\npop(k[, d]) \u2192 v, remove specified key and return the corresponding value.\u00b6\nIf the key is not found, return the default if given; otherwise,\nraise a KeyError.\npopitem()\u00b6\nRemove and return a (key, value) pair as a 2-tuple.\nPairs are returned in LIFO (last-in, first-out) order.\nRaises KeyError if the dict is empty.\nsetdefault(key, default=None, /)\u00b6\nInsert key with a value of default if key is not in the dictionary.\nReturn the value for key if key is in the dictionary, else default.\nupdate([E, ]**F) \u2192 None.\u00a0 Update D from dict/iterable E and F.\u00b6\nIf E is present and has a .keys() method, then does: for k in E: D[k] = E[k]\nIf E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v\nIn either case, this is followed by: for k in F: D[k] = F[k]\nvalues() \u2192 an object providing a view on D's values\u00b6\nproperty answers: Any\u00b6\nHelper accessor on the json result.", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.searx_search.SearxResults.html"}
{"id": "3d17397a67fa-0", "text": "langchain.utilities.google_search.GoogleSearchAPIWrapper\u00b6\nclass langchain.utilities.google_search.GoogleSearchAPIWrapper(*, search_engine: Any = None, google_api_key: Optional[str] = None, google_cse_id: Optional[str] = None, k: int = 10, siterestrict: bool = False)[source]\u00b6\nBases: BaseModel\nWrapper for Google Search API.\nAdapted from: https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search\nSetup:\n1. Install google-api-python-client\n- If you don\u2019t already have a Google account, sign up.\n- If you have never created a Google APIs Console project,\nread the Managing Projects page and create a project in the Google API Console.\n- Install the library using pip install google-api-python-client\n(the current version of the library is 2.70.0 at the time of writing).\n2. Create an API key:\n- Navigate to the APIs & Services\u2192Credentials panel in Cloud Console.\n- Select Create credentials, then select API key from the drop-down menu.\n- The API key created dialog box displays your newly created key.\n- You now have an API_KEY\n3. Set up a Custom Search Engine so you can search the entire web\n- Create a custom search engine in this link.\n- In Sites to search, add any valid URL (e.g. www.stackoverflow.com).\n- That\u2019s all you have to fill in; the rest doesn\u2019t matter.\nIn the left-side menu, click Edit search engine \u2192 {your search engine name}\n\u2192 Setup. Set Search the entire web to ON. Remove the URL you added from\nthe list of Sites to search.\n- Under Search engine ID you\u2019ll find the search-engine-ID.\n4. Enable the Custom Search API\n- Navigate to the APIs & Services\u2192Dashboard panel in Cloud Console.\n- Click Enable APIs and Services.\n- Search for Custom Search API and click on it.\n- Click Enable.\nURL for it: https://console.cloud.google.com/apis/library/customsearch.googleapis.com\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam google_api_key: Optional[str] = None\u00b6\nparam google_cse_id: Optional[str] = None\u00b6\nparam k: int = 10\u00b6\nparam siterestrict: bool = False\u00b6\nresults(query: str, num_results: int, search_params: Optional[Dict[str, str]] = None) \u2192 List[Dict][source]\u00b6\nRun query through GoogleSearch and return metadata.\nParameters\nquery \u2013 The query to search for.\nnum_results \u2013 The number of results to return.\nsearch_params \u2013 Parameters to be passed on search\nReturns\nsnippet - The description of the result.\ntitle - The title of the result.\nlink - The link to the result.\nReturn type\nA list of dictionaries with the following keys\nrun(query: str) \u2192 str[source]\u00b6\nRun query through GoogleSearch and parse result.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the API key and python package exist in the environment.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.google_search.GoogleSearchAPIWrapper.html"}
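Once GOOGLE_API_KEY and GOOGLE_CSE_ID are set in the environment, usage looks like this sketch (the query is illustrative):

.. code-block:: python

    from langchain.utilities import GoogleSearchAPIWrapper

    search = GoogleSearchAPIWrapper(k=5)
    search.run("langchain prompt templates")                     # parsed summary string
    search.results("langchain prompt templates", num_results=3)  # [{snippet, title, link}, ...]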
{"id": "8ac37372d3df-0", "text": "langchain.utilities.openapi.OpenAPISpec\u00b6\nclass langchain.utilities.openapi.OpenAPISpec(*, openapi: str = '3.1.0', info: Info, jsonSchemaDialect: Optional[str] = None, servers: List[Server] = [Server(url='/', description=None, variables=None)], paths: Optional[Dict[str, PathItem]] = None, webhooks: Optional[Dict[str, Union[PathItem, Reference]]] = None, components: Optional[Components] = None, security: Optional[List[Dict[str, List[str]]]] = None, tags: Optional[List[Tag]] = None, externalDocs: Optional[ExternalDocumentation] = None)[source]\u00b6\nBases: OpenAPI\nOpenAPI model that removes misformatted parts of the spec.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam components: Optional[openapi_schema_pydantic.v3.v3_1_0.components.Components] = None\u00b6\nAn element to hold various schemas for the document.\nparam externalDocs: Optional[openapi_schema_pydantic.v3.v3_1_0.external_documentation.ExternalDocumentation] = None\u00b6\nAdditional external documentation.\nparam info: openapi_schema_pydantic.v3.v3_1_0.info.Info [Required]\u00b6\nREQUIRED. Provides metadata about the API. The metadata MAY be used by tooling as required.\nparam jsonSchemaDialect: Optional[str] = None\u00b6\nThe default value for the $schema keyword within [Schema Objects](#schemaObject)\ncontained within this OAS document. This MUST be in the form of a URI.\nparam openapi: str = '3.1.0'\u00b6\nREQUIRED. This string MUST be the [version number](#versions)\nof the OpenAPI Specification that the OpenAPI document uses.\nThe openapi field SHOULD be used by tooling to interpret the OpenAPI document.\nThis is not related to the API [info.version](#infoVersion) string.\nparam paths: Optional[Dict[str, openapi_schema_pydantic.v3.v3_1_0.path_item.PathItem]] = None\u00b6\nThe available paths and operations for the API.\nparam security: Optional[List[Dict[str, List[str]]]] = None\u00b6\nA declaration of which security mechanisms can be used across the API.\nThe list of values includes alternative security requirement objects that can be used.\nOnly one of the security requirement objects needs to be satisfied to authorize a request.\nIndividual operations can override this definition.\nTo make security optional, an empty security requirement ({}) can be included in the array.\nparam servers: List[openapi_schema_pydantic.v3.v3_1_0.server.Server] = [Server(url='/', description=None, variables=None)]\u00b6\nAn array of Server Objects, which provide connectivity information to a target server.\nIf the servers property is not provided, or is an empty array,\nthe default value would be a [Server Object](#serverObject) with a [url](#serverUrl) value of /.\nparam tags: Optional[List[openapi_schema_pydantic.v3.v3_1_0.tag.Tag]] = None\u00b6\nA list of tags used by the document with additional metadata.\nThe order of the tags can be used to reflect on their order by the parsing tools.\nNot all tags that are used by the [Operation Object](#operationObject) must be declared.\nThe tags that are not declared MAY be organized randomly or based on the tools\u2019 logic.\nEach tag name in the list MUST be unique.\nparam webhooks: Optional[Dict[str, Union[openapi_schema_pydantic.v3.v3_1_0.path_item.PathItem, openapi_schema_pydantic.v3.v3_1_0.reference.Reference]]] = None\u00b6\nThe incoming webhooks that MAY be received as part of this API and that the API consumer MAY choose to implement.\nClosely related to the callbacks feature, this section describes 
requests initiated other than by an API call,\nfor example by an out-of-band registration.\nThe key name is a unique string to refer to each webhook,\nwhile the (optionally referenced) Path Item Object describes a request\nthat may be initiated by the API provider and the expected responses.\nAn [example](../examples/v3.1/webhook-example.yaml) is available.\nclassmethod from_file(path: Union[str, Path]) \u2192 OpenAPISpec[source]\u00b6\nGet an OpenAPI spec from a file path.\nclassmethod from_spec_dict(spec_dict: dict) \u2192 OpenAPISpec[source]\u00b6\nGet an OpenAPI spec from a dict.\nclassmethod from_text(text: str) \u2192 OpenAPISpec[source]\u00b6\nGet an OpenAPI spec from a text.\nclassmethod from_url(url: str) \u2192 OpenAPISpec[source]\u00b6\nGet an OpenAPI spec from a URL.\nstatic get_cleaned_operation_id(operation: Operation, path: str, method: str) \u2192 str[source]\u00b6\nGet a cleaned operation id from an operation id.\nget_methods_for_path(path: str) \u2192 List[str][source]\u00b6\nReturn a list of valid methods for the specified path.\nget_operation(path: str, method: str) \u2192 Operation[source]\u00b6\nGet the operation object for a given path and HTTP method.\nget_parameters_for_operation(operation: Operation) \u2192 List[Parameter][source]\u00b6\nGet the parameters for a given operation.\nget_parameters_for_path(path: str) \u2192 List[Parameter][source]\u00b6\nget_referenced_schema(ref: Reference) \u2192 Schema[source]\u00b6\nGet a schema (or nested reference) or err.\nget_request_body_for_operation(operation: Operation) \u2192 Optional[RequestBody][source]\u00b6\nGet the request body for a given operation.\nget_schema(schema: Union[Reference, Schema]) \u2192 Schema[source]\u00b6\nclassmethod parse_obj(obj: dict) \u2192 OpenAPISpec[source]\u00b6\nproperty base_url: str\u00b6\nGet the base url.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.openapi.OpenAPISpec.html"} {"id": "db0da0791fca-0", "text": "langchain.utilities.google_places_api.GooglePlacesAPIWrapper\u00b6\nclass langchain.utilities.google_places_api.GooglePlacesAPIWrapper(*, gplaces_api_key: Optional[str] = None, google_map_client: Any = None, top_k_results: Optional[int] = None)[source]\u00b6\nBases: BaseModel\nWrapper around Google Places API.\nTo use, you should have the googlemaps python package installed, an API key for the Google Maps platform,\nand the environment variable GPLACES_API_KEY\nset with your API key, or pass gplaces_api_key\nas a named parameter to the constructor.\nBy default, this will return all the results for the input query.\nYou can use the top_k_results argument to limit the number of results.\nExample\nfrom langchain import GooglePlacesAPIWrapper\ngplaceapi = GooglePlacesAPIWrapper()\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam gplaces_api_key: Optional[str] = None\u00b6\nparam top_k_results: Optional[int] = None\u00b6\nfetch_place_details(place_id: str) \u2192 Optional[str][source]\u00b6\nformat_place_details(place_details: Dict[str, Any]) \u2192 Optional[str][source]\u00b6\nrun(query: str) \u2192 str[source]\u00b6\nRun Places search and return the places that match the query.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the API key is set in your environment.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.google_places_api.GooglePlacesAPIWrapper.html"}
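A usage sketch (assumes GPLACES_API_KEY is set and the googlemaps package is installed; the query is illustrative):

.. code-block:: python

    from langchain import GooglePlacesAPIWrapper

    places = GooglePlacesAPIWrapper(top_k_results=3)
    places.run("coffee shops near Times Square")  # formatted details for up to 3 places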
{"id": "8e2437dd8898-0", "text": "langchain.utilities.google_serper.GoogleSerperAPIWrapper\u00b6\nclass langchain.utilities.google_serper.GoogleSerperAPIWrapper(*, k: int = 10, gl: str = 'us', hl: str = 'en', type: Literal['news', 'search', 'places', 'images'] = 'search', tbs: Optional[str] = None, serper_api_key: Optional[str] = None, aiosession: Optional[ClientSession] = None, result_key_for_type: dict = {'images': 'images', 'news': 'news', 'places': 'places', 'search': 'organic'})[source]\u00b6\nBases: BaseModel\nWrapper around the Serper.dev Google Search API.\nYou can create a free API key at https://serper.dev.\nTo use, you should have the environment variable SERPER_API_KEY\nset with your API key, or pass serper_api_key as a named parameter\nto the constructor.\nExample\nfrom langchain import GoogleSerperAPIWrapper\ngoogle_serper = GoogleSerperAPIWrapper()\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam aiosession: Optional[aiohttp.client.ClientSession] = None\u00b6\nparam gl: str = 'us'\u00b6\nparam hl: str = 'en'\u00b6\nparam k: int = 10\u00b6\nparam serper_api_key: Optional[str] = None\u00b6\nparam tbs: Optional[str] = None\u00b6\nparam type: Literal['news', 'search', 'places', 'images'] = 'search'\u00b6\nasync aresults(query: str, **kwargs: Any) \u2192 Dict[source]\u00b6\nRun query through Google Serper and return the raw result.\nasync arun(query: str, **kwargs: Any) \u2192 str[source]\u00b6\nRun query through Google Serper and parse the result asynchronously.\nresults(query: str, **kwargs: Any) \u2192 Dict[source]\u00b6\nRun query through Google Serper and return the raw result.\nrun(query: str, **kwargs: Any) \u2192 str[source]\u00b6\nRun query through Google Serper and parse the result.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the API key exists in the environment.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.google_serper.GoogleSerperAPIWrapper.html"}
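A usage sketch (assumes SERPER_API_KEY is set in the environment; the query is illustrative):

.. code-block:: python

    from langchain.utilities import GoogleSerperAPIWrapper

    serper = GoogleSerperAPIWrapper(type="news", gl="us", hl="en", k=5)
    serper.run("latest open source LLM releases")      # parsed string
    serper.results("latest open source LLM releases")  # raw result dict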
{"id": "89b82092a361-0", "text": "langchain.utilities.searx_search.SearxSearchWrapper\u00b6\nclass langchain.utilities.searx_search.SearxSearchWrapper(*, searx_host: str = '', unsecure: bool = False, params: dict = None, headers: Optional[dict] = None, engines: Optional[List[str]] = [], categories: Optional[List[str]] = [], query_suffix: Optional[str] = '', k: int = 10, aiosession: Optional[Any] = None)[source]\u00b6\nBases: BaseModel\nWrapper for Searx API.\nTo use, you need to provide the searx host by passing the named parameter\nsearx_host or exporting the environment variable SEARX_HOST.\nIn some situations you might want to disable SSL verification, for example\nif you are running searx locally. You can do this by passing the named parameter\nunsecure. You can also pass the host url scheme as http to disable SSL.\nExample\nfrom langchain.utilities import SearxSearchWrapper\nsearx = SearxSearchWrapper(searx_host=\"http://localhost:8888\")\nExample with SSL disabled:\nfrom langchain.utilities import SearxSearchWrapper\n# note the unsecure parameter is not needed if you pass the url scheme as\n# http\nsearx = SearxSearchWrapper(searx_host=\"http://localhost:8888\",\n unsecure=True)\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam aiosession: Optional[Any] = None\u00b6\nparam categories: Optional[List[str]] = []\u00b6\nparam engines: Optional[List[str]] = []\u00b6\nparam headers: Optional[dict] = None\u00b6\nparam k: int = 10\u00b6\nparam params: dict [Optional]\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.searx_search.SearxSearchWrapper.html"} {"id": "89b82092a361-1", "text": "param query_suffix: Optional[str] = ''\u00b6\nparam searx_host: str = ''\u00b6\nparam unsecure: bool = False\u00b6\nasync aresults(query: str, num_results: int, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) \u2192 List[Dict][source]\u00b6\nAsynchronously query with json results.\nUses aiohttp. See results for more info.\nasync arun(query: str, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) \u2192 str[source]\u00b6\nAsynchronous version of run.\nvalidator disable_ssl_warnings\u00a0 \u00bb\u00a0 unsecure[source]\u00b6\nDisable SSL warnings.\nresults(query: str, num_results: int, engines: Optional[List[str]] = None, categories: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) \u2192 List[Dict][source]\u00b6\nRun query through Searx API and return the results with metadata.\nParameters\nquery \u2013 The query to search for.\nquery_suffix \u2013 Extra suffix appended to the query.\nnum_results \u2013 Limit the number of results to return.\nengines \u2013 List of engines to use for the query.\ncategories \u2013 List of categories to use for the query.\n**kwargs \u2013 extra parameters to pass to the searx API.\nReturns\n{snippet: The description of the result.\ntitle: The title of the result.\nlink: The link to the result.\nengines: The engines used for the result.\ncategory: Searx category of the result.\n}\nReturn type\nDict with the following keys", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.searx_search.SearxSearchWrapper.html"}
{"id": "89b82092a361-2", "text": "run(query: str, engines: Optional[List[str]] = None, categories: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) \u2192 str[source]\u00b6\nRun query through Searx API and parse results.\nYou can pass any other params to the searx query API.\nParameters\nquery \u2013 The query to search for.\nquery_suffix \u2013 Extra suffix appended to the query.\nengines \u2013 List of engines to use for the query.\ncategories \u2013 List of categories to use for the query.\n**kwargs \u2013 extra parameters to pass to the searx API.\nReturns\nThe result of the query.\nReturn type\nstr\nRaises\nValueError \u2013 If an error occurred with the query.\nExample\nThis will make a query to the qwant engine:\nfrom langchain.utilities import SearxSearchWrapper\nsearx = SearxSearchWrapper(searx_host=\"http://my.searx.host\")\nsearx.run(\"what is the weather in France ?\", engine=\"qwant\")\n# the same result can be achieved using the `!` syntax of searx\n# to select the engine using `query_suffix`\nsearx.run(\"what is the weather in France ?\", query_suffix=\"!qwant\")\nvalidator validate_params\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that custom searx params are merged with default ones.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.searx_search.SearxSearchWrapper.html"} {"id": "00eaca05b4be-0", "text": "langchain.utilities.graphql.GraphQLAPIWrapper\u00b6\nclass langchain.utilities.graphql.GraphQLAPIWrapper(*, custom_headers: Optional[Dict[str, str]] = None, graphql_endpoint: str, gql_client: Any = None, gql_function: Callable[[str], Any])[source]\u00b6\nBases: BaseModel\nWrapper around a GraphQL API.\nTo use, you should have the gql python package installed.\nThis wrapper will use the GraphQL API to conduct queries.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam custom_headers: Optional[Dict[str, str]] = None\u00b6\nparam graphql_endpoint: str [Required]\u00b6\nrun(query: str) \u2192 str[source]\u00b6\nRun a GraphQL query and get the results.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the python package exists in the environment.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.graphql.GraphQLAPIWrapper.html"} {"id": "c54e70e5ec9b-0", "text": "langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper\u00b6\nclass langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper(*, k: int = 10, region: Optional[str] = 'wt-wt', safesearch: str = 'moderate', time: Optional[str] = 'y', max_results: int = 5)[source]\u00b6\nBases: BaseModel\nWrapper for DuckDuckGo Search API.\nFree and does not require any setup.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam k: int = 10\u00b6\nparam max_results: int = 5\u00b6\nparam region: Optional[str] = 'wt-wt'\u00b6\nparam safesearch: str = 'moderate'\u00b6\nparam time: Optional[str] = 'y'\u00b6\nget_snippets(query: str) \u2192 List[str][source]\u00b6\nRun query through DuckDuckGo and return concatenated results.\nresults(query: str, num_results: int) \u2192 List[Dict[str, str]][source]\u00b6\nRun query through DuckDuckGo and return metadata.\nParameters\nquery \u2013 The query to search for.\nnum_results \u2013 The number of results to return.\nReturns\nsnippet - The description of the result.\ntitle - The title of the result.\nlink - The link to the result.\nReturn type\nA list of dictionaries with the following keys\nrun(query: str) \u2192 str[source]\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the python package exists in the environment.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper.html"}
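A usage sketch (requires the duckduckgo-search package; no API key is needed, and the query is illustrative):

.. code-block:: python

    from langchain.utilities import DuckDuckGoSearchAPIWrapper

    ddg = DuckDuckGoSearchAPIWrapper(region="us-en", time="w", max_results=3)
    ddg.run("langchain prompt templates")                     # concatenated snippets
    ddg.results("langchain prompt templates", num_results=3)  # [{snippet, title, link}, ...]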
"https://api.python.langchain.com/en/latest/utilities/langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper.html"} {"id": "6eac528ea149-0", "text": "langchain.utilities.apify.ApifyWrapper\u00b6\nclass langchain.utilities.apify.ApifyWrapper(*, apify_client: Any = None, apify_client_async: Any = None)[source]\u00b6\nBases: BaseModel\nWrapper around Apify.\nTo use, you should have the apify-client python package installed,\nand the environment variable APIFY_API_TOKEN set with your API key, or pass\napify_api_token as a named parameter to the constructor.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam apify_client: Any = None\u00b6\nparam apify_client_async: Any = None\u00b6\nasync acall_actor(actor_id: str, run_input: Dict, dataset_mapping_function: Callable[[Dict], Document], *, build: Optional[str] = None, memory_mbytes: Optional[int] = None, timeout_secs: Optional[int] = None) \u2192 ApifyDatasetLoader[source]\u00b6\nRun an Actor on the Apify platform and wait for results to be ready.\nParameters\nactor_id (str) \u2013 The ID or name of the Actor on the Apify platform.\nrun_input (Dict) \u2013 The input object of the Actor that you\u2019re trying to run.\ndataset_mapping_function (Callable) \u2013 A function that takes a single\ndictionary (an Apify dataset item) and converts it to\nan instance of the Document class.\nbuild (str, optional) \u2013 Optionally specifies the actor build to run.\nIt can be either a build tag or build number.\nmemory_mbytes (int, optional) \u2013 Optional memory limit for the run,\nin megabytes.\ntimeout_secs (int, optional) \u2013 Optional timeout for the run, in seconds.\nReturns\nA loader that will fetch the records from theActor run\u2019s default dataset.\nReturn type\nApifyDatasetLoader", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.apify.ApifyWrapper.html"} {"id": "6eac528ea149-1", "text": "Return type\nApifyDatasetLoader\nasync acall_actor_task(task_id: str, task_input: Dict, dataset_mapping_function: Callable[[Dict], Document], *, build: Optional[str] = None, memory_mbytes: Optional[int] = None, timeout_secs: Optional[int] = None) \u2192 ApifyDatasetLoader[source]\u00b6\nRun a saved Actor task on Apify and wait for results to be ready.\nParameters\ntask_id (str) \u2013 The ID or name of the task on the Apify platform.\ntask_input (Dict) \u2013 The input object of the task that you\u2019re trying to run.\nOverrides the task\u2019s saved input.\ndataset_mapping_function (Callable) \u2013 A function that takes a single\ndictionary (an Apify dataset item) and converts it to an\ninstance of the Document class.\nbuild (str, optional) \u2013 Optionally specifies the actor build to run.\nIt can be either a build tag or build number.\nmemory_mbytes (int, optional) \u2013 Optional memory limit for the run,\nin megabytes.\ntimeout_secs (int, optional) \u2013 Optional timeout for the run, in seconds.\nReturns\nA loader that will fetch the records from thetask run\u2019s default dataset.\nReturn type\nApifyDatasetLoader\ncall_actor(actor_id: str, run_input: Dict, dataset_mapping_function: Callable[[Dict], Document], *, build: Optional[str] = None, memory_mbytes: Optional[int] = None, timeout_secs: Optional[int] = None) \u2192 ApifyDatasetLoader[source]\u00b6\nRun an Actor on the Apify platform and wait for results to be ready.\nParameters\nactor_id (str) \u2013 The ID or name of the Actor on the 
Apify platform.\nrun_input (Dict) \u2013 The input object of the Actor that you\u2019re trying to run.\ndataset_mapping_function (Callable) \u2013 A function that takes a single", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.apify.ApifyWrapper.html"} {"id": "6eac528ea149-2", "text": "dataset_mapping_function (Callable) \u2013 A function that takes a single\ndictionary (an Apify dataset item) and converts it to an\ninstance of the Document class.\nbuild (str, optional) \u2013 Optionally specifies the actor build to run.\nIt can be either a build tag or build number.\nmemory_mbytes (int, optional) \u2013 Optional memory limit for the run,\nin megabytes.\ntimeout_secs (int, optional) \u2013 Optional timeout for the run, in seconds.\nReturns\nA loader that will fetch the records from the Actor run\u2019s default dataset.\nReturn type\nApifyDatasetLoader\ncall_actor_task(task_id: str, task_input: Dict, dataset_mapping_function: Callable[[Dict], Document], *, build: Optional[str] = None, memory_mbytes: Optional[int] = None, timeout_secs: Optional[int] = None) \u2192 ApifyDatasetLoader[source]\u00b6\nRun a saved Actor task on Apify and wait for results to be ready.\nParameters\ntask_id (str) \u2013 The ID or name of the task on the Apify platform.\ntask_input (Dict) \u2013 The input object of the task that you\u2019re trying to run.\nOverrides the task\u2019s saved input.\ndataset_mapping_function (Callable) \u2013 A function that takes a single\ndictionary (an Apify dataset item) and converts it to an\ninstance of the Document class.\nbuild (str, optional) \u2013 Optionally specifies the actor build to run.\nIt can be either a build tag or build number.\nmemory_mbytes (int, optional) \u2013 Optional memory limit for the run,\nin megabytes.\ntimeout_secs (int, optional) \u2013 Optional timeout for the run, in seconds.\nReturns\nA loader that will fetch the records from the task run\u2019s default dataset.\nReturn type\nApifyDatasetLoader\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.apify.ApifyWrapper.html"} {"id": "6eac528ea149-3", "text": "Return type\nApifyDatasetLoader\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate environment.\nValidate that an Apify API token is set and the apify-client\nPython package exists in the current environment.", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.apify.ApifyWrapper.html"} {"id": "347910d9167a-0", "text": "langchain.utilities.jira.JiraAPIWrapper\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.jira.JiraAPIWrapper.html"} {"id": "347910d9167a-1", "text": "class langchain.utilities.jira.JiraAPIWrapper(*, jira: Any = None, confluence: Any = None, jira_username: Optional[str] = None, jira_api_token: Optional[str] = None, jira_instance_url: Optional[str] = None, operations: List[Dict] = [{'mode': 'jql', 'name': 'JQL Query', 'description': '\\n\u00a0\u00a0\u00a0 This tool is a wrapper around atlassian-python-api\\'s Jira jql API, useful when you need to search for Jira issues.\\n\u00a0\u00a0\u00a0 The input to this tool is a JQL query string, and will be passed into atlassian-python-api\\'s Jira `jql` function,\\n\u00a0\u00a0\u00a0 For example, to find all the issues in project \"Test\" assigned to the me, you would pass in the following string:\\n\u00a0\u00a0\u00a0 project = Test AND assignee 
= currentUser()\\n\u00a0\u00a0\u00a0 or to find issues with summaries that contain the word \"test\", you would pass in the following string:\\n\u00a0\u00a0\u00a0 summary ~ \\'test\\'\\n\u00a0\u00a0\u00a0 '}, {'mode': 'get_projects', 'name': 'Get Projects', 'description': \"\\n\u00a0\u00a0\u00a0 This tool is a wrapper around atlassian-python-api's Jira project API, \\n\u00a0\u00a0\u00a0 useful when you need to fetch all the projects the user has access to, find out how many projects there are, or as an intermediary step that involv searching by projects. \\n\u00a0\u00a0\u00a0 there is no input to this tool.\\n\u00a0\u00a0\u00a0 \"}, {'mode': 'create_issue', 'name': 'Create Issue', 'description': '\\n\u00a0\u00a0\u00a0 This tool is a wrapper around atlassian-python-api\\'s Jira issue_create API, useful when you need to create a Jira issue. \\n\u00a0\u00a0\u00a0 The input to this tool is a dictionary specifying the fields of the Jira issue, and will be passed into atlassian-python-api\\'s Jira `issue_create`", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.jira.JiraAPIWrapper.html"} {"id": "347910d9167a-2", "text": "issue, and will be passed into atlassian-python-api\\'s Jira `issue_create` function.\\n\u00a0\u00a0\u00a0 For example, to create a low priority task called \"test issue\" with description \"test description\", you would pass in the following dictionary: \\n\u00a0\u00a0\u00a0 {{\"summary\": \"test issue\", \"description\": \"test description\", \"issuetype\": {{\"name\": \"Task\"}}, \"priority\": {{\"name\": \"Low\"}}}}\\n\u00a0\u00a0\u00a0 '}, {'mode': 'other', 'name': 'Catch all Jira API call', 'description': '\\n\u00a0\u00a0\u00a0 This tool is a wrapper around atlassian-python-api\\'s Jira API.\\n\u00a0\u00a0\u00a0 There are other dedicated tools for fetching all projects, and creating and searching for issues, \\n\u00a0\u00a0\u00a0 use this tool if you need to perform any other actions allowed by the atlassian-python-api Jira API.\\n\u00a0\u00a0\u00a0 The input to this tool is a dictionary specifying a function from atlassian-python-api\\'s Jira API, \\n\u00a0\u00a0\u00a0 as well as a list of arguments and dictionary of keyword arguments to pass into the function.\\n\u00a0\u00a0\u00a0 For example, to get all the users in a group, while increasing the max number of results to 100, you would\\n\u00a0\u00a0\u00a0 pass in the following dictionary: {{\"function\": \"get_all_users_from_group\", \"args\": [\"group\"], \"kwargs\": {{\"limit\":100}} }}\\n\u00a0\u00a0\u00a0 or to find out how many projects are in the Jira instance, you would pass in the following string:\\n\u00a0\u00a0\u00a0 {{\"function\": \"projects\"}}\\n\u00a0\u00a0\u00a0 For more information on the Jira API, refer to https://atlassian-python-api.readthedocs.io/jira.html\\n\u00a0\u00a0\u00a0 '}, {'mode': 'create_page', 'name': 'Create confluence page', 'description': 'This tool is a wrapper around atlassian-python-api\\'s Confluence \\natlassian-python-api API, useful when you need to create a", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.jira.JiraAPIWrapper.html"} {"id": "347910d9167a-3", "text": "Confluence \\natlassian-python-api API, useful when you need to create a Confluence page. The input to this tool is a dictionary \\nspecifying the fields of the Confluence page, and will be passed into atlassian-python-api\\'s Confluence `create_page` \\nfunction. 
For example, to create a page in the DEMO space titled \"This is the title\" with body \"This is the body. You can use \\nHTML tags!\", you would pass in the following dictionary: {{\"space\": \"DEMO\", \"title\":\"This is the \\ntitle\",\"body\":\"This is the body. You can use HTML tags!\"}} '}])[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.jira.JiraAPIWrapper.html"} {"id": "347910d9167a-4", "text": "Bases: BaseModel\nWrapper for Jira API.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam confluence: Any = None\u00b6\nparam jira_api_token: Optional[str] = None\u00b6\nparam jira_instance_url: Optional[str] = None\u00b6\nparam jira_username: Optional[str] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.jira.JiraAPIWrapper.html"} {"id": "347910d9167a-5", "text": "param operations: List[Dict] = [{'mode': 'jql', 'name': 'JQL Query', 'description': '\\n\u00a0\u00a0\u00a0 This tool is a wrapper around atlassian-python-api\\'s Jira jql API, useful when you need to search for Jira issues.\\n\u00a0\u00a0\u00a0 The input to this tool is a JQL query string, and will be passed into atlassian-python-api\\'s Jira `jql` function,\\n\u00a0\u00a0\u00a0 For example, to find all the issues in project \"Test\" assigned to the me, you would pass in the following string:\\n\u00a0\u00a0\u00a0 project = Test AND assignee = currentUser()\\n\u00a0\u00a0\u00a0 or to find issues with summaries that contain the word \"test\", you would pass in the following string:\\n\u00a0\u00a0\u00a0 summary ~ \\'test\\'\\n\u00a0\u00a0\u00a0 '}, {'mode': 'get_projects', 'name': 'Get Projects', 'description': \"\\n\u00a0\u00a0\u00a0 This tool is a wrapper around atlassian-python-api's Jira project API, \\n\u00a0\u00a0\u00a0 useful when you need to fetch all the projects the user has access to, find out how many projects there are, or as an intermediary step that involv searching by projects. \\n\u00a0\u00a0\u00a0 there is no input to this tool.\\n\u00a0\u00a0\u00a0 \"}, {'mode': 'create_issue', 'name': 'Create Issue', 'description': '\\n\u00a0\u00a0\u00a0 This tool is a wrapper around atlassian-python-api\\'s Jira issue_create API, useful when you need to create a Jira issue. 
\\n\u00a0\u00a0\u00a0 The input to this tool is a dictionary specifying the fields of the Jira issue, and will be passed into atlassian-python-api\\'s Jira `issue_create` function.\\n\u00a0\u00a0\u00a0 For example, to create a low priority task called \"test issue\" with description \"test description\", you would pass in the following dictionary: \\n\u00a0\u00a0\u00a0 {{\"summary\": \"test issue\", \"description\": \"test description\", \"issuetype\": {{\"name\":", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.jira.JiraAPIWrapper.html"} {"id": "347910d9167a-6", "text": "\"test issue\", \"description\": \"test description\", \"issuetype\": {{\"name\": \"Task\"}}, \"priority\": {{\"name\": \"Low\"}}}}\\n\u00a0\u00a0\u00a0 '}, {'mode': 'other', 'name': 'Catch all Jira API call', 'description': '\\n\u00a0\u00a0\u00a0 This tool is a wrapper around atlassian-python-api\\'s Jira API.\\n\u00a0\u00a0\u00a0 There are other dedicated tools for fetching all projects, and creating and searching for issues, \\n\u00a0\u00a0\u00a0 use this tool if you need to perform any other actions allowed by the atlassian-python-api Jira API.\\n\u00a0\u00a0\u00a0 The input to this tool is a dictionary specifying a function from atlassian-python-api\\'s Jira API, \\n\u00a0\u00a0\u00a0 as well as a list of arguments and dictionary of keyword arguments to pass into the function.\\n\u00a0\u00a0\u00a0 For example, to get all the users in a group, while increasing the max number of results to 100, you would\\n\u00a0\u00a0\u00a0 pass in the following dictionary: {{\"function\": \"get_all_users_from_group\", \"args\": [\"group\"], \"kwargs\": {{\"limit\":100}} }}\\n\u00a0\u00a0\u00a0 or to find out how many projects are in the Jira instance, you would pass in the following string:\\n\u00a0\u00a0\u00a0 {{\"function\": \"projects\"}}\\n\u00a0\u00a0\u00a0 For more information on the Jira API, refer to https://atlassian-python-api.readthedocs.io/jira.html\\n\u00a0\u00a0\u00a0 '}, {'mode': 'create_page', 'name': 'Create confluence page', 'description': 'This tool is a wrapper around atlassian-python-api\\'s Confluence \\natlassian-python-api API, useful when you need to create a Confluence page. The input to this tool is a dictionary \\nspecifying the fields of the Confluence page, and will be passed into atlassian-python-api\\'s Confluence `create_page` \\nfunction. For example, to create a page in the DEMO space titled", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.jira.JiraAPIWrapper.html"} {"id": "347910d9167a-7", "text": "\\nfunction. For example, to create a page in the DEMO space titled \"This is the title\" with body \"This is the body. You can use \\nHTML tags!\", you would pass in the following dictionary: {{\"space\": \"DEMO\", \"title\":\"This is the \\ntitle\",\"body\":\"This is the body. 
You can use HTML tags!\"}} '}]\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.jira.JiraAPIWrapper.html"} {"id": "347910d9167a-8", "text": "issue_create(query: str) \u2192 str[source]\u00b6\nlist() \u2192 List[Dict][source]\u00b6\nother(query: str) \u2192 str[source]\u00b6\npage_create(query: str) \u2192 str[source]\u00b6\nparse_issues(issues: Dict) \u2192 List[dict][source]\u00b6\nparse_projects(projects: List[dict]) \u2192 List[dict][source]\u00b6\nproject() \u2192 str[source]\u00b6\nrun(mode: str, query: str) \u2192 str[source]\u00b6\nsearch(query: str) \u2192 str[source]\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that api key and python package exists in environment.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.jira.JiraAPIWrapper.html"} {"id": "6ff4f6fd6cd0-0", "text": "langchain.utilities.awslambda.LambdaWrapper\u00b6\nclass langchain.utilities.awslambda.LambdaWrapper(*, lambda_client: Any = None, function_name: Optional[str] = None, awslambda_tool_name: Optional[str] = None, awslambda_tool_description: Optional[str] = None)[source]\u00b6\nBases: BaseModel\nWrapper for AWS Lambda SDK.\nDocs for using:\npip install boto3\nCreate a lambda function using the AWS Console or CLI\nRun aws configure and enter your AWS credentials\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam awslambda_tool_description: Optional[str] = None\u00b6\nparam awslambda_tool_name: Optional[str] = None\u00b6\nparam function_name: Optional[str] = None\u00b6\nrun(query: str) \u2192 str[source]\u00b6\nInvoke Lambda function and parse result.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that python package exists in environment.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.awslambda.LambdaWrapper.html"} {"id": "41109adc88bf-0", "text": "langchain.utilities.vertexai.raise_vertex_import_error\u00b6\nlangchain.utilities.vertexai.raise_vertex_import_error() \u2192 None[source]\u00b6\nRaise ImportError related to Vertex SDK being not available.\nRaises\nImportError \u2013 an ImportError that mentions a required version of the SDK.", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.vertexai.raise_vertex_import_error.html"} {"id": "869b2ee1c7df-0", "text": "langchain.utilities.python.PythonREPL\u00b6\nclass langchain.utilities.python.PythonREPL(*, _globals: Optional[Dict] = None, _locals: Optional[Dict] = None)[source]\u00b6\nBases: BaseModel\nSimulates a standalone Python REPL.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam globals: Optional[Dict] [Optional] (alias '_globals')\u00b6\nparam locals: Optional[Dict] [Optional] (alias '_locals')\u00b6\nrun(command: str) \u2192 str[source]\u00b6\nRun command with own globals/locals and returns anything printed.", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.python.PythonREPL.html"} {"id": "bbbbcc765acb-0", "text": 
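Example. A sketch of JiraAPIWrapper.run with the jql mode from the operations list above; the credentials and instance URL are placeholders, and real values can also come from the environment variables checked by validate_environment:
from langchain.utilities.jira import JiraAPIWrapper

jira = JiraAPIWrapper(
    jira_username="me@example.com",                     # placeholder
    jira_api_token="my-api-token",                      # placeholder
    jira_instance_url="https://example.atlassian.net",  # placeholder
)
# mode selects one of the documented operations:
# jql, get_projects, create_issue, other, create_page
print(jira.run(mode="jql", query="project = Test AND assignee = currentUser()"))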
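Example. A sketch for LambdaWrapper following the setup steps above (pip install boto3, a deployed function, aws configure); the function name and tool metadata here are hypothetical:
from langchain.utilities.awslambda import LambdaWrapper

awslambda = LambdaWrapper(
    function_name="echo-function",  # hypothetical deployed Lambda
    awslambda_tool_name="echo",
    awslambda_tool_description="Echoes the input back to the caller",
)
print(awslambda.run("hello from langchain"))  # invokes the function and parses the result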
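Example. PythonREPL.run returns whatever the command prints, so the command itself must print; a minimal sketch:
from langchain.utilities.python import PythonREPL

repl = PythonREPL()
output = repl.run("x = 21 * 2\nprint(x)")  # executed with the REPL's own globals/locals
print(output)  # "42" plus a trailing newline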
"langchain.utilities.serpapi.SerpAPIWrapper\u00b6\nclass langchain.utilities.serpapi.SerpAPIWrapper(*, search_engine: Any = None, params: dict = {'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'}, serpapi_api_key: Optional[str] = None, aiosession: Optional[ClientSession] = None)[source]\u00b6\nBases: BaseModel\nWrapper around SerpAPI.\nTo use, you should have the google-search-results python package installed,\nand the environment variable SERPAPI_API_KEY set with your API key, or pass\nserpapi_api_key as a named parameter to the constructor.\nExample\nfrom langchain.utilities import SerpAPIWrapper\nserpapi = SerpAPIWrapper()\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam aiosession: Optional[aiohttp.client.ClientSession] = None\u00b6\nparam params: dict = {'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'}\u00b6\nparam serpapi_api_key: Optional[str] = None\u00b6\nasync aresults(query: str) \u2192 dict[source]\u00b6\nUse aiohttp to run query through SerpAPI and return the results async.\nasync arun(query: str, **kwargs: Any) \u2192 str[source]\u00b6\nRun query through SerpAPI and parse result async.\nget_params(query: str) \u2192 Dict[str, str][source]\u00b6\nGet parameters for SerpAPI.\nresults(query: str) \u2192 dict[source]\u00b6\nRun query through SerpAPI and return the raw result.\nrun(query: str, **kwargs: Any) \u2192 str[source]\u00b6\nRun query through SerpAPI and parse result.", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.serpapi.SerpAPIWrapper.html"} {"id": "bbbbcc765acb-1", "text": "Run query through SerpAPI and parse result.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that api key and python package exists in environment.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.serpapi.SerpAPIWrapper.html"} {"id": "e80059041bc2-0", "text": "langchain.utilities.powerbi.fix_table_name\u00b6\nlangchain.utilities.powerbi.fix_table_name(table: str) \u2192 str[source]\u00b6\nAdd single quotes around table names that contain spaces.", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.powerbi.fix_table_name.html"} {"id": "6e542e04b457-0", "text": "langchain.utilities.bibtex.BibtexparserWrapper\u00b6\nclass langchain.utilities.bibtex.BibtexparserWrapper[source]\u00b6\nBases: BaseModel\nWrapper around bibtexparser.\nTo use, you should have the bibtexparser python package installed.\nhttps://bibtexparser.readthedocs.io/en/master/\nThis wrapper will use bibtexparser to load a collection of references from\na bibtex file and fetch document summaries.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nget_metadata(entry: Mapping[str, Any], load_extra: bool = False) \u2192 Dict[str, Any][source]\u00b6\nGet metadata for the given entry.\nload_bibtex_entries(path: str) \u2192 List[Dict[str, Any]][source]\u00b6\nLoad bibtex entries from the bibtex file at the given path.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the python package exists in environment.\nmodel 
Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.bibtex.BibtexparserWrapper.html"} {"id": "ea21d0cbb160-0", "text": "langchain.utilities.wolfram_alpha.WolframAlphaAPIWrapper\u00b6\nclass langchain.utilities.wolfram_alpha.WolframAlphaAPIWrapper(*, wolfram_client: Any = None, wolfram_alpha_appid: Optional[str] = None)[source]\u00b6\nBases: BaseModel\nWrapper for Wolfram Alpha.\nDocs for using:\nGo to wolfram alpha and sign up for a developer account\nCreate an app and get your APP ID\nSave your APP ID into WOLFRAM_ALPHA_APPID env variable\npip install wolframalpha\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam wolfram_alpha_appid: Optional[str] = None\u00b6\nrun(query: str) \u2192 str[source]\u00b6\nRun query through WolframAlpha and parse result.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that api key and python package exists in environment.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.wolfram_alpha.WolframAlphaAPIWrapper.html"} {"id": "1585775f0b57-0", "text": "langchain.utilities.brave_search.BraveSearchWrapper\u00b6\nclass langchain.utilities.brave_search.BraveSearchWrapper(*, api_key: str, search_kwargs: dict = None, base_url: str = 'https://api.search.brave.com/res/v1/web/search')[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_key: str [Required]\u00b6\nparam search_kwargs: dict [Optional]\u00b6\ndownload_documents(query: str) \u2192 List[Document][source]\u00b6\nQuery the Brave search engine and return the results as a list of Documents.\nParameters\nquery \u2013 The query to search for.\nReturns: The results as a list of Documents.\nrun(query: str) \u2192 str[source]\u00b6\nQuery the Brave search engine and return the results as a JSON string.\nParameters\nquery \u2013 The query to search for.\nReturns: The results as a JSON string.", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.brave_search.BraveSearchWrapper.html"} {"id": "9e56aa6a791f-0", "text": "langchain.utilities.zapier.ZapierNLAWrapper\u00b6\nclass langchain.utilities.zapier.ZapierNLAWrapper(*, zapier_nla_api_key: str, zapier_nla_oauth_access_token: str, zapier_nla_api_base: str = 'https://nla.zapier.com/api/v1/')[source]\u00b6\nBases: BaseModel\nWrapper for Zapier NLA.\nFull docs here: https://nla.zapier.com/start/\nThis wrapper supports both API Key and OAuth Credential auth methods. API Key\nis the fastest way to get started using this wrapper.\nCall this wrapper with either zapier_nla_api_key or\nzapier_nla_oauth_access_token arguments, or set the ZAPIER_NLA_API_KEY\nenvironment variable. If both arguments are set, the Access Token will take\nprecedence.\nFor use-cases where LangChain + Zapier NLA is powering a user-facing application,\nand LangChain needs access to the end-user\u2019s connected accounts on Zapier.com,\nyou\u2019ll need to use OAuth. 
Review the full docs above to learn how to create\nyour own provider and generate credentials.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam zapier_nla_api_base: str = 'https://nla.zapier.com/api/v1/'\u00b6\nparam zapier_nla_api_key: str [Required]\u00b6\nparam zapier_nla_oauth_access_token: str [Required]\u00b6\nasync alist() \u2192 List[Dict][source]\u00b6\nReturns a list of all exposed (enabled) actions associated with the\ncurrent user (associated with the set api_key). Change your exposed\nactions here: https://nla.zapier.com/demo/start/\nThe return list can be empty if no actions are exposed. Otherwise it will contain", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.zapier.ZapierNLAWrapper.html"} {"id": "9e56aa6a791f-1", "text": "The return list can be empty if no actions are exposed. Otherwise it will contain\na list of action objects:\n[{\u201cid\u201d: str,\n\u201cdescription\u201d: str,\n\u201cparams\u201d: Dict[str, str]\n}]\nparams will always contain an instructions key, the only required\nparam. All others are optional and, if provided, will override any AI guesses\n(see \u201cunderstanding the AI guessing flow\u201d here:\nhttps://nla.zapier.com/api/v1/docs)\nasync alist_as_str() \u2192 str[source]\u00b6\nSame as list, but returns a stringified version of the JSON for\ninserting back into an LLM.\nasync apreview(action_id: str, instructions: str, params: Optional[Dict] = None) \u2192 Dict[source]\u00b6\nSame as run, but instead of actually executing the action, returns\na preview of the params that have been guessed by the AI, in\ncase you need to explicitly review before executing.\nasync apreview_as_str(*args, **kwargs) \u2192 str[source]\u00b6\nSame as preview, but returns a stringified version of the JSON for\ninserting back into an LLM.\nasync arun(action_id: str, instructions: str, params: Optional[Dict] = None) \u2192 Dict[source]\u00b6\nExecutes an action that is identified by action_id and must be exposed\n(enabled) by the current user (associated with the set api_key). Change\nyour exposed actions here: https://nla.zapier.com/demo/start/\nThe return JSON is guaranteed to be less than ~500 words (350\ntokens), making it safe to inject into the prompt of another LLM\ncall.\nasync arun_as_str(*args, **kwargs) \u2192 str[source]\u00b6\nSame as run, but returns a stringified version of the JSON for\ninserting back into an LLM.", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.zapier.ZapierNLAWrapper.html"} {"id": "9e56aa6a791f-2", "text": "inserting back into an LLM.\nlist() \u2192 List[Dict][source]\u00b6\nReturns a list of all exposed (enabled) actions associated with the\ncurrent user (associated with the set api_key). Change your exposed\nactions here: https://nla.zapier.com/demo/start/\nThe return list can be empty if no actions are exposed. Otherwise it will contain\na list of action objects:\n[{\u201cid\u201d: str,\n\u201cdescription\u201d: str,\n\u201cparams\u201d: Dict[str, str]\n}]\nparams will always contain an instructions key, the only required\nparam. 
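Example. A sketch that lists the exposed actions and previews one, using the list and preview signatures above; the API key is a placeholder, and actions must first be enabled on the Zapier NLA dashboard:
from langchain.utilities.zapier import ZapierNLAWrapper

zapier = ZapierNLAWrapper(zapier_nla_api_key="sk-...")  # placeholder key
actions = zapier.list()
for action in actions:
    print(action["id"], action["description"])  # keys per the list() docstring
if actions:
    # preview shows the AI-guessed params without executing the action
    print(zapier.preview(actions[0]["id"], instructions="Say hello to the team"))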
All others are optional and, if provided, will override any AI guesses\n(see \u201cunderstanding the AI guessing flow\u201d here:\nhttps://nla.zapier.com/docs/using-the-api#ai-guessing)\nlist_as_str() \u2192 str[source]\u00b6\nSame as list, but returns a stringified version of the JSON for\ninserting back into an LLM.\npreview(action_id: str, instructions: str, params: Optional[Dict] = None) \u2192 Dict[source]\u00b6\nSame as run, but instead of actually executing the action, returns\na preview of the params that have been guessed by the AI, in\ncase you need to explicitly review before executing.\npreview_as_str(*args, **kwargs) \u2192 str[source]\u00b6\nSame as preview, but returns a stringified version of the JSON for\ninserting back into an LLM.\nrun(action_id: str, instructions: str, params: Optional[Dict] = None) \u2192 Dict[source]\u00b6\nExecutes an action that is identified by action_id and must be exposed\n(enabled) by the current user (associated with the set api_key). Change\nyour exposed actions here: https://nla.zapier.com/demo/start/\nThe return JSON is guaranteed to be less than ~500 words (350", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.zapier.ZapierNLAWrapper.html"} {"id": "9e56aa6a791f-3", "text": "The return JSON is guaranteed to be less than ~500 words (350\ntokens), making it safe to inject into the prompt of another LLM\ncall.\nrun_as_str(*args, **kwargs) \u2192 str[source]\u00b6\nSame as run, but returns a stringified version of the JSON for\ninserting back into an LLM.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that api key exists in environment.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.zapier.ZapierNLAWrapper.html"} {"id": "6169a8387afd-0", "text": "langchain.utilities.scenexplain.SceneXplainAPIWrapper\u00b6\nclass langchain.utilities.scenexplain.SceneXplainAPIWrapper(_env_file: Optional[Union[str, PathLike, List[Union[str, PathLike]], Tuple[Union[str, PathLike], ...]]] = '', _env_file_encoding: Optional[str] = None, _env_nested_delimiter: Optional[str] = None, _secrets_dir: Optional[Union[str, PathLike]] = None, *, scenex_api_key: str, scenex_api_url: str = 'https://api.scenex.jina.ai/v1/describe')[source]\u00b6\nBases: BaseSettings, BaseModel\nWrapper for SceneXplain API.\nIn order to set this up, you need an API key for the SceneXplain API.\nYou can obtain a key by following the steps below.\n- Sign up for a free account at https://scenex.jina.ai/.\n- Navigate to the API Access page (https://scenex.jina.ai/api)\nand create a new API key.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam scenex_api_key: str [Required]\u00b6\nparam scenex_api_url: str = 'https://api.scenex.jina.ai/v1/describe'\u00b6\nrun(image: str) \u2192 str[source]\u00b6\nRun SceneXplain image explainer.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that api key exists in environment.\nmodel Config\u00b6\nBases: BaseConfig\ngetter_dict\u00b6\nalias of GetterDict", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.scenexplain.SceneXplainAPIWrapper.html"} {"id": "6169a8387afd-1", "text": "model Config\u00b6\nBases: BaseConfig\ngetter_dict\u00b6\nalias of 
GetterDict\nclassmethod customise_sources(init_settings: Callable[[BaseSettings], Dict[str, Any]], env_settings: Callable[[BaseSettings], Dict[str, Any]], file_secret_settings: Callable[[BaseSettings], Dict[str, Any]]) \u2192 Tuple[Callable[[BaseSettings], Dict[str, Any]], ...]\u00b6\nclassmethod get_field_info(name: unicode) \u2192 Dict[str, Any]\u00b6\nGet properties of FieldInfo from the fields property of the config class.\njson_dumps(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, cls=None, indent=None, separators=None, default=None, sort_keys=False, **kw)\u00b6\nSerialize obj to a JSON formatted str.\nIf skipkeys is true then dict keys that are not basic types\n(str, int, float, bool, None) will be skipped\ninstead of raising a TypeError.\nIf ensure_ascii is false, then the return value can contain non-ASCII\ncharacters if they appear in strings contained in obj. Otherwise, all\nsuch characters are escaped in JSON strings.\nIf check_circular is false, then the circular reference check\nfor container types will be skipped and a circular reference will\nresult in an RecursionError (or worse).\nIf allow_nan is false, then it will be a ValueError to\nserialize out of range float values (nan, inf, -inf) in\nstrict compliance of the JSON specification, instead of using the\nJavaScript equivalents (NaN, Infinity, -Infinity).\nIf indent is a non-negative integer, then JSON array elements and\nobject members will be pretty-printed with that indent level. An indent\nlevel of 0 will only insert newlines. None is the most compact\nrepresentation.\nIf specified, separators should be an (item_separator, key_separator)", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.scenexplain.SceneXplainAPIWrapper.html"} {"id": "6169a8387afd-2", "text": "representation.\nIf specified, separators should be an (item_separator, key_separator)\ntuple. The default is (', ', ': ') if indent is None and\n(',', ': ') otherwise. To get the most compact JSON representation,\nyou should specify (',', ':') to eliminate whitespace.\ndefault(obj) is a function that should return a serializable version\nof obj or raise TypeError. The default simply raises TypeError.\nIf sort_keys is true (default: False), then the output of\ndictionaries will be sorted by key.\nTo use a custom JSONEncoder subclass (e.g. one that overrides the\n.default() method to serialize additional types), specify it with\nthe cls kwarg; otherwise JSONEncoder is used.\njson_loads(*, cls=None, object_hook=None, parse_float=None, parse_int=None, parse_constant=None, object_pairs_hook=None, **kw)\u00b6\nDeserialize s (a str, bytes or bytearray instance\ncontaining a JSON document) to a Python object.\nobject_hook is an optional function that will be called with the\nresult of any object literal decode (a dict). The return value of\nobject_hook will be used instead of the dict. This feature\ncan be used to implement custom decoders (e.g. JSON-RPC class hinting).\nobject_pairs_hook is an optional function that will be called with the\nresult of any object literal decoded with an ordered list of pairs. The\nreturn value of object_pairs_hook will be used instead of the dict.\nThis feature can be used to implement custom decoders. If object_hook\nis also defined, the object_pairs_hook takes priority.\nparse_float, if specified, will be called with the string\nof every JSON float to be decoded. By default this is equivalent to\nfloat(num_str). 
This can be used to use another datatype or parser\nfor JSON floats (e.g. decimal.Decimal).", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.scenexplain.SceneXplainAPIWrapper.html"} {"id": "6169a8387afd-3", "text": "for JSON floats (e.g. decimal.Decimal).\nparse_int, if specified, will be called with the string\nof every JSON int to be decoded. By default this is equivalent to\nint(num_str). This can be used to use another datatype or parser\nfor JSON integers (e.g. float).\nparse_constant, if specified, will be called with one of the\nfollowing strings: -Infinity, Infinity, NaN.\nThis can be used to raise an exception if invalid JSON numbers\nare encountered.\nTo use a custom JSONDecoder subclass, specify it with the cls\nkwarg; otherwise JSONDecoder is used.\nclassmethod parse_env_var(field_name: unicode, raw_val: unicode) \u2192 Any\u00b6\nclassmethod prepare_field(field: ModelField) \u2192 None\u00b6\nOptional hook to check or modify fields during model creation.\nalias_generator = None\u00b6\nallow_inf_nan = True\u00b6\nallow_mutation = True\u00b6\nallow_population_by_field_name = False\u00b6\nanystr_lower = False\u00b6\nanystr_strip_whitespace = False\u00b6\nanystr_upper = False\u00b6\narbitrary_types_allowed = True\u00b6\ncase_sensitive = False\u00b6\ncopy_on_model_validation = 'shallow'\u00b6\nenv_file = None\u00b6\nenv_file_encoding = None\u00b6\nenv_nested_delimiter = None\u00b6\nenv_prefix = ''\u00b6\nerror_msg_templates = {}\u00b6\nextra = 'forbid'\u00b6\nfields = {}\u00b6\nfrozen = False\u00b6\njson_encoders = {}\u00b6\nkeep_untouched = ()\u00b6\nmax_anystr_length = None\u00b6\nmin_anystr_length = 0\u00b6\norm_mode = False\u00b6\npost_init_call = 'before_validation'\u00b6\nschema_extra = {}\u00b6\nsecrets_dir = None\u00b6\nsmart_union = False\u00b6\ntitle = None\u00b6\nunderscore_attrs_are_private = False\u00b6\nuse_enum_values = False\u00b6\nvalidate_all = True\u00b6\nvalidate_assignment = False\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.scenexplain.SceneXplainAPIWrapper.html"} {"id": "8ab890f16e3b-0", "text": "langchain.utilities.wikipedia.WikipediaAPIWrapper\u00b6\nclass langchain.utilities.wikipedia.WikipediaAPIWrapper(*, wiki_client: Any = None, top_k_results: int = 3, lang: str = 'en', load_all_available_meta: bool = False, doc_content_chars_max: int = 4000)[source]\u00b6\nBases: BaseModel\nWrapper around WikipediaAPI.\nTo use, you should have the wikipedia python package installed.\nThis wrapper will use the Wikipedia API to conduct searches and\nfetch page summaries. 
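Example. A minimal sketch for the WikipediaAPIWrapper described here, assuming the wikipedia python package is installed; the query is illustrative:
from langchain.utilities.wikipedia import WikipediaAPIWrapper

wikipedia = WikipediaAPIWrapper(top_k_results=2, doc_content_chars_max=1000)
print(wikipedia.run("Alan Turing"))   # page summaries of the top-k results
docs = wikipedia.load("Alan Turing")  # Documents with article text plus metadata
print(len(docs))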
By default, it will return the page summaries\nof the top-k results.\nIt limits the Document content by doc_content_chars_max.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam doc_content_chars_max: int = 4000\u00b6\nparam lang: str = 'en'\u00b6\nparam load_all_available_meta: bool = False\u00b6\nparam top_k_results: int = 3\u00b6\nload(query: str) \u2192 List[Document][source]\u00b6\nRun Wikipedia search and get the article text plus the meta information.\nReturns: a list of documents.\nrun(query: str) \u2192 str[source]\u00b6\nRun Wikipedia search and get page summaries.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the python package exists in environment.", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.wikipedia.WikipediaAPIWrapper.html"} {"id": "9ab87142e8c8-0", "text": "langchain.utilities.powerbi.PowerBIDataset\u00b6\nclass langchain.utilities.powerbi.PowerBIDataset(*, dataset_id: str, table_names: List[str], group_id: Optional[str] = None, credential: Optional[TokenCredential] = None, token: Optional[str] = None, impersonated_user_name: Optional[str] = None, sample_rows_in_table_info: ConstrainedIntValue = 1, schemas: Dict[str, str] = None, aiosession: Optional[ClientSession] = None)[source]\u00b6\nBases: BaseModel\nCreate PowerBI engine from dataset ID and credential or token.\nUse either the credential or a supplied token to authenticate.\nIf both are supplied, the credential is used to generate a token.\nThe impersonated_user_name is the UPN of a user to be impersonated.\nIf the model is not RLS enabled, this will be ignored.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam aiosession: Optional[aiohttp.ClientSession] = None\u00b6\nparam credential: Optional[TokenCredential] = None\u00b6\nparam dataset_id: str [Required]\u00b6\nparam group_id: Optional[str] = None\u00b6\nparam impersonated_user_name: Optional[str] = None\u00b6\nparam sample_rows_in_table_info: int = 1\u00b6\nConstraints\nexclusiveMinimum = 0\nmaximum = 10\nparam schemas: Dict[str, str] [Optional]\u00b6\nparam table_names: List[str] [Required]\u00b6\nparam token: Optional[str] = None\u00b6\nasync aget_table_info(table_names: Optional[Union[List[str], str]] = None) \u2192 str[source]\u00b6\nGet information about specified tables.\nasync arun(command: str) \u2192 Any[source]\u00b6\nExecute a DAX command and return the result asynchronously.", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.powerbi.PowerBIDataset.html"} {"id": "9ab87142e8c8-1", "text": "Execute a DAX command and return the result asynchronously.\nvalidator fix_table_names\u00a0 \u00bb\u00a0 table_names[source]\u00b6\nFix the table names.\nget_schemas() \u2192 str[source]\u00b6\nGet the available schemas.\nget_table_info(table_names: Optional[Union[List[str], str]] = None) \u2192 str[source]\u00b6\nGet information about specified tables.\nget_table_names() \u2192 Iterable[str][source]\u00b6\nGet names of tables available.\nrun(command: str) \u2192 Any[source]\u00b6\nExecute a DAX command and return JSON representing the results.\nvalidator token_or_credential_present\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that at least one of token and credentials is present.\nproperty 
headers: Dict[str, str]\u00b6\nGet the token.\nproperty request_url: str\u00b6\nGet the request url.\nproperty table_info: str\u00b6\nInformation about all tables in the database.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.powerbi.PowerBIDataset.html"} {"id": "add783ec475b-0", "text": "langchain.utilities.twilio.TwilioAPIWrapper\u00b6\nclass langchain.utilities.twilio.TwilioAPIWrapper(*, client: Any = None, account_sid: Optional[str] = None, auth_token: Optional[str] = None, from_number: Optional[str] = None)[source]\u00b6\nBases: BaseModel\nMessaging Client using Twilio.\nTo use, you should have the twilio python package installed,\nand the environment variables TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN, and\nTWILIO_FROM_NUMBER, or pass account_sid, auth_token, and from_number as\nnamed parameters to the constructor.\nExample\nfrom langchain.utilities.twilio import TwilioAPIWrapper\ntwilio = TwilioAPIWrapper(\n account_sid=\"ACxxx\",\n auth_token=\"xxx\",\n from_number=\"+10123456789\"\n)\ntwilio.run('test', '+12484345508')\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam account_sid: Optional[str] = None\u00b6\nTwilio account string identifier.\nparam auth_token: Optional[str] = None\u00b6\nTwilio auth token.\nparam from_number: Optional[str] = None\u00b6\nA Twilio phone number in [E.164](https://www.twilio.com/docs/glossary/what-e164)\nformat, an\n[alphanumeric sender ID](https://www.twilio.com/docs/sms/send-messages#use-an-alphanumeric-sender-id),\nor a [Channel Endpoint address](https://www.twilio.com/docs/sms/channels#channel-addresses)\nthat is enabled for the type of message you want to send. Phone numbers or\n[short codes](https://www.twilio.com/docs/sms/api/short-code) purchased from", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.twilio.TwilioAPIWrapper.html"} {"id": "add783ec475b-1", "text": "Twilio also work here. You cannot, for example, spoof messages from a private\ncell phone number. If you are using messaging_service_sid, this parameter\nmust be empty.\nrun(body: str, to: str) \u2192 str[source]\u00b6\nRun body through Twilio and respond with message sid.\nParameters\nbody \u2013 The text of the message you want to send. 
Can be up to 1,600\ncharacters in length.\nto \u2013 The destination phone number in\n[E.164](https://www.twilio.com/docs/glossary/what-e164) format for\nSMS/MMS or\n[Channel user address](https://www.twilio.com/docs/sms/channels#channel-addresses)\nfor other 3rd-party channels.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that api key and python package exists in environment.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = False\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.twilio.TwilioAPIWrapper.html"} {"id": "f7c588c41a05-0", "text": "langchain.utilities.dataforseo_api_search.DataForSeoAPIWrapper\u00b6\nclass langchain.utilities.dataforseo_api_search.DataForSeoAPIWrapper(*, default_params: dict = {'depth': 10, 'language_code': 'en', 'location_name': 'United States', 'se_name': 'google', 'se_type': 'organic'}, params: dict = {}, api_login: Optional[str] = None, api_password: Optional[str] = None, json_result_types: Optional[list] = None, json_result_fields: Optional[list] = None, top_count: Optional[int] = None, aiosession: Optional[ClientSession] = None)[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam aiosession: Optional[aiohttp.client.ClientSession] = None\u00b6\nparam api_login: Optional[str] = None\u00b6\nparam api_password: Optional[str] = None\u00b6\nparam default_params: dict = {'depth': 10, 'language_code': 'en', 'location_name': 'United States', 'se_name': 'google', 'se_type': 'organic'}\u00b6\nparam json_result_fields: Optional[list] = None\u00b6\nparam json_result_types: Optional[list] = None\u00b6\nparam params: dict = {}\u00b6\nparam top_count: Optional[int] = None\u00b6\nasync aresults(url: str) \u2192 list[source]\u00b6\nasync arun(url: str) \u2192 str[source]\u00b6\nRun request to DataForSEO SERP API and parse result async.\nresults(url: str) \u2192 list[source]\u00b6\nrun(url: str) \u2192 str[source]\u00b6\nRun request to DataForSEO SERP API and parse result.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.dataforseo_api_search.DataForSeoAPIWrapper.html"} {"id": "f7c588c41a05-1", "text": "validator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that login and password exists in environment.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.dataforseo_api_search.DataForSeoAPIWrapper.html"} {"id": "b72c3d716d73-0", "text": "langchain.utilities.arxiv.ArxivAPIWrapper\u00b6\nclass langchain.utilities.arxiv.ArxivAPIWrapper(*, arxiv_search: Any = None, arxiv_exceptions: Any = None, top_k_results: int = 3, load_max_docs: int = 100, load_all_available_meta: bool = False, doc_content_chars_max: Optional[int] = 4000, ARXIV_MAX_QUERY_LENGTH: int = 300)[source]\u00b6\nBases: BaseModel\nWrapper around ArxivAPI.\nTo use, you should have the arxiv python package installed.\nhttps://lukasschwab.me/arxiv.py/index.html\nThis wrapper will use the Arxiv API to conduct searches and\nfetch document summaries. 
By default, it will return the document summaries\nof the top-k results.\nIt limits the Document content by doc_content_chars_max.\nSet doc_content_chars_max=None if you don\u2019t want to limit the content size.\nParameters\ntop_k_results \u2013 number of the top-scored document used for the arxiv tool\nARXIV_MAX_QUERY_LENGTH \u2013 the cut limit on the query used for the arxiv tool.\nload_max_docs \u2013 a limit to the number of loaded documents\nload_all_available_meta \u2013 \nif True: the metadata of the loaded Documents gets all available meta info(see https://lukasschwab.me/arxiv.py/index.html#Result),\nif False: the metadata gets only the most informative fields.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam arxiv_exceptions: Any = None\u00b6\nparam doc_content_chars_max: Optional[int] = 4000\u00b6\nparam load_all_available_meta: bool = False\u00b6\nparam load_max_docs: int = 100\u00b6\nparam top_k_results: int = 3\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.arxiv.ArxivAPIWrapper.html"} {"id": "b72c3d716d73-1", "text": "param top_k_results: int = 3\u00b6\nload(query: str) \u2192 List[Document][source]\u00b6\nRun Arxiv search and get the article texts plus the article meta information.\nSee https://lukasschwab.me/arxiv.py/index.html#Search\nReturns: a list of documents with the document.page_content in text format\nrun(query: str) \u2192 str[source]\u00b6\nRun Arxiv search and get the article meta information.\nSee https://lukasschwab.me/arxiv.py/index.html#Search\nSee https://lukasschwab.me/arxiv.py/index.html#Result\nIt uses only the most informative fields of article meta information.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the python package exists in environment.", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.arxiv.ArxivAPIWrapper.html"} {"id": "35f789c8a467-0", "text": "langchain.utilities.bing_search.BingSearchAPIWrapper\u00b6\nclass langchain.utilities.bing_search.BingSearchAPIWrapper(*, bing_subscription_key: str, bing_search_url: str, k: int = 10)[source]\u00b6\nBases: BaseModel\nWrapper for Bing Search API.\nIn order to set this up, follow instructions at:\nhttps://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam bing_search_url: str [Required]\u00b6\nparam bing_subscription_key: str [Required]\u00b6\nparam k: int = 10\u00b6\nresults(query: str, num_results: int) \u2192 List[Dict][source]\u00b6\nRun query through BingSearch and return metadata.\nParameters\nquery \u2013 The query to search for.\nnum_results \u2013 The number of results to return.\nReturns\nsnippet - The description of the result.\ntitle - The title of the result.\nlink - The link to the result.\nReturn type\nA list of dictionaries with the following keys\nrun(query: str) \u2192 str[source]\u00b6\nRun query through BingSearch and parse result.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that api key and endpoint exists in environment.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": 
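Example. A minimal sketch for ArxivAPIWrapper, assuming the arxiv python package is installed; the query is illustrative:
from langchain.utilities.arxiv import ArxivAPIWrapper

arxiv = ArxivAPIWrapper(top_k_results=2, doc_content_chars_max=None)  # None = no content cap
print(arxiv.run("graph neural networks"))   # meta information of the top results
docs = arxiv.load("graph neural networks")  # article texts as Documents
print(len(docs))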
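Example. A sketch for BingSearchAPIWrapper; the subscription key is a placeholder, and the endpoint shown is the commonly used Bing Web Search v7 URL, which should be checked against your own Azure resource:
from langchain.utilities.bing_search import BingSearchAPIWrapper

bing = BingSearchAPIWrapper(
    bing_subscription_key="placeholder-key",
    bing_search_url="https://api.bing.microsoft.com/v7.0/search",  # verify for your resource
)
print(bing.run("langchain"))
for item in bing.results("langchain", num_results=3):
    print(item["title"], item["link"])  # keys per the results() docstring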
"https://api.python.langchain.com/en/latest/utilities/langchain.utilities.bing_search.BingSearchAPIWrapper.html"} {"id": "6114432dbf24-0", "text": "langchain.utilities.vertexai.init_vertexai\u00b6\nlangchain.utilities.vertexai.init_vertexai(project: Optional[str] = None, location: Optional[str] = None, credentials: Optional[Credentials] = None) \u2192 None[source]\u00b6\nInit vertexai.\nParameters\nproject \u2013 The default GCP project to use when making Vertex API calls.\nlocation \u2013 The default location to use when making API calls.\ncredentials \u2013 The default custom\ncredentials to use when making API calls. If not provided credentials\nwill be ascertained from the environment.\nRaises\nImportError \u2013 If importing vertexai SDK did not succeed.", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.vertexai.init_vertexai.html"} {"id": "7ac4aed31bdd-0", "text": "langchain.utilities.metaphor_search.MetaphorSearchAPIWrapper\u00b6\nclass langchain.utilities.metaphor_search.MetaphorSearchAPIWrapper(*, metaphor_api_key: str, k: int = 10)[source]\u00b6\nBases: BaseModel\nWrapper for Metaphor Search API.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam k: int = 10\u00b6\nparam metaphor_api_key: str [Required]\u00b6\nresults(query: str, num_results: int, include_domains: Optional[List[str]] = None, exclude_domains: Optional[List[str]] = None, start_crawl_date: Optional[str] = None, end_crawl_date: Optional[str] = None, start_published_date: Optional[str] = None, end_published_date: Optional[str] = None) \u2192 List[Dict][source]\u00b6\nRun query through Metaphor Search and return metadata.\nParameters\nquery \u2013 The query to search for.\nnum_results \u2013 The number of results to return.\nReturns\ntitle - The title of the\nurl - The url\nauthor - Author of the content, if applicable. Otherwise, None.\npublished_date - Estimated date published\nin YYYY-MM-DD format. 
Otherwise, None.\nReturn type\nA list of dictionaries with the following keys\nasync results_async(query: str, num_results: int, include_domains: Optional[List[str]] = None, exclude_domains: Optional[List[str]] = None, start_crawl_date: Optional[str] = None, end_crawl_date: Optional[str] = None, start_published_date: Optional[str] = None, end_published_date: Optional[str] = None) \u2192 List[Dict][source]\u00b6\nGet results from the Metaphor Search API asynchronously.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.metaphor_search.MetaphorSearchAPIWrapper.html"} {"id": "7ac4aed31bdd-1", "text": "validator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that api key and endpoint exists in environment.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.metaphor_search.MetaphorSearchAPIWrapper.html"} {"id": "405beb3265e5-0", "text": "langchain.utilities.pupmed.PubMedAPIWrapper\u00b6\nclass langchain.utilities.pupmed.PubMedAPIWrapper(*, top_k_results: int = 3, load_max_docs: int = 25, doc_content_chars_max: int = 2000, load_all_available_meta: bool = False, email: str = 'your_email@example.com', base_url_esearch: str = 'https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?', base_url_efetch: str = 'https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?', max_retry: int = 5, sleep_time: float = 0.2, ARXIV_MAX_QUERY_LENGTH: int = 300)[source]\u00b6\nBases: BaseModel\nWrapper around PubMed API.\nThis wrapper will use the PubMed API to conduct searches and fetch\ndocument summaries. 
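Example. A minimal sketch for the PubMedAPIWrapper described here; note that the module path really is langchain.utilities.pupmed in this version, and the email value is a placeholder:
from langchain.utilities.pupmed import PubMedAPIWrapper

pubmed = PubMedAPIWrapper(top_k_results=3, email="me@example.com")  # placeholder email
print(pubmed.run("messenger RNA vaccine"))  # meta information of the top results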
By default, it will return the document summaries\nof the top-k results of an input search.\nParameters\ntop_k_results \u2013 number of the top-scored document used for the PubMed tool\nload_max_docs \u2013 a limit to the number of loaded documents\nload_all_available_meta \u2013 \nif True: the metadata of the loaded Documents gets all available meta info(see https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch)\nif False: the metadata gets only the most informative fields.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam doc_content_chars_max: int = 2000\u00b6\nparam email: str = 'your_email@example.com'\u00b6\nparam load_all_available_meta: bool = False\u00b6\nparam load_max_docs: int = 25\u00b6\nparam top_k_results: int = 3\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.pupmed.PubMedAPIWrapper.html"} {"id": "405beb3265e5-1", "text": "param top_k_results: int = 3\u00b6\nload(query: str) \u2192 List[dict][source]\u00b6\nSearch PubMed for documents matching the query.\nReturn a list of dictionaries containing the document metadata.\nload_docs(query: str) \u2192 List[Document][source]\u00b6\nretrieve_article(uid: str, webenv: str) \u2192 dict[source]\u00b6\nrun(query: str) \u2192 str[source]\u00b6\nRun PubMed search and get the article meta information.\nSee https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch\nIt uses only the most informative fields of article meta information.", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.pupmed.PubMedAPIWrapper.html"} {"id": "39e5c8cb87a6-0", "text": "langchain.utilities.powerbi.json_to_md\u00b6\nlangchain.utilities.powerbi.json_to_md(json_contents: List[Dict[str, Union[str, int, float]]], table_name: Optional[str] = None) \u2192 str[source]\u00b6\nConverts a JSON object to a markdown table.", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.powerbi.json_to_md.html"} {"id": "d78cfad8b9e4-0", "text": "langchain.utilities.openapi.HTTPVerb\u00b6\nclass langchain.utilities.openapi.HTTPVerb(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]\u00b6\nBases: str, Enum\nEnumerator of the HTTP verbs.\nMethods\nfrom_str(verb)\nParse an HTTP verb.\n__init__(*args,\u00a0**kwds)\ncapitalize()\nReturn a capitalized version of the string.\ncasefold()\nReturn a version of the string suitable for caseless comparisons.\ncenter(width[,\u00a0fillchar])\nReturn a centered string of length width.\ncount(sub[,\u00a0start[,\u00a0end]])\nReturn the number of non-overlapping occurrences of substring sub in string S[start:end].\nencode([encoding,\u00a0errors])\nEncode the string using the codec registered for encoding.\nendswith(suffix[,\u00a0start[,\u00a0end]])\nReturn True if S ends with the specified suffix, False otherwise.\nexpandtabs([tabsize])\nReturn a copy where all tab characters are expanded using spaces.\nfind(sub[,\u00a0start[,\u00a0end]])\nReturn the lowest index in S where substring sub is found, such that sub is contained within S[start:end].\nformat(*args,\u00a0**kwargs)\nReturn a formatted version of S, using substitutions from args and kwargs.\nformat_map(mapping)\nReturn a formatted version of S, using substitutions from mapping.\nindex(sub[,\u00a0start[,\u00a0end]])\nReturn the lowest index in S where substring sub is found, such that sub is contained within 
S[start:end].\nisalnum()\nReturn True if the string is an alpha-numeric string, False otherwise.\nisalpha()\nReturn True if the string is an alphabetic string, False otherwise.\nisascii()\nReturn True if all characters in the string are ASCII, False otherwise.\nisdecimal()", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.openapi.HTTPVerb.html"} {"id": "d78cfad8b9e4-1", "text": "Return True if all characters in the string are ASCII, False otherwise.\nisdecimal()\nReturn True if the string is a decimal string, False otherwise.\nisdigit()\nReturn True if the string is a digit string, False otherwise.\nisidentifier()\nReturn True if the string is a valid Python identifier, False otherwise.\nislower()\nReturn True if the string is a lowercase string, False otherwise.\nisnumeric()\nReturn True if the string is a numeric string, False otherwise.\nisprintable()\nReturn True if the string is printable, False otherwise.\nisspace()\nReturn True if the string is a whitespace string, False otherwise.\nistitle()\nReturn True if the string is a title-cased string, False otherwise.\nisupper()\nReturn True if the string is an uppercase string, False otherwise.\njoin(iterable,\u00a0/)\nConcatenate any number of strings.\nljust(width[,\u00a0fillchar])\nReturn a left-justified string of length width.\nlower()\nReturn a copy of the string converted to lowercase.\nlstrip([chars])\nReturn a copy of the string with leading whitespace removed.\nmaketrans\nReturn a translation table usable for str.translate().\npartition(sep,\u00a0/)\nPartition the string into three parts using the given separator.\nremoveprefix(prefix,\u00a0/)\nReturn a str with the given prefix string removed if present.\nremovesuffix(suffix,\u00a0/)\nReturn a str with the given suffix string removed if present.\nreplace(old,\u00a0new[,\u00a0count])\nReturn a copy with all occurrences of substring old replaced by new.\nrfind(sub[,\u00a0start[,\u00a0end]])\nReturn the highest index in S where substring sub is found, such that sub is contained within S[start:end].\nrindex(sub[,\u00a0start[,\u00a0end]])", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.openapi.HTTPVerb.html"} {"id": "d78cfad8b9e4-2", "text": "rindex(sub[,\u00a0start[,\u00a0end]])\nReturn the highest index in S where substring sub is found, such that sub is contained within S[start:end].\nrjust(width[,\u00a0fillchar])\nReturn a right-justified string of length width.\nrpartition(sep,\u00a0/)\nPartition the string into three parts using the given separator.\nrsplit([sep,\u00a0maxsplit])\nReturn a list of the substrings in the string, using sep as the separator string.\nrstrip([chars])\nReturn a copy of the string with trailing whitespace removed.\nsplit([sep,\u00a0maxsplit])\nReturn a list of the substrings in the string, using sep as the separator string.\nsplitlines([keepends])\nReturn a list of the lines in the string, breaking at line boundaries.\nstartswith(prefix[,\u00a0start[,\u00a0end]])\nReturn True if S starts with the specified prefix, False otherwise.\nstrip([chars])\nReturn a copy of the string with leading and trailing whitespace removed.\nswapcase()\nConvert uppercase characters to lowercase and lowercase characters to uppercase.\ntitle()\nReturn a version of the string where each word is titlecased.\ntranslate(table,\u00a0/)\nReplace each character in the string using the given translation table.\nupper()\nReturn a copy of the string converted to uppercase.\nzfill(width,\u00a0/)\nPad a numeric 
string with zeros on the left, to fill a field of the given width.\nAttributes\nGET\nPUT\nPOST\nDELETE\nOPTIONS\nHEAD\nPATCH\nTRACE\ncapitalize()\u00b6\nReturn a capitalized version of the string.\nMore specifically, make the first character have upper case and the rest lower\ncase.\ncasefold()\u00b6\nReturn a version of the string suitable for caseless comparisons.\ncenter(width, fillchar=' ', /)\u00b6\nReturn a centered string of length width.", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.openapi.HTTPVerb.html"} {"id": "d78cfad8b9e4-3", "text": "center(width, fillchar=' ', /)\u00b6\nReturn a centered string of length width.\nPadding is done using the specified fill character (default is a space).\ncount(sub[, start[, end]]) \u2192 int\u00b6\nReturn the number of non-overlapping occurrences of substring sub in\nstring S[start:end]. Optional arguments start and end are\ninterpreted as in slice notation.\nencode(encoding='utf-8', errors='strict')\u00b6\nEncode the string using the codec registered for encoding.\nencodingThe encoding in which to encode the string.\nerrorsThe error handling scheme to use for encoding errors.\nThe default is \u2018strict\u2019 meaning that encoding errors raise a\nUnicodeEncodeError. Other possible values are \u2018ignore\u2019, \u2018replace\u2019 and\n\u2018xmlcharrefreplace\u2019 as well as any other name registered with\ncodecs.register_error that can handle UnicodeEncodeErrors.\nendswith(suffix[, start[, end]]) \u2192 bool\u00b6\nReturn True if S ends with the specified suffix, False otherwise.\nWith optional start, test S beginning at that position.\nWith optional end, stop comparing S at that position.\nsuffix can also be a tuple of strings to try.\nexpandtabs(tabsize=8)\u00b6\nReturn a copy where all tab characters are expanded using spaces.\nIf tabsize is not given, a tab size of 8 characters is assumed.\nfind(sub[, start[, end]]) \u2192 int\u00b6\nReturn the lowest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nReturn -1 on failure.\nformat(*args, **kwargs) \u2192 str\u00b6\nReturn a formatted version of S, using substitutions from args and kwargs.\nThe substitutions are identified by braces (\u2018{\u2019 and \u2018}\u2019).\nformat_map(mapping) \u2192 str\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.openapi.HTTPVerb.html"} {"id": "d78cfad8b9e4-4", "text": "format_map(mapping) \u2192 str\u00b6\nReturn a formatted version of S, using substitutions from mapping.\nThe substitutions are identified by braces (\u2018{\u2019 and \u2018}\u2019).\nclassmethod from_str(verb: str) \u2192 HTTPVerb[source]\u00b6\nParse an HTTP verb.\nindex(sub[, start[, end]]) \u2192 int\u00b6\nReturn the lowest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. 
Optional\narguments start and end are interpreted as in slice notation.\nRaises ValueError when the substring is not found.\nisalnum()\u00b6\nReturn True if the string is an alpha-numeric string, False otherwise.\nA string is alpha-numeric if all characters in the string are alpha-numeric and\nthere is at least one character in the string.\nisalpha()\u00b6\nReturn True if the string is an alphabetic string, False otherwise.\nA string is alphabetic if all characters in the string are alphabetic and there\nis at least one character in the string.\nisascii()\u00b6\nReturn True if all characters in the string are ASCII, False otherwise.\nASCII characters have code points in the range U+0000-U+007F.\nEmpty string is ASCII too.\nisdecimal()\u00b6\nReturn True if the string is a decimal string, False otherwise.\nA string is a decimal string if all characters in the string are decimal and\nthere is at least one character in the string.\nisdigit()\u00b6\nReturn True if the string is a digit string, False otherwise.\nA string is a digit string if all characters in the string are digits and there\nis at least one character in the string.\nisidentifier()\u00b6\nReturn True if the string is a valid Python identifier, False otherwise.\nCall keyword.iskeyword(s) to test whether string s is a reserved identifier,\nsuch as \u201cdef\u201d or \u201cclass\u201d.", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.openapi.HTTPVerb.html"} {"id": "d78cfad8b9e4-5", "text": "such as \u201cdef\u201d or \u201cclass\u201d.\nislower()\u00b6\nReturn True if the string is a lowercase string, False otherwise.\nA string is lowercase if all cased characters in the string are lowercase and\nthere is at least one cased character in the string.\nisnumeric()\u00b6\nReturn True if the string is a numeric string, False otherwise.\nA string is numeric if all characters in the string are numeric and there is at\nleast one character in the string.\nisprintable()\u00b6\nReturn True if the string is printable, False otherwise.\nA string is printable if all of its characters are considered printable in\nrepr() or if it is empty.\nisspace()\u00b6\nReturn True if the string is a whitespace string, False otherwise.\nA string is whitespace if all characters in the string are whitespace and there\nis at least one character in the string.\nistitle()\u00b6\nReturn True if the string is a title-cased string, False otherwise.\nIn a title-cased string, upper- and title-case characters may only\nfollow uncased characters and lowercase characters only cased ones.\nisupper()\u00b6\nReturn True if the string is an uppercase string, False otherwise.\nA string is uppercase if all cased characters in the string are uppercase and\nthere is at least one cased character in the string.\njoin(iterable, /)\u00b6\nConcatenate any number of strings.\nThe string whose method is called is inserted in between each given string.\nThe result is returned as a new string.\nExample: \u2018.\u2019.join([\u2018ab\u2019, \u2018pq\u2019, \u2018rs\u2019]) -> \u2018ab.pq.rs\u2019\nljust(width, fillchar=' ', /)\u00b6\nReturn a left-justified string of length width.\nPadding is done using the specified fill character (default is a space).\nlower()\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.openapi.HTTPVerb.html"} {"id": "d78cfad8b9e4-6", "text": "Padding is done using the specified fill character (default is a space).\nlower()\u00b6\nReturn a copy of the string converted to 
lowercase.\nlstrip(chars=None, /)\u00b6\nReturn a copy of the string with leading whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nstatic maketrans()\u00b6\nReturn a translation table usable for str.translate().\nIf there is only one argument, it must be a dictionary mapping Unicode\nordinals (integers) or characters to Unicode ordinals, strings or None.\nCharacter keys will be then converted to ordinals.\nIf there are two arguments, they must be strings of equal length, and\nin the resulting dictionary, each character in x will be mapped to the\ncharacter at the same position in y. If there is a third argument, it\nmust be a string, whose characters will be mapped to None in the result.\npartition(sep, /)\u00b6\nPartition the string into three parts using the given separator.\nThis will search for the separator in the string. If the separator is found,\nreturns a 3-tuple containing the part before the separator, the separator\nitself, and the part after it.\nIf the separator is not found, returns a 3-tuple containing the original string\nand two empty strings.\nremoveprefix(prefix, /)\u00b6\nReturn a str with the given prefix string removed if present.\nIf the string starts with the prefix string, return string[len(prefix):].\nOtherwise, return a copy of the original string.\nremovesuffix(suffix, /)\u00b6\nReturn a str with the given suffix string removed if present.\nIf the string ends with the suffix string and that suffix is not empty,\nreturn string[:-len(suffix)]. Otherwise, return a copy of the original\nstring.\nreplace(old, new, count=- 1, /)\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.openapi.HTTPVerb.html"} {"id": "d78cfad8b9e4-7", "text": "string.\nreplace(old, new, count=- 1, /)\u00b6\nReturn a copy with all occurrences of substring old replaced by new.\ncountMaximum number of occurrences to replace.\n-1 (the default value) means replace all occurrences.\nIf the optional argument count is given, only the first count occurrences are\nreplaced.\nrfind(sub[, start[, end]]) \u2192 int\u00b6\nReturn the highest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nReturn -1 on failure.\nrindex(sub[, start[, end]]) \u2192 int\u00b6\nReturn the highest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nRaises ValueError when the substring is not found.\nrjust(width, fillchar=' ', /)\u00b6\nReturn a right-justified string of length width.\nPadding is done using the specified fill character (default is a space).\nrpartition(sep, /)\u00b6\nPartition the string into three parts using the given separator.\nThis will search for the separator in the string, starting at the end. 
If\nthe separator is found, returns a 3-tuple containing the part before the\nseparator, the separator itself, and the part after it.\nIf the separator is not found, returns a 3-tuple containing two empty strings\nand the original string.\nrsplit(sep=None, maxsplit=- 1)\u00b6\nReturn a list of the substrings in the string, using sep as the separator string.\nsepThe separator used to split the string.\nWhen set to None (the default value), will split on any whitespace\ncharacter (including \\n \\r \\t \\f and spaces) and will discard", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.openapi.HTTPVerb.html"} {"id": "d78cfad8b9e4-8", "text": "character (including \\n \\r \\t \\f and spaces) and will discard\nempty strings from the result.\nmaxsplitMaximum number of splits (starting from the left).\n-1 (the default value) means no limit.\nSplitting starts at the end of the string and works to the front.\nrstrip(chars=None, /)\u00b6\nReturn a copy of the string with trailing whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nsplit(sep=None, maxsplit=- 1)\u00b6\nReturn a list of the substrings in the string, using sep as the separator string.\nsepThe separator used to split the string.\nWhen set to None (the default value), will split on any whitespace\ncharacter (including \\n \\r \\t \\f and spaces) and will discard\nempty strings from the result.\nmaxsplitMaximum number of splits (starting from the left).\n-1 (the default value) means no limit.\nNote, str.split() is mainly useful for data that has been intentionally\ndelimited. With natural text that includes punctuation, consider using\nthe regular expression module.\nsplitlines(keepends=False)\u00b6\nReturn a list of the lines in the string, breaking at line boundaries.\nLine breaks are not included in the resulting list unless keepends is given and\ntrue.\nstartswith(prefix[, start[, end]]) \u2192 bool\u00b6\nReturn True if S starts with the specified prefix, False otherwise.\nWith optional start, test S beginning at that position.\nWith optional end, stop comparing S at that position.\nprefix can also be a tuple of strings to try.\nstrip(chars=None, /)\u00b6\nReturn a copy of the string with leading and trailing whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nswapcase()\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.openapi.HTTPVerb.html"} {"id": "d78cfad8b9e4-9", "text": "If chars is given and not None, remove characters in chars instead.\nswapcase()\u00b6\nConvert uppercase characters to lowercase and lowercase characters to uppercase.\ntitle()\u00b6\nReturn a version of the string where each word is titlecased.\nMore specifically, words start with uppercased characters and all remaining\ncased characters have lower case.\ntranslate(table, /)\u00b6\nReplace each character in the string using the given translation table.\ntableTranslation table, which must be a mapping of Unicode ordinals to\nUnicode ordinals, strings, or None.\nThe table must implement lookup/indexing via __getitem__, for instance a\ndictionary or list. If this operation raises LookupError, the character is\nleft untouched. 
Characters mapped to None are deleted.\nupper()\u00b6\nReturn a copy of the string converted to uppercase.\nzfill(width, /)\u00b6\nPad a numeric string with zeros on the left, to fill a field of the given width.\nThe string is never truncated.\nDELETE = 'delete'\u00b6\nGET = 'get'\u00b6\nHEAD = 'head'\u00b6\nOPTIONS = 'options'\u00b6\nPATCH = 'patch'\u00b6\nPOST = 'post'\u00b6\nPUT = 'put'\u00b6\nTRACE = 'trace'\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.openapi.HTTPVerb.html"} {"id": "4fc2d37916d3-0", "text": "langchain.utilities.openweathermap.OpenWeatherMapAPIWrapper\u00b6\nclass langchain.utilities.openweathermap.OpenWeatherMapAPIWrapper(*, owm: Any = None, openweathermap_api_key: Optional[str] = None)[source]\u00b6\nBases: BaseModel\nWrapper for OpenWeatherMap API using PyOWM.\nDocs for using:\nGo to OpenWeatherMap and sign up for an API key\nSave your API KEY into OPENWEATHERMAP_API_KEY env variable\npip install pyowm\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam openweathermap_api_key: Optional[str] = None\u00b6\nparam owm: Any = None\u00b6\nrun(location: str) \u2192 str[source]\u00b6\nGet the current weather information for a specified location.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that api key exists in environment.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/utilities/langchain.utilities.openweathermap.OpenWeatherMapAPIWrapper.html"} {"id": "942eea1ba2c0-0", "text": "langchain.output_parsers.openai_functions.PydanticAttrOutputFunctionsParser\u00b6\nclass langchain.output_parsers.openai_functions.PydanticAttrOutputFunctionsParser(*, args_only: bool = True, pydantic_schema: Union[Type[BaseModel], Dict[str, Type[BaseModel]]], attr_name: str)[source]\u00b6\nBases: PydanticOutputFunctionsParser\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_only: bool = True\u00b6\nparam attr_name: str [Required]\u00b6\nparam pydantic_schema: Union[Type[pydantic.main.BaseModel], Dict[str, Type[pydantic.main.BaseModel]]] [Required]\u00b6\nparse_result(result: List[Generation]) \u2192 Any[source]\u00b6\nParse a list of candidate model Generations into a specific format.\nParameters\nresult \u2013 A list of Generations to be parsed. The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_schema\u00a0 \u00bb\u00a0 all fields\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.openai_functions.PydanticAttrOutputFunctionsParser.html"} {"id": "942eea1ba2c0-1", "text": "property lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.openai_functions.PydanticAttrOutputFunctionsParser.html"} {"id": "6b42dde9e284-0", "text": "langchain.output_parsers.regex_dict.RegexDictParser\u00b6\nclass langchain.output_parsers.regex_dict.RegexDictParser(*, regex_pattern: str = \"{}:\\\\s?([^.'\\\\n']*)\\\\.?\", output_key_to_format: Dict[str, str], no_update_value: Optional[str] = None)[source]\u00b6\nBases: BaseOutputParser\nClass to parse the output into a dictionary.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam no_update_value: Optional[str] = None\u00b6\nparam output_key_to_format: Dict[str, str] [Required]\u00b6\nparam regex_pattern: str = \"{}:\\\\s?([^.'\\\\n']*)\\\\.?\"\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str\u00b6\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 Dict[str, str][source]\u00b6\nParse the output of an LLM call.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, whichis assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.regex_dict.RegexDictParser.html"} {"id": "6b42dde9e284-1", "text": "Parameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.regex_dict.RegexDictParser.html"} {"id": "fd4ffe62ea59-0", "text": "langchain.output_parsers.structured.StructuredOutputParser\u00b6\nclass langchain.output_parsers.structured.StructuredOutputParser(*, response_schemas: List[ResponseSchema])[source]\u00b6\nBases: BaseOutputParser\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam response_schemas: List[langchain.output_parsers.structured.ResponseSchema] [Required]\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nclassmethod from_response_schemas(response_schemas: List[ResponseSchema]) \u2192 StructuredOutputParser[source]\u00b6\nget_format_instructions(only_json: bool = False) \u2192 str[source]\u00b6\nMethod to get the format instructions for the output parser.\nexample:\n```python\nfrom langchain.output_parsers.structured import (\nStructuredOutputParser, ResponseSchema\n)\nresponse_schemas = [\nResponseSchema(name="foo",\ndescription="a list of strings",\ntype="List[string]"\n),\nResponseSchema(name="bar",\ndescription="a string",\ntype="string"\n),\n]\nparser = StructuredOutputParser.from_response_schemas(response_schemas)\nprint(parser.get_format_instructions())\noutput:\n# The output should be a markdown code snippet formatted in the following\n# schema, including the leading and trailing "```json" and "```":\n#\n# ```json\n# {\n# "foo": List[string] // a list of strings\n# "bar": string // a string\n# }\n```\nParameters\nonly_json (bool) \u2013 If True, only the json in the markdown code snippet\nwill be returned, without the introducing text. Defaults to False.\nparse(text: str) \u2192 Any[source]\u00b6\nParse a single string model output into some structure.\nParameters", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.structured.StructuredOutputParser.html"} {"id": "fd4ffe62ea59-1", "text": "Parse a single string model output into some structure.\nParameters\ntext \u2013 String output of language model.\nReturns\nStructured output.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. 
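To make the parse() round trip concrete, a small sketch (the completion string is a hand-written stand-in for an LLM answer that follows the format instructions above):

```python
from langchain.output_parsers.structured import ResponseSchema, StructuredOutputParser

schemas = [
    ResponseSchema(name="answer", description="answer to the user's question"),
    ResponseSchema(name="source", description="source used to answer the question"),
]
parser = StructuredOutputParser.from_response_schemas(schemas)

# parse() expects the markdown-fenced JSON that get_format_instructions() asks for.
completion = '```json\n{"answer": "Paris", "source": "a geography textbook"}\n```'
result = parser.parse(completion)
# -> {"answer": "Paris", "source": "a geography textbook"}
```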
The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.structured.StructuredOutputParser.html"} {"id": "0b5ca5d67cfb-0", "text": "langchain.output_parsers.retry.RetryOutputParser\u00b6\nclass langchain.output_parsers.retry.RetryOutputParser(*, parser: BaseOutputParser[T], retry_chain: LLMChain)[source]\u00b6\nBases: BaseOutputParser[T]\nWraps a parser and tries to fix parsing errors.\nDoes this by passing the original prompt and the completion to another\nLLM, and telling it the completion did not satisfy criteria in the prompt.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam parser: langchain.schema.output_parser.BaseOutputParser[langchain.output_parsers.retry.T] [Required]\u00b6\nparam retry_chain: langchain.chains.llm.LLMChain [Required]\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nclassmethod from_llm(llm: BaseLanguageModel, parser: BaseOutputParser[T], prompt: BasePromptTemplate = PromptTemplate(input_variables=['completion', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\\n{prompt}\\nCompletion:\\n{completion}\\n\\nAbove, the Completion did not satisfy the constraints given in the Prompt.\\nPlease try again:', template_format='f-string', validate_template=True)) \u2192 RetryOutputParser[T][source]\u00b6\nget_format_instructions() \u2192 str[source]\u00b6\nInstructions on how the LLM output should be formatted.\nparse(completion: str) \u2192 T[source]\u00b6\nParse a single string model output into some structure.\nParameters\ntext \u2013 String output of language model.\nReturns\nStructured output.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, whichis assumed to be the highest-likelihood Generation.\nParameters", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.retry.RetryOutputParser.html"} {"id": "0b5ca5d67cfb-1", "text": "Parameters\nresult \u2013 A 
list of Generations to be parsed. The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt_value: PromptValue) \u2192 T[source]\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.retry.RetryOutputParser.html"} {"id": "0a6d76017a3b-0", "text": "langchain.output_parsers.retry.RetryWithErrorOutputParser\u00b6\nclass langchain.output_parsers.retry.RetryWithErrorOutputParser(*, parser: BaseOutputParser[T], retry_chain: LLMChain)[source]\u00b6\nBases: BaseOutputParser[T]\nWraps a parser and tries to fix parsing errors.\nDoes this by passing the original prompt, the completion, AND the error\nthat was raised to another language model and telling it that the completion\ndid not work, and raised the given error. 
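A sketch of how such a retry parser is typically wired up (assumes an OPENAI_API_KEY in the environment; the datetime parser and the failing completion "1969" are illustrative choices):

```python
from langchain.llms import OpenAI
from langchain.output_parsers.datetime import DatetimeOutputParser
from langchain.output_parsers.retry import RetryWithErrorOutputParser
from langchain.prompts import PromptTemplate

base_parser = DatetimeOutputParser()
retry_parser = RetryWithErrorOutputParser.from_llm(
    llm=OpenAI(temperature=0),
    parser=base_parser,
)

prompt = PromptTemplate.from_template("When did {event} happen? {format_instructions}")
prompt_value = prompt.format_prompt(
    event="the first moon landing",
    format_instructions=base_parser.get_format_instructions(),
)

# "1969" does not satisfy the datetime format, so the retry chain re-prompts the
# LLM with the original prompt, the bad completion, and the error it raised.
fixed = retry_parser.parse_with_prompt("1969", prompt_value)
```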
Differs from RetryOutputParser\nin that this implementation provides the error that was raised back to the\nLLM, which in theory should give it more information on how to fix it.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam parser: langchain.schema.output_parser.BaseOutputParser[langchain.output_parsers.retry.T] [Required]\u00b6\nparam retry_chain: langchain.chains.llm.LLMChain [Required]\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nclassmethod from_llm(llm: BaseLanguageModel, parser: BaseOutputParser[T], prompt: BasePromptTemplate = PromptTemplate(input_variables=['completion', 'error', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\\n{prompt}\\nCompletion:\\n{completion}\\n\\nAbove, the Completion did not satisfy the constraints given in the Prompt.\\nDetails: {error}\\nPlease try again:', template_format='f-string', validate_template=True)) \u2192 RetryWithErrorOutputParser[T][source]\u00b6\nget_format_instructions() \u2192 str[source]\u00b6\nInstructions on how the LLM output should be formatted.\nparse(completion: str) \u2192 T[source]\u00b6\nParse a single string model output into some structure.\nParameters\ntext \u2013 String output of language model.\nReturns", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.retry.RetryWithErrorOutputParser.html"} {"id": "0a6d76017a3b-1", "text": "Parameters\ntext \u2013 String output of language model.\nReturns\nStructured output.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, whichis assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt_value: PromptValue) \u2192 T[source]\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.retry.RetryWithErrorOutputParser.html"} {"id": "2477c71e436f-0", "text": "langchain.output_parsers.structured.ResponseSchema\u00b6\nclass langchain.output_parsers.structured.ResponseSchema(*, name: str, description: str, type: str = 'string')[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam description: str [Required]\u00b6\nparam name: str [Required]\u00b6\nparam type: str = 'string'\u00b6", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.structured.ResponseSchema.html"} {"id": "43ffe0372ad6-0", "text": "langchain.output_parsers.fix.OutputFixingParser\u00b6\nclass langchain.output_parsers.fix.OutputFixingParser(*, parser: BaseOutputParser[T], retry_chain: LLMChain)[source]\u00b6\nBases: BaseOutputParser[T]\nWraps a parser and tries to fix parsing errors.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam parser: langchain.schema.output_parser.BaseOutputParser[langchain.output_parsers.fix.T] [Required]\u00b6\nparam retry_chain: langchain.chains.llm.LLMChain [Required]\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nclassmethod from_llm(llm: BaseLanguageModel, parser: BaseOutputParser[T], prompt: BasePromptTemplate = PromptTemplate(input_variables=['completion', 'error', 'instructions'], output_parser=None, partial_variables={}, template='Instructions:\\n--------------\\n{instructions}\\n--------------\\nCompletion:\\n--------------\\n{completion}\\n--------------\\n\\nAbove, the Completion did not satisfy the constraints given in the Instructions.\\nError:\\n--------------\\n{error}\\n--------------\\n\\nPlease try again. Please only respond with an answer that satisfies the constraints laid out in the Instructions:', template_format='f-string', validate_template=True)) \u2192 OutputFixingParser[T][source]\u00b6\nget_format_instructions() \u2192 str[source]\u00b6\nInstructions on how the LLM output should be formatted.\nparse(completion: str) \u2192 T[source]\u00b6\nParse a single string model output into some structure.\nParameters\ntext \u2013 String output of language model.\nReturns\nStructured output.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.fix.OutputFixingParser.html"} {"id": "43ffe0372ad6-1", "text": "Parse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, whichis assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. 
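A minimal sketch of the fixing behaviour (the Actor schema and the malformed completion are illustrative; assumes an OPENAI_API_KEY in the environment):

```python
from typing import List

from pydantic import BaseModel, Field
from langchain.llms import OpenAI
from langchain.output_parsers.fix import OutputFixingParser
from langchain.output_parsers.pydantic import PydanticOutputParser

class Actor(BaseModel):
    name: str = Field(description="name of an actor")
    film_names: List[str] = Field(description="films they starred in")

base_parser = PydanticOutputParser(pydantic_object=Actor)
fixing_parser = OutputFixingParser.from_llm(llm=OpenAI(temperature=0), parser=base_parser)

# Single quotes make this invalid JSON, so the wrapped parser fails; the fixing
# chain then asks the LLM to repair the completion against the format instructions.
actor = fixing_parser.parse("{'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}")
```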
The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.fix.OutputFixingParser.html"} {"id": "71b63c80f935-0", "text": "langchain.output_parsers.datetime.DatetimeOutputParser\u00b6\nclass langchain.output_parsers.datetime.DatetimeOutputParser(*, format: str = '%Y-%m-%dT%H:%M:%S.%fZ')[source]\u00b6\nBases: BaseOutputParser[datetime]\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam format: str = '%Y-%m-%dT%H:%M:%S.%fZ'\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str[source]\u00b6\nInstructions on how the LLM output should be formatted.\nparse(response: str) \u2192 datetime[source]\u00b6\nParse a single string model output into some structure.\nParameters\ntext \u2013 String output of language model.\nReturns\nStructured output.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, whichis assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. 
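Used directly, parse() is effectively a strptime call with the configured format; a small sketch (the timestamp is illustrative):

```python
from langchain.output_parsers.datetime import DatetimeOutputParser

parser = DatetimeOutputParser()   # default format: '%Y-%m-%dT%H:%M:%S.%fZ'
parser.get_format_instructions()  # instructs the LLM to answer in that format
dt = parser.parse("2009-01-03T18:15:05.000000Z")
# -> datetime.datetime(2009, 1, 3, 18, 15, 5)
```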
The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.datetime.DatetimeOutputParser.html"} {"id": "71b63c80f935-1", "text": "to_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.datetime.DatetimeOutputParser.html"} {"id": "a4509cdb2ab3-0", "text": "langchain.output_parsers.rail_parser.GuardrailsOutputParser\u00b6\nclass langchain.output_parsers.rail_parser.GuardrailsOutputParser(*, guard: Any = None, api: Optional[Callable] = None, args: Any = None, kwargs: Any = None)[source]\u00b6\nBases: BaseOutputParser\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api: Optional[Callable] = None\u00b6\nparam args: Any = None\u00b6\nparam guard: Any = None\u00b6\nparam kwargs: Any = None\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nclassmethod from_pydantic(output_class: Any, num_reasks: int = 1, api: Optional[Callable] = None, *args: Any, **kwargs: Any) \u2192 GuardrailsOutputParser[source]\u00b6\nclassmethod from_rail(rail_file: str, num_reasks: int = 1, api: Optional[Callable] = None, *args: Any, **kwargs: Any) \u2192 GuardrailsOutputParser[source]\u00b6\nclassmethod from_rail_string(rail_str: str, num_reasks: int = 1, api: Optional[Callable] = None, *args: Any, **kwargs: Any) \u2192 GuardrailsOutputParser[source]\u00b6\nget_format_instructions() \u2192 str[source]\u00b6\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 Dict[source]\u00b6\nParse a single string model output into some structure.\nParameters\ntext \u2013 String output of language model.\nReturns\nStructured output.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.rail_parser.GuardrailsOutputParser.html"} {"id": "a4509cdb2ab3-1", "text": "Parse a list of candidate model Generations into a specific format.\nThe 
return value is parsed from only the first Generation in the result, whichis assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.rail_parser.GuardrailsOutputParser.html"} {"id": "f34d35c155e8-0", "text": "langchain.output_parsers.json.parse_json_markdown\u00b6\nlangchain.output_parsers.json.parse_json_markdown(json_string: str) \u2192 dict[source]\u00b6\nParse a JSON string from a Markdown string.\nParameters\njson_string \u2013 The Markdown string.\nReturns\nThe parsed JSON object as a Python dictionary.", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.json.parse_json_markdown.html"} {"id": "5d0938c81269-0", "text": "langchain.output_parsers.list.CommaSeparatedListOutputParser\u00b6\nclass langchain.output_parsers.list.CommaSeparatedListOutputParser[source]\u00b6\nBases: ListOutputParser\nParse out comma separated lists.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str[source]\u00b6\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 List[str][source]\u00b6\nParse the output of an LLM call.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, whichis assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. 
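Two small sketches for the helpers documented just above (the input strings are illustrative, and the expected results are stated under that assumption):

```python
from langchain.output_parsers.json import parse_json_markdown
from langchain.output_parsers.list import CommaSeparatedListOutputParser

parser = CommaSeparatedListOutputParser()
items = parser.parse("red, green, blue")   # -> ["red", "green", "blue"]

# parse_json_markdown() extracts and loads the JSON from a fenced markdown block.
doc = '```json\n{"foo": "bar"}\n```'
data = parse_json_markdown(doc)            # -> {"foo": "bar"}
```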
The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.list.CommaSeparatedListOutputParser.html"} {"id": "5d0938c81269-1", "text": "property lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.list.CommaSeparatedListOutputParser.html"} {"id": "a9cc2e811247-0", "text": "langchain.output_parsers.boolean.BooleanOutputParser\u00b6\nclass langchain.output_parsers.boolean.BooleanOutputParser(*, true_val: str = 'YES', false_val: str = 'NO')[source]\u00b6\nBases: BaseOutputParser[bool]\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam false_val: str = 'NO'\u00b6\nparam true_val: str = 'YES'\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str\u00b6\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 bool[source]\u00b6\nParse the output of an LLM call to a boolean.\nParameters\ntext \u2013 output of language model\nReturns\nboolean\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, whichis assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. 
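A minimal sketch with the default true/false values:

```python
from langchain.output_parsers.boolean import BooleanOutputParser

parser = BooleanOutputParser()  # defaults: true_val="YES", false_val="NO"
parser.parse("YES")             # -> True
parser.parse("NO")              # -> False
```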
The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.boolean.BooleanOutputParser.html"} {"id": "a9cc2e811247-1", "text": "to_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.boolean.BooleanOutputParser.html"} {"id": "414713e61cea-0", "text": "langchain.output_parsers.openai_functions.PydanticOutputFunctionsParser\u00b6\nclass langchain.output_parsers.openai_functions.PydanticOutputFunctionsParser(*, args_only: bool = True, pydantic_schema: Union[Type[BaseModel], Dict[str, Type[BaseModel]]])[source]\u00b6\nBases: OutputFunctionsParser\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_only: bool = True\u00b6\nparam pydantic_schema: Union[Type[pydantic.main.BaseModel], Dict[str, Type[pydantic.main.BaseModel]]] [Required]\u00b6\nparse_result(result: List[Generation]) \u2192 Any[source]\u00b6\nParse a list of candidate model Generations into a specific format.\nParameters\nresult \u2013 A list of Generations to be parsed. The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_schema\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.openai_functions.PydanticOutputFunctionsParser.html"} {"id": "e1c82b08df63-0", "text": "langchain.output_parsers.list.ListOutputParser\u00b6\nclass langchain.output_parsers.list.ListOutputParser[source]\u00b6\nBases: BaseOutputParser\nClass to parse the output of an LLM call to a list.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str\u00b6\nInstructions on how the LLM output should be formatted.\nabstract parse(text: str) \u2192 List[str][source]\u00b6\nParse the output of an LLM call.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, whichis assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.list.ListOutputParser.html"} {"id": "e1c82b08df63-1", "text": "property lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.list.ListOutputParser.html"} {"id": "429219ebedc5-0", "text": "langchain.output_parsers.pydantic.PydanticOutputParser\u00b6\nclass langchain.output_parsers.pydantic.PydanticOutputParser(*, pydantic_object: Type[T])[source]\u00b6\nBases: BaseOutputParser[T]\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam pydantic_object: Type[langchain.output_parsers.pydantic.T] [Required]\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str[source]\u00b6\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 T[source]\u00b6\nParse a single string model output into some structure.\nParameters\ntext \u2013 String output of language model.\nReturns\nStructured output.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, whichis assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.pydantic.PydanticOutputParser.html"} {"id": "429219ebedc5-1", "text": "to_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.pydantic.PydanticOutputParser.html"} {"id": "b50c29010937-0", "text": "langchain.output_parsers.combining.CombiningOutputParser\u00b6\nclass langchain.output_parsers.combining.CombiningOutputParser(*, parsers: List[BaseOutputParser])[source]\u00b6\nBases: BaseOutputParser\nClass to combine multiple output parsers into one.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam parsers: List[langchain.schema.output_parser.BaseOutputParser] [Required]\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str[source]\u00b6\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 Dict[str, Any][source]\u00b6\nParse the output of an LLM call.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, whichis assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_parsers\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate the parsers.", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.combining.CombiningOutputParser.html"} {"id": "b50c29010937-1", "text": "validator validate_parsers\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate the parsers.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.combining.CombiningOutputParser.html"} {"id": "47160c864ee9-0", "text": "langchain.output_parsers.openai_functions.JsonOutputFunctionsParser\u00b6\nclass langchain.output_parsers.openai_functions.JsonOutputFunctionsParser(*, args_only: bool = True)[source]\u00b6\nBases: OutputFunctionsParser\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_only: bool = True\u00b6\nparse_result(result: List[Generation]) \u2192 Any[source]\u00b6\nParse a list of candidate model Generations into a specific format.\nParameters\nresult \u2013 A list of Generations to be parsed. The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.openai_functions.JsonOutputFunctionsParser.html"} {"id": "e55bcded0bc9-0", "text": "langchain.output_parsers.enum.EnumOutputParser\u00b6\nclass langchain.output_parsers.enum.EnumOutputParser(*, enum: Type[Enum])[source]\u00b6\nBases: BaseOutputParser\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam enum: Type[enum.Enum] [Required]\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str[source]\u00b6\nInstructions on how the LLM output should be formatted.\nparse(response: str) \u2192 Any[source]\u00b6\nParse a single string model output into some structure.\nParameters\ntext \u2013 String output of language model.\nReturns\nStructured output.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, whichis assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. 
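To make the PydanticOutputParser entry above concrete, here is a minimal usage sketch; the Joke schema, its fields, and the sample completion are illustrative only, not part of the langchain API:
from pydantic import BaseModel, Field
from langchain.output_parsers import PydanticOutputParser

class Joke(BaseModel):  # illustrative schema, not part of langchain
    setup: str = Field(description="question that sets up the joke")
    punchline: str = Field(description="answer that resolves the joke")

parser = PydanticOutputParser(pydantic_object=Joke)
instructions = parser.get_format_instructions()  # JSON-schema instructions to embed in the prompt
joke = parser.parse('{"setup": "Why did the chicken cross the road?", "punchline": "To get to the other side."}')
print(joke.punchline)  # parsed back into a Joke instance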
The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.enum.EnumOutputParser.html"} {"id": "e55bcded0bc9-1", "text": "to_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.enum.EnumOutputParser.html"} {"id": "2b234605cda5-0", "text": "langchain.output_parsers.regex.RegexParser\u00b6\nclass langchain.output_parsers.regex.RegexParser(*, regex: str, output_keys: List[str], default_output_key: Optional[str] = None)[source]\u00b6\nBases: BaseOutputParser\nClass to parse the output into a dictionary.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam default_output_key: Optional[str] = None\u00b6\nparam output_keys: List[str] [Required]\u00b6\nparam regex: str [Required]\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str\u00b6\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 Dict[str, str][source]\u00b6\nParse the output of an LLM call.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, whichis assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. 
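A minimal sketch of the EnumOutputParser entry above; the Color enum and the parsed string are illustrative assumptions:
from enum import Enum
from langchain.output_parsers.enum import EnumOutputParser

class Color(Enum):  # illustrative enum, not part of langchain
    RED = "red"
    BLUE = "blue"

parser = EnumOutputParser(enum=Color)
parser.get_format_instructions()  # instructs the LLM to pick one of: red, blue
parser.parse("red")  # -> Color.RED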
The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.regex.RegexParser.html"} {"id": "2b234605cda5-1", "text": "to_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.regex.RegexParser.html"} {"id": "c2b39a924922-0", "text": "langchain.output_parsers.loading.load_output_parser\u00b6\nlangchain.output_parsers.loading.load_output_parser(config: dict) \u2192 dict[source]\u00b6\nLoad output parser.", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.loading.load_output_parser.html"} {"id": "1c15e7f539c9-0", "text": "langchain.output_parsers.json.parse_and_check_json_markdown\u00b6\nlangchain.output_parsers.json.parse_and_check_json_markdown(text: str, expected_keys: List[str]) \u2192 dict[source]\u00b6\nParse a JSON string from a Markdown string and check that it\ncontains the expected keys.\nParameters\ntext \u2013 The Markdown string.\nexpected_keys \u2013 The expected keys in the JSON string.\nReturns\nThe parsed JSON object as a Python dictionary.", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.json.parse_and_check_json_markdown.html"} {"id": "936d94a0a134-0", "text": "langchain.output_parsers.openai_functions.JsonKeyOutputFunctionsParser\u00b6\nclass langchain.output_parsers.openai_functions.JsonKeyOutputFunctionsParser(*, args_only: bool = True, key_name: str)[source]\u00b6\nBases: JsonOutputFunctionsParser\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_only: bool = True\u00b6\nparam key_name: str [Required]\u00b6\nparse_result(result: List[Generation]) \u2192 Any[source]\u00b6\nParse a list of candidate model Generations into a specific format.\nParameters\nresult \u2013 A list of Generations to be parsed. 
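Two quick sketches for the entries above: RegexParser maps capture groups onto output_keys, and parse_and_check_json_markdown pulls a JSON object out of a fenced Markdown block. The pattern, keys, and strings below are illustrative, not canonical:
from langchain.output_parsers.regex import RegexParser
from langchain.output_parsers.json import parse_and_check_json_markdown

parser = RegexParser(
    regex=r"Score: (\d+)\nReason: (.*)",  # one capture group per output key
    output_keys=["score", "reason"],
)
parser.parse("Score: 8\nReason: concise and accurate")
# -> {"score": "8", "reason": "concise and accurate"}

parse_and_check_json_markdown('```json\n{"action": "search"}\n```', expected_keys=["action"])
# -> {"action": "search"}; raises if the JSON is malformed or a key is missing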
The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.openai_functions.JsonKeyOutputFunctionsParser.html"} {"id": "312b61e9dc82-0", "text": "langchain.output_parsers.openai_functions.OutputFunctionsParser\u00b6\nclass langchain.output_parsers.openai_functions.OutputFunctionsParser(*, args_only: bool = True)[source]\u00b6\nBases: BaseLLMOutputParser[Any]\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_only: bool = True\u00b6\nparse_result(result: List[Generation]) \u2192 Any[source]\u00b6\nParse a list of candidate model Generations into a specific format.\nParameters\nresult \u2013 A list of Generations to be parsed. The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
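To illustrate the OpenAI-functions parser family above: these parsers read the function_call from the first Generation's chat message. The sketch below fabricates such a message by hand; the function name and arguments are made up for the example:
from langchain.schema import AIMessage, ChatGeneration
from langchain.output_parsers.openai_functions import JsonKeyOutputFunctionsParser

message = AIMessage(
    content="",
    additional_kwargs={"function_call": {"name": "record_people", "arguments": '{"people": [{"name": "Ada", "age": 36}]}'}},
)
parser = JsonKeyOutputFunctionsParser(key_name="people")  # args_only=True by default
parser.parse_result([ChatGeneration(message=message)])
# -> [{"name": "Ada", "age": 36}]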
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.openai_functions.OutputFunctionsParser.html"} {"id": "ceaecd545ff2-0", "text": "langchain.indexes.graph.GraphIndexCreator\u00b6\nclass langchain.indexes.graph.GraphIndexCreator(*, llm: ~typing.Optional[~langchain.schema.language_model.BaseLanguageModel] = None, graph_type: ~typing.Type[~langchain.graphs.networkx_graph.NetworkxEntityGraph] = )[source]\u00b6\nBases: BaseModel\nFunctionality to create graph index.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam graph_type: Type[langchain.graphs.networkx_graph.NetworkxEntityGraph] = \u00b6\nparam llm: Optional[langchain.schema.language_model.BaseLanguageModel] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/indexes/langchain.indexes.graph.GraphIndexCreator.html"} {"id": "ceaecd545ff2-1", "text": "param llm: Optional[langchain.schema.language_model.BaseLanguageModel] = None\u00b6\nasync afrom_text(text: str, prompt: BasePromptTemplate = PromptTemplate(input_variables=['text'], output_parser=None, partial_variables={}, template=\"You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrating them with your knowledge stored within your weights as well as that stored in a knowledge graph. Extract all of the knowledge triples from the text. A knowledge triple is a clause that contains a subject, a predicate, and an object. The subject is the entity being described, the predicate is the property of the subject that is being described, and the object is the value of the property.\\n\\nEXAMPLE\\nIt's a state in the US. It's also the number 1 producer of gold in the US.\\n\\nOutput: (Nevada, is a, state)<|>(Nevada, is in, US)<|>(Nevada, is the number 1 producer of, gold)\\nEND OF EXAMPLE\\n\\nEXAMPLE\\nI'm going to the store.\\n\\nOutput: NONE\\nEND OF EXAMPLE\\n\\nEXAMPLE\\nOh huh. I know Descartes likes to drive antique scooters and play the mandolin.\\nOutput: (Descartes, likes to drive, antique scooters)<|>(Descartes, plays, mandolin)\\nEND OF EXAMPLE\\n\\nEXAMPLE\\n{text}Output:\", template_format='f-string', validate_template=True)) \u2192 NetworkxEntityGraph[source]\u00b6\nCreate graph index from text asynchronously.", "source": "https://api.python.langchain.com/en/latest/indexes/langchain.indexes.graph.GraphIndexCreator.html"} {"id": "ceaecd545ff2-2", "text": "Create graph index from text asynchronously.\nfrom_text(text: str, prompt: BasePromptTemplate = PromptTemplate(input_variables=['text'], output_parser=None, partial_variables={}, template=\"You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrating them with your knowledge stored within your weights as well as that stored in a knowledge graph. Extract all of the knowledge triples from the text. A knowledge triple is a clause that contains a subject, a predicate, and an object. The subject is the entity being described, the predicate is the property of the subject that is being described, and the object is the value of the property.\\n\\nEXAMPLE\\nIt's a state in the US. 
It's also the number 1 producer of gold in the US.\\n\\nOutput: (Nevada, is a, state)<|>(Nevada, is in, US)<|>(Nevada, is the number 1 producer of, gold)\\nEND OF EXAMPLE\\n\\nEXAMPLE\\nI'm going to the store.\\n\\nOutput: NONE\\nEND OF EXAMPLE\\n\\nEXAMPLE\\nOh huh. I know Descartes likes to drive antique scooters and play the mandolin.\\nOutput: (Descartes, likes to drive, antique scooters)<|>(Descartes, plays, mandolin)\\nEND OF EXAMPLE\\n\\nEXAMPLE\\n{text}Output:\", template_format='f-string', validate_template=True)) \u2192 NetworkxEntityGraph[source]\u00b6\nCreate graph index from text.", "source": "https://api.python.langchain.com/en/latest/indexes/langchain.indexes.graph.GraphIndexCreator.html"} {"id": "f2184a45b9ec-0", "text": "langchain.indexes.vectorstore.VectorStoreIndexWrapper\u00b6\nclass langchain.indexes.vectorstore.VectorStoreIndexWrapper(*, vectorstore: VectorStore)[source]\u00b6\nBases: BaseModel\nWrapper around a vectorstore for easy access.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam vectorstore: langchain.vectorstores.base.VectorStore [Required]\u00b6\nquery(question: str, llm: Optional[BaseLanguageModel] = None, **kwargs: Any) \u2192 str[source]\u00b6\nQuery the vectorstore.\nquery_with_sources(question: str, llm: Optional[BaseLanguageModel] = None, **kwargs: Any) \u2192 dict[source]\u00b6\nQuery the vectorstore and get back sources.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/indexes/langchain.indexes.vectorstore.VectorStoreIndexWrapper.html"} {"id": "692c4d058b3a-0", "text": "langchain.indexes.vectorstore.VectorstoreIndexCreator\u00b6\nclass langchain.indexes.vectorstore.VectorstoreIndexCreator(*, vectorstore_cls: ~typing.Type[~langchain.vectorstores.base.VectorStore] = , embedding: ~langchain.embeddings.base.Embeddings = None, text_splitter: ~langchain.text_splitter.TextSplitter = None, vectorstore_kwargs: dict = None)[source]\u00b6\nBases: BaseModel\nLogic for creating indexes.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam embedding: langchain.embeddings.base.Embeddings [Optional]\u00b6\nparam text_splitter: langchain.text_splitter.TextSplitter [Optional]\u00b6\nparam vectorstore_cls: Type[langchain.vectorstores.base.VectorStore] = \u00b6\nparam vectorstore_kwargs: dict [Optional]\u00b6\nfrom_documents(documents: List[Document]) \u2192 VectorStoreIndexWrapper[source]\u00b6\nCreate a vectorstore index from documents.\nfrom_loaders(loaders: List[BaseLoader]) \u2192 VectorStoreIndexWrapper[source]\u00b6\nCreate a vectorstore index from loaders.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/indexes/langchain.indexes.vectorstore.VectorstoreIndexCreator.html"} {"id": "0e7ca8e1cee6-0", "text": "langchain.llms.pipelineai.PipelineAI\u00b6\nclass langchain.llms.pipelineai.PipelineAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: 
Optional[Dict[str, Any]] = None, pipeline_key: str = '', pipeline_kwargs: Dict[str, Any] = None, pipeline_api_key: Optional[str] = None)[source]\u00b6\nBases: LLM, BaseModel\nWrapper around PipelineAI large language models.\nTo use, you should have the pipeline-ai python package installed,\nand the environment variable PIPELINE_API_KEY set with your API key.\nAny parameters that are valid to be passed to the call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain import PipelineAI\npipeline = PipelineAI(pipeline_key=\"\")\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam pipeline_api_key: Optional[str] = None\u00b6\nparam pipeline_key: str = ''\u00b6\nThe id or tag of the target pipeline\nparam pipeline_kwargs: Dict[str, Any] [Optional]\u00b6\nHolds any pipeline parameters valid for create call not\nexplicitly specified.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html"} {"id": "0e7ca8e1cee6-1", "text": "param verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html"} {"id": "0e7ca8e1cee6-2", "text": "first occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. 
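Looping back to the index creators documented above, a minimal sketch; the file name is a placeholder, and the defaults assume OpenAI credentials are configured in the environment:
from langchain.document_loaders import TextLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.indexes.graph import GraphIndexCreator
from langchain.llms import OpenAI

# Build a vectorstore index from a loader and query it.
index = VectorstoreIndexCreator().from_loaders([TextLoader("state_of_the_union.txt")])
index.query("What did the president say about the economy?", llm=OpenAI())

# Build a knowledge-triple graph from raw text using the prompt shown above.
graph = GraphIndexCreator(llm=OpenAI(temperature=0)).from_text("Nevada is a state in the US.")
graph.get_triples()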
Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the topcandidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the topcandidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator build_extra\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nBuild extra kwargs from additional params that were passed in.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html"} {"id": "0e7ca8e1cee6-3", "text": "dict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
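The generate/predict family documented here is the interface shared by every LLM wrapper rather than PipelineAI-specific behavior; a short sketch, with the pipeline key as a placeholder:
from langchain import PipelineAI

llm = PipelineAI(pipeline_key="<your-pipeline-key>")  # placeholder key
result = llm.generate(["Say hello.", "Say goodbye."], stop=["\n"])
first_text = result.generations[0][0].text  # top candidate for the first prompt
answer = llm.predict("Say hello.", stop=["\n"])  # convenience wrapper returning just a string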
These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html"} {"id": "0e7ca8e1cee6-4", "text": "get_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in order they occur in the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text, use predict.\nParameters", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html"} {"id": "0e7ca8e1cee6-5", "text": "Parameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. 
code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the API key and python package exist in the environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html"} {"id": "653342050a3c-0", "text": "langchain.llms.amazon_api_gateway.AmazonAPIGateway\u00b6\nclass langchain.llms.amazon_api_gateway.AmazonAPIGateway(*, cache: ~typing.Optional[bool] = None, verbose: bool = None, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, api_url: str, headers: ~typing.Optional[~typing.Dict] = None, model_kwargs: ~typing.Optional[~typing.Dict] = None, content_handler: ~langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway = )[source]\u00b6\nBases: LLM\nWrapper around a custom Amazon API Gateway.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_url: str [Required]\u00b6\nAPI Gateway URL\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam content_handler: langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway = \u00b6\nThe content handler class that provides input and\noutput transform functions to handle formats between the LLM\nand the endpoint.\nparam headers: Optional[Dict] = None\u00b6\nAPI Gateway HTTP Headers to send, e.g. 
for authentication\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.amazon_api_gateway.AmazonAPIGateway.html"} {"id": "653342050a3c-1", "text": "Metadata to add to the run trace.\nparam model_kwargs: Optional[Dict] = None\u00b6\nKey word arguments to pass to the model.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.amazon_api_gateway.AmazonAPIGateway.html"} {"id": "653342050a3c-2", "text": "Parameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the topcandidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the topcandidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.amazon_api_gateway.AmazonAPIGateway.html"} {"id": "653342050a3c-3", "text": "**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed\nto the model provider API call.\nReturns", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.amazon_api_gateway.AmazonAPIGateway.html"} {"id": "653342050a3c-4", "text": "to the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in order they occur in the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.amazon_api_gateway.AmazonAPIGateway.html"} {"id": "653342050a3c-5", "text": "Pass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text, use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. 
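A minimal sketch of the AmazonAPIGateway wrapper documented above; the endpoint URL is a placeholder, and the model_kwargs depend entirely on the model deployed behind the gateway:
from langchain.llms.amazon_api_gateway import AmazonAPIGateway

llm = AmazonAPIGateway(
    api_url="https://<api-id>.execute-api.<region>.amazonaws.com/LATEST/HF",  # placeholder URL
    model_kwargs={"max_new_tokens": 100, "temperature": 0.8},
)
llm("What day comes after Friday?")  # __call__ checks the cache, then hits the gateway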
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.amazon_api_gateway.AmazonAPIGateway.html"} {"id": "653342050a3c-6", "text": "model Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.amazon_api_gateway.AmazonAPIGateway.html"} {"id": "ec28599b5762-0", "text": "langchain.llms.cohere.completion_with_retry\u00b6\nlangchain.llms.cohere.completion_with_retry(llm: Cohere, **kwargs: Any) \u2192 Any[source]\u00b6\nUse tenacity to retry the completion call.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.completion_with_retry.html"} {"id": "fe4a533b08ad-0", "text": "langchain.llms.base.update_cache\u00b6\nlangchain.llms.base.update_cache(existing_prompts: Dict[int, List], llm_string: str, missing_prompt_idxs: List[int], new_results: LLMResult, prompts: List[str]) \u2192 Optional[dict][source]\u00b6\nUpdate the cache and get the LLM output.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.base.update_cache.html"} {"id": "a4ef2ec5f95f-0", "text": "langchain.llms.base.get_prompts\u00b6\nlangchain.llms.base.get_prompts(params: Dict[str, Any], prompts: List[str]) \u2192 Tuple[Dict[int, List], str, List[int], List[str]][source]\u00b6\nGet prompts that are already cached.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.base.get_prompts.html"} {"id": "b0144db45273-0", "text": "langchain.llms.databricks.get_default_host\u00b6\nlangchain.llms.databricks.get_default_host() \u2192 str[source]\u00b6\nGets the default Databricks workspace hostname.\nRaises an error if the hostname cannot be automatically determined.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.get_default_host.html"} {"id": "2aa791de6c2c-0", "text": "langchain.llms.google_palm.generate_with_retry\u00b6\nlangchain.llms.google_palm.generate_with_retry(llm: GooglePalm, **kwargs: Any) \u2192 Any[source]\u00b6\nUse tenacity to retry the completion call.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.google_palm.generate_with_retry.html"} {"id": "7add6fe2ede1-0", "text": "langchain.llms.manifest.ManifestWrapper\u00b6\nclass langchain.llms.manifest.ManifestWrapper(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, llm_kwargs: Optional[Dict] = None)[source]\u00b6\nBases: LLM\nWrapper around HazyResearch\u2019s Manifest library.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: 
Callbacks = None\u00b6\nparam llm_kwargs: Optional[Dict] = None\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.manifest.ManifestWrapper.html"} {"id": "7add6fe2ede1-1", "text": "Check Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.manifest.ManifestWrapper.html"} {"id": "7add6fe2ede1-2", "text": "async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the topcandidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the topcandidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.manifest.ManifestWrapper.html"} {"id": "7add6fe2ede1-3", "text": "Run the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
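A minimal sketch of the ManifestWrapper documented above, assuming the manifest-ml package and a reachable Manifest connection; the client name and URL are illustrative:
from manifest import Manifest
from langchain.llms.manifest import ManifestWrapper

client = Manifest(client_name="huggingface", client_connection="http://127.0.0.1:5000")  # assumed local server
llm = ManifestWrapper(client=client, llm_kwargs={"temperature": 0.0, "max_tokens": 256})
llm("Explain what a retry policy is in one sentence.")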
These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.manifest.ManifestWrapper.html"} {"id": "7add6fe2ede1-4", "text": "Get the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in order they occurin the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specifictypes of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text,use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.manifest.ManifestWrapper.html"} {"id": "7add6fe2ede1-5", "text": "to the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. 
code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that python package exists in environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.manifest.ManifestWrapper.html"} {"id": "10b1d1a935fb-0", "text": "langchain.llms.azureml_endpoint.OSSContentFormatter\u00b6\nclass langchain.llms.azureml_endpoint.OSSContentFormatter[source]\u00b6\nBases: ContentFormatterBase\nContent handler for LLMs from the OSS catalog.\nMethods\n__init__()\nformat_request_payload(prompt,\u00a0model_kwargs)\nFormats the request body according to the input schema of the model.\nformat_response_payload(output)\nFormats the response body according to the output schema of the model.\nAttributes\naccepts\nThe MIME type of the response data returned form the endpoint\ncontent_type\nThe MIME type of the input data passed to the endpoint\nformat_request_payload(prompt: str, model_kwargs: Dict) \u2192 bytes[source]\u00b6\nFormats the request body according to the input schema of\nthe model. Returns bytes or seekable file like object in the\nformat specified in the content_type request header.\nformat_response_payload(output: bytes) \u2192 str[source]\u00b6\nFormats the response body according to the output\nschema of the model. 
langchain.llms.huggingface_hub.HuggingFaceHub
class langchain.llms.huggingface_hub.HuggingFaceHub(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, repo_id: str = 'gpt2', task: Optional[str] = None, model_kwargs: Optional[dict] = None, huggingfacehub_api_token: Optional[str] = None)
Bases: LLM
Wrapper around HuggingFaceHub models.
To use, you should have the huggingface_hub python package installed, and the environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass it as a named parameter to the constructor.
Only supports text-generation, text2text-generation and summarization for now.
Example:
.. code-block:: python

    from langchain.llms import HuggingFaceHub
    hf = HuggingFaceHub(repo_id="gpt2", huggingfacehub_api_token="my-api-key")

Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param huggingfacehub_api_token: Optional[str] = None
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param model_kwargs: Optional[dict] = None
Keyword arguments to pass to the model.
param repo_id: str = 'gpt2'
Model name to use.
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param task: Optional[str] = None
Task to call the model with. Should be a task that returns generated_text or summary_text.
param verbose: bool [Optional]
Whether to print out response text.
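A slightly fuller construction sketch than the example above. The repo id, task, and generation parameters below are illustrative assumptions, not requirements of the class; with HUGGINGFACEHUB_API_TOKEN exported, the token argument can be omitted.

.. code-block:: python

    from langchain.llms import HuggingFaceHub

    # Assumes HUGGINGFACEHUB_API_TOKEN is set in the environment.
    hf = HuggingFaceHub(
        repo_id="google/flan-t5-base",  # illustrative text2text-generation model
        task="text2text-generation",    # must return generated_text or summary_text
        model_kwargs={"temperature": 0.7, "max_length": 64},
    )
    print(hf("Translate to German: Good morning"))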
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Check the cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompts and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you:
want to take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
dict(**kwargs: Any) → Dict
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompts and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you:
want to take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model's context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model's context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
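A short sketch of how generate differs from predict: generate returns the full LLMResult, so the per-prompt candidate Generations and any provider-specific llm_output stay accessible. It builds on the hf object from the example above; the prompts and stop sequence are illustrative.

.. code-block:: python

    result = hf.generate(["Tell me a joke", "Tell me a poem"], stop=["\n\n"])
    for prompt_generations in result.generations:  # one list per input prompt
        print(prompt_generations[0].text)          # top candidate for that prompt
    print(result.llm_output)                       # provider-specific extras; may be None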
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
validator raise_deprecation » all fields
Raise a deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None
Save the LLM.
Parameters
file_path – Path to the file to save the LLM to.
Example:
.. code-block:: python

    llm.save(file_path="path/llm.yaml")

validator set_verbose » verbose
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
validator validate_environment » all fields
Validate that the api key and python package exist in the environment.
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
Configuration for this pydantic object.
extra = 'forbid'
langchain.llms.promptlayer_openai.PromptLayerOpenAIChat
class langchain.llms.promptlayer_openai.PromptLayerOpenAIChat(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, model_name: str = 'gpt-3.5-turbo', model_kwargs: Dict[str, Any] = None, openai_api_key: Optional[str] = None, openai_api_base: Optional[str] = None, openai_proxy: Optional[str] = None, max_retries: int = 6, prefix_messages: List = None, streaming: bool = False, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', pl_tags: Optional[List[str]] = None, return_pl_id: Optional[bool] = False)
Bases: OpenAIChat
Wrapper around OpenAI large language models.
To use, you should have the openai and promptlayer python packages installed, and the environment variables OPENAI_API_KEY and PROMPTLAYER_API_KEY set with your OpenAI API key and PromptLayer key respectively.
All parameters that can be passed to the OpenAIChat LLM can also be passed here. PromptLayerOpenAIChat adds two optional parameters:
pl_tags – List of strings to tag the request with.
return_pl_id – If True, the PromptLayer request ID will be returned in the generation_info field of the Generation object.
Example:
.. code-block:: python

    from langchain.llms import PromptLayerOpenAIChat
    openaichat = PromptLayerOpenAIChat(model_name="gpt-3.5-turbo")

Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param allowed_special: Union[Literal['all'], AbstractSet[str]] = {}
Set of special tokens that are allowed.
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param disallowed_special: Union[Literal['all'], Collection[str]] = 'all'
Set of special tokens that are not allowed.
param max_retries: int = 6
Maximum number of retries to make when generating.
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param model_kwargs: Dict[str, Any] [Optional]
Holds any model parameters valid for the create call not explicitly specified.
param model_name: str = 'gpt-3.5-turbo'
Model name to use.
param openai_api_base: Optional[str] = None
param openai_api_key: Optional[str] = None
param openai_proxy: Optional[str] = None
param pl_tags: Optional[List[str]] = None
param prefix_messages: List [Optional]
Series of messages for Chat input.
param return_pl_id: Optional[bool] = False
param streaming: bool = False
Whether to stream the results or not.
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param verbose: bool [Optional]
Whether to print out response text.
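A sketch of the two PromptLayer-specific parameters in use. The tag names are illustrative, and the exact generation_info key for the request ID ("pl_request_id" below) is an assumption to verify against the PromptLayer documentation.

.. code-block:: python

    from langchain.llms import PromptLayerOpenAIChat

    llm = PromptLayerOpenAIChat(
        model_name="gpt-3.5-turbo",
        pl_tags=["langchain", "experiment-1"],  # illustrative tags
        return_pl_id=True,
    )
    result = llm.generate(["What is the capital of France?"])
    gen = result.generations[0][0]
    # With return_pl_id=True the request ID is surfaced in generation_info
    # (key name assumed here).
    pl_request_id = (gen.generation_info or {}).get("pl_request_id")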
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Check the cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompts and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you:
want to take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
validator build_extra » all fields
Build extra kwargs from additional params that were passed in.
dict(**kwargs: Any) → Dict
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompts and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you:
want to take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model's context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model's context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]
Get the token IDs using the tiktoken package. (A pre-flight context-window check using these helpers is sketched after this entry.)
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
validator raise_deprecation » all fields
Raise a deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None
Save the LLM.
Parameters
file_path – Path to the file to save the LLM to.
Example:
.. code-block:: python

    llm.save(file_path="path/llm.yaml")

validator set_verbose » verbose
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
validator validate_environment » all fields
Validate that the api key and python package exist in the environment.
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True
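The token-counting helpers documented above make pre-flight context-window checks straightforward. A minimal sketch, reusing the llm object from the previous sketch; the 4096-token limit is an assumption for gpt-3.5-turbo, not something this page states.

.. code-block:: python

    prompt = "Summarize the following report: ..."
    n_tokens = llm.get_num_tokens(prompt)  # counted via tiktoken for this class
    if n_tokens > 4096:                    # assumed context window for gpt-3.5-turbo
        raise ValueError(f"Prompt is {n_tokens} tokens; too long for the model")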
langchain.llms.huggingface_pipeline.HuggingFacePipeline
class langchain.llms.huggingface_pipeline.HuggingFacePipeline(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, pipeline: Any = None, model_id: str = 'gpt2', model_kwargs: Optional[dict] = None, pipeline_kwargs: Optional[dict] = None)
Bases: LLM
Wrapper around the HuggingFace Pipeline API.
To use, you should have the transformers python package installed.
Only supports text-generation, text2text-generation and summarization for now.
Example using from_model_id:
.. code-block:: python

    from langchain.llms import HuggingFacePipeline
    hf = HuggingFacePipeline.from_model_id(
        model_id="gpt2",
        task="text-generation",
        pipeline_kwargs={"max_new_tokens": 10},
    )

Example passing a pipeline in directly:
.. code-block:: python

    from langchain.llms import HuggingFacePipeline
    from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

    model_id = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    pipe = pipeline(
        "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
    )
    hf = HuggingFacePipeline(pipeline=pipe)

Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param model_id: str = 'gpt2'
Model name to use.
param model_kwargs: Optional[dict] = None
Keyword arguments passed to the model.
param pipeline_kwargs: Optional[dict] = None
Keyword arguments passed to the pipeline.
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param verbose: bool [Optional]
Whether to print out response text.
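Either construction above yields a standard LLM interface. A small usage sketch, reusing the hf object from the examples; the prompt is illustrative.

.. code-block:: python

    # hf comes from either construction shown above.
    text = hf("Once upon a time")  # __call__ checks the cache, then runs the local pipeline
    print(text)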
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Check the cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompts and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you:
want to take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
dict(**kwargs: Any) → Dict
Return a dictionary of the LLM.
classmethod from_model_id(model_id: str, task: str, device: int = -1, model_kwargs: Optional[dict] = None, pipeline_kwargs: Optional[dict] = None, **kwargs: Any) → LLM
Construct the pipeline object from model_id and task. (See the device sketch after this method group.)
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompts and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you:
want to take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model's context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model's context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
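The device argument of from_model_id follows the transformers pipeline convention: -1 (the default) runs on CPU, 0 selects the first GPU. A sketch assuming a CUDA device is available:

.. code-block:: python

    hf_gpu = HuggingFacePipeline.from_model_id(
        model_id="gpt2",
        task="text-generation",
        device=0,  # first CUDA device; -1 (default) is CPU
        pipeline_kwargs={"max_new_tokens": 10},
    )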
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
validator raise_deprecation » all fields
Raise a deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None
Save the LLM.
Parameters
file_path – Path to the file to save the LLM to.
Example:
.. code-block:: python

    llm.save(file_path="path/llm.yaml")

validator set_verbose » verbose
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
Configuration for this pydantic object.
extra = 'forbid'
langchain.llms.cerebriumai.CerebriumAI
class langchain.llms.cerebriumai.CerebriumAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, endpoint_url: str = '', model_kwargs: Dict[str, Any] = None, cerebriumai_api_key: Optional[str] = None)
Bases: LLM
Wrapper around CerebriumAI large language models.
To use, you should have the cerebrium python package installed, and the environment variable CEREBRIUMAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the call can be passed in, even if not explicitly saved on this class.
Example:
.. code-block:: python

    from langchain.llms import CerebriumAI
    cerebrium = CerebriumAI(endpoint_url="")

Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param cerebriumai_api_key: Optional[str] = None
param endpoint_url: str = ''
Model endpoint to use.
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param model_kwargs: Dict[str, Any] [Optional]
Holds any model parameters valid for the create call not explicitly specified.
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param verbose: bool [Optional]
Whether to print out response text.
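A fuller construction sketch than the one-liner above. The endpoint URL is a placeholder for your own deployment, and the keyword arguments are illustrative of what model_kwargs forwards to the call.

.. code-block:: python

    from langchain.llms import CerebriumAI

    # Assumes CEREBRIUMAI_API_KEY is exported; it can also be passed explicitly.
    cerebrium = CerebriumAI(
        endpoint_url="https://<your-cerebrium-deployment>/predict",  # placeholder
        model_kwargs={"max_length": 100, "temperature": 0.7},        # illustrative
    )
    print(cerebrium("What is a large language model?"))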
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Check the cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompts and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you:
want to take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
validator build_extra » all fields
Build extra kwargs from additional params that were passed in.
dict(**kwargs: Any) → Dict
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompts and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you:
want to take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model's context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model's context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
validator raise_deprecation » all fields
Raise a deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None
Save the LLM.
Parameters
file_path – Path to the file to save the LLM to.
Example:
.. code-block:: python

    llm.save(file_path="path/llm.yaml")

validator set_verbose » verbose
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
validator validate_environment » all fields
Validate that the api key and python package exist in the environment.
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
Configuration for this pydantic object.
extra = 'forbid'
langchain.llms.base.BaseLLM
class langchain.llms.base.BaseLLM(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None)
Bases: BaseLanguageModel, ABC
An LLM wrapper should take in a prompt and return a string.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None
param callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None
param callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param verbose: bool [Optional]
Whether to print out response text.
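Because BaseLLM is abstract, it is implemented rather than instantiated directly. A minimal sketch of the customary pattern, assuming the concrete LLM helper class from the same module and its private _call / _llm_type hooks; treat the exact hook names as assumptions to check against your installed version.

.. code-block:: python

    from typing import Any, List, Optional

    from langchain.callbacks.manager import CallbackManagerForLLMRun
    from langchain.llms.base import LLM


    class EchoLLM(LLM):
        """Toy LLM that parrots the prompt back, truncated at any stop word."""

        @property
        def _llm_type(self) -> str:
            return "echo"

        def _call(
            self,
            prompt: str,
            stop: Optional[List[str]] = None,
            run_manager: Optional[CallbackManagerForLLMRun] = None,
            **kwargs: Any,
        ) -> str:
            text = prompt
            for word in stop or []:
                text = text.split(word)[0]  # cut at the first stop occurrence
            return text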
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Check the cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompts and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you:
want to take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
dict(**kwargs: Any) → Dict
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompts and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you:
want to take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model's context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model's context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
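A side-by-side sketch of the two synchronous entry points just described. It assumes an llm instance of any concrete subclass; HumanMessage comes from langchain.schema in this version, and the prompt is illustrative.

.. code-block:: python

    from langchain.schema import HumanMessage

    answer_str = llm.predict("What is the capital of France?", stop=["\n"])
    answer_msg = llm.predict_messages([HumanMessage(content="What is the capital of France?")])
    print(answer_str)          # plain string
    print(answer_msg.content)  # BaseMessage; the text lives on .content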
validator set_verbose » verbose
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True

langchain.llms.octoai_endpoint.OctoAIEndpoint
class langchain.llms.octoai_endpoint.OctoAIEndpoint(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, endpoint_url: Optional[str] = None, model_kwargs: Optional[dict] = None, octoai_api_token: Optional[str] = None)
Bases: LLM
Wrapper around OctoAI Inference Endpoints.
OctoAIEndpoint is a class to interact with OctoAI Compute Service large language model endpoints.
To use, you should have the octoai python package installed, and the environment variable OCTOAI_API_TOKEN set with your API token, or pass it as a named parameter to the constructor.
Example
from langchain.llms.octoai_endpoint import OctoAIEndpoint
OctoAIEndpoint(
    octoai_api_token="octoai-api-key",
    endpoint_url="https://mpt-7b-demo-kk0powt97tmb.octoai.cloud/generate",
    model_kwargs={
        "max_new_tokens": 200,
        "temperature": 0.75,
        "top_p": 0.95,
        "repetition_penalty": 1,
        "seed": None,
        "stop": [],
    },
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param endpoint_url: Optional[str] = None
Endpoint URL to use.
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param model_kwargs: Optional[dict] = None
Keyword arguments to pass to the model.
param octoai_api_token: Optional[str] = None
OctoAI API token.
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param verbose: bool [Optional]
Whether to print out response text.
validator validate_environment » all fields
Validate that the API key and required Python package exist in the environment.
model Config
Bases: object
Configuration for this pydantic object.
extra = 'forbid'
The standard LLM interface (__call__, generate, generate_prompt, predict, predict_messages, their async counterparts, the token-counting helpers, and the serialization methods) is inherited from BaseLLM and behaves exactly as documented above; an invocation sketch follows.
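Because the call surface is inherited, invoking the endpoint is the same as for any other LLM. A minimal sketch, assuming a running OctoAI endpoint and a valid token (the URL is reused from the example above; the model kwargs are illustrative):
from langchain.llms.octoai_endpoint import OctoAIEndpoint

llm = OctoAIEndpoint(
    octoai_api_token="octoai-api-key",  # or set OCTOAI_API_TOKEN instead
    endpoint_url="https://mpt-7b-demo-kk0powt97tmb.octoai.cloud/generate",
    model_kwargs={"max_new_tokens": 100, "temperature": 0.5},
)
# __call__ checks the cache, then runs the LLM against the endpoint.
print(llm("Write a haiku about the ocean."))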
langchain.llms.loading.load_llm
langchain.llms.loading.load_llm(file: Union[str, Path]) → BaseLLM
Load an LLM from a file.
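save and load_llm are designed to round-trip: a serializable LLM written to disk can be reconstructed later. A minimal sketch, assuming a serializable provider such as OpenAI is configured (the file name is illustrative):
from langchain.llms import OpenAI
from langchain.llms.loading import load_llm

llm = OpenAI(temperature=0.2)
llm.save("my_llm.yaml")             # serialized as YAML because of the file suffix

restored = load_llm("my_llm.yaml")  # rebuilds an equivalent OpenAI instance
print(restored.temperature)         # 0.2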
langchain.llms.openai.BaseOpenAI
class langchain.llms.openai.BaseOpenAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, model: str = 'text-davinci-003', temperature: float = 0.7, max_tokens: int = 256, top_p: float = 1, frequency_penalty: float = 0, presence_penalty: float = 0, n: int = 1, best_of: int = 1, model_kwargs: Dict[str, Any] = None, openai_api_key: Optional[str] = None, openai_api_base: Optional[str] = None, openai_organization: Optional[str] = None, openai_proxy: Optional[str] = None, batch_size: int = 20, request_timeout: Optional[Union[float, Tuple[float, float]]] = None, logit_bias: Optional[Dict[str, float]] = None, max_retries: int = 6, streaming: bool = False, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', tiktoken_model_name: Optional[str] = None)
Bases: BaseLLM
Wrapper around OpenAI large language models.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param allowed_special: Union[Literal['all'], AbstractSet[str]] = {}
Set of special tokens that are allowed.
param batch_size: int = 20
Batch size to use when passing multiple documents to generate.
param best_of: int = 1
Generates best_of completions server-side and returns the "best".
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param disallowed_special: Union[Literal['all'], Collection[str]] = 'all'
Set of special tokens that are not allowed.
param frequency_penalty: float = 0
Penalizes repeated tokens according to frequency.
param logit_bias: Optional[Dict[str, float]] [Optional]
Adjust the probability of specific tokens being generated.
param max_retries: int = 6
Maximum number of retries to make when generating.
param max_tokens: int = 256
The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximal context size.
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param model_kwargs: Dict[str, Any] [Optional]
Holds any model parameters valid for the create call that are not explicitly specified.
param model_name: str = 'text-davinci-003' (alias 'model')
Model name to use.
param n: int = 1
How many completions to generate for each prompt.
param openai_api_base: Optional[str] = None
param openai_api_key: Optional[str] = None
param openai_organization: Optional[str] = None
param openai_proxy: Optional[str] = None
param presence_penalty: float = 0
Penalizes repeated tokens.
param request_timeout: Optional[Union[float, Tuple[float, float]]] = None
Timeout for requests to the OpenAI completion API. Default is 600 seconds.
param streaming: bool = False
Whether to stream the results or not.
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param temperature: float = 0.7
What sampling temperature to use.
param tiktoken_model_name: Optional[str] = None
The model name to pass to tiktoken when using this class. Tiktoken is used to count the number of tokens in documents to constrain them to be under a certain limit. By default, when set to None, this will be the same as the model name. However, there are some cases where you may want to use this class with a model name not supported by tiktoken, for example when using Azure OpenAI or one of the many model providers that expose an OpenAI-like API but with different models. In those cases, to avoid erroring when tiktoken is called, you can specify a model name to use here.
param top_p: float = 1
Total probability mass of tokens to consider at each step.
param verbose: bool [Optional]
Whether to print out response text.
The standard LLM interface is inherited from BaseLLM and behaves as documented above; the members below are specific to the OpenAI wrapper. A construction sketch follows.
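In practice you instantiate a concrete subclass such as langchain.llms.OpenAI rather than BaseOpenAI itself. A minimal sketch, assuming the openai package is installed and OPENAI_API_KEY is set (all parameter values are illustrative):
from langchain.llms import OpenAI

llm = OpenAI(
    model_name="text-davinci-003",     # accepted under the alias `model` as well
    temperature=0.7,
    max_tokens=256,
    max_retries=6,
    model_kwargs={"user": "example-app"},  # extra create-call params not modeled as fields
)
print(llm("Complete this sentence: the quick brown fox"))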
validator build_extra » all fields
Build extra kwargs from additional params that were passed in.
create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → LLMResult
Create the LLMResult from the choices and prompts.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]
Get the sub prompts for the llm call.
get_token_ids(text: str) → List[int]
Get the token IDs using the tiktoken package.
max_tokens_for_prompt(prompt: str) → int
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt – The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
static modelname_to_contextsize(modelname: str) → int
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname – The model name we want to know the context size for.
Returns
The maximum context size.
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
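Together with get_num_tokens, these helpers let you budget a completion before calling the API. A minimal sketch, assuming an OpenAI instance as above (the model name and prompt are illustrative):
from langchain.llms import OpenAI

llm = OpenAI(model_name="text-davinci-003")
prompt = "Summarize the plot of Hamlet in one paragraph."

context_size = llm.modelname_to_contextsize(llm.model_name)  # e.g. 4097 for text-davinci-003
used = llm.get_num_tokens(prompt)
budget = llm.max_tokens_for_prompt(prompt)  # context_size minus the prompt's tokens
print(f"{used} prompt tokens, {budget} of {context_size} left for the completion")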
prep_streaming_params(stop: Optional[List[str]] = None) → Dict[str, Any]
Prepare the params for streaming.
stream(prompt: str, stop: Optional[List[str]] = None) → Generator
Call OpenAI with the streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction. Once that happens, this interface could change.
Parameters
prompt – The prompt to pass into the model.
stop – Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
    yield token
validator validate_environment » all fields
Validate that the API key and required Python package exist in the environment.
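A minimal consumption sketch for this beta interface, assuming an OpenAI instance as above; whether each yielded item is plain text or a raw provider response chunk depends on the library version, so verify the access pattern before relying on it:
llm = OpenAI()
for token in llm.stream("Tell me a joke."):
    # Each item arrives as soon as the API produces it; it may be a raw
    # provider chunk rather than a plain string, depending on the version.
    print(token, flush=True)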
property max_context_size: int
Get the max context size for this model.
model Config
Bases: object
Configuration for this pydantic object.
allow_population_by_field_name = True

langchain.llms.human.HumanInputLLM
class langchain.llms.human.HumanInputLLM(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, input_func: Callable = None, prompt_func: Callable[[str], None] = None, separator: str = '\n', input_kwargs: Mapping[str, Any] = {}, prompt_kwargs: Mapping[str, Any] = {})
Bases: LLM
An LLM wrapper which returns user input as the response.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param input_func: Callable [Optional]
param input_kwargs: Mapping[str, Any] = {}
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param prompt_func: Callable[[str], None] [Optional]
param prompt_kwargs: Mapping[str, Any] = {}
param separator: str = '\n'
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param verbose: bool [Optional]
Whether to print out response text.
model Config
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True
The standard LLM interface is inherited from BaseLLM and behaves as documented above; a usage sketch follows.
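HumanInputLLM is mainly useful for debugging chains and agents: wherever a model response is expected, a person types it instead. A minimal sketch, assuming an interactive terminal and the default behavior of printing the prompt and reading a reply from standard input (an assumption worth verifying; both can be overridden via prompt_func and input_func):
from langchain.llms.human import HumanInputLLM

llm = HumanInputLLM()
# The prompt is displayed via prompt_func, then execution blocks on
# input_func until you type a response, which becomes the "prediction".
response = llm("You are playing the part of the model. Say hello.")
print(f"Human-provided response: {response}")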
langchain.llms.vertexai.is_codey_model
langchain.llms.vertexai.is_codey_model(model_name: str) → bool
Returns True if the model name is a Codey model.
Parameters
model_name – The model name to check.
Returns
True if the model name is a Codey model.
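A minimal sketch; the model names are illustrative Vertex AI identifiers (Codey is the code-oriented model family, e.g. code-bison, while text-bison is a plain text model):
from langchain.llms.vertexai import is_codey_model

print(is_codey_model("code-bison"))  # True
print(is_codey_model("text-bison"))  # False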
langchain.llms.huggingface_endpoint.HuggingFaceEndpoint
class langchain.llms.huggingface_endpoint.HuggingFaceEndpoint(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, endpoint_url: str = '', task: Optional[str] = None, model_kwargs: Optional[dict] = None, huggingfacehub_api_token: Optional[str] = None)
Bases: LLM
Wrapper around Hugging Face Hub Inference Endpoints.
To use, you should have the huggingface_hub python package installed, and the environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass it as a named parameter to the constructor.
Only supports text-generation and text2text-generation for now.
Example
from langchain.llms import HuggingFaceEndpoint
endpoint_url = (
    "https://abcdefghijklmnop.us-east-1.aws.endpoints.huggingface.cloud"
)
hf = HuggingFaceEndpoint(
    endpoint_url=endpoint_url,
    huggingfacehub_api_token="my-api-key"
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param endpoint_url: str = ''
Endpoint URL to use.
param huggingfacehub_api_token: Optional[str] = None
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param model_kwargs: Optional[dict] = None
Keyword arguments to pass to the model.
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param task: Optional[str] = None
Task to call the model with. Should be a task that returns generated_text or summary_text.
param verbose: bool [Optional]
Whether to print out response text.
validator validate_environment » all fields
Validate that the API key and required Python package exist in the environment.
model Config
Bases: object
Configuration for this pydantic object.
extra = 'forbid'
The standard LLM interface is inherited from BaseLLM and behaves as documented above; an invocation sketch follows.
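Continuing the example above, a minimal invocation sketch; the endpoint URL, task, and model kwargs are illustrative, and the deployed model determines which kwargs make sense:
hf = HuggingFaceEndpoint(
    endpoint_url="https://abcdefghijklmnop.us-east-1.aws.endpoints.huggingface.cloud",
    huggingfacehub_api_token="my-api-key",
    task="text-generation",
    model_kwargs={"max_new_tokens": 100},
)
# Output is cut off at the first occurrence of a stop substring.
print(hf("Once upon a time", stop=["\n\n"]))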
langchain.llms.base.create_base_retry_decorator
langchain.llms.base.create_base_retry_decorator(error_types: List[Type[BaseException]], max_retries: int = 1) → Callable[[Any], Any]
Create a retry decorator for a given LLM and provided list of error types.
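The returned value is an ordinary decorator: it re-invokes the wrapped callable when one of the listed error types is raised, up to max_retries attempts (the backoff policy between attempts is internal to the implementation). A minimal sketch with an illustrative flaky function:
import random

from langchain.llms.base import create_base_retry_decorator

retry_on_timeout = create_base_retry_decorator(
    error_types=[TimeoutError], max_retries=3
)

@retry_on_timeout
def call_provider() -> str:
    # Illustrative stand-in for a provider API call that sometimes times out.
    if random.random() < 0.5:
        raise TimeoutError("simulated transient failure")
    return "ok"

print(call_provider())  # transparently retried on TimeoutError, up to 3 attempts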
langchain.llms.anyscale.Anyscale
class langchain.llms.anyscale.Anyscale(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, model_kwargs: Optional[dict] = None, anyscale_service_url: Optional[str] = None, anyscale_service_route: Optional[str] = None, anyscale_service_token: Optional[str] = None)
Bases: LLM
Wrapper around Anyscale Services.
To use, you should have the environment variables ANYSCALE_SERVICE_URL, ANYSCALE_SERVICE_ROUTE and ANYSCALE_SERVICE_TOKEN set with your Anyscale Service, or pass them as named parameters to the constructor.
Example
from langchain.llms import Anyscale
anyscale = Anyscale(anyscale_service_url="SERVICE_URL",
                    anyscale_service_route="SERVICE_ROUTE",
                    anyscale_service_token="SERVICE_TOKEN")
# Use Ray for distributed processing
import ray
prompt_list = []
@ray.remote
def send_query(llm, prompt):
    resp = llm(prompt)
    return resp
futures = [send_query.remote(anyscale, prompt) for prompt in prompt_list]
results = ray.get(futures)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param anyscale_service_route: Optional[str] = None
param anyscale_service_token: Optional[str] = None
param anyscale_service_url: Optional[str] = None
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param model_kwargs: Optional[dict] = None
Keyword arguments to pass to the model. Reserved for future use.
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param verbose: bool [Optional]
Whether to print out response text.
The standard LLM interface is inherited from BaseLLM and behaves as documented above; the environment validator below is specific to this class.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the top candidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.anyscale.Anyscale.html"} {"id": "0e8bd6433e52-3", "text": "Parameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional
These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in the order they occur in the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.anyscale.Anyscale.html"} {"id": "0e8bd6433e52-5", "text": "to the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text, use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the API key and Python package exist in the environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs.
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.anyscale.Anyscale.html"} {"id": "0e8bd6433e52-6", "text": "eg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.anyscale.Anyscale.html"} {"id": "f7e921374c0f-0", "text": "langchain.llms.openllm.IdentifyingParams\u00b6\nclass langchain.llms.openllm.IdentifyingParams[source]\u00b6\nBases: TypedDict\nParameters for identifying a model as a typed dict.\nMethods\n__init__(*args,\u00a0**kwargs)\nclear()\ncopy()\nfromkeys([value])\nCreate a new dictionary with keys from iterable and values set to value.\nget(key[,\u00a0default])\nReturn the value for key if key is in the dictionary, else default.\nitems()\nkeys()\npop(k[,d])\nIf the key is not found, return the default if given; otherwise, raise a KeyError.\npopitem()\nRemove and return a (key, value) pair as a 2-tuple.\nsetdefault(key[,\u00a0default])\nInsert key with a value of default if key is not in the dictionary.\nupdate([E,\u00a0]**F)\nIf E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]\nvalues()\nAttributes\nmodel_name\nmodel_id\nserver_url\nserver_type\nembedded\nllm_kwargs\nclear() \u2192 None.\u00a0 Remove all items from D.\u00b6\ncopy() \u2192 a shallow copy of D\u00b6\nfromkeys(value=None, /)\u00b6\nCreate a new dictionary with keys from iterable and values set to value.\nget(key, default=None, /)\u00b6\nReturn the value for key if key is in the dictionary, else default.\nitems() \u2192 a set-like object providing a view on D's items\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.IdentifyingParams.html"} {"id": "f7e921374c0f-1", "text": "items() \u2192 a set-like object providing a view on D's items\u00b6\nkeys() \u2192 a set-like object providing a view on D's keys\u00b6\npop(k[, d]) \u2192 v, remove specified key and return the corresponding value.\u00b6\nIf the key is not found, return the default if given; otherwise,\nraise a KeyError.\npopitem()\u00b6\nRemove and return a (key, value) pair as a 2-tuple.\nPairs are returned in LIFO (last-in, first-out) order.\nRaises KeyError if the dict is empty.\nsetdefault(key, default=None, /)\u00b6\nInsert key with a value of default if key is not in the dictionary.\nReturn the value for key if key is in the dictionary, else default.\nupdate([E, ]**F) \u2192 None.\u00a0 Update D from dict/iterable E and F.\u00b6\nIf E is present and has a .keys() method, then does: for k in E: D[k] = E[k]\nIf E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v\nIn either case, this is followed by: for k in F: D[k] = F[k]\nvalues() \u2192 an object providing a view on D's values\u00b6\nembedded: bool\u00b6\nllm_kwargs: Dict[str, Any]\u00b6\nmodel_id: 
Optional[str]\u00b6\nmodel_name: str\u00b6\nserver_type: Optional[Literal['http', 'grpc']]\u00b6\nserver_url: Optional[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.IdentifyingParams.html"} {"id": "40923c518d19-0", "text": "langchain.llms.replicate.Replicate\u00b6\nclass langchain.llms.replicate.Replicate(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, model: str, input: Dict[str, Any] = None, model_kwargs: Dict[str, Any] = None, replicate_api_token: Optional[str] = None)[source]\u00b6\nBases: LLM\nWrapper around Replicate models.\nTo use, you should have the replicate python package installed,\nand the environment variable REPLICATE_API_TOKEN set with your API token.\nYou can find your token here: https://replicate.com/account\nThe model param is required, but any other model parameters can also\nbe passed in with the format input={model_param: value, \u2026}\nExample\nfrom langchain.llms import Replicate\nreplicate = Replicate(model=\"stability-ai/stable-diffusion:27b93a2413e7f36cd83da926f3656280b2931564ff050bf9575f1fdf9bcd7478\",\n input={\"image_dimensions\": \"512x512\"})\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam input: Dict[str, Any] [Optional]\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html"} {"id": "40923c518d19-1", "text": "Metadata to add to the run trace.\nparam model: str [Required]\u00b6\nparam model_kwargs: Dict[str, Any] [Optional]\u00b6\nparam replicate_api_token: Optional[str] = None\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html"}
{"id": "40923c518d19-2", "text": "Parameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the top candidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the top candidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html"} {"id": "40923c518d19-3", "text": "**kwargs \u2013 Arbitrary additional keyword arguments.
These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator build_extra\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nBuild extra kwargs from additional params that were passed in.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html"} {"id": "40923c518d19-4", "text": "functionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in the order they occur in the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html"}
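The token-counting helpers (get_num_tokens, get_token_ids) are easiest to see side by side; a short, hedged sketch that works with any LLM wrapper on this page:
.. code-block:: python

    text = "How many tokens is this sentence?"
    token_ids = llm.get_token_ids(text)  # ordered token ids
    n_tokens = llm.get_num_tokens(text)  # count of those tokens
    assert n_tokens == len(token_ids)    # both use the same tokenization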
{"id": "40923c518d19-5", "text": "to the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text, use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the API key and Python package exist in the environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html"} {"id": "40923c518d19-6", "text": "eg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html"}
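Because the Replicate class example above targets an image model, a text-completion sketch may help; the model identifier and input keys below are placeholders, not pinned values:
.. code-block:: python

    from langchain.llms import Replicate

    # Hypothetical model/version string; copy a real one from replicate.com.
    llm = Replicate(
        model="some-org/some-llm:<version-hash>",
        input={"temperature": 0.75},
    )
    print(llm("Explain gradient descent in one sentence."))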
{"id": "14661096d1f2-0", "text": "langchain.llms.sagemaker_endpoint.SagemakerEndpoint\u00b6\nclass langchain.llms.sagemaker_endpoint.SagemakerEndpoint(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, endpoint_name: str = '', region_name: str = '', credentials_profile_name: Optional[str] = None, content_handler: LLMContentHandler, model_kwargs: Optional[Dict] = None, endpoint_kwargs: Optional[Dict] = None)[source]\u00b6\nBases: LLM\nWrapper around custom Sagemaker Inference Endpoints.\nTo use, you must supply the endpoint name from your deployed\nSagemaker model & the region where it is deployed.\nTo authenticate, the AWS client uses the following methods to\nautomatically load credentials:\nhttps://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\nIf a specific credential profile should be used, you must pass\nthe name of the profile from the ~/.aws/credentials file that is to be used.\nMake sure the credentials / roles used have the required policies to\naccess the Sagemaker endpoint.\nSee: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam content_handler: langchain.llms.sagemaker_endpoint.LLMContentHandler [Required]\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html"} {"id": "14661096d1f2-1", "text": "The content handler class that provides input and\noutput transform functions to handle formats between the LLM\nand the endpoint.\nparam credentials_profile_name: Optional[str] = None\u00b6\nThe name of the profile in the ~/.aws/credentials or ~/.aws/config files, which\nhas either access keys or role information specified.\nIf not specified, the default credential profile or, if on an EC2 instance,\ncredentials from IMDS will be used.\nSee: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\nparam endpoint_kwargs: Optional[Dict] = None\u00b6\nOptional attributes passed to the invoke_endpoint\nfunction. See the boto3 docs for more info.\nparam endpoint_name: str = ''\u00b6\nThe name of the endpoint from the deployed Sagemaker model.\nMust be unique within an AWS Region.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam model_kwargs: Optional[Dict] = None\u00b6\nKeyword arguments to pass to the model.\nparam region_name: str = ''\u00b6\nThe AWS region where the Sagemaker model is deployed, eg. us-west-2.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.
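A hedged sketch of the content_handler contract: subclass LLMContentHandler and implement the two transforms. The JSON request and response shapes below are assumptions; match them to your deployed model's signature.
.. code-block:: python

    import json
    from langchain.llms.sagemaker_endpoint import LLMContentHandler, SagemakerEndpoint

    class ContentHandler(LLMContentHandler):
        content_type = "application/json"
        accepts = "application/json"

        def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
            # Assumed request schema for the endpoint.
            return json.dumps({"inputs": prompt, **model_kwargs}).encode("utf-8")

        def transform_output(self, output: bytes) -> str:
            # Assumed response schema; output is the raw response body.
            return json.loads(output.read().decode("utf-8"))[0]["generated_text"]

    llm = SagemakerEndpoint(
        endpoint_name="my-endpoint",  # placeholder
        region_name="us-west-2",
        content_handler=ContentHandler(),
    )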
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html"} {"id": "14661096d1f2-2", "text": "Check Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html"} {"id": "14661096d1f2-3", "text": "async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the top candidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments.
These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the top candidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html"} {"id": "14661096d1f2-4", "text": "Run the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments.
These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html"} {"id": "14661096d1f2-5", "text": "Get the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in the order they occur in the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text, use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html"} {"id": "14661096d1f2-6", "text": "to the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\"path/llm.yaml\")
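The save example just above has a natural counterpart for loading; a hedged sketch using the loading helper (the path is a placeholder):
.. code-block:: python

    from langchain.llms.loading import load_llm

    llm.save(file_path="llm.yaml")  # serialize the LLM configuration to YAML
    llm = load_llm("llm.yaml")      # reconstruct it from the file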
validator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that AWS credentials and the Python package exist in the environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html"} {"id": "b2f43a622386-0", "text": "langchain.llms.mosaicml.MosaicML\u00b6\nclass langchain.llms.mosaicml.MosaicML(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict', inject_instruction_format: bool = False, model_kwargs: Optional[dict] = None, retry_sleep: float = 1.0, mosaicml_api_token: Optional[str] = None)[source]\u00b6\nBases: LLM\nWrapper around MosaicML\u2019s LLM inference service.\nTo use, you should have the\nenvironment variable MOSAICML_API_TOKEN set with your API token, or pass\nit as a named parameter to the constructor.\nExample\nfrom langchain.llms import MosaicML\nendpoint_url = (\n \"https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict\"\n)\nmosaic_llm = MosaicML(\n endpoint_url=endpoint_url,\n mosaicml_api_token=\"my-api-key\"\n)\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict'\u00b6\nEndpoint URL to use.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html"} {"id": "b2f43a622386-1", "text": "Endpoint URL to use.\nparam inject_instruction_format: bool = False\u00b6\nWhether to inject the instruction format into the prompt.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam model_kwargs: Optional[dict] = None\u00b6\nKeyword arguments to pass to the model.\nparam mosaicml_api_token: Optional[str] = None\u00b6\nparam retry_sleep: float = 1.0\u00b6\nHow long to sleep if a rate limit is encountered.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.
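A hedged sketch of inject_instruction_format, which wraps the raw prompt in the instruct template before it is sent; the model_kwargs key shown is an assumption about the endpoint's accepted parameters:
.. code-block:: python

    from langchain.llms import MosaicML

    llm = MosaicML(
        inject_instruction_format=True,        # wrap prompt in the instruct template
        model_kwargs={"max_new_tokens": 128},  # assumed endpoint parameter
    )  # mosaicml_api_token falls back to the MOSAICML_API_TOKEN env var
    print(llm("Write three bullet points about MosaicML."))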
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html"} {"id": "b2f43a622386-2", "text": "API.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the top candidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html"} {"id": "b2f43a622386-3", "text": "Asynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the top candidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.
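The async variants (agenerate, apredict, apredict_messages) are awaited like any coroutine; a minimal sketch:
.. code-block:: python

    import asyncio

    async def main() -> None:
        # agenerate mirrors generate but awaits the provider call
        result = await llm.agenerate(["Summarize asyncio in one line."])
        print(result.generations[0][0].text)

    asyncio.run(main())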
dict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html"} {"id": "b2f43a622386-4", "text": "text generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in the order they occur in the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating.
Model output is cut off at the", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html"} {"id": "b2f43a622386-5", "text": "stop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text, use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the API key and Python package exist in the environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html"} {"id": "b2f43a622386-6", "text": "serialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg.
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html"} {"id": "5d2dc51b1c7b-0", "text": "langchain.llms.databricks.Databricks\u00b6\nclass langchain.llms.databricks.Databricks(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, host: str = None, api_token: str = None, endpoint_name: Optional[str] = None, cluster_id: Optional[str] = None, cluster_driver_port: Optional[str] = None, model_kwargs: Optional[Dict[str, Any]] = None, transform_input_fn: Optional[Callable] = None, transform_output_fn: Optional[Callable[[...], str]] = None)[source]\u00b6\nBases: LLM\nLLM wrapper around a Databricks serving endpoint or a cluster driver proxy app.\nIt supports two endpoint types:\nServing endpoint (recommended for both production and development).\nWe assume that an LLM was registered and deployed to a serving endpoint.\nTo wrap it as an LLM you must have \u201cCan Query\u201d permission to the endpoint.\nSet endpoint_name accordingly and do not set cluster_id and\ncluster_driver_port.\nThe expected model signature is:\ninputs:\n[{\"name\": \"prompt\", \"type\": \"string\"},\n {\"name\": \"stop\", \"type\": \"list[string]\"}]\noutputs: [{\"type\": \"string\"}]\nCluster driver proxy app (recommended for interactive development).\nOne can load an LLM on a Databricks interactive cluster and start a local HTTP\nserver on the driver node to serve the model at / using HTTP POST method\nwith JSON input/output.\nPlease use a port number between [3000, 8000] and let the server listen to", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.Databricks.html"} {"id": "5d2dc51b1c7b-1", "text": "the driver IP address or simply 0.0.0.0 instead of localhost only.\nTo wrap it as an LLM you must have \u201cCan Attach To\u201d permission to the cluster.\nSet cluster_id and cluster_driver_port and do not set endpoint_name.\nThe expected server schema (using JSON schema) is:\ninputs:\n{\"type\": \"object\",\n \"properties\": {\n \"prompt\": {\"type\": \"string\"},\n \"stop\": {\"type\": \"array\", \"items\": {\"type\": \"string\"}}},\n \"required\": [\"prompt\"]}`\noutputs: {\"type\": \"string\"}\nIf the endpoint model signature is different or you want to set extra params,\nyou can use transform_input_fn and transform_output_fn to apply necessary\ntransformations before and after the query.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_token: str [Optional]\u00b6\nDatabricks personal access token.\nIf not provided, the default value is determined by\nthe DATABRICKS_TOKEN environment variable if present, or\nan automatically generated temporary token if running inside a Databricks\nnotebook attached to an interactive cluster in \u201csingle user\u201d or\n\u201cno isolation shared\u201d mode.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam 
cluster_driver_port: Optional[str] = None\u00b6\nThe port number used by the HTTP server running on the cluster driver node.\nThe server should listen on the driver IP address, or simply 0.0.0.0, so that clients can connect.\nWe recommend using a port number between 3000 and 8000.\nparam cluster_id: Optional[str] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.Databricks.html"} {"id": "5d2dc51b1c7b-2", "text": "param cluster_id: Optional[str] = None\u00b6\nID of the cluster if connecting to a cluster driver proxy app.\nIf neither endpoint_name nor cluster_id is provided and the code runs\ninside a Databricks notebook attached to an interactive cluster in \u201csingle user\u201d\nor \u201cno isolation shared\u201d mode, the current cluster ID is used as default.\nYou must not set both endpoint_name and cluster_id.\nparam endpoint_name: Optional[str] = None\u00b6\nName of the model serving endpoint.\nYou must specify the endpoint name to connect to a model serving endpoint.\nYou must not set both endpoint_name and cluster_id.\nparam host: str [Optional]\u00b6\nDatabricks workspace hostname.\nIf not provided, the default value is determined by\nthe DATABRICKS_HOST environment variable if present, or\nthe hostname of the current Databricks workspace if running inside\na Databricks notebook attached to an interactive cluster in \u201csingle user\u201d\nor \u201cno isolation shared\u201d mode.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam model_kwargs: Optional[Dict[str, Any]] = None\u00b6\nExtra parameters to pass to the endpoint.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam transform_input_fn: Optional[Callable] = None\u00b6\nA function that transforms {prompt, stop, **kwargs} into a JSON-compatible\nrequest object that the endpoint accepts.\nFor example, you can apply a prompt template to the input prompt.\nparam transform_output_fn: Optional[Callable[[...], str]] = None\u00b6\nA function that transforms the output from the endpoint to the generated text.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.Databricks.html"} {"id": "5d2dc51b1c7b-3", "text": "param verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.Databricks.html"} {"id": "5d2dc51b1c7b-4", "text": "first occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the top candidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the top candidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments.
These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.Databricks.html"} {"id": "5d2dc51b1c7b-5", "text": "dict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.Databricks.html"} {"id": "5d2dc51b1c7b-6", "text": "get_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in the order they occur in the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model's context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model's context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
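The two prediction helpers differ only in input/output type; a short sketch (same placeholder endpoint as above):
.. code-block:: python

   from langchain.llms import Databricks
   from langchain.schema import HumanMessage

   llm = Databricks(endpoint_name="my-llm-endpoint")  # hypothetical endpoint

   # predict: raw string in, raw string out.
   answer = llm.predict("Name one SQL aggregate function.", stop=["\n"])

   # predict_messages: chat messages in, a single BaseMessage out.
   reply = llm.predict_messages([HumanMessage(content="Say hi in Spanish.")])

   print(answer, reply.content)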
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python

   llm.save(file_path="path/llm.yaml")

validator set_cluster_driver_port » cluster_driver_port[source]¶
validator set_cluster_id » cluster_id[source]¶
validator set_model_kwargs » model_kwargs[source]¶
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
extra = 'forbid'¶
underscore_attrs_are_private = True¶
langchain.llms.rwkv.RWKV¶
class langchain.llms.rwkv.RWKV(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, model: str, tokens_path: str, strategy: str = 'cpu fp32', rwkv_verbose: bool = True, temperature: float = 1.0, top_p: float = 0.5, penalty_alpha_frequency: float = 0.4, penalty_alpha_presence: float = 0.4, CHUNK_LEN: int = 256, max_tokens_per_generation: int = 256, client: Any = None, tokenizer: Any = None, pipeline: Any = None, model_tokens: Any = None, model_state: Any = None)[source]¶
Bases: LLM, BaseModel
Wrapper around RWKV language models.
To use, you should have the rwkv python package installed, the pre-trained model file, and the model's config information.
Example
from langchain.llms import RWKV
model = RWKV(model="./models/rwkv-3b-fp16.bin", strategy="cpu fp32", tokens_path="./models/20B_tokenizer.json")
# Simplest invocation
response = model("Once upon a time, ")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param CHUNK_LEN: int = 256¶
Batch size for prompt processing.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param max_tokens_per_generation: int = 256¶
Maximum number of tokens to generate.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model: str [Required]¶
Path to the pre-trained RWKV model file.
param penalty_alpha_frequency: float = 0.4¶
Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
param penalty_alpha_presence: float = 0.4¶
Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
param rwkv_verbose: bool = True¶
Print debug information.
param strategy: str = 'cpu fp32'¶
Strategy string passed to the rwkv library, specifying the device and precision to run on (e.g. 'cpu fp32').
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 1.0¶
The temperature to use for sampling.
param tokens_path: str [Required]¶
Path to the RWKV tokens file.
param top_p: float = 0.5¶
The top-p value to use for sampling.
param verbose: bool [Optional]¶
Whether to print out response text.
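A hedged sketch tying the sampling parameters above together; both file paths are placeholders for artifacts you must obtain separately.
.. code-block:: python

   from langchain.llms import RWKV

   llm = RWKV(
       model="./models/rwkv-3b-fp16.bin",          # pre-trained weights (placeholder path)
       tokens_path="./models/20B_tokenizer.json",  # tokenizer file (placeholder path)
       strategy="cpu fp32",                        # device/precision for the rwkv library
       temperature=0.7,                            # lower values are more deterministic
       top_p=0.5,                                  # nucleus sampling mass
       max_tokens_per_generation=128,              # cap on generated tokens
   )

   print(llm("Q: What is the capital of France?\nA:"))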
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model's context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model's context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run_rnn(_tokens: List[str], newline_adj: int = 0) → Any[source]¶
rwkv_generate(prompt: str) → str[source]¶
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python

   llm.save(file_path="path/llm.yaml")

validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that the python package exists in the environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
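The token-counting helpers documented above make it easy to check that a prompt plus the generation budget fits the model's context. A sketch, where the context size is an assumed example value and llm is the RWKV instance from the earlier example:
.. code-block:: python

   CONTEXT_SIZE = 2048  # assumed context window; the real value is model-dependent

   prompt = "Once upon a time, "
   n_prompt_tokens = llm.get_num_tokens(prompt)

   # Only generate if prompt tokens plus the generation cap fit the window.
   if n_prompt_tokens + llm.max_tokens_per_generation <= CONTEXT_SIZE:
       print(llm(prompt))
   else:
       print(f"Prompt too long: {n_prompt_tokens} tokens")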
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
langchain.llms.google_palm.GooglePalm¶
class langchain.llms.google_palm.GooglePalm(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, google_api_key: Optional[str] = None, model_name: str = 'models/text-bison-001', temperature: float = 0.7, top_p: Optional[float] = None, top_k: Optional[int] = None, max_output_tokens: Optional[int] = None, n: int = 1)[source]¶
Bases: BaseLLM, BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param google_api_key: Optional[str] = None¶
param max_output_tokens: Optional[int] = None¶
Maximum number of tokens to include in a candidate. Must be greater than zero. If unset, will default to 64.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_name: str = 'models/text-bison-001'¶
Model name to use.
param n: int = 1¶
Number of chat completions to generate for each prompt. Note that the API may not return the full n completions if duplicates are generated.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
Run inference with this temperature. Must be in the closed interval [0.0, 1.0].
param top_k: Optional[int] = None¶
Decode using top-k sampling: consider the set of top_k most probable tokens. Must be positive.
param top_p: Optional[float] = None¶
Decode using nucleus sampling: consider the smallest set of tokens whose probability sum is at least top_p. Must be in the closed interval [0.0, 1.0].
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model's context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model's context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
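Because n controls how many candidates the API returns per prompt, generate() is the method that exposes them all. A hedged sketch; credential setup is elided and assumed to be handled as validate_environment below requires:
.. code-block:: python

   from langchain.llms import GooglePalm

   # Assumes an API key has been provided (e.g. via the google_api_key
   # parameter); see validate_environment below.
   llm = GooglePalm(temperature=0.7, n=3)

   result = llm.generate(["Write a one-line haiku about the sea."])

   # One inner list per prompt, with up to n candidates (the API may
   # drop duplicates).
   for candidate in result.generations[0]:
       print(candidate.text)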
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python

   llm.save(file_path="path/llm.yaml")

validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that the API key and python package exist.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
langchain.llms.nlpcloud.NLPCloud¶
class langchain.llms.nlpcloud.NLPCloud(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, model_name: str = 'finetuned-gpt-neox-20b', temperature: float = 0.7, min_length: int = 1, max_length: int = 256, length_no_input: bool = True, remove_input: bool = True, remove_end_sequence: bool = True, bad_words: List[str] = [], top_p: int = 1, top_k: int = 50, repetition_penalty: float = 1.0, length_penalty: float = 1.0, do_sample: bool = True, num_beams: int = 1, early_stopping: bool = False, num_return_sequences: int = 1, nlpcloud_api_key: Optional[str] = None)[source]¶
Bases: LLM
Wrapper around NLPCloud large language models.
To use, you should have the nlpcloud python package installed, and the environment variable NLPCLOUD_API_KEY set with your API key.
Example
from langchain.llms import NLPCloud
nlpcloud = NLPCloud(model_name="finetuned-gpt-neox-20b")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param bad_words: List[str] = []¶
List of tokens not allowed to be generated.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param do_sample: bool = True¶
Whether to use sampling (True) or greedy decoding.
param early_stopping: bool = False¶
Whether to stop beam search at num_beams sentences.
param length_no_input: bool = True¶
Whether min_length and max_length should include the length of the input.
param length_penalty: float = 1.0¶
Exponential penalty to the length.
param max_length: int = 256¶
The maximum number of tokens to generate in the completion.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param min_length: int = 1¶
The minimum number of tokens to generate in the completion.
param model_name: str = 'finetuned-gpt-neox-20b'¶
Model name to use.
param nlpcloud_api_key: Optional[str] = None¶
param num_beams: int = 1¶
Number of beams for beam search.
param num_return_sequences: int = 1¶
How many completions to generate for each prompt.
param remove_end_sequence: bool = True¶
Whether or not to remove the end sequence token.
param remove_input: bool = True¶
Remove the input text from the API response.
param repetition_penalty: float = 1.0¶
Penalizes repeated tokens. 1.0 means no penalty.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
What sampling temperature to use.
param top_k: int = 50¶
The number of highest probability tokens to keep for top-k filtering.
param top_p: int = 1¶
Total probability mass of tokens to consider at each step.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model's context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model's context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
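A short sketch of the single-prompt path for this wrapper, assuming NLPCLOUD_API_KEY is set as documented above (the key value shown is a placeholder):
.. code-block:: python

   import os

   from langchain.llms import NLPCloud

   os.environ["NLPCLOUD_API_KEY"] = "..."  # placeholder; set your real key

   llm = NLPCloud(model_name="finetuned-gpt-neox-20b", max_length=64)

   # Generation is cut off at the first occurrence of a stop substring.
   print(llm.predict("List three colors:", stop=["\n\n"]))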
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python

   llm.save(file_path="path/llm.yaml")

validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that the API key and python package exist in the environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
langchain.llms.aleph_alpha.AlephAlpha¶
class langchain.llms.aleph_alpha.AlephAlpha(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, model: Optional[str] = 'luminous-base', maximum_tokens: int = 64, temperature: float = 0.0, top_k: int = 0, top_p: float = 0.0, presence_penalty: float = 0.0, frequency_penalty: float = 0.0, repetition_penalties_include_prompt: Optional[bool] = False, use_multiplicative_presence_penalty: Optional[bool] = False, penalty_bias: Optional[str] = None, penalty_exceptions: Optional[List[str]] = None, penalty_exceptions_include_stop_sequences: Optional[bool] = None, best_of: Optional[int] = None, n: int = 1, logit_bias: Optional[Dict[int, float]] = None, log_probs: Optional[int] = None, tokens: Optional[bool] = False, disable_optimizations: Optional[bool] = False, minimum_tokens: Optional[int] = 0, echo: bool = False, use_multiplicative_frequency_penalty: bool = False, sequence_penalty: float = 0.0, sequence_penalty_min_length: int = 2, use_multiplicative_sequence_penalty: bool = False, completion_bias_inclusion: Optional[Sequence[str]] = None, completion_bias_inclusion_first_token_only: bool = False, completion_bias_exclusion: Optional[Sequence[str]] = None, completion_bias_exclusion_first_token_only: bool = False, contextual_control_threshold: Optional[float] = None, control_log_additive: Optional[bool] = True, repetition_penalties_include_completion: bool = True, raw_completion: bool = False, aleph_alpha_api_key: Optional[str] = None, stop_sequences: Optional[List[str]] = None)[source]¶
Bases: LLM
Wrapper around Aleph Alpha large language models.
To use, you should have the aleph_alpha_client python package installed, and the environment variable ALEPH_ALPHA_API_KEY set with your API key, or pass it as a named parameter to the constructor.
Parameters are explained more in depth here:
https://github.com/Aleph-Alpha/aleph-alpha-client/blob/c14b7dd2b4325c7da0d6a119f6e76385800e097b/aleph_alpha_client/completion.py#L10
Example
from langchain.llms import AlephAlpha
aleph_alpha = AlephAlpha(aleph_alpha_api_key="my-api-key")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param aleph_alpha_api_key: Optional[str] = None¶
API key for Aleph Alpha API.
param best_of: Optional[int] = None¶
Returns the one with the "best of" results (highest log probability per token).
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param completion_bias_exclusion: Optional[Sequence[str]] = None¶
param completion_bias_exclusion_first_token_only: bool = False¶
Only consider the first token for the completion_bias_exclusion.
param completion_bias_inclusion: Optional[Sequence[str]] = None¶
param completion_bias_inclusion_first_token_only: bool = False¶
param contextual_control_threshold: Optional[float] = None¶
If set to None, attention control parameters only apply to those tokens that have explicitly been set in the request. If set to a non-None value, control parameters are also applied to similar tokens.
param control_log_additive: Optional[bool] = True¶
True: apply control by adding log(control_factor) to the attention scores.
False: apply control by (attention_scores - attention_scores.min(-1)) * control_factor.
param disable_optimizations: Optional[bool] = False¶
param echo: bool = False¶
Echo the prompt in the completion.
param frequency_penalty: float = 0.0¶
Penalizes repeated tokens according to frequency.
param log_probs: Optional[int] = None¶
Number of top log probabilities to be returned for each generated token.
param logit_bias: Optional[Dict[int, float]] = None¶
The logit bias allows you to influence the likelihood of generating particular tokens.
param maximum_tokens: int = 64¶
The maximum number of tokens to be generated.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param minimum_tokens: Optional[int] = 0¶
Generate at least this number of tokens.
param model: Optional[str] = 'luminous-base'¶
Model name to use.
param n: int = 1¶
How many completions to generate for each prompt.
param penalty_bias: Optional[str] = None¶
Penalty bias for the completion.
param penalty_exceptions: Optional[List[str]] = None¶
List of strings that may be generated without penalty, regardless of other penalty settings.
param penalty_exceptions_include_stop_sequences: Optional[bool] = None¶
Should stop_sequences be included in penalty_exceptions.
param presence_penalty: float = 0.0¶
Penalizes repeated tokens.
param raw_completion: bool = False¶
Force the raw completion of the model to be returned.
param repetition_penalties_include_completion: bool = True¶
Flag deciding whether presence penalty or frequency penalty are updated from the completion.
param repetition_penalties_include_prompt: Optional[bool] = False¶
Flag deciding whether presence penalty or frequency penalty are updated from the prompt.
param sequence_penalty: float = 0.0¶
param sequence_penalty_min_length: int = 2¶
param stop_sequences: Optional[List[str]] = None¶
Stop sequences to use.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.0¶
A non-negative float that tunes the degree of randomness in generation.
param tokens: Optional[bool] = False¶
Return the tokens of the completion.
param top_k: int = 0¶
Number of most likely tokens to consider at each step.
param top_p: float = 0.0¶
Total probability mass of tokens to consider at each step.
param use_multiplicative_frequency_penalty: bool = False¶
param use_multiplicative_presence_penalty: Optional[bool] = False¶
Flag deciding whether presence penalty is applied multiplicatively (True) or additively (False).
param use_multiplicative_sequence_penalty: bool = False¶
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
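A hedged instantiation sketch using the parameters above; the API key is a placeholder, and as noted it may instead come from the ALEPH_ALPHA_API_KEY environment variable:
.. code-block:: python

   from langchain.llms import AlephAlpha

   llm = AlephAlpha(
       model="luminous-base",             # default model name
       maximum_tokens=64,                 # cap on generated tokens
       temperature=0.0,                   # deterministic decoding
       stop_sequences=["\n"],             # stop generation at a newline
       aleph_alpha_api_key="my-api-key",  # placeholder credential
   )

   print(llm("Q: Name a programming language.\nA:"))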
These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html"} {"id": "3e79b21eac29-7", "text": "async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the topcandidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the topcandidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html"} {"id": "3e79b21eac29-8", "text": "Run the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html"} {"id": "3e79b21eac29-9", "text": "Get the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in order they occurin the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specifictypes of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text,use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html"} {"id": "3e79b21eac29-10", "text": "to the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. 
code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that api key and python package exists in environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html"} {"id": "c0b59dc5df56-0", "text": "langchain.llms.aviary.Aviary\u00b6\nclass langchain.llms.aviary.Aviary(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, model: str = 'amazon/LightGPT', aviary_url: Optional[str] = None, aviary_token: Optional[str] = None, use_prompt_format: bool = True, version: Optional[str] = None)[source]\u00b6\nBases: LLM\nAllow you to use an Aviary.\nAviary is a backend for hosted models. 
You can\nfind out more about aviary at\nhttp://github.com/ray-project/aviary\nTo get a list of the models supported on an\naviary, follow the instructions on the web site to\ninstall the aviary CLI and then use:\naviary models\nAVIARY_URL and AVIARY_TOKEN environment variables must be set.\nExample\nimport os\nfrom langchain.llms import Aviary\nos.environ[\"AVIARY_URL\"] = \"\"\nos.environ[\"AVIARY_TOKEN\"] = \"\"\nlight = Aviary(model='amazon/LightGPT')\noutput = light('How do you make fried rice?')\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam aviary_token: Optional[str] = None\u00b6\nparam aviary_url: Optional[str] = None\u00b6\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.Aviary.html"} {"id": "c0b59dc5df56-1", "text": "param callbacks: Callbacks = None\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam model: str = 'amazon/LightGPT'\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam use_prompt_format: bool = True\u00b6\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.\nparam version: Optional[str] = None\u00b6\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.Aviary.html"} {"id": "c0b59dc5df56-2", "text": "need more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the topcandidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the topcandidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.Aviary.html"} {"id": "c0b59dc5df56-3", "text": "Parameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.Aviary.html"} {"id": "c0b59dc5df56-4", "text": "callbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in order they occurin the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specifictypes of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.Aviary.html"} {"id": "c0b59dc5df56-5", "text": "to the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text,use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that api key and python package exists in environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. 
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.Aviary.html"} {"id": "c0b59dc5df56-6", "text": "eg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.Aviary.html"} {"id": "cf079416ca4e-0", "text": "langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference\u00b6\nclass langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, max_new_tokens: int = 512, top_k: Optional[int] = None, top_p: Optional[float] = 0.95, typical_p: Optional[float] = 0.95, temperature: float = 0.8, repetition_penalty: Optional[float] = None, stop_sequences: List[str] = None, seed: Optional[int] = None, inference_server_url: str = '', timeout: int = 120, server_kwargs: Dict[str, Any] = None, stream: bool = False, client: Any = None, async_client: Any = None)[source]\u00b6\nBases: LLM\nHuggingFace text generation inference API.\nThis class is a wrapper around the HuggingFace text generation inference API.\nIt is used to generate text from a given prompt.\nAttributes:\n- max_new_tokens: The maximum number of tokens to generate.\n- top_k: The number of top-k tokens to consider when generating text.\n- top_p: The cumulative probability threshold for generating text.\n- typical_p: The typical probability threshold for generating text.\n- temperature: The temperature to use when generating text.\n- repetition_penalty: The repetition penalty to use when generating text.\n- stop_sequences: A list of stop sequences to use when generating text.\n- seed: The seed to use when generating text.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html"} {"id": "cf079416ca4e-1", "text": "- seed: The seed to use when generating text.\n- inference_server_url: The URL of the inference server to use.\n- timeout: The timeout value in seconds to use while connecting to inference server.\n- server_kwargs: The keyword arguments to pass to the inference server.\n- client: The client object used to communicate with the inference server.\n- async_client: The async client object used to communicate with the server.\nMethods:\n- _call: Generates text based on a given prompt and stop sequences.\n- _acall: Async generates text based on a given prompt and stop sequences.\n- _llm_type: Returns the type of LLM.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam async_client: Any = None\u00b6\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: 
Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam client: Any = None\u00b6\nparam inference_server_url: str = ''\u00b6\nparam max_new_tokens: int = 512\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam repetition_penalty: Optional[float] = None\u00b6\nparam seed: Optional[int] = None\u00b6\nparam server_kwargs: Dict[str, Any] [Optional]\u00b6\nparam stop_sequences: List[str] [Optional]\u00b6\nparam stream: bool = False\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam temperature: float = 0.8\u00b6\nparam timeout: int = 120\u00b6\nparam top_k: Optional[int] = None\u00b6\nparam top_p: Optional[float] = 0.95\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html"} {"id": "cf079416ca4e-2", "text": "param top_p: Optional[float] = 0.95\u00b6\nparam typical_p: Optional[float] = 0.95\u00b6\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html"} {"id": "cf079416ca4e-3", "text": "text generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
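Example (not part of the original reference; a minimal usage sketch based on the parameters documented above, with a placeholder URL for a running text-generation-inference server):
.. code-block:: python

    from langchain.llms import HuggingFaceTextGenInference

    # Point the wrapper at a text-generation-inference endpoint (placeholder URL)
    llm = HuggingFaceTextGenInference(
        inference_server_url="http://localhost:8010/",
        max_new_tokens=512,
        top_k=10,
        top_p=0.95,
        typical_p=0.95,
        temperature=0.8,
    )
    # __call__ checks the cache and runs the model on the given prompt
    print(llm("What is deep learning?"))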
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the top candidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the top candidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html"} {"id": "cf079416ca4e-4", "text": "Returns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html"} {"id": "cf079416ca4e-5", "text": "get_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in order they occurin the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specifictypes of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text,use predict.\nParameters", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html"} {"id": "cf079416ca4e-6", "text": "Parameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. 
code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that python package exists in environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html"} {"id": "c10d5590b304-0", "text": "langchain.llms.baseten.Baseten\u00b6\nclass langchain.llms.baseten.Baseten(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, model: str, input: Dict[str, Any] = None, model_kwargs: Dict[str, Any] = None)[source]\u00b6\nBases: LLM\nUse your Baseten models in Langchain\nTo use, you should have the baseten python package installed,\nand run baseten.login() with your Baseten API key.\nThe required model param can be either a model id or model\nversion id. 
Using a model version ID will result in\nslightly faster invocation.\nAny other model parameters can also\nbe passed in with the format input={model_param: value, \u2026}\nThe Baseten model must accept a dictionary of input with the key\n\u201cprompt\u201d and return a dictionary with a key \u201cdata\u201d which maps\nto a list of response strings.\nExample\n(see the usage sketch below)\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam input: Dict[str, Any] [Optional]\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam model: str [Required]\u00b6\nparam model_kwargs: Dict[str, Any] [Optional]\u00b6\nparam tags: Optional[List[str]] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.baseten.Baseten.html"} {"id": "c10d5590b304-1", "text": "param tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.baseten.Baseten.html"} {"id": "c10d5590b304-2", "text": "stop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
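The usage sketch referenced above (not from the original reference; the API key and model version ID are placeholders, and the import path assumes the class is re-exported from langchain.llms like its siblings):
.. code-block:: python

    import baseten
    from langchain.llms import Baseten

    # Placeholder key; per the docs above, log in with your Baseten API key
    baseten.login("YOUR_API_KEY")
    # model may be a model ID or a model version ID (version IDs invoke slightly faster)
    llm = Baseten(model="MODEL_VERSION_ID")  # placeholder ID
    print(llm("What is the meaning of life?"))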
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the top candidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the top candidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.baseten.Baseten.html"} {"id": "c10d5590b304-3", "text": "dict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.baseten.Baseten.html"} {"id": "c10d5590b304-4", "text": "get_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in order they occurin the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specifictypes of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text,use predict.\nParameters", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.baseten.Baseten.html"} {"id": "c10d5590b304-5", "text": "Parameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. 
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.baseten.Baseten.html"} {"id": "99e2291ddefb-0", "text": "langchain.llms.openai.update_token_usage\u00b6\nlangchain.llms.openai.update_token_usage(keys: Set[str], response: Dict[str, Any], token_usage: Dict[str, Any]) \u2192 None[source]\u00b6\nUpdate token usage.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.update_token_usage.html"} {"id": "a316ca963fcf-0", "text": "langchain.llms.databricks.get_default_api_token\u00b6\nlangchain.llms.databricks.get_default_api_token() \u2192 str[source]\u00b6\nGets the default Databricks personal access token.\nRaises an error if the token cannot be automatically determined.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.get_default_api_token.html"} {"id": "8d1599bf0773-0", "text": "langchain.llms.ai21.AI21PenaltyData\u00b6\nclass langchain.llms.ai21.AI21PenaltyData(*, scale: int = 0, applyToWhitespaces: bool = True, applyToPunctuations: bool = True, applyToNumbers: bool = True, applyToStopwords: bool = True, applyToEmojis: bool = True)[source]\u00b6\nBases: BaseModel\nParameters for AI21 penalty data.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam applyToEmojis: bool = True\u00b6\nparam applyToNumbers: bool = True\u00b6\nparam applyToPunctuations: bool = True\u00b6\nparam applyToStopwords: bool = True\u00b6\nparam applyToWhitespaces: bool = True\u00b6\nparam scale: int = 0\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.ai21.AI21PenaltyData.html"} {"id": "cb09ccf5a414-0", "text": "langchain.llms.bedrock.Bedrock\u00b6\nclass langchain.llms.bedrock.Bedrock(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, region_name: Optional[str] = None, credentials_profile_name: Optional[str] = None, model_id: str, model_kwargs: Optional[Dict] = None)[source]\u00b6\nBases: LLM\nLLM provider to invoke Bedrock models.\nTo authenticate, the AWS client uses the following methods to\nautomatically load credentials:\nhttps://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\nIf a specific credential profile should be used, you must pass\nthe name of the profile from the ~/.aws/credentials file that is to be used.\nMake sure the credentials / roles used have the required policies to\naccess the Bedrock service.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: 
Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam credentials_profile_name: Optional[str] = None\u00b6\nThe name of the profile in the ~/.aws/credentials or ~/.aws/config files, which\nhas either access keys or role information specified.\nIf not specified, the default credential profile or, if on an EC2 instance,\ncredentials from IMDS will be used.\nSee: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.bedrock.Bedrock.html"} {"id": "cb09ccf5a414-1", "text": "See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam model_id: str [Required]\u00b6\nID of the model to call, e.g., amazon.titan-tg1-large; this is\nequivalent to the modelId property in the list-foundation-models API.\nparam model_kwargs: Optional[Dict] = None\u00b6\nKeyword arguments to pass to the model.\nparam region_name: Optional[str] = None\u00b6\nThe AWS region, e.g., us-west-2. Falls back to the AWS_DEFAULT_REGION env variable\nor the region specified in ~/.aws/config in case it is not provided here.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.bedrock.Bedrock.html"} {"id": "cb09ccf5a414-2", "text": "Run the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
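Example (not part of the original reference; a minimal sketch using the params documented above, with placeholder region and credentials profile, and assuming the class is re-exported from langchain.llms):
.. code-block:: python

    from langchain.llms import Bedrock

    llm = Bedrock(
        model_id="amazon.titan-tg1-large",   # a modelId from the list-foundation-models API
        region_name="us-west-2",             # placeholder region
        credentials_profile_name="default",  # placeholder profile from ~/.aws/credentials
    )
    # __call__ checks the cache and runs the model on the given prompt
    print(llm("Summarize the benefits of managed model hosting."))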
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the top candidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.bedrock.Bedrock.html"} {"id": "cb09ccf5a414-3", "text": "first occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the top candidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.bedrock.Bedrock.html"} {"id": "cb09ccf5a414-4", "text": "need more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in order they occurin the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.bedrock.Bedrock.html"} {"id": "cb09ccf5a414-5", "text": "Pass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specifictypes of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text,use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. 
code-block:: python\nllm.save(file_path="path/llm.yaml")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.bedrock.Bedrock.html"} {"id": "cb09ccf5a414-6", "text": "to_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that AWS credentials and the python package exist in the environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.bedrock.Bedrock.html"} {"id": "8e6b9a2b0609-0", "text": "langchain.llms.openai.OpenAIChat\u00b6\nclass langchain.llms.openai.OpenAIChat(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, model_name: str = 'gpt-3.5-turbo', model_kwargs: Dict[str, Any] = None, openai_api_key: Optional[str] = None, openai_api_base: Optional[str] = None, openai_proxy: Optional[str] = None, max_retries: int = 6, prefix_messages: List = None, streaming: bool = False, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all')[source]\u00b6\nBases: BaseLLM\nWrapper around OpenAI Chat large language models.\nTo use, you should have the openai python package installed, and the\nenvironment variable OPENAI_API_KEY set with your API key.\nAny parameters that are valid to be passed to the openai.create call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain.llms import OpenAIChat\nopenaichat = OpenAIChat(model_name=\"gpt-3.5-turbo\")\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam allowed_special: Union[Literal['all'], AbstractSet[str]] = {}\u00b6\nSet of special tokens that are allowed.\nparam cache: Optional[bool] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAIChat.html"} {"id": "8e6b9a2b0609-1", "text": "Set of special tokens that are allowed.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam disallowed_special: Union[Literal['all'], Collection[str]] = 'all'\u00b6\nSet of special tokens that are not allowed.\nparam 
param max_retries: int = 6
Maximum number of retries to make when generating.
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param model_kwargs: Dict[str, Any] [Optional]
Holds any model parameters valid for the create call that are not explicitly specified.
param model_name: str = 'gpt-3.5-turbo'
Model name to use.
param openai_api_base: Optional[str] = None
param openai_api_key: Optional[str] = None
param openai_proxy: Optional[str] = None
param prefix_messages: List [Optional]
Series of messages for Chat input.
param streaming: bool = False
Whether to stream the results or not.
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param verbose: bool [Optional]
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Check the cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to take advantage of batched calls, need more output from the model than just the top generated value, or are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (a string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
validator build_extra » all fields[source]
Build extra kwargs from additional params that were passed in.
dict(**kwargs: Any) → Dict
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to take advantage of batched calls, need more output from the model than just the top generated value, or are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (a string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model's context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model's context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int][source]
Get the token IDs using the tiktoken package.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
validator raise_deprecation » all fields
Raise a deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None
Save the LLM.
Parameters
file_path – Path to the file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
validator validate_environment » all fields[source]
Validate that the API key and Python package exist in the environment.
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True
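A minimal usage sketch for OpenAIChat, assuming OPENAI_API_KEY is set in the environment; the prompt and stop word below are illustrative, not part of the generated reference.
.. code-block:: python
from langchain.llms import OpenAIChat

# Reads OPENAI_API_KEY from the environment by default.
llm = OpenAIChat(model_name="gpt-3.5-turbo", max_retries=6)

prompt = "Say hello in exactly one word."
# get_num_tokens() uses tiktoken, so the context-window check is local.
print(llm.get_num_tokens(prompt))

# predict() returns only the top candidate generation; output is cut off
# at the first occurrence of any stop substring.
print(llm.predict(prompt, stop=["\n"]))
For several prompts at once, generate() returns an LLMResult with one list of candidate Generations per input prompt.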
langchain.llms.databricks.get_repl_context
langchain.llms.databricks.get_repl_context() → Any[source]
Gets the notebook REPL context if running inside a Databricks notebook.
Returns None otherwise.
langchain.llms.stochasticai.StochasticAI
class langchain.llms.stochasticai.StochasticAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, api_url: str = '', model_kwargs: Dict[str, Any] = None, stochasticai_api_key: Optional[str] = None)[source]
Bases: LLM
Wrapper around StochasticAI large language models.
To use, you should have the environment variable STOCHASTICAI_API_KEY set with your API key.
Example
from langchain.llms import StochasticAI
stochasticai = StochasticAI(api_url="")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_url: str = ''
The API URL to use.
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param model_kwargs: Dict[str, Any] [Optional]
Holds any model parameters valid for the create call that are not explicitly specified.
param stochasticai_api_key: Optional[str] = None
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param verbose: bool [Optional]
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Check the cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to take advantage of batched calls, need more output from the model than just the top generated value, or are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (a string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
validator build_extra » all fields[source]
Build extra kwargs from additional params that were passed in.
dict(**kwargs: Any) → Dict
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to take advantage of batched calls, need more output from the model than just the top generated value, or are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (a string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model's context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model's context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
validator raise_deprecation » all fields
Raise a deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None
Save the LLM.
Parameters
file_path – Path to the file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
validator validate_environment » all fields[source]
Validate that the API key exists in the environment.
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config[source]
Bases: object
Configuration for this pydantic object.
extra = 'forbid'
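A minimal calling sketch, assuming STOCHASTICAI_API_KEY is set in the environment; the api_url below is a placeholder, not a real endpoint.
.. code-block:: python
from langchain.llms import StochasticAI

# api_url is a placeholder; point it at your deployed model's endpoint.
llm = StochasticAI(api_url="https://api.stochastic.ai/v1/example-model")

# __call__ checks the cache and returns the completion as a string;
# generation stops at the first occurrence of any stop substring.
print(llm("Tell me a joke.", stop=["\n\n"]))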
langchain.llms.modal.Modal
class langchain.llms.modal.Modal(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, endpoint_url: str = '', model_kwargs: Dict[str, Any] = None)[source]
Bases: LLM
Wrapper around Modal large language models.
To use, you should have the modal-client python package installed.
Any parameters that are valid to be passed to the call can be passed in, even if not explicitly saved on this class.
Example
from langchain.llms import Modal
modal = Modal(endpoint_url="")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param endpoint_url: str = ''
Model endpoint to use.
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param model_kwargs: Dict[str, Any] [Optional]
Holds any model parameters valid for the create call that are not explicitly specified.
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param verbose: bool [Optional]
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Check the cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to take advantage of batched calls, need more output from the model than just the top generated value, or are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (a string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
validator build_extra » all fields[source]
Build extra kwargs from additional params that were passed in.
dict(**kwargs: Any) → Dict
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to take advantage of batched calls, need more output from the model than just the top generated value, or are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (a string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model's context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model's context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
validator raise_deprecation » all fields
Raise a deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None
Save the LLM.
Parameters
file_path – Path to the file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config[source]
Bases: object
Configuration for this pydantic object.
extra = 'forbid'
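A minimal calling sketch, assuming the modal-client package is installed and a web endpoint is already deployed; the endpoint_url and model_kwargs below are placeholders.
.. code-block:: python
from langchain.llms import Modal

# endpoint_url is a placeholder for a deployed Modal web endpoint.
# Parameters not saved on the class (e.g. temperature) travel in model_kwargs.
llm = Modal(endpoint_url="https://example--generate.modal.run",
            model_kwargs={"temperature": 0.7})

print(llm.predict("Summarize in one line: LangChain wraps many LLM providers."))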
langchain.llms.fake.FakeListLLM
class langchain.llms.fake.FakeListLLM(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, responses: List, i: int = 0)[source]
Bases: LLM
Fake LLM wrapper for testing purposes.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param i: int = 0
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param responses: List [Required]
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param verbose: bool [Optional]
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Check the cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to take advantage of batched calls, need more output from the model than just the top generated value, or are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (a string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
dict(**kwargs: Any) → Dict
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to take advantage of batched calls, need more output from the model than just the top generated value, or are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (a string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model's context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model's context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
validator raise_deprecation » all fields
Raise a deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None
Save the LLM.
Parameters
file_path – Path to the file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True
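Because FakeListLLM needs no network access or API key, this sketch is fully runnable; it shows the class cycling through canned responses, which makes chains deterministic in tests.
.. code-block:: python
from langchain.llms.fake import FakeListLLM

# Each call returns the next entry of `responses` (position tracked by `i`).
llm = FakeListLLM(responses=["first canned answer", "second canned answer"])

assert llm("any prompt") == "first canned answer"
assert llm("another prompt") == "second canned answer"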
langchain.llms.aviary.get_models
langchain.llms.aviary.get_models() → List[str][source]
List available models.
langchain.llms.deepinfra.DeepInfra
class langchain.llms.deepinfra.DeepInfra(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, model_id: str = 'google/flan-t5-xl', model_kwargs: Optional[dict] = None, deepinfra_api_token: Optional[str] = None)[source]
Bases: LLM
Wrapper around DeepInfra deployed models.
To use, you should have the requests python package installed, and the environment variable DEEPINFRA_API_TOKEN set with your API token, or pass it as a named parameter to the constructor.
Only supports text-generation and text2text-generation for now.
Example
from langchain.llms import DeepInfra
di = DeepInfra(model_id="google/flan-t5-xl",
               deepinfra_api_token="my-api-key")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param deepinfra_api_token: Optional[str] = None
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param model_id: str = 'google/flan-t5-xl'
param model_kwargs: Optional[dict] = None
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param verbose: bool [Optional]
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Check the cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to take advantage of batched calls, need more output from the model than just the top generated value, or are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (a string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
dict(**kwargs: Any) → Dict
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to take advantage of batched calls, need more output from the model than just the top generated value, or are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (a string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model's context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model's context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
validator raise_deprecation » all fields
Raise a deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None
Save the LLM.
Parameters
file_path – Path to the file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
validator validate_environment » all fields[source]
Validate that the API key and Python package exist in the environment.
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config[source]
Bases: object
Configuration for this pydantic object.
extra = 'forbid'
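A minimal calling sketch, assuming the requests package is installed and DEEPINFRA_API_TOKEN is set in the environment; the prompt is illustrative.
.. code-block:: python
from langchain.llms import DeepInfra

# The token is read from DEEPINFRA_API_TOKEN when not passed explicitly;
# model_id keeps the class default, a text2text-generation model.
llm = DeepInfra(model_id="google/flan-t5-xl")

print(llm.predict("Translate to German: Good morning"))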
langchain.llms.llamacpp.LlamaCpp
class langchain.llms.llamacpp.LlamaCpp(*, cache: Optional[bool] = None, verbose: bool = True, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, model_path: str, lora_base: Optional[str] = None, lora_path: Optional[str] = None, n_ctx: int = 512, n_parts: int = -1, seed: int = -1, f16_kv: bool = True, logits_all: bool = False, vocab_only: bool = False, use_mlock: bool = False, n_threads: Optional[int] = None, n_batch: Optional[int] = 8, n_gpu_layers: Optional[int] = None, suffix: Optional[str] = None, max_tokens: Optional[int] = 256, temperature: Optional[float] = 0.8, top_p: Optional[float] = 0.95, logprobs: Optional[int] = None, echo: Optional[bool] = False, stop: Optional[List[str]] = [], repeat_penalty: Optional[float] = 1.1, top_k: Optional[int] = 40, last_n_tokens_size: Optional[int] = 64, use_mmap: Optional[bool] = True, streaming: bool = True)[source]
Bases: LLM
Wrapper around the llama.cpp model.
To use, you should have the llama-cpp-python library installed, and provide the path to the Llama model as a named parameter to the constructor.
Check out: https://github.com/abetlen/llama-cpp-python
Example
from langchain.llms import LlamaCpp
llm = LlamaCpp(model_path="/path/to/llama/model")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param echo: Optional[bool] = False
Whether to echo the prompt.
param f16_kv: bool = True
Use half-precision for the key/value cache.
param last_n_tokens_size: Optional[int] = 64
The number of tokens to look back when applying the repeat_penalty.
param logits_all: bool = False
Return logits for all tokens, not just the last token.
param logprobs: Optional[int] = None
The number of logprobs to return. If None, no logprobs are returned.
param lora_base: Optional[str] = None
The path to the Llama LoRA base model.
param lora_path: Optional[str] = None
The path to the Llama LoRA. If None, no LoRA is loaded.
param max_tokens: Optional[int] = 256
The maximum number of tokens to generate.
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param model_path: str [Required]
The path to the Llama model file.
param n_batch: Optional[int] = 8
Number of tokens to process in parallel. Should be a number between 1 and n_ctx.
param n_ctx: int = 512
Token context window.
param n_gpu_layers: Optional[int] = None
Number of layers to be loaded into GPU memory. Default None.
param n_parts: int = -1
Number of parts to split the model into. If -1, the number of parts is automatically determined.
param n_threads: Optional[int] = None
Number of threads to use. If None, the number of threads is automatically determined.
param repeat_penalty: Optional[float] = 1.1
The penalty to apply to repeated tokens.
param seed: int = -1
Seed. If -1, a random seed is used.
param stop: Optional[List[str]] = []
A list of strings to stop generation when encountered.
param streaming: bool = True
Whether to stream the results, token by token.
param suffix: Optional[str] = None
A suffix to append to the generated text. If None, no suffix is appended.
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param temperature: Optional[float] = 0.8
The temperature to use for sampling.
param top_k: Optional[int] = 40
The top-k value to use for sampling.
param top_p: Optional[float] = 0.95
The top-p value to use for sampling.
param use_mlock: bool = False
Force the system to keep the model in RAM.
param use_mmap: Optional[bool] = True
Whether to memory-map the model file.
param verbose: bool = True
Print verbose output to stderr.
param vocab_only: bool = False
Only load the vocabulary, no weights.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Check the cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompt and input.
chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html"} {"id": "93296e2d4827-4", "text": "first occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the topcandidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the topcandidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html"} {"id": "93296e2d4827-5", "text": "dict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. 
A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int[source]
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model's context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model's context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
validator raise_deprecation » all fields
Raise a deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python

    llm.save(file_path="path/llm.yaml")

validator set_verbose » verbose
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
stream(prompt: str, stop: Optional[List[str]] = None, run_manager: Optional[CallbackManagerForLLMRun] = None) → Generator[Dict, None, None][source]
Yields result objects as they are generated in real time.
BETA: this is a beta feature while we figure out the right abstraction. Once that happens, this interface could change.
It also calls the callback manager's on_llm_new_token event with similar parameters to the OpenAI LLM class method of the same name.
Parameters
prompt – The prompt to pass into the model.
stop – Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens being generated.
Yields
Dictionary-like objects containing a string token and metadata. See the llama-cpp-python docs and below for more.
Example:
.. code-block:: python

    from langchain.llms import LlamaCpp

    llm = LlamaCpp(
        model_path="/path/to/local/model.bin",
        temperature=0.5,
    )
    for chunk in llm.stream("Ask 'Hi, how are you?' like a pirate:'",
                            stop=["'", "\n"]):
        result = chunk["choices"][0]
        print(result["text"], end="", flush=True)

to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
validator validate_environment » all fields[source]
Validate that the llama-cpp-python library is installed.
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True

langchain.llms.openai.AzureOpenAI
class langchain.llms.openai.AzureOpenAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, model: str = 'text-davinci-003', temperature: float = 0.7, max_tokens: int = 256, top_p: float = 1, frequency_penalty: float = 0, presence_penalty: float = 0, n: int = 1, best_of: int = 1, model_kwargs: Dict[str, Any] = None, openai_api_key: Optional[str] = None, openai_api_base: Optional[str] = None, openai_organization: Optional[str] = None, openai_proxy: Optional[str] = None, batch_size: int = 20, request_timeout: Optional[Union[float, Tuple[float, float]]] = None, logit_bias: Optional[Dict[str, float]] = None, max_retries: int = 6, streaming: bool = False, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', tiktoken_model_name: Optional[str] = None, deployment_name: str = '', openai_api_type: str = 'azure', openai_api_version: str = '')[source]
Bases: BaseOpenAI
Wrapper around Azure-specific OpenAI large language models.
To use, you should have the openai python package installed, and the environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class.
Example
from langchain.llms import AzureOpenAI
openai = AzureOpenAI(model_name="text-davinci-003")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
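Because the Azure variant routes requests to a deployed model rather than a bare model name, a fuller construction sketch typically also sets the Azure-specific fields documented below. The deployment, endpoint, and API-version values here are placeholders that vary by Azure resource:

.. code-block:: python

    import os
    from langchain.llms import AzureOpenAI

    os.environ["OPENAI_API_KEY"] = "..."  # placeholder key

    llm = AzureOpenAI(
        deployment_name="my-davinci-deployment",                 # placeholder
        model_name="text-davinci-003",
        openai_api_base="https://my-resource.openai.azure.com/",  # placeholder
        openai_api_version="2023-05-15",                          # varies by rollout
        temperature=0.7,
    )
    print(llm("Tell me a joke."))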
param allowed_special: Union[Literal['all'], AbstractSet[str]] = {}
Set of special tokens that are allowed.
param batch_size: int = 20
Batch size to use when passing multiple documents to generate.
param best_of: int = 1
Generates best_of completions server-side and returns the "best".
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param deployment_name: str = ''
Deployment name to use.
param disallowed_special: Union[Literal['all'], Collection[str]] = 'all'
Set of special tokens that are not allowed.
param frequency_penalty: float = 0
Penalizes repeated tokens according to frequency.
param logit_bias: Optional[Dict[str, float]] [Optional]
Adjust the probability of specific tokens being generated.
param max_retries: int = 6
Maximum number of retries to make when generating.
param max_tokens: int = 256
The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximal context size.
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param model_kwargs: Dict[str, Any] [Optional]
Holds any model parameters valid for the create call not explicitly specified.
param model_name: str = 'text-davinci-003' (alias 'model')
Model name to use.
param n: int = 1
How many completions to generate for each prompt.
param openai_api_base: Optional[str] = None
param openai_api_key: Optional[str] = None
param openai_api_type: str = 'azure'
param openai_api_version: str = ''
param openai_organization: Optional[str] = None
param openai_proxy: Optional[str] = None
param presence_penalty: float = 0
Penalizes repeated tokens.
param request_timeout: Optional[Union[float, Tuple[float, float]]] = None
Timeout for requests to the OpenAI completion API. Default is 600 seconds.
param streaming: bool = False
Whether to stream the results or not.
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param temperature: float = 0.7
What sampling temperature to use.
param tiktoken_model_name: Optional[str] = None
The model name to pass to tiktoken when using this class. Tiktoken is used to count the number of tokens in documents to constrain them to be under a certain limit. By default, when set to None, this will be the same as the model name. However, there are some cases where you may want to use this class with a model name not supported by tiktoken. This can include when using Azure embeddings or when using one of the many model providers that expose an OpenAI-like API but with different models.
In those cases, in order to avoid erroring\nwhen tiktoken is called, you can specify a model name to use here.\nparam top_p: float = 1\u00b6\nTotal probability mass of tokens to consider at each step.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html"} {"id": "e2bef0c7f15e-4", "text": "Parameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the topcandidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the topcandidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. 
Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html"} {"id": "e2bef0c7f15e-5", "text": "**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator build_extra\u00a0 \u00bb\u00a0 all fields\u00b6\nBuild extra kwargs from additional params that were passed in.\ncreate_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) \u2192 LLMResult\u00b6\nCreate the LLMResult from the choices and prompts.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html"} {"id": "e2bef0c7f15e-6", "text": "stop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model's context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model's context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]
Get the sub-prompts for an llm call.
get_token_ids(text: str) → List[int]
Get the token IDs using the tiktoken package.
max_tokens_for_prompt(prompt: str) → int
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt – The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
static modelname_to_contextsize(modelname: str) → int
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname – The model name we want to know the context size for.
Returns
The maximum context size.
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
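A small sketch of how the token helpers above fit together; llm is assumed to be an AzureOpenAI instance constructed as in the earlier example, and the equality noted in the comment follows from the descriptions above (context size minus prompt tokens):

.. code-block:: python

    prompt = "Write a haiku about the sea."

    # Tokens already consumed by the prompt (counted with tiktoken).
    used = llm.get_num_tokens(prompt)

    # Context window implied by a model name.
    window = llm.modelname_to_contextsize("text-davinci-003")

    # Room left for generation; per the descriptions above this should
    # equal window - used for that model.
    remaining = llm.max_tokens_for_prompt(prompt)
    print(used, window, remaining)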
prep_streaming_params(stop: Optional[List[str]] = None) → Dict[str, Any]
Prepare the params for streaming.
validator raise_deprecation » all fields
Raise a deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python

    llm.save(file_path="path/llm.yaml")

validator set_verbose » verbose
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
stream(prompt: str, stop: Optional[List[str]] = None) → Generator
Call OpenAI with the streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction. Once that happens, this interface could change.
Parameters
prompt – The prompt to pass into the model.
stop – Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Example
.. code-block:: python

    generator = openai.stream("Tell me a joke.")
    for token in generator:
        print(token)

to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
validator validate_azure_settings » all fields[source]
validator validate_environment » all fields
Validate that the API key and python package exist in the environment.
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
property max_context_size: int
Get the max context size for this model.
model Config
Bases: object
Configuration for this pydantic object.
allow_population_by_field_name = True

langchain.llms.openai.OpenAI
class langchain.llms.openai.OpenAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, model: str = 'text-davinci-003', temperature: float = 0.7, max_tokens: int = 256, top_p: float = 1, frequency_penalty: float = 0, presence_penalty: float = 0, n: int = 1, best_of: int = 1, model_kwargs: Dict[str, Any] = None, openai_api_key: Optional[str] = None, openai_api_base: Optional[str] = None, openai_organization: Optional[str] = None, openai_proxy: Optional[str] = None, batch_size: int = 20, request_timeout: Optional[Union[float, Tuple[float, float]]] = None, logit_bias: Optional[Dict[str, float]] = None, max_retries: int = 6, streaming: bool = False, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', tiktoken_model_name: Optional[str] = None)[source]
Bases: BaseOpenAI
Wrapper around OpenAI large language models.
To use, you should have the openai python package installed, and the environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class.
Example
from langchain.llms import OpenAI
openai = OpenAI(model_name="text-davinci-003")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param allowed_special: Union[Literal['all'], AbstractSet[str]] = {}
Set of special tokens that are allowed.
param batch_size: int = 20
Batch size to use when passing multiple documents to generate.
param best_of: int = 1
Generates best_of completions server-side and returns the "best".
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param client: Any = None
param disallowed_special: Union[Literal['all'], Collection[str]] = 'all'
Set of special tokens that are not allowed.
param frequency_penalty: float = 0
Penalizes repeated tokens according to frequency.
param logit_bias: Optional[Dict[str, float]] [Optional]
Adjust the probability of specific tokens being generated.
param max_retries: int = 6
Maximum number of retries to make when generating.
param max_tokens: int = 256
The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximal context size.
param metadata: Optional[Dict[str, Any]] = None
Metadata to
add to the run trace.\nparam model_kwargs: Dict[str, Any] [Optional]\u00b6\nHolds any model parameters valid for create call not explicitly specified.\nparam model_name: str = 'text-davinci-003' (alias 'model')\u00b6\nModel name to use.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAI.html"} {"id": "2792003a1a00-2", "text": "Model name to use.\nparam n: int = 1\u00b6\nHow many completions to generate for each prompt.\nparam openai_api_base: Optional[str] = None\u00b6\nparam openai_api_key: Optional[str] = None\u00b6\nparam openai_organization: Optional[str] = None\u00b6\nparam openai_proxy: Optional[str] = None\u00b6\nparam presence_penalty: float = 0\u00b6\nPenalizes repeated tokens.\nparam request_timeout: Optional[Union[float, Tuple[float, float]]] = None\u00b6\nTimeout for requests to OpenAI completion API. Default is 600 seconds.\nparam streaming: bool = False\u00b6\nWhether to stream the results or not.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam temperature: float = 0.7\u00b6\nWhat sampling temperature to use.\nparam tiktoken_model_name: Optional[str] = None\u00b6\nThe model name to pass to tiktoken when using this class.\nTiktoken is used to count the number of tokens in documents to constrain\nthem to be under a certain limit. By default, when set to None, this will\nbe the same as the embedding model name. However, there are some cases\nwhere you may want to use this Embedding class with a model name not\nsupported by tiktoken. This can include when using Azure embeddings or\nwhen using one of the many model providers that expose an OpenAI-like\nAPI but with different models. In those cases, in order to avoid erroring\nwhen tiktoken is called, you can specify a model name to use here.\nparam top_p: float = 1\u00b6\nTotal probability mass of tokens to consider at each step.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAI.html"} {"id": "2792003a1a00-3", "text": "param verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. 
A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAI.html"} {"id": "2792003a1a00-4", "text": "first occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the topcandidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the topcandidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator build_extra\u00a0 \u00bb\u00a0 all fields\u00b6\nBuild extra kwargs from additional params that were passed in.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAI.html"} {"id": "2792003a1a00-5", "text": "Build extra kwargs from additional params that were passed in.\ncreate_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) \u2192 LLMResult\u00b6\nCreate the LLMResult from the choices and prompts.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAI.html"} {"id": "2792003a1a00-6", "text": "**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model's context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model's context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]
Get the sub-prompts for an llm call.
get_token_ids(text: str) → List[int]
Get the token IDs using the tiktoken package.
max_tokens_for_prompt(prompt: str) → int
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt – The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
static modelname_to_contextsize(modelname: str) → int
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname – The model name we want to know the context size for.
Returns
The maximum context size.
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
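To make the generate/LLMResult relationship described above concrete, a brief sketch; openai is assumed to be an OpenAI instance constructed as in the example above:

.. code-block:: python

    result = openai.generate(
        ["Tell me a joke.", "Tell me a poem."],
        stop=["\n\n"],
    )
    # result.generations holds one list of candidate Generations per input prompt.
    for candidates in result.generations:
        print(candidates[0].text)
    # Provider-specific extras, such as token usage, live in llm_output.
    print(result.llm_output)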
prep_streaming_params(stop: Optional[List[str]] = None) → Dict[str, Any]
Prepare the params for streaming.
validator raise_deprecation » all fields
Raise a deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python

    llm.save(file_path="path/llm.yaml")

validator set_verbose » verbose
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
stream(prompt: str, stop: Optional[List[str]] = None) → Generator
Call OpenAI with the streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction. Once that happens, this interface could change.
Parameters
prompt – The prompt to pass into the model.
stop – Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Example
.. code-block:: python

    generator = openai.stream("Tell me a joke.")
    for token in generator:
        print(token)

to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
validator validate_environment » all fields
Validate that the API key and python package exist in the environment.
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
property max_context_size: int
Get the max context size for this model.
model Config
Bases: object
Configuration for this pydantic object.
allow_population_by_field_name = True

langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint
class langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, endpoint_url: str = '', endpoint_api_key: str = '', deployment_name: str = '', http_client: Any = None, content_formatter: Any = None, model_kwargs: Optional[dict] = None)[source]
Bases: LLM, BaseModel
Wrapper around Azure ML Hosted models using Managed Online Endpoints.
Example
azure_llm = AzureMLOnlineEndpoint(
    endpoint_url="https://..inference.ml.azure.com/score",
    endpoint_api_key="my-api-key",
    deployment_name="my-deployment-name",
    content_formatter=content_formatter,
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
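Because the content_formatter parameter documented below carries the input and output transforms between the LLM and the endpoint, a sketch of what such a formatter might look like can help. The class shape and hook names here (format_request_payload, format_response_payload) are illustrative assumptions, not the library's confirmed contract, and the endpoint values are placeholders:

.. code-block:: python

    import json
    from langchain.llms.azureml_endpoint import AzureMLOnlineEndpoint

    class JsonContentFormatter:
        """Hypothetical formatter; check the library's own content formatter
        base class for the exact required interface."""

        content_type = "application/json"
        accepts = "application/json"

        def format_request_payload(self, prompt: str, model_kwargs: dict) -> bytes:
            # Wrap the prompt the way the deployed scoring script expects.
            return json.dumps(
                {"inputs": [prompt], "parameters": model_kwargs}
            ).encode("utf-8")

        def format_response_payload(self, output: bytes) -> str:
            # Unwrap the first generation from the endpoint's JSON reply.
            return json.loads(output)[0]

    azure_llm = AzureMLOnlineEndpoint(
        endpoint_url="https://my-endpoint.inference.ml.azure.com/score",  # placeholder
        endpoint_api_key="my-api-key",                                    # placeholder
        deployment_name="my-deployment-name",
        content_formatter=JsonContentFormatter(),
    )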
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param content_formatter: Any = None
The content formatter that provides input and output transform functions to handle formats between the LLM and the endpoint.
param deployment_name: str = ''
Deployment name for the endpoint. Should be passed to the constructor or specified as the env var AZUREML_DEPLOYMENT_NAME.
param endpoint_api_key: str = ''
Authentication key for the endpoint. Should be passed to the constructor or specified as the env var AZUREML_ENDPOINT_API_KEY.
param endpoint_url: str = ''
URL of the pre-existing endpoint. Should be passed to the constructor or specified as the env var AZUREML_ENDPOINT_URL.
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param model_kwargs: Optional[dict] = None
Keyword arguments to pass to the model.
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param verbose: bool [Optional]
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Check the cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompts and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments.
These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the topcandidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint.html"} {"id": "ccf4f94dad18-3", "text": "Parameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint.html"} {"id": "ccf4f94dad18-4", "text": "callbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in order they occurin the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specifictypes of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint.html"} {"id": "ccf4f94dad18-5", "text": "to the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text,use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_client\u00a0 \u00bb\u00a0 http_client[source]\u00b6\nValidate that api key and python package exists in environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. 
These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True

langchain.llms.azureml_endpoint.AzureMLEndpointClient
class langchain.llms.azureml_endpoint.AzureMLEndpointClient(endpoint_url: str, endpoint_api_key: str, deployment_name: str)[source]
Bases: object
Wrapper around the AzureML Managed Online Endpoint client.
Initialize the class.
Methods
__init__(endpoint_url, endpoint_api_key, ...)
Initialize the class.
call(body)
Call the endpoint.
call(body: bytes) → bytes[source]
Call the endpoint with the given request body and return the raw response bytes.

langchain.llms.petals.Petals
class langchain.llms.petals.Petals(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, tokenizer: Any = None, model_name: str = 'bigscience/bloom-petals', temperature: float = 0.7, max_new_tokens: int = 256, top_p: float = 0.9, top_k: Optional[int] = None, do_sample: bool = True, max_length: Optional[int] = None, model_kwargs: Dict[str, Any] = None, huggingface_api_key: Optional[str] = None)[source]
Bases: LLM
Wrapper around Petals Bloom models.
To use, you should have the petals python package installed, and the environment variable HUGGINGFACE_API_KEY set with your API key.
Any parameters that are valid to be passed to the call can be passed in, even if not explicitly saved on this class.
Example
from langchain.llms import Petals
petals = Petals()
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
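A minimal construction sketch grounded in the class description above and the parameters below; the API key is a placeholder:

.. code-block:: python

    import os
    from langchain.llms import Petals

    os.environ["HUGGINGFACE_API_KEY"] = "..."  # placeholder key

    petals = Petals(
        model_name="bigscience/bloom-petals",
        temperature=0.7,
        max_new_tokens=128,  # cap on newly generated tokens
        do_sample=True,      # sample instead of greedy decoding
    )
    print(petals("Once upon a time, "))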
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param client: Any = None
The client to use for the API calls.
param do_sample: bool = True
Whether or not to use sampling; use greedy decoding otherwise.
param huggingface_api_key: Optional[str] = None
param max_length: Optional[int] = None
The maximum length of the sequence to be generated.
param max_new_tokens: int = 256
The maximum number of new tokens to generate in the completion.
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param model_kwargs: Dict[str, Any] [Optional]
Holds any model parameters valid for the create call not explicitly specified.
param model_name: str = 'bigscience/bloom-petals'
The model to use.
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param temperature: float = 0.7
What sampling temperature to use.
param tokenizer: Any = None
The tokenizer to use for the API calls.
param top_k: Optional[int] = None
The number of highest-probability vocabulary tokens to keep for top-k filtering.
param top_p: float = 0.9
The cumulative probability for top-p sampling.
param verbose: bool [Optional]
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Check the cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompts and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating.
{"id": "9b1022e73d87-2", "text": "Check Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html"} {"id": "9b1022e73d87-3", "text": "async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the top candidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the top candidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator build_extra\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nBuild extra kwargs from additional params that were passed in.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html"} {"id": "9b1022e73d87-4", "text": "Run the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html"} {"id": "9b1022e73d87-5", "text": "Get the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in order they occurin the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specifictypes of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text,use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html"} {"id": "9b1022e73d87-6", "text": "to the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. 
code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the API key and python package exist in the environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html"} {"id": "9c4b81a2e47e-0", "text": "langchain.llms.sagemaker_endpoint.ContentHandlerBase\u00b6\nclass langchain.llms.sagemaker_endpoint.ContentHandlerBase[source]\u00b6\nBases: Generic[INPUT_TYPE, OUTPUT_TYPE]\nA handler class to transform input from the LLM to a\nformat that the SageMaker endpoint expects. Similarly,\nthe class also handles transforming output from the\nSageMaker endpoint to a format that the LLM class expects.\nMethods\n__init__()\ntransform_input(prompt,\u00a0model_kwargs)\nTransforms the input to a format that the model can accept as the request body.\ntransform_output(output)\nTransforms the output from the model to a string that the LLM class expects.\nAttributes\naccepts\nThe MIME type of the response data returned from the endpoint\ncontent_type\nThe MIME type of the input data passed to the endpoint\nabstract transform_input(prompt: INPUT_TYPE, model_kwargs: Dict) \u2192 bytes[source]\u00b6\nTransforms the input to a format that the model can accept\nas the request body. Should return bytes or a seekable file-like\nobject in the format specified in the content_type\nrequest header.\nabstract transform_output(output: bytes) \u2192 OUTPUT_TYPE[source]\u00b6\nTransforms the output from the model to a string that\nthe LLM class expects.\naccepts: Optional[str] = 'text/plain'\u00b6\nThe MIME type of the response data returned from the endpoint\ncontent_type: Optional[str] = 'text/plain'\u00b6\nThe MIME type of the input data passed to the endpoint", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.ContentHandlerBase.html"}
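The two abstract methods pair up: transform_input serializes the prompt and model kwargs into the request body, and transform_output parses the raw response. A minimal sketch of a JSON handler (the {"inputs": ...}/"generated_text" schema is an assumption about a hypothetical endpoint, not part of the documented contract):

import json
from typing import Dict
from langchain.llms.sagemaker_endpoint import ContentHandlerBase

class JsonContentHandler(ContentHandlerBase[str, str]):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
        # Serialize the prompt plus generation parameters into the request body.
        return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        # Parse the endpoint response and pull out the generated text
        # (the response key is hypothetical and endpoint-specific).
        return json.loads(output.decode("utf-8"))["generated_text"]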
{"id": "a5c7e2fa37ae-0", "text": "langchain.llms.base.LLM\u00b6\nclass langchain.llms.base.LLM(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: BaseLLM\nLLM class that expects subclasses to implement a simpler call method.\nThe purpose of this class is to expose a simpler interface for working\nwith LLMs, rather than expecting the user to implement the full _generate method.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None\u00b6\nparam callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html"}
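Because subclasses only need to supply a private _call method plus an _llm_type identifier (the conventional extension points in this version of LangChain; shown here as an illustration, not as documented public API), a toy custom LLM can be sketched in a few lines:

from typing import Any, List, Optional
from langchain.llms.base import LLM

class EchoLLM(LLM):
    # Toy LLM that echoes the prompt back; a real subclass would call a provider here.

    @property
    def _llm_type(self) -> str:
        return "echo"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any) -> str:
        return prompt

llm = EchoLLM()
print(llm("Hello"))  # inherited __call__ routes through generate to _call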
{"id": "a5c7e2fa37ae-1", "text": "Check Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html"} {"id": "a5c7e2fa37ae-2", "text": "async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the top candidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the top candidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html"} {"id": "a5c7e2fa37ae-3", "text": "Run the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. 
A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html"} {"id": "a5c7e2fa37ae-4", "text": "Get the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in order they occurin the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specifictypes of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text,use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html"} {"id": "a5c7e2fa37ae-5", "text": "to the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. 
code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html"} {"id": "2528c7a59491-0", "text": "langchain.llms.gpt4all.GPT4All\u00b6\nclass langchain.llms.gpt4all.GPT4All(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, model: str, backend: Optional[str] = None, max_tokens: int = 200, n_parts: int = - 1, seed: int = 0, f16_kv: bool = False, logits_all: bool = False, vocab_only: bool = False, use_mlock: bool = False, embedding: bool = False, n_threads: Optional[int] = 4, n_predict: Optional[int] = 256, temp: Optional[float] = 0.7, top_p: Optional[float] = 0.1, top_k: Optional[int] = 40, echo: Optional[bool] = False, stop: Optional[List[str]] = [], repeat_last_n: Optional[int] = 64, repeat_penalty: Optional[float] = 1.18, n_batch: int = 8, streaming: bool = False, allow_download: bool = False, client: Any = None)[source]\u00b6\nBases: LLM\nWrapper around GPT4All language models.\nTo use, you should have the gpt4all python package installed, the\npre-trained model file, and the model\u2019s config information.\nExample\nfrom langchain.llms import GPT4All\nmodel = GPT4All(model=\"./models/gpt4all-model.bin\", n_threads=8)\n# Simplest invocation\nresponse = model(\"Once upon a time, \")", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html"} {"id": "2528c7a59491-1", "text": "# Simplest invocation\nresponse = model(\"Once upon a time, \")\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam allow_download: bool = False\u00b6\nIf model does not exist in ~/.cache/gpt4all/, download it.\nparam backend: Optional[str] = None\u00b6\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam echo: Optional[bool] = False\u00b6\nWhether to echo the prompt.\nparam embedding: bool = False\u00b6\nUse embedding mode only.\nparam f16_kv: bool = False\u00b6\nUse half-precision for key/value cache.\nparam logits_all: bool = False\u00b6\nReturn logits for all tokens, not just the last token.\nparam max_tokens: int = 200\u00b6\nToken context window.\nparam metadata: 
Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam model: str [Required]\u00b6\nPath to the pre-trained GPT4All model file.\nparam n_batch: int = 8\u00b6\nBatch size for prompt processing.\nparam n_parts: int = -1\u00b6\nNumber of parts to split the model into.\nIf -1, the number of parts is automatically determined.\nparam n_predict: Optional[int] = 256\u00b6\nThe maximum number of tokens to generate.\nparam n_threads: Optional[int] = 4\u00b6\nNumber of threads to use.\nparam repeat_last_n: Optional[int] = 64\u00b6\nLast n tokens to penalize\nparam repeat_penalty: Optional[float] = 1.18\u00b6\nThe penalty to apply to repeated tokens.\nparam seed: int = 0\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html"} {"id": "2528c7a59491-2", "text": "The penalty to apply to repeated tokens.\nparam seed: int = 0\u00b6\nSeed. If -1, a random seed is used.\nparam stop: Optional[List[str]] = []\u00b6\nA list of strings to stop generation when encountered.\nparam streaming: bool = False\u00b6\nWhether to stream the results or not.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam temp: Optional[float] = 0.7\u00b6\nThe temperature to use for sampling.\nparam top_k: Optional[int] = 40\u00b6\nThe top-k value to use for sampling.\nparam top_p: Optional[float] = 0.1\u00b6\nThe top-p value to use for sampling.\nparam use_mlock: bool = False\u00b6\nForce system to keep model in RAM.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.\nparam vocab_only: bool = False\u00b6\nOnly load the vocabulary, no weights.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html"} {"id": "2528c7a59491-3", "text": "Run the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. 
Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the topcandidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html"} {"id": "2528c7a59491-4", "text": "first occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the topcandidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html"} {"id": "2528c7a59491-5", "text": "need more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. 
Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in order they occurin the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html"} {"id": "2528c7a59491-6", "text": "Pass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specifictypes of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text,use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. 
code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html"} {"id": "2528c7a59491-7", "text": "to_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the python package exists in the environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html"} {"id": "23d783f64f5d-0", "text": "langchain.llms.utils.enforce_stop_tokens\u00b6\nlangchain.llms.utils.enforce_stop_tokens(text: str, stop: List[str]) \u2192 str[source]\u00b6\nCut off the text as soon as any stop words occur.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.utils.enforce_stop_tokens.html"}
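The documented behavior (truncate at the first occurrence of any stop sequence) can be shown with a short usage sketch:

from langchain.llms.utils import enforce_stop_tokens

text = "Answer: 42\nQuestion: what next?"
print(enforce_stop_tokens(text, stop=["\nQuestion:"]))  # -> "Answer: 42"

One hedged caveat: in at least some versions the split appears to be regex-based, so stop sequences containing regex metacharacters may need escaping before being passed in.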
{"id": "2e484f231c48-0", "text": "langchain.llms.clarifai.Clarifai\u00b6\nclass langchain.llms.clarifai.Clarifai(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, stub: Any = None, userDataObject: Any = None, model_id: Optional[str] = None, model_version_id: Optional[str] = None, app_id: Optional[str] = None, user_id: Optional[str] = None, pat: Optional[str] = None, api_base: str = 'https://api.clarifai.com')[source]\u00b6\nBases: LLM\nWrapper around Clarifai\u2019s large language models.\nTo use, you should have an account on the Clarifai platform,\nthe clarifai python package installed, and the\nenvironment variable CLARIFAI_PAT set with your PAT key,\nor pass it as a named parameter to the constructor.\nExample\nfrom langchain.llms import Clarifai\nclarifai_llm = Clarifai(pat=CLARIFAI_PAT, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_base: str = 'https://api.clarifai.com'\u00b6\nparam app_id: Optional[str] = None\u00b6\nClarifai application id to use.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html"} {"id": "2e484f231c48-1", "text": "param callbacks: Callbacks = None\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam model_id: Optional[str] = None\u00b6\nModel id to use.\nparam model_version_id: Optional[str] = None\u00b6\nModel version id to use.\nparam pat: Optional[str] = None\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam userDataObject: Any = None\u00b6\nparam user_id: Optional[str] = None\u00b6\nClarifai user id to use.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html"} {"id": "2e484f231c48-2", "text": "API.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the top candidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html"} {"id": "2e484f231c48-3", "text": "Asynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the topcandidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html"} {"id": "2e484f231c48-4", "text": "text generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in order they occurin the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specifictypes of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html"} {"id": "2e484f231c48-5", "text": "stop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text,use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. 
code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that we have all required info to access Clarifai\nplatform and python package exists in environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html"} {"id": "2e484f231c48-6", "text": "property lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html"} {"id": "b9b389f76693-0", "text": "langchain.llms.predictionguard.PredictionGuard\u00b6\nclass langchain.llms.predictionguard.PredictionGuard(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, model: Optional[str] = 'MPT-7B-Instruct', output: Optional[Dict[str, Any]] = None, max_tokens: int = 256, temperature: float = 0.75, token: Optional[str] = None, stop: Optional[List[str]] = None)[source]\u00b6\nBases: LLM\nWrapper around Prediction Guard large language models.\nTo use, you should have the predictionguard python package installed, and the\nenvironment variable PREDICTIONGUARD_TOKEN set with your access token, or pass\nit as a named parameter to the constructor. 
To use Prediction Guard\u2019s API along\nwith OpenAI models, set the environment variable OPENAI_API_KEY with your\nOpenAI API key as well.\nExample\npgllm = PredictionGuard(model=\"MPT-7B-Instruct\",\n token=\"my-access-token\",\n output={\n \"type\": \"boolean\"\n })\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam max_tokens: int = 256\u00b6\nDenotes the number of tokens to predict per generation.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.predictionguard.PredictionGuard.html"} {"id": "b9b389f76693-1", "text": "param metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam model: Optional[str] = 'MPT-7B-Instruct'\u00b6\nModel name to use.\nparam output: Optional[Dict[str, Any]] = None\u00b6\nThe output type or structure for controlling the LLM output.\nparam stop: Optional[List[str]] = None\u00b6\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam temperature: float = 0.75\u00b6\nA non-negative float that tunes the degree of randomness in generation.\nparam token: Optional[str] = None\u00b6\nYour Prediction Guard access token.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.predictionguard.PredictionGuard.html"} {"id": "b9b389f76693-2", "text": "Asynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. 
Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the topcandidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.predictionguard.PredictionGuard.html"} {"id": "b9b389f76693-3", "text": "Asynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the topcandidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.predictionguard.PredictionGuard.html"} {"id": "b9b389f76693-4", "text": "text generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. 
Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in order they occurin the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specifictypes of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.predictionguard.PredictionGuard.html"} {"id": "b9b389f76693-5", "text": "stop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text,use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. 
code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the access token and python package exists in environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.predictionguard.PredictionGuard.html"} {"id": "b9b389f76693-6", "text": "serialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.predictionguard.PredictionGuard.html"} {"id": "aa4003026d7c-0", "text": "langchain.llms.aviary.get_completions\u00b6\nlangchain.llms.aviary.get_completions(model: str, prompt: str, use_prompt_format: bool = True, version: str = '') \u2192 Dict[str, Union[str, float, int]][source]\u00b6\nGet completions from Aviary models.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.get_completions.html"} {"id": "7da15cb58b4f-0", "text": "langchain.llms.vertexai.completion_with_retry\u00b6\nlangchain.llms.vertexai.completion_with_retry(llm: VertexAI, *args: Any, **kwargs: Any) \u2192 Any[source]\u00b6\nUse tenacity to retry the completion call.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.completion_with_retry.html"} {"id": "cf2786e203bf-0", "text": "langchain.llms.writer.Writer\u00b6\nclass langchain.llms.writer.Writer(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, writer_org_id: Optional[str] = None, model_id: str = 'palmyra-instruct', min_tokens: Optional[int] = None, max_tokens: Optional[int] = None, temperature: Optional[float] = None, top_p: Optional[float] = None, stop: Optional[List[str]] = None, presence_penalty: Optional[float] = None, repetition_penalty: Optional[float] = None, best_of: Optional[int] = None, logprobs: bool = False, n: Optional[int] = None, writer_api_key: Optional[str] = None, base_url: Optional[str] = None)[source]\u00b6\nBases: LLM\nWrapper around Writer large language models.\nTo use, you should have the environment variable WRITER_API_KEY and\nWRITER_ORG_ID set with your API key and organization ID respectively.\nExample\nfrom langchain import Writer\nwriter = Writer(model_id=\"palmyra-base\")\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError 
param base_url: Optional[str] = None
Base URL to use; if None, the URL is chosen based on the model name.
param best_of: Optional[int] = None
Generates this many completions server-side and returns the "best".
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param logprobs: bool = False
Whether to return log probabilities.
param max_tokens: Optional[int] = None
Maximum number of tokens to generate.
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param min_tokens: Optional[int] = None
Minimum number of tokens to generate.
param model_id: str = 'palmyra-instruct'
Model name to use.
param n: Optional[int] = None
How many completions to generate.
param presence_penalty: Optional[float] = None
Penalizes repeated tokens regardless of frequency.
param repetition_penalty: Optional[float] = None
Penalizes repeated tokens according to frequency.
param stop: Optional[List[str]] = None
Sequences at which completion generation will stop.
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param temperature: Optional[float] = None
What sampling temperature to use.
param top_p: Optional[float] = None
Total probability mass of tokens to consider at each step.
param verbose: bool [Optional]
Whether to print out response text.
param writer_api_key: Optional[str] = None
Writer API key.
param writer_org_id: Optional[str] = None
Writer organization ID.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Check the cache and run the LLM on the given prompt and input.
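The async variants documented next mirror their synchronous counterparts; a brief sketch of batching prompts through agenerate (the prompts are illustrative):

.. code-block:: python

   import asyncio

   from langchain import Writer

   llm = Writer()  # assumes WRITER_API_KEY / WRITER_ORG_ID are set

   async def main() -> None:
       # One LLMResult comes back, with a list of candidate
       # Generations for each input prompt.
       result = await llm.agenerate(["Say hello.", "Say goodbye."])
       for generations in result.generations:
           print(generations[0].text)

   asyncio.run(main())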
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
dict(**kwargs: Any) → Dict
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues.
A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model's context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model's context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
validator raise_deprecation » all fields
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
..
code-block:: python

   llm.save(file_path="path/llm.yaml")

validator set_verbose » verbose
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
validator validate_environment » all fields[source]
Validate that the API key and organization ID exist in the environment.
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config[source]
Bases: object
Configuration for this pydantic object.
extra = 'forbid'
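Since save serializes the constructor kwargs, the pair round-trips cleanly; a short sketch (the file name is illustrative, and load_llm is assumed to come from langchain.llms.loading):

.. code-block:: python

   from langchain import Writer
   from langchain.llms.loading import load_llm

   llm = Writer(model_id="palmyra-instruct", temperature=0.3)
   llm.save(file_path="llm.yaml")   # writes the serialized kwargs to YAML
   restored = load_llm("llm.yaml")  # rebuilds an equivalent Writer instance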
langchain.llms.ai21.AI21
class langchain.llms.ai21.AI21(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, model: str = 'j2-jumbo-instruct', temperature: float = 0.7, maxTokens: int = 256, minTokens: int = 0, topP: float = 1.0, presencePenalty: AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True), countPenalty: AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True), frequencyPenalty: AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True), numResults: int = 1, logitBias: Optional[Dict[str, float]] = None, ai21_api_key: Optional[str] = None, stop: Optional[List[str]] = None, base_url: Optional[str] = None)[source]
Bases: LLM
Wrapper around AI21 large language models.
To use, you should have the environment variable AI21_API_KEY set with your API key.
Example
from langchain.llms import AI21
ai21 = AI21(model="j2-jumbo-instruct")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai21_api_key: Optional[str] = None
param base_url: Optional[str] = None
Base URL to use; if None, the URL is chosen based on the model name.
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param countPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)
Penalizes repeated tokens according to count.
param frequencyPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)
Penalizes repeated tokens according to frequency.
param logitBias: Optional[Dict[str, float]] = None
Adjust the probability of specific tokens being generated.
param maxTokens: int = 256
The maximum number of tokens to generate in the completion.
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param minTokens: int = 0
The minimum number of tokens to generate in the completion.
param model: str = 'j2-jumbo-instruct'
Model name to use.
param numResults: int = 1
How many completions to generate for each prompt.
param presencePenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)
Penalizes repeated tokens.
param stop: Optional[List[str]] = None
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param temperature: float = 0.7
What sampling temperature to use.
param topP: float = 1.0
Total probability mass of tokens to consider at each step.
param verbose: bool [Optional]
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Check the cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through.
Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
dict(**kwargs: Any) → Dict
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model's context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model's context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
validator raise_deprecation » all fields
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python

   llm.save(file_path="path/llm.yaml")

validator set_verbose » verbose
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
validator validate_environment » all fields[source]
Validate that the API key exists in the environment.
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config[source]
Bases: object
Configuration for this pydantic object.
extra = 'forbid'
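As a sketch of how the camelCase parameters above combine (assuming AI21_API_KEY is set; the scale value and prompt are illustrative):

.. code-block:: python

   from langchain.llms import AI21
   from langchain.llms.ai21 import AI21PenaltyData

   llm = AI21(
       model="j2-jumbo-instruct",
       maxTokens=128,
       temperature=0.5,
       # Discourage verbatim repetition; the other AI21PenaltyData
       # flags keep their defaults (apply to all token classes).
       frequencyPenalty=AI21PenaltyData(scale=1),
   )
   print(llm("List three uses for a paperclip."))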
langchain.llms.openllm.OpenLLM
class langchain.llms.openllm.OpenLLM(model_name: Optional[str] = None, *, model_id: Optional[str] = None, server_url: Optional[str] = None, server_type: Literal['grpc', 'http'] = 'http', embedded: bool = True, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, llm_kwargs: Dict[str, Any])[source]
Bases: LLM
Wrapper for accessing OpenLLM, supporting both in-process model instances and remote OpenLLM servers.
To use, you should have the openllm library installed:
pip install openllm
Learn more at: https://github.com/bentoml/openllm
Example running an LLM model locally managed by OpenLLM:
from langchain.llms import OpenLLM
llm = OpenLLM(
    model_name='flan-t5',
    model_id='google/flan-t5-large',
)
llm("What is the difference between a duck and a goose?")
For all available supported models, you can run 'openllm models'.
If you have an OpenLLM server running, you can also use it remotely:
from langchain.llms import OpenLLM
llm = OpenLLM(server_url='http://localhost:3000')
llm("What is the difference between a duck and a goose?")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param embedded: bool = True
Initialize this LLM instance in the current process by default. Should only be set to False when used in conjunction with a BentoML Service.
param llm_kwargs: Dict[str, Any] [Required]
Keyword arguments to be passed to openllm.LLM.
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param model_id: Optional[str] = None
Model ID to use. If not provided, the default model for the model name is used. See 'openllm models' for all available model variants.
param model_name: Optional[str] = None
Model name to use. See 'openllm models' for all available models.
param server_type: ServerType = 'http'
Optional server type. Either 'http' or 'grpc'.
param server_url: Optional[str] = None
Optional server URL for a running LLMServer started with 'openllm start'.
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param verbose: bool [Optional]
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Check the cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
dict(**kwargs: Any) → Dict
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model's context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model's context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
validator raise_deprecation » all fields
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python

   llm.save(file_path="path/llm.yaml")

validator set_verbose » verbose
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
property runner: openllm.LLMRunner
Get the underlying openllm.LLMRunner instance for integration with BentoML.
Example:
.. code-block:: python

   llm = OpenLLM(
       model_name='flan-t5',
       model_id='google/flan-t5-large',
       embedded=False,
   )
   tools = load_tools(["serpapi", "llm-math"], llm=llm)
   agent = initialize_agent(
       tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION
   )
   svc = bentoml.Service("langchain-openllm", runners=[llm.runner])

   @svc.api(input=Text(), output=Text())
   def chat(input_text: str):
       return agent.run(input_text)

model Config[source]
Bases: object
extra = 'forbid'

langchain.llms.azureml_endpoint.HFContentFormatter
class langchain.llms.azureml_endpoint.HFContentFormatter[source]
Bases: ContentFormatterBase
Content handler for LLMs from the HuggingFace catalog.
Methods
__init__()
format_request_payload(prompt, model_kwargs)
Formats the request body according to the input schema of the model.
format_response_payload(output)
Formats the response body according to the output schema of the model.
Attributes
accepts
The MIME type of the response data returned from the endpoint
content_type
The MIME type of the input data passed to the endpoint
format_request_payload(prompt: str, model_kwargs: Dict) → bytes[source]
Formats the request body according to the input schema of the model. Returns bytes or a seekable file-like object in the format specified in the content_type request header.
format_response_payload(output: bytes) → str[source]
Formats the response body according to the output schema of the model. Returns the data type that is received from the response.
accepts: Optional[str] = 'application/json'
The MIME type of the response data returned from the endpoint
content_type: Optional[str] = 'application/json'
The MIME type of the input data passed to the endpoint
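A formatter like this is typically handed to an AzureML endpoint LLM as its content formatter; as a standalone sketch of the two documented methods (the prompt and model kwargs are illustrative):

.. code-block:: python

   from langchain.llms.azureml_endpoint import HFContentFormatter

   formatter = HFContentFormatter()
   # Build the JSON bytes payload shaped for HuggingFace catalog endpoints.
   body = formatter.format_request_payload(
       prompt="Tell me a joke.",
       model_kwargs={"temperature": 0.8},
   )
   # format_response_payload would then decode the endpoint's raw
   # bytes back into the generated text.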
langchain.llms.openai.completion_with_retry
langchain.llms.openai.completion_with_retry(llm: Union[BaseOpenAI, OpenAIChat], **kwargs: Any) → Any[source]
Use tenacity to retry the completion call.

langchain.llms.cohere.Cohere
class langchain.llms.cohere.Cohere(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, model: Optional[str] = None, max_tokens: int = 256, temperature: float = 0.75, k: int = 0, p: int = 1, frequency_penalty: float = 0.0, presence_penalty: float = 0.0, truncate: Optional[str] = None, max_retries: int = 10, cohere_api_key: Optional[str] = None, stop: Optional[List[str]] = None)[source]
Bases: LLM
Wrapper around Cohere large language models.
To use, you should have the cohere python package installed, and the environment variable COHERE_API_KEY set with your API key, or pass it as a named parameter to the constructor.
Example
from langchain.llms import Cohere
cohere = Cohere(model="gptd-instruct-tft", cohere_api_key="my-api-key")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param cohere_api_key: Optional[str] = None
param frequency_penalty: float = 0.0
Penalizes repeated tokens according to frequency. Between 0 and 1.
param k: int = 0
Number of most likely tokens to consider at each step.
param max_retries: int = 10
Maximum number of retries to make when generating.
param max_tokens: int = 256
Denotes the number of tokens to predict per generation.
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param model: Optional[str] = None
Model name to use.
param p: int = 1
Total probability mass of tokens to consider at each step.
param presence_penalty: float = 0.0
Penalizes repeated tokens.
Between 0 and 1.
param stop: Optional[List[str]] = None
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param temperature: float = 0.75
A non-negative float that tunes the degree of randomness in generation.
param truncate: Optional[str] = None
Specify how the client handles inputs longer than the maximum token length: truncate from START, END, or NONE.
param verbose: bool [Optional]
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Check the cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
dict(**kwargs: Any) → Dict
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model's context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model's context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
Top model prediction as a message.
validator raise_deprecation » all fields
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
..
code-block:: python

   llm.save(file_path="path/llm.yaml")

validator set_verbose » verbose
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
validator validate_environment » all fields[source]
Validate that the API key and python package exist in the environment.
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config[source]
Bases: object
Configuration for this pydantic object.
extra = 'forbid'
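A short sketch combining the Cohere parameters above (assuming COHERE_API_KEY is set; the model name and prompt are illustrative):

.. code-block:: python

   from langchain.llms import Cohere

   llm = Cohere(
       model="command",   # any Cohere generation model name
       max_tokens=100,
       temperature=0.3,
       truncate="END",    # drop overflow tokens from the end of long inputs
   )
   print(llm("Summarize why caching LLM calls is useful."))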
langchain.llms.beam.Beam
class langchain.llms.beam.Beam(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, model_name: str = '', name: str = '', cpu: str = '', memory: str = '', gpu: str = '', python_version: str = '', python_packages: List[str] = [], max_length: str = '', url: str = '', model_kwargs: Dict[str, Any] = None, beam_client_id: str = '', beam_client_secret: str = '', app_id: Optional[str] = None)[source]
Bases: LLM
Wrapper around the Beam API for the gpt2 large language model.
To use, you should have the beam-sdk python package installed, and the environment variables BEAM_CLIENT_ID set with your client ID and BEAM_CLIENT_SECRET set with your client secret. Information on how to get these is available here: https://docs.beam.cloud/account/api-keys.
The wrapper can then be called as follows, where the name, cpu, memory, gpu, python version, and python packages can be updated accordingly. Once deployed, the instance can be called.
Example
llm = Beam(model_name="gpt2",
           name="langchain-gpt2",
           cpu=8,
           memory="32Gi",
           gpu="A10G",
           python_version="python3.8",
           python_packages=[
               "diffusers[torch]>=0.10",
               "transformers",
               "torch",
               "pillow",
               "accelerate",
               "safetensors",
               "xformers",
           ],
           max_length=50)
llm._deploy()
call_result = llm._call(input)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param app_id: Optional[str] = None
param beam_client_id: str = ''
param beam_client_secret: str = ''
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param cpu: str = ''
param gpu: str = ''
param max_length: str = ''
param memory: str = ''
param metadata: Optional[Dict[str, Any]] = None
Metadata to add to the run trace.
param model_kwargs: Dict[str, Any] [Optional]
Holds any model parameters valid for the create call that are not explicitly specified.
param model_name: str = ''
param name: str = ''
param python_packages: List[str] = []
param python_version: str = ''
param tags: Optional[List[str]] = None
Tags to add to the run trace.
param url: str = ''
Model endpoint to use.
param verbose: bool [Optional]
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Check the cache and run the LLM on the given prompt and input.
Parameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\napp_creation() \u2192 None[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html"} {"id": "4988d6c4395d-3", "text": "app_creation() \u2192 None[source]\u00b6\nCreates a Python file which will contain your Beam app definition.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the top candidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the top candidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator build_extra\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nBuild extra kwargs from additional params that were passed in.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html"} {"id": "4988d6c4395d-4", "text": "dict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating.
Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html"} {"id": "4988d6c4395d-5", "text": "get_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in the order they occur in the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text, use predict.\nParameters", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html"} {"id": "4988d6c4395d-6", "text": "Parameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun_creation() \u2192 None[source]\u00b6\nCreates a Python file which will be deployed on Beam.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\"path/llm.yaml\")
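A saved LLM can be reloaded later from the same file; a minimal round-trip sketch (the path is illustrative, and the class must be serializable):
.. code-block:: python
    from langchain.llms.loading import load_llm

    llm.save(file_path="llm.yaml")
    reloaded = load_llm("llm.yaml")  # re-instantiates the LLM from the YAML file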
validator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the API key and Python package exist in the environment.\nproperty authorization: str\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html"} {"id": "4988d6c4395d-7", "text": "property lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html"} {"id": "f7cf835f93ab-0", "text": "langchain.llms.loading.load_llm_from_config\u00b6\nlangchain.llms.loading.load_llm_from_config(config: dict) \u2192 BaseLLM[source]\u00b6\nLoad an LLM from a config dict.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.loading.load_llm_from_config.html"}
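A round-trip sketch of load_llm_from_config (OpenAI is used only as an example of a serializable LLM; constructing it assumes OPENAI_API_KEY is set, and the temperature is illustrative):
.. code-block:: python
    from langchain.llms import OpenAI
    from langchain.llms.loading import load_llm_from_config

    config = OpenAI(temperature=0.2).dict()  # includes a "_type" discriminator key
    llm = load_llm_from_config(config)       # dispatches on "_type" to rebuild the LLM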
{"id": "0cb7ec69a4b2-0", "text": "langchain.llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM\u00b6\nclass langchain.llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM(*, cache: ~typing.Optional[bool] = None, verbose: bool = None, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, pipeline_ref: ~typing.Any = None, client: ~typing.Any = None, inference_fn: ~typing.Callable = <function _generate_text>, hardware: ~typing.Any = None, model_load_fn: ~typing.Callable = <function _load_transformer>, load_fn_kwargs: ~typing.Optional[dict] = None, model_reqs: ~typing.List[str] = ['./', 'transformers', 'torch'], model_id: str = 'gpt2', task: str = 'text-generation', device: int = 0, model_kwargs: ~typing.Optional[dict] = None)[source]\u00b6\nBases: SelfHostedPipeline\nWrapper around the HuggingFace Pipeline API to run on self-hosted remote hardware.\nSupported hardware includes auto-launched instances on AWS, GCP, Azure,\nand Lambda, as well as servers specified\nby IP address and SSH credentials (such as on-prem, or another cloud\nlike Paperspace, Coreweave, etc.).\nTo use, you should have the runhouse Python package installed.\nOnly supports text-generation, text2text-generation and summarization for now.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM.html"} {"id": "0cb7ec69a4b2-1", "text": "Only supports text-generation, text2text-generation and summarization for now.\nExample using from_model_id:\nfrom langchain.llms import SelfHostedHuggingFaceLLM\nimport runhouse as rh\ngpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\nhf = SelfHostedHuggingFaceLLM(\n    model_id=\"google/flan-t5-large\", task=\"text2text-generation\",\n    hardware=gpu\n)\nExample passing a fn that generates a pipeline (because the pipeline is not serializable):\nfrom langchain.llms import SelfHostedHuggingFaceLLM\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\nimport runhouse as rh\ndef get_pipeline():\n    model_id = \"gpt2\"\n    tokenizer = AutoTokenizer.from_pretrained(model_id)\n    model = AutoModelForCausalLM.from_pretrained(model_id)\n    pipe = pipeline(\n        \"text-generation\", model=model, tokenizer=tokenizer\n    )\n    return pipe\nhf = SelfHostedHuggingFaceLLM(\n    model_load_fn=get_pipeline, model_id=\"gpt2\", hardware=gpu)\nConstruct the pipeline remotely using an auxiliary function.\nThe load function must be importable so that it can be imported\nand run on the server, i.e. in a module and not a REPL or closure.\nThen, initialize the remote inference function.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam device: int = 0\u00b6\nDevice to use for inference. -1 for CPU, 0 for GPU, 1 for second GPU, etc.\nparam hardware: Any = None\u00b6\nRemote hardware to send the inference function to.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM.html"} {"id": "0cb7ec69a4b2-2", "text": "param hardware: Any = None\u00b6\nRemote hardware to send the inference function to.\nparam inference_fn: Callable = <function _generate_text>\u00b6\nInference function to send to the remote hardware.\nparam load_fn_kwargs: Optional[dict] = None\u00b6\nKeyword arguments to pass to the model load function.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam model_id: str = 'gpt2'\u00b6\nHugging Face model_id to load the model.\nparam model_kwargs: Optional[dict] = None\u00b6\nKeyword arguments to pass to the model.\nparam model_load_fn: Callable = <function _load_transformer>\u00b6\nFunction to load the model remotely on the server.\nparam model_reqs: List[str] = ['./', 'transformers', 'torch']\u00b6\nRequirements to install on the hardware to run inference with the model.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam task: str = 'text-generation'\u00b6\nHugging Face task (\u201ctext-generation\u201d, \u201ctext2text-generation\u201d or\n\u201csummarization\u201d).\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM.html"} {"id": "0cb7ec69a4b2-3", "text": "Check Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the
given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM.html"} {"id": "0cb7ec69a4b2-4", "text": "async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the top candidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the top candidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments.
These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\nclassmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) \u2192 LLM\u00b6\nInit the SelfHostedPipeline from a pipeline object or string.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM.html"}
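A minimal from_pipeline sketch (the cluster name, instance type, and model are illustrative; the runhouse and transformers packages are assumed to be installed):
.. code-block:: python
    import runhouse as rh
    from transformers import pipeline
    from langchain.llms import SelfHostedHuggingFaceLLM

    gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
    pipe = pipeline("text-generation", model="gpt2")
    llm = SelfHostedHuggingFaceLLM.from_pipeline(
        pipeline=pipe, hardware=gpu, model_reqs=["transformers", "torch"]
    )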
{"id": "0cb7ec69a4b2-5", "text": "Init the SelfHostedPipeline from a pipeline object or string.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM.html"} {"id": "0cb7ec69a4b2-6", "text": "Get the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in the order they occur in the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text, use predict.\nParameters", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM.html"} {"id": "0cb7ec69a4b2-7", "text": "Parameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg.
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM.html"} {"id": "be4c1a39332c-0", "text": "langchain.llms.azureml_endpoint.DollyContentFormatter\u00b6\nclass langchain.llms.azureml_endpoint.DollyContentFormatter[source]\u00b6\nBases: ContentFormatterBase\nContent handler for the Dolly-v2-12b model.\nMethods\n__init__()\nformat_request_payload(prompt,\u00a0model_kwargs)\nFormats the request body according to the input schema of the model.\nformat_response_payload(output)\nFormats the response body according to the output schema of the model.\nAttributes\naccepts\nThe MIME type of the response data returned from the endpoint\ncontent_type\nThe MIME type of the input data passed to the endpoint\nformat_request_payload(prompt: str, model_kwargs: Dict) \u2192 bytes[source]\u00b6\nFormats the request body according to the input schema of\nthe model. Returns bytes or a seekable file-like object in the\nformat specified in the content_type request header.\nformat_response_payload(output: bytes) \u2192 str[source]\u00b6\nFormats the response body according to the output\nschema of the model. Returns the data type that is\nreceived from the response.\naccepts: Optional[str] = 'application/json'\u00b6\nThe MIME type of the response data returned from the endpoint\ncontent_type: Optional[str] = 'application/json'\u00b6\nThe MIME type of the input data passed to the endpoint", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.DollyContentFormatter.html"}
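For illustration, a hypothetical custom formatter in the same style (the request and response JSON schemas below are invented and must be adapted to whatever schema your deployed endpoint actually uses):
.. code-block:: python
    import json
    from langchain.llms.azureml_endpoint import ContentFormatterBase

    class MyContentFormatter(ContentFormatterBase):
        content_type = "application/json"
        accepts = "application/json"

        def format_request_payload(self, prompt: str, model_kwargs: dict) -> bytes:
            # Assumed input schema; adjust to your endpoint.
            body = {"inputs": [prompt], "parameters": model_kwargs}
            return json.dumps(body).encode("utf-8")

        def format_response_payload(self, output: bytes) -> str:
            # Assumed output schema; adjust to your endpoint.
            return json.loads(output)[0]["generated_text"]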
{"id": "b7b568713fa0-0", "text": "langchain.llms.anthropic.Anthropic\u00b6\nclass langchain.llms.anthropic.Anthropic(*, client: Any = None, async_client: Any = None, model: str = 'claude-v1', max_tokens_to_sample: int = 256, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, streaming: bool = False, default_request_timeout: Optional[float] = None, anthropic_api_url: Optional[str] = None, anthropic_api_key: Optional[str] = None, HUMAN_PROMPT: Optional[str] = None, AI_PROMPT: Optional[str] = None, count_tokens: Optional[Callable[[str], int]] = None, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None)[source]\u00b6\nBases: LLM, _AnthropicCommon\nWrapper around Anthropic\u2019s large language models.\nTo use, you should have the anthropic python package installed, and the\nenvironment variable ANTHROPIC_API_KEY set with your API key, or pass\nit as a named parameter to the constructor.\nExample\nimport anthropic\nfrom langchain.llms import Anthropic\nmodel = Anthropic(model=\"<model_name>\", anthropic_api_key=\"my-api-key\")\n# Simplest invocation, automatically wrapped with HUMAN_PROMPT\n# and AI_PROMPT.\nresponse = model(\"What are the biggest risks facing humanity?\")\n# Or if you want to use the chat mode, build a few-shot-prompt, or\n# put words in the Assistant's mouth, use HUMAN_PROMPT and AI_PROMPT:", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.anthropic.Anthropic.html"} {"id": "b7b568713fa0-1", "text": "# put words in the Assistant's mouth, use HUMAN_PROMPT and AI_PROMPT:\nraw_prompt = \"What are the biggest risks facing humanity?\"\nprompt = f\"{anthropic.HUMAN_PROMPT} {raw_prompt}{anthropic.AI_PROMPT}\"\nresponse = model(prompt)\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam AI_PROMPT: Optional[str] = None\u00b6\nparam HUMAN_PROMPT: Optional[str] = None\u00b6\nparam anthropic_api_key: Optional[str] = None\u00b6\nparam anthropic_api_url: Optional[str] = None\u00b6\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam count_tokens: Optional[Callable[[str], int]] = None\u00b6\nparam default_request_timeout: Optional[float] = None\u00b6\nTimeout for requests to the Anthropic Completion API. Default is 600 seconds.\nparam max_tokens_to_sample: int = 256\u00b6\nDenotes the number of tokens to predict per generation.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam model: str = 'claude-v1'\u00b6\nModel name to use.\nparam streaming: bool = False\u00b6\nWhether to stream the results.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam temperature: Optional[float] = None\u00b6\nA non-negative float that tunes the degree of randomness in generation.\nparam top_k: Optional[int] = None\u00b6\nNumber of most likely tokens to consider at each step.\nparam top_p: Optional[float] = None\u00b6\nTotal probability mass of tokens to consider at each step.\nparam verbose: bool [Optional]\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.anthropic.Anthropic.html"} {"id": "b7b568713fa0-2", "text": "param verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating.
Model output is cut off at the\nfirst occurrence of any of these substrings.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.anthropic.Anthropic.html"} {"id": "b7b568713fa0-3", "text": "first occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the top candidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the top candidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.anthropic.Anthropic.html"} {"id": "b7b568713fa0-4", "text": "dict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating.
Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.anthropic.Anthropic.html"} {"id": "b7b568713fa0-5", "text": "get_num_tokens(text: str) \u2192 int[source]\u00b6\nCalculate number of tokens.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in the order they occur in the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text, use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.anthropic.Anthropic.html"} {"id": "b7b568713fa0-6", "text": "first occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nvalidator raise_warning\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nRaise warning that this class is deprecated.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n..
code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nstream(prompt: str, stop: Optional[List[str]] = None) \u2192 Generator[source]\u00b6\nCall Anthropic completion_stream and return the resulting generator.\nBETA: this is a beta feature while we figure out the right abstraction.\nOnce that happens, this interface could change.\nParameters\nprompt \u2013 The prompt to pass into the model.\nstop \u2013 Optional list of stop words to use when generating.\nReturns\nA generator representing the stream of tokens from Anthropic.\nExample\nprompt = \"Write a poem about a stream.\"\nprompt = f\"\\n\\nHuman: {prompt}\\n\\nAssistant:\"\ngenerator = anthropic.stream(prompt)\nfor token in generator:\n    yield token\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate that the API key and Python package exist in the environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.anthropic.Anthropic.html"} {"id": "b7b568713fa0-7", "text": "property lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg.
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.anthropic.Anthropic.html"} {"id": "21f14b443bb4-0", "text": "langchain.llms.openlm.OpenLM\u00b6\nclass langchain.llms.openlm.OpenLM(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, model: str = 'text-davinci-003', temperature: float = 0.7, max_tokens: int = 256, top_p: float = 1, frequency_penalty: float = 0, presence_penalty: float = 0, n: int = 1, best_of: int = 1, model_kwargs: Dict[str, Any] = None, openai_api_key: Optional[str] = None, openai_api_base: Optional[str] = None, openai_organization: Optional[str] = None, openai_proxy: Optional[str] = None, batch_size: int = 20, request_timeout: Optional[Union[float, Tuple[float, float]]] = None, logit_bias: Optional[Dict[str, float]] = None, max_retries: int = 6, streaming: bool = False, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', tiktoken_model_name: Optional[str] = None)[source]\u00b6\nBases: BaseOpenAI\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam allowed_special: Union[Literal['all'], AbstractSet[str]] = {}\u00b6\nSet of special tokens that are allowed.\nparam batch_size: int = 20\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html"} {"id": "21f14b443bb4-1", "text": "Set of special tokens that are allowed.\nparam batch_size: int = 20\u00b6\nBatch size to use when passing multiple documents to generate.\nparam best_of: int = 1\u00b6\nGenerates best_of completions server-side and returns the \u201cbest\u201d.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam client: Any = None\u00b6\nparam disallowed_special: Union[Literal['all'], Collection[str]] = 'all'\u00b6\nSet of special tokens that are not allowed.\nparam frequency_penalty: float = 0\u00b6\nPenalizes repeated tokens according to frequency.\nparam logit_bias: Optional[Dict[str, float]] [Optional]\u00b6\nAdjust the probability of specific tokens being generated.\nparam max_retries: int = 6\u00b6\nMaximum number of retries to make when generating.\nparam max_tokens: int = 256\u00b6\nThe maximum number of tokens to generate in the completion.\n-1 returns as many tokens as possible given the prompt and\nthe model's maximal context size.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam model_kwargs: Dict[str, Any] [Optional]\u00b6\nHolds any model parameters valid for the create call that are not explicitly specified.\nparam model_name: str = 'text-davinci-003' (alias 'model')\u00b6\nModel name to use.\nparam n: int = 1\u00b6\nHow many completions to generate for each prompt.\nparam openai_api_base: Optional[str] = None\u00b6\nparam openai_api_key: Optional[str] = None\u00b6\nparam openai_organization:
Optional[str] = None\u00b6\nparam openai_proxy: Optional[str] = None\u00b6\nparam presence_penalty: float = 0\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html"} {"id": "21f14b443bb4-2", "text": "param presence_penalty: float = 0\u00b6\nPenalizes repeated tokens.\nparam request_timeout: Optional[Union[float, Tuple[float, float]]] = None\u00b6\nTimeout for requests to the OpenAI completion API. Default is 600 seconds.\nparam streaming: bool = False\u00b6\nWhether to stream the results or not.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam temperature: float = 0.7\u00b6\nWhat sampling temperature to use.\nparam tiktoken_model_name: Optional[str] = None\u00b6\nThe model name to pass to tiktoken when using this class.\nTiktoken is used to count the number of tokens in documents to constrain\nthem to be under a certain limit. By default, when set to None, this will\nbe the same as the model name. However, there are some cases\nwhere you may want to use this class with a model name not\nsupported by tiktoken. This can include when using Azure embeddings or\nwhen using one of the many model providers that expose an OpenAI-like\nAPI but with different models. In those cases, in order to avoid erroring\nwhen tiktoken is called, you can specify a model name to use here.\nparam top_p: float = 1\u00b6\nTotal probability mass of tokens to consider at each step.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html"} {"id": "21f14b443bb4-3", "text": "Check Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments.
These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html"} {"id": "21f14b443bb4-4", "text": "async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the top candidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the top candidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator build_extra\u00a0 \u00bb\u00a0 all fields\u00b6\nBuild extra kwargs from additional params that were passed in.\ncreate_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) \u2192 LLMResult\u00b6\nCreate the LLMResult from the choices and prompts.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html"} {"id": "21f14b443bb4-5", "text": "dict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through.
Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html"} {"id": "21f14b443bb4-6", "text": "get_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) \u2192 List[List[str]]\u00b6\nGet the sub prompts for llm call.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nGet the token IDs using the tiktoken package.\nmax_tokens_for_prompt(prompt: str) \u2192 int\u00b6\nCalculate the maximum number of tokens possible to generate for a prompt.\nParameters\nprompt \u2013 The prompt to pass into the model.\nReturns\nThe maximum number of tokens to generate for a prompt.\nExample\nmax_tokens = openai.max_tokens_for_prompt(\"Tell me a joke.\")\nstatic modelname_to_contextsize(modelname: str) \u2192 int\u00b6\nCalculate the maximum number of tokens possible to generate for a model.\nParameters\nmodelname \u2013 The modelname we want to know the context size for.\nReturns\nThe maximum context size\nExample\nmax_tokens = openai.modelname_to_contextsize(\"text-davinci-003\")
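A short sketch tying these token utilities together (openai here stands for any BaseOpenAI-style instance such as OpenLM; the prompt is illustrative):
.. code-block:: python
    prompt = "Tell me a joke."
    n_prompt_tokens = openai.get_num_tokens(prompt)
    context_size = openai.modelname_to_contextsize("text-davinci-003")
    budget = openai.max_tokens_for_prompt(prompt)  # roughly context_size - n_prompt_tokens
    print(n_prompt_tokens, context_size, budget)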
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html"} {"id": "21f14b443bb4-7", "text": "Pass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text, use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nprep_streaming_params(stop: Optional[List[str]] = None) \u2192 Dict[str, Any]\u00b6\nPrepare the params for streaming.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html"} {"id": "21f14b443bb4-8", "text": "This allows users to pass in None as verbose to access the global setting.\nstream(prompt: str, stop: Optional[List[str]] = None) \u2192 Generator\u00b6\nCall OpenAI with streaming flag and return the resulting generator.\nBETA: this is a beta feature while we figure out the right abstraction.\nOnce that happens, this interface could change.\nParameters\nprompt \u2013 The prompt to pass into the model.\nstop \u2013 Optional list of stop words to use when generating.\nReturns\nA generator representing the stream of tokens from OpenAI.\nExample\ngenerator = openai.stream(\"Tell me a joke.\")\nfor token in generator:\n    yield token\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the API key and Python package exist in the environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg.
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty max_context_size: int\u00b6\nGet max context size for this model.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\nallow_population_by_field_name = True\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html"} {"id": "3db25ebb152d-0", "text": "langchain.llms.textgen.TextGen\u00b6\nclass langchain.llms.textgen.TextGen(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, model_url: str, preset: Optional[str] = None, max_new_tokens: Optional[int] = 250, do_sample: bool = True, temperature: Optional[float] = 1.3, top_p: Optional[float] = 0.1, typical_p: Optional[float] = 1, epsilon_cutoff: Optional[float] = 0, eta_cutoff: Optional[float] = 0, repetition_penalty: Optional[float] = 1.18, top_k: Optional[float] = 40, min_length: Optional[int] = 0, no_repeat_ngram_size: Optional[int] = 0, num_beams: Optional[int] = 1, penalty_alpha: Optional[float] = 0, length_penalty: Optional[float] = 1, early_stopping: bool = False, seed: int = -1, add_bos_token: bool = True, truncation_length: Optional[int] = 2048, ban_eos_token: bool = False, skip_special_tokens: bool = True, stopping_strings: Optional[List[str]] = [], streaming: bool = False)[source]\u00b6\nBases: LLM\nWrapper around the text-generation-webui model.\nTo use, you should have text-generation-webui installed, a model loaded,\nand --api added as a command-line option.\nSuggested installation: use the one-click installer for your OS:\nhttps://github.com/oobabooga/text-generation-webui#one-click-installers", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html"} {"id": "3db25ebb152d-1", "text": "https://github.com/oobabooga/text-generation-webui#one-click-installers\nParameters below are taken from the text-generation-webui API example:\nhttps://github.com/oobabooga/text-generation-webui/blob/main/api-examples/api-example.py\nExample\nfrom langchain.llms import TextGen\nllm = TextGen(model_url=\"http://localhost:8500\")\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam add_bos_token: bool = True\u00b6\nAdd the bos_token to the beginning of prompts.\nDisabling this can make the replies more creative.\nparam ban_eos_token: bool = False\u00b6\nBan the eos_token.
param cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam do_sample: bool = True\u00b6\nWhether to use sampling.\nparam early_stopping: bool = False\u00b6\nWhether to use early stopping.\nparam epsilon_cutoff: Optional[float] = 0\u00b6\nEpsilon cutoff\nparam eta_cutoff: Optional[float] = 0\u00b6\nEta cutoff\nparam length_penalty: Optional[float] = 1\u00b6\nLength penalty\nparam max_new_tokens: Optional[int] = 250\u00b6\nThe maximum number of tokens to generate.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam min_length: Optional[int] = 0\u00b6\nMinimum generation length in tokens.\nparam model_url: str [Required]\u00b6\nThe full URL to the textgen webui, including http[s]://host:port", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html"} {"id": "3db25ebb152d-2", "text": "param no_repeat_ngram_size: Optional[int] = 0\u00b6\nIf not set to 0, specifies the length of token sets that are completely blocked\nfrom repeating at all. Higher values = blocks larger phrases;\nlower values = blocks words or letters from repeating.\nOnly 0 or high values are a good idea in most cases.\nparam num_beams: Optional[int] = 1\u00b6\nNumber of beams\nparam penalty_alpha: Optional[float] = 0\u00b6\nPenalty alpha\nparam preset: Optional[str] = None\u00b6\nThe preset to use in the textgen webui\nparam repetition_penalty: Optional[float] = 1.18\u00b6\nExponential penalty factor for repeating prior tokens. 1 means no penalty;\nhigher value = less repetition, lower value = more repetition.\nparam seed: int = -1\u00b6\nSeed (-1 for random)\nparam skip_special_tokens: bool = True\u00b6\nSkip special tokens. Some specific models need this unset.\nparam stopping_strings: Optional[List[str]] = []\u00b6\nA list of strings to stop generation when encountered.\nparam streaming: bool = False\u00b6\nWhether to stream the results, token by token (currently unimplemented).\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam temperature: Optional[float] = 1.3\u00b6\nPrimary factor controlling the randomness of outputs. 0 = deterministic\n(only the most likely token is used). Higher value = more randomness.\nparam top_k: Optional[float] = 40\u00b6\nSimilar to top_p, but select instead only the top_k most likely tokens.\nHigher value = higher range of possible random results.\nparam top_p: Optional[float] = 0.1\u00b6\nIf not set to 1, select tokens with probabilities adding up to less than this\nnumber. Higher value = higher range of possible random results.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html"} {"id": "3db25ebb152d-3", "text": "param truncation_length: Optional[int] = 2048\u00b6\nTruncate the prompt up to this length. The leftmost tokens are removed if\nthe prompt exceeds this length. Most models require this to be at most 2048.\nparam typical_p: Optional[float] = 1\u00b6\nIf not set to 1, select only tokens that are at least this much more likely to\nappear than random tokens, given the prior text.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.
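A minimal usage sketch combining the constructor fields documented above; the server URL/port and the sampling values are illustrative assumptions, not defaults.
.. code-block:: python
from langchain.llms import TextGen

# Assumes a local text-generation-webui server started with the --api flag;
# the host and port below are illustrative.
llm = TextGen(
    model_url="http://localhost:5000",
    max_new_tokens=64,
    temperature=0.7,
    top_p=0.9,
    stopping_strings=["\n\n"],
)
# Calling the instance checks the cache and runs the model on the prompt.
print(llm("Write a one-line haiku about APIs."))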
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck the cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html"} {"id": "3db25ebb152d-4", "text": "are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the top candidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the top candidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html"} {"id": "3db25ebb152d-5", "text": "stop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html"} {"id": "3db25ebb152d-6", "text": "callbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in the order they occur in the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html"} {"id": "3db25ebb152d-7", "text": "Top model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text, use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise a deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to the file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path="path/llm.yaml")
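A sketch of the save round trip. The load_llm helper in langchain.llms.loading is an assumption not documented on this page, and whether a given wrapper type (such as TextGen) is registered with the loader is version-specific.
.. code-block:: python
from langchain.llms import TextGen
from langchain.llms.loading import load_llm  # assumed helper; not documented here

llm = TextGen(model_url="http://localhost:5000")  # illustrative URL
llm.save(file_path="llm.yaml")  # serialize the constructor configuration to YAML
llm2 = load_llm("llm.yaml")     # reload an equivalent LLM, if the type is registered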
validator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\ne.g. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html"} {"id": "3db25ebb152d-8", "text": "e.g. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html"} {"id": "8226337cd56b-0", "text": "langchain.llms.gooseai.GooseAI\u00b6\nclass langchain.llms.gooseai.GooseAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, model_name: str = 'gpt-neo-20b', temperature: float = 0.7, max_tokens: int = 256, top_p: float = 1, min_tokens: int = 1, frequency_penalty: float = 0, presence_penalty: float = 0, n: int = 1, model_kwargs: Dict[str, Any] = None, logit_bias: Optional[Dict[str, float]] = None, gooseai_api_key: Optional[str] = None)[source]\u00b6\nBases: LLM\nWrapper around GooseAI large language models.\nTo use, you should have the openai python package installed, and the\nenvironment variable GOOSEAI_API_KEY set with your API key.\nAny parameters that are valid to be passed to the openai.create call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain.llms import GooseAI\ngooseai = GooseAI(model_name="gpt-neo-20b")\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html"} {"id": "8226337cd56b-1", "text": "param client: Any = None\u00b6\nparam frequency_penalty: float = 0\u00b6\nPenalizes repeated tokens according to frequency.\nparam gooseai_api_key: Optional[str] = None\u00b6\nparam logit_bias: Optional[Dict[str, float]] [Optional]\u00b6\nAdjust the probability of specific tokens being generated.\nparam max_tokens: int = 256\u00b6\nThe maximum number of tokens to generate in the completion.\n-1 returns as many tokens as possible given the prompt and\nthe model\u2019s maximal context size.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam min_tokens: int = 1\u00b6\nThe minimum number of tokens to generate in the completion.\nparam model_kwargs: Dict[str, Any] [Optional]\u00b6\nHolds any model parameters valid for the create call not explicitly specified.\nparam model_name: str = 'gpt-neo-20b'\u00b6\nModel name to use.\nparam n: int = 1\u00b6\nHow many completions to generate for each prompt.\nparam presence_penalty: float = 0\u00b6\nPenalizes repeated tokens.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam temperature: float = 0.7\u00b6\nWhat sampling temperature to use.\nparam top_p: float = 1\u00b6\nTotal probability mass of tokens to consider at each step.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.
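A minimal GooseAI sketch using the environment variable and parameters documented above; the key value is a placeholder.
.. code-block:: python
import os
from langchain.llms import GooseAI

os.environ["GOOSEAI_API_KEY"] = "<your-api-key>"  # placeholder, not a real key
llm = GooseAI(model_name="gpt-neo-20b", temperature=0.7, max_tokens=64)
print(llm("Explain sampling temperature in one sentence."))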
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck the cache and run the LLM on the given prompt and input.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html"} {"id": "8226337cd56b-2", "text": "async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html"} {"id": "8226337cd56b-3", "text": "async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the top candidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the top candidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator build_extra\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nBuild extra kwargs from additional params that were passed in.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html"} {"id": "8226337cd56b-4", "text": "generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html"} {"id": "8226337cd56b-5", "text": "get_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in the order they occur in the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text, use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.
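A short sketch contrasting predict and predict_messages, reusing the GooseAI instance from the sketch above; HumanMessage comes from langchain.schema.
.. code-block:: python
from langchain.schema import HumanMessage

# predict takes raw text and returns a string.
text = llm.predict("Suggest a name for a coffee shop.", stop=["\n"])
# predict_messages takes chat messages and returns a message object.
msg = llm.predict_messages([HumanMessage(content="Suggest a name for a coffee shop.")])
print(text)
print(msg.content)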
validator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise a deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to the file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path="path/llm.yaml")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the api key and python package exist in the environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\ne.g. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\ne.g. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html"} {"id": "7b8499e0222d-0", "text": "langchain.llms.bananadev.Banana\u00b6\nclass langchain.llms.bananadev.Banana(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, model_key: str = '', model_kwargs: Dict[str, Any] = None, banana_api_key: Optional[str] = None)[source]\u00b6\nBases: LLM\nWrapper around Banana large language models.\nTo use, you should have the banana-dev python package installed,\nand the environment variable BANANA_API_KEY set with your API key.\nAny parameters that are valid to be passed to the call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain.llms import Banana\nbanana = Banana(model_key="")\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam banana_api_key: Optional[str] = None\u00b6\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam model_key: str = ''\u00b6\nThe model endpoint to use.\nparam model_kwargs: Dict[str, Any] [Optional]\u00b6\nHolds any model parameters valid for the create call not\nexplicitly specified.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.bananadev.Banana.html"}
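A minimal Banana sketch using the fields documented above; the model key is a placeholder, and the accepted model_kwargs depend on the deployed model.
.. code-block:: python
import os
from langchain.llms import Banana

os.environ["BANANA_API_KEY"] = "<your-api-key>"  # placeholder
llm = Banana(model_key="<your-model-key>", model_kwargs={"temperature": 0.7})
print(llm("Hello, Banana!"))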
{"id": "7b8499e0222d-1", "text": "__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck the cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.bananadev.Banana.html"} {"id": "7b8499e0222d-2", "text": "callbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the top candidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the top candidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator build_extra\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nBuild extra kwargs from additional params that were passed in.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.bananadev.Banana.html"} {"id": "7b8499e0222d-3", "text": "generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.bananadev.Banana.html"} {"id": "7b8499e0222d-4", "text": "get_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.
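A sketch of the token-counting helpers, reusing the Banana instance from the sketch above; in the base LLM implementation the count is simply the length of the id list.
.. code-block:: python
text = "How many tokens is this sentence?"
ids = llm.get_token_ids(text)  # ordered token ids
n = llm.get_num_tokens(text)   # token count, useful for context-window checks
assert n == len(ids)           # base implementation counts the ids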
get_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in the order they occur in the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text, use predict.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.bananadev.Banana.html"} {"id": "7b8499e0222d-5", "text": "Parameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise a deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to the file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path="path/llm.yaml")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the api key and python package exist in the environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\ne.g. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\ne.g. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.bananadev.Banana.html"} {"id": "1302d64763c7-0", "text": "langchain.llms.sagemaker_endpoint.LLMContentHandler\u00b6\nclass langchain.llms.sagemaker_endpoint.LLMContentHandler[source]\u00b6\nBases: ContentHandlerBase[str, str]\nContent handler for the LLM class.\nMethods\n__init__()\ntransform_input(prompt,\u00a0model_kwargs)\nTransforms the input to a format that the model can accept as the request Body.\ntransform_output(output)\nTransforms the output from the model to a string that the LLM class expects.\nAttributes\naccepts\nThe MIME type of the response data returned from the endpoint\ncontent_type\nThe MIME type of the input data passed to the endpoint\nabstract transform_input(prompt: INPUT_TYPE, model_kwargs: Dict) \u2192 bytes\u00b6\nTransforms the input to a format that the model can accept\nas the request Body. Should return bytes or a seekable file-like\nobject in the format specified in the content_type\nrequest header.\nabstract transform_output(output: bytes) \u2192 OUTPUT_TYPE\u00b6\nTransforms the output from the model to a string that\nthe LLM class expects.\naccepts: Optional[str] = 'text/plain'\u00b6\nThe MIME type of the response data returned from the endpoint\ncontent_type: Optional[str] = 'text/plain'\u00b6\nThe MIME type of the input data passed to the endpoint", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.LLMContentHandler.html"}
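A sketch of a concrete LLMContentHandler for a JSON endpoint. The request and response shapes ("inputs", "parameters", "generated_text") are assumptions about one particular deployment, not part of the interface.
.. code-block:: python
import json
from typing import Dict

from langchain.llms.sagemaker_endpoint import LLMContentHandler

class JsonContentHandler(LLMContentHandler):
    content_type = "application/json"  # MIME type of the request body
    accepts = "application/json"       # MIME type expected back from the endpoint

    def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
        # Encode the prompt and generation kwargs as the endpoint's request body.
        return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        # Decode the endpoint's response back into the string the LLM expects.
        return json.loads(output.decode("utf-8"))[0]["generated_text"]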
{"id": "bddc36e296d1-0", "text": "langchain.llms.promptlayer_openai.PromptLayerOpenAI\u00b6\nclass langchain.llms.promptlayer_openai.PromptLayerOpenAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, model: str = 'text-davinci-003', temperature: float = 0.7, max_tokens: int = 256, top_p: float = 1, frequency_penalty: float = 0, presence_penalty: float = 0, n: int = 1, best_of: int = 1, model_kwargs: Dict[str, Any] = None, openai_api_key: Optional[str] = None, openai_api_base: Optional[str] = None, openai_organization: Optional[str] = None, openai_proxy: Optional[str] = None, batch_size: int = 20, request_timeout: Optional[Union[float, Tuple[float, float]]] = None, logit_bias: Optional[Dict[str, float]] = None, max_retries: int = 6, streaming: bool = False, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', tiktoken_model_name: Optional[str] = None, pl_tags: Optional[List[str]] = None, return_pl_id: Optional[bool] = False)[source]\u00b6\nBases: OpenAI\nWrapper around OpenAI large language models.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html"} {"id": "bddc36e296d1-1", "text": "To use, you should have the openai and promptlayer python\npackages installed, and the environment variables OPENAI_API_KEY\nand PROMPTLAYER_API_KEY set with your OpenAI API key and\nPromptLayer key, respectively.\nAll parameters that can be passed to the OpenAI LLM can also\nbe passed here. The PromptLayerOpenAI LLM adds two optional parameters:\nParameters\npl_tags \u2013 List of strings to tag the request with.\nreturn_pl_id \u2013 If True, the PromptLayer request ID will be\nreturned in the generation_info field of the\nGeneration object.\nExample\nfrom langchain.llms import PromptLayerOpenAI\nopenai = PromptLayerOpenAI(model_name="text-davinci-003")\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam allowed_special: Union[Literal['all'], AbstractSet[str]] = {}\u00b6\nSet of special tokens that are allowed.\nparam batch_size: int = 20\u00b6\nBatch size to use when passing multiple documents to generate.\nparam best_of: int = 1\u00b6\nGenerates best_of completions server-side and returns the \u201cbest\u201d.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam client: Any = None\u00b6\nparam disallowed_special: Union[Literal['all'], Collection[str]] = 'all'\u00b6\nSet of special tokens that are not allowed.\nparam frequency_penalty: float = 0\u00b6\nPenalizes repeated tokens according to frequency.\nparam logit_bias: Optional[Dict[str, float]] [Optional]\u00b6\nAdjust the probability of specific tokens being generated.\nparam max_retries: int = 6\u00b6\nMaximum number of retries to make when generating.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html"} {"id": "bddc36e296d1-2", "text": "param max_tokens: int = 256\u00b6\nThe maximum number of tokens to generate in the completion.\n-1 returns as many tokens as possible given the prompt and\nthe model\u2019s maximal context size.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam model_kwargs: Dict[str, Any] [Optional]\u00b6\nHolds any model parameters valid for the create call not explicitly specified.\nparam model_name: str = 'text-davinci-003' (alias 'model')\u00b6\nModel name to use.\nparam n: int = 1\u00b6\nHow many completions to generate for each prompt.\nparam openai_api_base: Optional[str] = None\u00b6\nparam openai_api_key: Optional[str] = None\u00b6\nparam openai_organization: Optional[str] = None\u00b6\nparam openai_proxy: Optional[str] = None\u00b6\nparam pl_tags: Optional[List[str]] = None\u00b6\nparam presence_penalty: float = 0\u00b6\nPenalizes repeated tokens.\nparam request_timeout: Optional[Union[float, Tuple[float, float]]] = None\u00b6\nTimeout for requests to the OpenAI completion API. Default is 600 seconds.\nparam return_pl_id: Optional[bool] = False\u00b6\nparam streaming: bool = False\u00b6\nWhether to stream the results or not.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam temperature: float = 0.7\u00b6\nWhat sampling temperature to use.\nparam tiktoken_model_name: Optional[str] = None\u00b6\nThe model name to pass to tiktoken when using this class.\nTiktoken is used to count the number of tokens in documents to constrain\nthem to be under a certain limit.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html"} {"id": "bddc36e296d1-3", "text": "By default, when set to None, this will\nbe the same as the model name. However, there are some cases\nwhere you may want to use this class with a model name not\nsupported by tiktoken. This can include when using Azure embeddings or\nwhen using one of the many model providers that expose an OpenAI-like\nAPI but with different models. In those cases, in order to avoid erroring\nwhen tiktoken is called, you can specify a model name to use here.\nparam top_p: float = 1\u00b6\nTotal probability mass of tokens to consider at each step.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.
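A sketch of the two PromptLayer-specific parameters described above. With return_pl_id=True the request id is attached to each Generation's generation_info; the exact key name ("pl_request_id") is an assumption about the integration.
.. code-block:: python
from langchain.llms import PromptLayerOpenAI

llm = PromptLayerOpenAI(pl_tags=["langchain", "demo"], return_pl_id=True)
result = llm.generate(["Tell me a joke."])
generation = result.generations[0][0]
# The PromptLayer request id, if returned, lives in generation_info.
request_id = (generation.generation_info or {}).get("pl_request_id")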
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck the cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html"} {"id": "bddc36e296d1-4", "text": "Use this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the top candidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html"} {"id": "bddc36e296d1-5", "text": "Asynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the top candidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator build_extra\u00a0 \u00bb\u00a0 all fields\u00b6\nBuild extra kwargs from additional params that were passed in.\ncreate_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) \u2192 LLMResult\u00b6\nCreate the LLMResult from the choices and prompts.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html"} {"id": "bddc36e296d1-6", "text": "are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) \u2192 List[List[str]]\u00b6\nGet the sub prompts for the llm call.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nGet the token IDs using the tiktoken package.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html"} {"id": "bddc36e296d1-7", "text": "max_tokens_for_prompt(prompt: str) \u2192 int\u00b6\nCalculate the maximum number of tokens possible to generate for a prompt.\nParameters\nprompt \u2013 The prompt to pass into the model.\nReturns\nThe maximum number of tokens to generate for a prompt.\nExample\nmax_tokens = openai.max_tokens_for_prompt("Tell me a joke.")\nstatic modelname_to_contextsize(modelname: str) \u2192 int\u00b6\nCalculate the maximum number of tokens possible to generate for a model.\nParameters\nmodelname \u2013 The modelname we want to know the context size for.\nReturns\nThe maximum context size.\nExample\nmax_tokens = openai.modelname_to_contextsize("text-davinci-003")
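A sketch of how the two helpers above relate, reusing the instance from the previous sketch: max_tokens_for_prompt is the model's context size minus the prompt's token count.
.. code-block:: python
prompt = "Tell me a joke."
context_size = llm.modelname_to_contextsize("text-davinci-003")  # 4097 for this model
room = llm.max_tokens_for_prompt(prompt)
# Expected relationship in the current implementation:
assert room == context_size - llm.get_num_tokens(prompt)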
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text, use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html"} {"id": "bddc36e296d1-8", "text": "**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nprep_streaming_params(stop: Optional[List[str]] = None) \u2192 Dict[str, Any]\u00b6\nPrepare the params for streaming.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise a deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to the file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path="path/llm.yaml")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nstream(prompt: str, stop: Optional[List[str]] = None) \u2192 Generator\u00b6\nCall OpenAI with the streaming flag and return the resulting generator.\nBETA: this is a beta feature while we figure out the right abstraction.\nOnce that happens, this interface could change.\nParameters\nprompt \u2013 The prompt to pass into the model.\nstop \u2013 Optional list of stop words to use when generating.\nReturns\nA generator representing the stream of tokens from OpenAI.\nExample\ngenerator = openai.stream("Tell me a joke.")\nfor token in generator:\n print(token)\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate that the api key and python package exist in the environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html"} {"id": "bddc36e296d1-9", "text": "property lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\ne.g. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\ne.g. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty max_context_size: int\u00b6\nGet the max context size for this model.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\nallow_population_by_field_name = True\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html"} {"id": "4328001d513a-0", "text": "langchain.llms.ctransformers.CTransformers\u00b6\nclass langchain.llms.ctransformers.CTransformers(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, model: str, model_type: Optional[str] = None, model_file: Optional[str] = None, config: Optional[Dict[str, Any]] = None, lib: Optional[str] = None)[source]\u00b6\nBases: LLM\nWrapper around the C Transformers LLM interface.\nTo use, you should have the ctransformers python package installed.\nSee https://github.com/marella/ctransformers\nExample\nfrom langchain.llms import CTransformers\nllm = CTransformers(model="/path/to/ggml-gpt-2.bin", model_type="gpt2")\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam config: Optional[Dict[str, Any]] = None\u00b6\nThe config parameters.\nSee https://github.com/marella/ctransformers#config\nparam lib: Optional[str] = None\u00b6\nThe path to a shared library, or one of avx2, avx, basic.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.ctransformers.CTransformers.html"} {"id": "4328001d513a-1", "text": "param model: str [Required]\u00b6\nThe path to a model file or directory, or the name of a Hugging Face Hub\nmodel repo.\nparam model_file: Optional[str] = None\u00b6\nThe name of the model file in the repo or directory.\nparam model_type: Optional[str] = None\u00b6\nThe model type.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.
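A minimal CTransformers sketch; the Hugging Face Hub repo below is illustrative, and the config keys follow the ctransformers documentation linked above.
.. code-block:: python
from langchain.llms import CTransformers

llm = CTransformers(
    model="marella/gpt-2-ggml",  # illustrative GGML model repo
    config={"max_new_tokens": 64, "temperature": 0.8},
)
print(llm("AI is going to"))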
should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.ctransformers.CTransformers.html"} {"id": "4328001d513a-2", "text": "need more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the top candidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the top candidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.ctransformers.CTransformers.html"} {"id": "4328001d513a-3", "text": "Parameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.
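Illustrative usage of the prediction helpers documented above, using the synchronous predict/predict_messages (their async twins apredict/apredict_messages share these signatures but need an integration with async support, which local wrappers may lack); the model path is a placeholder:
.. code-block:: python
    from langchain.llms import CTransformers
    from langchain.schema import HumanMessage

    llm = CTransformers(model="/path/to/ggml-model.bin", model_type="gpt2")  # placeholder path
    text = llm.predict("Say hello.", stop=["\n"])  # top completion as a plain string
    message = llm.predict_messages([HumanMessage(content="Say hello.")])  # same call, message in/out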
dict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.ctransformers.CTransformers.html"} {"id": "4328001d513a-4", "text": "callbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in the order they occur in the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed\nto the model provider API call.\nReturns", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.ctransformers.CTransformers.html"} {"id": "4328001d513a-5", "text": "to the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text, use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the ctransformers package is installed.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\ne.g. [\"langchain\", \"llms\", \"openai\"]", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.ctransformers.CTransformers.html"} {"id": "4328001d513a-6", "text": "e.g. [\"langchain\", \"llms\", \"openai\"]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\ne.g. {\"openai_api_key\": \"OPENAI_API_KEY\"}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.ctransformers.CTransformers.html"}
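To tie the CTransformers reference together, a construction sketch that passes the config dict described above (key names follow the ctransformers README linked from the config param; the Hub repo name is an assumption):
.. code-block:: python
    from langchain.llms import CTransformers

    config = {"max_new_tokens": 64, "temperature": 0.7}  # keys per the ctransformers README
    llm = CTransformers(
        model="marella/gpt-2-ggml",  # assumed Hugging Face Hub repo containing a GGML file
        model_type="gpt2",
        config=config,
    )
    print(llm("AI is going to"))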
{"id": "9c5d21e21c95-0", "text": "langchain.llms.forefrontai.ForefrontAI\u00b6\nclass langchain.llms.forefrontai.ForefrontAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, endpoint_url: str = '', temperature: float = 0.7, length: int = 256, top_p: float = 1.0, top_k: int = 40, repetition_penalty: int = 1, forefrontai_api_key: Optional[str] = None, base_url: Optional[str] = None)[source]\u00b6\nBases: LLM\nWrapper around ForefrontAI large language models.\nTo use, you should have the environment variable FOREFRONTAI_API_KEY\nset with your API key.\nExample\nfrom langchain.llms import ForefrontAI\nforefrontai = ForefrontAI(endpoint_url=\"\")\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam base_url: Optional[str] = None\u00b6\nBase URL to use; if None, it is chosen based on the model name.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam endpoint_url: str = ''\u00b6\nThe endpoint URL of the model to use.\nparam forefrontai_api_key: Optional[str] = None\u00b6\nparam length: int = 256\u00b6\nThe maximum number of tokens to generate in the completion.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam repetition_penalty: int = 1\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html"} {"id": "9c5d21e21c95-1", "text": "Metadata to add to the run trace.\nparam repetition_penalty: int = 1\u00b6\nPenalizes repeated tokens according to frequency.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam temperature: float = 0.7\u00b6\nWhat sampling temperature to use.\nparam top_k: int = 40\u00b6\nThe number of highest probability vocabulary tokens to\nkeep for top-k-filtering.\nparam top_p: float = 1.0\u00b6\nTotal probability mass of tokens to consider at each step.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of 
prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html"} {"id": "9c5d21e21c95-2", "text": "API.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the topcandidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html"} {"id": "9c5d21e21c95-3", "text": "Asynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the topcandidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html"} {"id": "9c5d21e21c95-4", "text": "text generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in order they occurin the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specifictypes of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html"} {"id": "9c5d21e21c95-5", "text": "stop \u2013 Stop words to use when generating. 
Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text, use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the API key exists in the environment.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html"} {"id": "9c5d21e21c95-6", "text": "serialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\ne.g. [\"langchain\", \"llms\", \"openai\"]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\ne.g. {\"openai_api_key\": \"OPENAI_API_KEY\"}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html"}
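A minimal construction sketch for the ForefrontAI wrapper documented above; the endpoint URL is a placeholder for your deployed model, and the sampling values simply echo the documented defaults:
.. code-block:: python
    import os
    from langchain.llms import ForefrontAI

    os.environ["FOREFRONTAI_API_KEY"] = "..."  # the wrapper reads this environment variable
    llm = ForefrontAI(
        endpoint_url="https://example.forefront.link/your-model",  # placeholder endpoint
        temperature=0.7,
        length=256,
    )
    print(llm("Tell me a joke."))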
{"id": "985f8d9d52e7-0", "text": "langchain.llms.vertexai.VertexAI\u00b6\nclass langchain.llms.vertexai.VertexAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: '_LanguageModel' = None, model_name: str = 'text-bison', temperature: float = 0.0, max_output_tokens: int = 128, top_p: float = 0.95, top_k: int = 40, stop: Optional[List[str]] = None, project: Optional[str] = None, location: str = 'us-central1', credentials: Any = None, request_parallelism: int = 5, max_retries: int = 6, tuned_model_name: Optional[str] = None)[source]\u00b6\nBases: _VertexAICommon, LLM\nWrapper around Google Vertex AI large language models.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam credentials: Any = None\u00b6\nThe default custom credentials (google.auth.credentials.Credentials) to use.\nparam location: str = 'us-central1'\u00b6\nThe default location to use when making API calls.\nparam max_output_tokens: int = 128\u00b6\nToken limit determines the maximum amount of text output from one prompt.\nparam max_retries: int = 6\u00b6\nThe maximum number of retries to make when generating.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html"} {"id": "985f8d9d52e7-1", "text": "param metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam model_name: str = 'text-bison'\u00b6\nThe name of the Vertex AI large language model.\nparam project: Optional[str] = None\u00b6\nThe default GCP project to use when making Vertex API calls.\nparam request_parallelism: int = 5\u00b6\nThe amount of parallelism allowed for requests issued to VertexAI models.\nparam stop: Optional[List[str]] = None\u00b6\nOptional list of stop words to use when generating.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam temperature: float = 0.0\u00b6\nSampling temperature; it controls the degree of randomness in token selection.\nparam top_k: int = 40\u00b6\nHow the model selects tokens for output: the next token is selected from\namong the top_k most probable tokens.\nparam top_p: float = 0.95\u00b6\nTokens are selected from most probable to least until the sum of their\nprobabilities equals the top_p value.\nparam tuned_model_name: Optional[str] = None\u00b6\nThe name of a tuned model. 
If provided, model_name is ignored.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html"} {"id": "985f8d9d52e7-2", "text": "Check Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html"} {"id": "985f8d9d52e7-3", "text": "async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the topcandidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the topcandidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. 
Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html"} {"id": "985f8d9d52e7-4", "text": "Run the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html"} {"id": "985f8d9d52e7-5", "text": "Get the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in order they occurin the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. 
If you want to pass in specific types of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text, use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html"} {"id": "985f8d9d52e7-6", "text": "to the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that the Python package exists in the environment.\nproperty is_codey_model: bool\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\ne.g. [\"langchain\", \"llms\", \"openai\"]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\ne.g. {\"openai_api_key\": \"OPENAI_API_KEY\"}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\ntask_executor: ClassVar[Optional[Executor]] = None\u00b6\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html"}
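A construction sketch for the VertexAI wrapper documented above, assuming the google-cloud-aiplatform package is installed and application-default credentials are configured; the project id is a placeholder:
.. code-block:: python
    from langchain.llms import VertexAI

    llm = VertexAI(
        model_name="text-bison",   # documented default model
        project="my-gcp-project",  # placeholder GCP project id
        location="us-central1",
        temperature=0.0,
        max_output_tokens=128,
    )
    print(llm.predict("What is Vertex AI?"))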
{"id": "1b6842d7f033-0", "text": "langchain.llms.self_hosted.SelfHostedPipeline\u00b6\nclass langchain.llms.self_hosted.SelfHostedPipeline(*, cache: ~typing.Optional[bool] = None, verbose: bool = None, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, pipeline_ref: ~typing.Any = None, client: ~typing.Any = None, inference_fn: ~typing.Callable = , hardware: ~typing.Any = None, model_load_fn: ~typing.Callable, load_fn_kwargs: ~typing.Optional[dict] = None, model_reqs: ~typing.List[str] = ['./', 'torch'])[source]\u00b6\nBases: LLM\nRun model inference on self-hosted remote hardware.\nSupported hardware includes auto-launched instances on AWS, GCP, Azure,\nand Lambda, as well as servers specified\nby IP address and SSH credentials (such as on-prem, or another\ncloud like Paperspace, Coreweave, etc.).\nTo use, you should have the runhouse python package installed.\nExample for custom pipeline and inference functions:\nfrom langchain.llms import SelfHostedPipeline\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\nimport runhouse as rh\ndef load_pipeline():\n    tokenizer = AutoTokenizer.from_pretrained(\"gpt2\")\n    model = AutoModelForCausalLM.from_pretrained(\"gpt2\")\n    return pipeline(\n        \"text-generation\", model=model, tokenizer=tokenizer,\n        max_new_tokens=10", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html"} {"id": "1b6842d7f033-1", "text": "\"text-generation\", model=model, tokenizer=tokenizer,\n        max_new_tokens=10\n    )\ndef inference_fn(pipeline, prompt, stop=None):\n    return pipeline(prompt)[0][\"generated_text\"]\ngpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\nllm = SelfHostedPipeline(\n    model_load_fn=load_pipeline,\n    hardware=gpu,\n    model_reqs=[\"./\", \"torch\", \"transformers\"], inference_fn=inference_fn\n)\nExample for <2GB model (can be serialized and sent directly to the server):\nfrom langchain.llms import SelfHostedPipeline\nimport runhouse as rh\ngpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\nmy_model = ...\nllm = SelfHostedPipeline.from_pipeline(\n    pipeline=my_model,\n    hardware=gpu,\n    model_reqs=[\"./\", \"torch\", \"transformers\"],\n)\nExample passing model path for larger models:\nfrom langchain.llms import SelfHostedPipeline\nimport runhouse as rh\nimport pickle\nfrom transformers import pipeline\ngenerator = pipeline(model=\"gpt2\")\nrh.blob(pickle.dumps(generator), path=\"models/pipeline.pkl\"\n    ).save().to(gpu, path=\"models\")\nllm = SelfHostedPipeline.from_pipeline(\n    pipeline=\"models/pipeline.pkl\",\n    hardware=gpu,\n    model_reqs=[\"./\", \"torch\", \"transformers\"],\n)\nInit the pipeline with an auxiliary function.\nThe load function must be in global scope to be imported\nand run on the server, i.e. 
in a module and not a REPL or closure.\nThen, initialize the remote inference function.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html"} {"id": "1b6842d7f033-2", "text": "param callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam hardware: Any = None\u00b6\nRemote hardware to send the inference function to.\nparam inference_fn: Callable = \u00b6\nInference function to send to the remote hardware.\nparam load_fn_kwargs: Optional[dict] = None\u00b6\nKey word arguments to pass to the model load function.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam model_load_fn: Callable [Required]\u00b6\nFunction to load the model remotely on the server.\nparam model_reqs: List[str] = ['./', 'torch']\u00b6\nRequirements to install on hardware to inference the model.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html"} {"id": "1b6842d7f033-3", "text": "Run the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the topcandidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html"} {"id": "1b6842d7f033-4", "text": "first occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the topcandidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\nclassmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) \u2192 LLM[source]\u00b6\nInit the SelfHostedPipeline from a pipeline object or string.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html"} {"id": "1b6842d7f033-5", "text": "Pass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. 
Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html"} {"id": "1b6842d7f033-6", "text": "Return the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in order they occurin the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specifictypes of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text,use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html"} {"id": "1b6842d7f033-7", "text": "Example:\n.. 
code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html"} {"id": "5291f9b991c5-0", "text": "langchain.agents.agent_toolkits.openapi.base.create_openapi_agent\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.base.create_openapi_agent.html"} {"id": "5291f9b991c5-1", "text": "langchain.agents.agent_toolkits.openapi.base.create_openapi_agent(llm: BaseLanguageModel, toolkit: OpenAPIToolkit, callback_manager: Optional[BaseCallbackManager] = None, prefix: str = \"You are an agent designed to answer questions by making web requests to an API given the openapi spec.\\n\\nIf the question does not seem related to the API, return I don't know. Do not make up an answer.\\nOnly use information provided by the tools to construct your response.\\n\\nFirst, find the base URL needed to make the request.\\n\\nSecond, find the relevant paths needed to answer the question. Take note that, sometimes, you might need to make more than one request to more than one path to answer the question.\\n\\nThird, find the required parameters needed to make the request. For GET requests, these are usually URL parameters and for POST requests, these are request body parameters.\\n\\nFourth, make the requests needed to answer the question. Ensure that you are sending the correct parameters to the request by checking which parameters are required. For parameters with a fixed set of values, please use the spec to look at which values are allowed.\\n\\nUse the exact parameter names as listed in the spec, do not make up any names or abbreviate the names of parameters.\\nIf you get a not found error, ensure that you are using a path that actually exists in the spec.\\n\", suffix: str = 'Begin!\\n\\nQuestion: {input}\\nThought: I should explore the spec to find the base url for the API.\\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.base.create_openapi_agent.html"} {"id": "5291f9b991c5-2", "text": "Input: the input to the action\\nObservation: the result of the action\\n... 
(this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, return_intermediate_steps: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 AgentExecutor[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.base.create_openapi_agent.html"} {"id": "5291f9b991c5-3", "text": "Construct an OpenAPI agent from an LLM and tools.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.base.create_openapi_agent.html"} {"id": "cd77df6367ae-0", "text": "langchain.agents.agent_toolkits.jira.toolkit.JiraToolkit\u00b6\nclass langchain.agents.agent_toolkits.jira.toolkit.JiraToolkit(*, tools: List[BaseTool] = [])[source]\u00b6\nBases: BaseToolkit\nJira Toolkit.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam tools: List[langchain.tools.base.BaseTool] = []\u00b6\nclassmethod from_jira_api_wrapper(jira_api_wrapper: JiraAPIWrapper) \u2192 JiraToolkit[source]\u00b6\nget_tools() \u2192 List[BaseTool][source]\u00b6\nGet the tools in the toolkit.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.jira.toolkit.JiraToolkit.html"} {"id": "d4ebac85dd84-0", "text": "langchain.agents.agent_toolkits.openapi.planner.create_openapi_agent\u00b6\nlangchain.agents.agent_toolkits.openapi.planner.create_openapi_agent(api_spec: ReducedOpenAPISpec, requests_wrapper: TextRequestsWrapper, llm: BaseLanguageModel, shared_memory: Optional[ReadOnlySharedMemory] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = True, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 AgentExecutor[source]\u00b6\nInstantiate API planner and controller for a given spec.\nInject credentials via requests_wrapper.\nWe use a top-level \u201corchestrator\u201d agent to invoke the planner and controller,\nrather than a top-level planner\nthat invokes a controller with its plan, as sketched below. 
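A usage sketch for this planner-based OpenAPI agent; the spec filename and the auth header are assumptions, and the reduce/wrapper imports follow the module layout referenced above:
.. code-block:: python
    import yaml
    from langchain.agents.agent_toolkits.openapi import planner
    from langchain.agents.agent_toolkits.openapi.spec import reduce_openapi_spec
    from langchain.llms import OpenAI
    from langchain.requests import RequestsWrapper

    with open("openapi.yaml") as f:  # hypothetical OpenAPI spec file
        raw_spec = yaml.safe_load(f)
    api_spec = reduce_openapi_spec(raw_spec)

    requests_wrapper = RequestsWrapper(headers={"Authorization": "Bearer ..."})  # inject credentials here
    agent = planner.create_openapi_agent(api_spec, requests_wrapper, OpenAI(temperature=0.0))
    agent.run("What endpoints does this API expose?")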
This is to keep the planner simple.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.planner.create_openapi_agent.html"} {"id": "873a2b2c35bc-0", "text": "langchain.agents.agent_toolkits.office365.toolkit.O365Toolkit\u00b6\nclass langchain.agents.agent_toolkits.office365.toolkit.O365Toolkit(*, account: Account = None)[source]\u00b6\nBases: BaseToolkit\nToolkit for interacting with Office365.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam account: Account [Optional]\u00b6\nget_tools() \u2192 List[BaseTool][source]\u00b6\nGet the tools in the toolkit.\nmodel Config[source]\u00b6\nBases: object\nPydantic config.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.office365.toolkit.O365Toolkit.html"} {"id": "273e651605e7-0", "text": "langchain.agents.agent_toolkits.sql.base.create_sql_agent\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.sql.base.create_sql_agent.html"} {"id": "273e651605e7-1", "text": "langchain.agents.agent_toolkits.sql.base.create_sql_agent(llm: BaseLanguageModel, toolkit: SQLDatabaseToolkit, agent_type: AgentType = AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager: Optional[BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with a SQL database.\\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\\nYou can order the results by a relevant column to return the most interesting examples in the database.\\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\\nYou have access to tools for interacting with the database.\\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\\n\\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\\n\\nIf the question does not seem related to the database, just return \"I don\\'t know\" as the answer.\\n', suffix: Optional[str] = None, format_instructions: str = 'Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... 
(this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.sql.base.create_sql_agent.html"} {"id": "273e651605e7-2", "text": "I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 AgentExecutor[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.sql.base.create_sql_agent.html"} {"id": "273e651605e7-3", "text": "Construct a sql agent from an LLM and tools.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.sql.base.create_sql_agent.html"} {"id": "c53fd14f0fb1-0", "text": "langchain.agents.structured_chat.output_parser.StructuredChatOutputParser\u00b6\nclass langchain.agents.structured_chat.output_parser.StructuredChatOutputParser[source]\u00b6\nBases: AgentOutputParser\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str[source]\u00b6\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 Union[AgentAction, AgentFinish][source]\u00b6\nParse text into agent action/finish.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, whichis assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.structured_chat.output_parser.StructuredChatOutputParser.html"} {"id": "c53fd14f0fb1-1", "text": "property lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{"id": "c53fd14f0fb1-1", "text": "property lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.structured_chat.output_parser.StructuredChatOutputParser.html"} {"id": "3d7a5bfe3ab8-0", "text": "langchain.agents.structured_chat.base.StructuredChatAgent\u00b6\nclass langchain.agents.structured_chat.base.StructuredChatAgent(*, llm_chain: LLMChain, output_parser: AgentOutputParser = None, allowed_tools: Optional[List[str]] = None)[source]\u00b6\nBases: Agent\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam allowed_tools: Optional[List[str]] = None\u00b6\nparam llm_chain: langchain.chains.llm.LLMChain [Required]\u00b6\nparam output_parser: langchain.agents.agent.AgentOutputParser [Optional]\u00b6\nasync aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[AgentAction, AgentFinish]\u00b6\nGiven input, decide what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.structured_chat.base.StructuredChatAgent.html"} {"id": "3d7a5bfe3ab8-1", "text": "**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nclassmethod create_prompt(tools: Sequence[BaseTool], prefix: str = 'Respond to the human as helpfully and accurately as possible. You have access to the following tools:', suffix: str = 'Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\\nThought:', human_message_template: str = '{input}\\n\\n{agent_scratchpad}', format_instructions: str = 'Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\\n\\nValid \"action\" values: \"Final Answer\" or {tool_names}\\n\\nProvide only ONE action per $JSON_BLOB, as shown:\\n\\n```\\n{{{{\\n\u00a0 \"action\": $TOOL_NAME,\\n\u00a0 \"action_input\": $INPUT\\n}}}}\\n```\\n\\nFollow this format:\\n\\nQuestion: input question to answer\\nThought: consider previous and subsequent steps\\nAction:\\n```\\n$JSON_BLOB\\n```\\nObservation: action result\\n... 
(repeat Thought/Action/Observation N times)\\nThought: I know what to respond\\nAction:\\n```\\n{{{{\\n\u00a0 \"action\": \"Final Answer\",\\n\u00a0 \"action_input\": \"Final response to human\"\\n}}}}\\n```', input_variables: Optional[List[str]] = None, memory_prompts: Optional[List[BasePromptTemplate]] = None) \u2192 BasePromptTemplate[source]\u00b6\nCreate a prompt for this class.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of agent.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.structured_chat.base.StructuredChatAgent.html"} {"id": "3d7a5bfe3ab8-2", "text": "dict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of agent.\nclassmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, prefix: str = 'Respond to the human as helpfully and accurately as possible. You have access to the following tools:', suffix: str = 'Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\\nThought:', human_message_template: str = '{input}\\n\\n{agent_scratchpad}', format_instructions: str = 'Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\\n\\nValid \"action\" values: \"Final Answer\" or {tool_names}\\n\\nProvide only ONE action per $JSON_BLOB, as shown:\\n\\n```\\n{{{{\\n\u00a0 \"action\": $TOOL_NAME,\\n\u00a0 \"action_input\": $INPUT\\n}}}}\\n```\\n\\nFollow this format:\\n\\nQuestion: input question to answer\\nThought: consider previous and subsequent steps\\nAction:\\n```\\n$JSON_BLOB\\n```\\nObservation: action result\\n... (repeat Thought/Action/Observation N times)\\nThought: I know what to respond\\nAction:\\n```\\n{{{{\\n\u00a0 \"action\": \"Final Answer\",\\n\u00a0 \"action_input\": \"Final response to human\"\\n}}}}\\n```', input_variables: Optional[List[str]] = None, memory_prompts: Optional[List[BasePromptTemplate]] = None, **kwargs: Any) \u2192 Agent[source]\u00b6\nConstruct an agent from an LLM and tools.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.structured_chat.base.StructuredChatAgent.html"} {"id": "3d7a5bfe3ab8-3", "text": "Construct an agent from an LLM and tools.\nget_allowed_tools() \u2192 Optional[List[str]]\u00b6\nget_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) \u2192 Dict[str, Any]\u00b6\nCreate the full inputs for the LLMChain from intermediate steps.\nplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[AgentAction, AgentFinish]\u00b6\nGiven input, decided what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nreturn_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) \u2192 AgentFinish\u00b6\nReturn response when agent has been stopped due to max iterations.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the agent.\nParameters\nfile_path \u2013 Path to file to save the agent to.\nExample:\n.. 
code-block:: python\n# If working with agent executor\nagent.agent.save(file_path=\"path/agent.yaml\")\ntool_run_logging_kwargs() \u2192 Dict\u00b6\nvalidator validate_prompt\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate that prompt matches format.\nproperty llm_prefix: str\u00b6\nPrefix to append the llm call with.\nproperty observation_prefix: str\u00b6\nPrefix to append the observation with.\nproperty return_values: List[str]\u00b6\nReturn values of the agent.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.structured_chat.base.StructuredChatAgent.html"} {"id": "b232411b3faf-0", "text": "langchain.agents.mrkl.output_parser.MRKLOutputParser\u00b6\nclass langchain.agents.mrkl.output_parser.MRKLOutputParser[source]\u00b6\nBases: AgentOutputParser\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str[source]\u00b6\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 Union[AgentAction, AgentFinish][source]\u00b6\nParse text into agent action/finish.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.mrkl.output_parser.MRKLOutputParser.html"}
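To make the parse() contract above concrete, a small sketch of how MRKLOutputParser behaves on the Thought/Action/Action Input layout its format instructions describe; the tool name and text are illustrative:
.. code-block:: python
from langchain.agents.mrkl.output_parser import MRKLOutputParser
from langchain.schema import AgentAction

parser = MRKLOutputParser()
step = parser.parse(
    "Thought: I should search for this.\n"
    "Action: Search\n"
    "Action Input: population of Boise"
)
assert isinstance(step, AgentAction)  # a Final Answer would yield AgentFinish instead
print(step.tool, step.tool_input)  # -> Search population of Boise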
{"id": "b232411b3faf-1", "text": "property lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.mrkl.output_parser.MRKLOutputParser.html"} {"id": "9b6d3a6f4223-0", "text": "langchain.agents.agent.AgentExecutor\u00b6\nclass langchain.agents.agent.AgentExecutor(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, agent: Union[BaseSingleActionAgent, BaseMultiActionAgent], tools: Sequence[BaseTool], return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False)[source]\u00b6\nBases: Chain\nConsists of an agent using tools.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam agent: Union[BaseSingleActionAgent, BaseMultiActionAgent] [Required]\u00b6\nThe agent to run for creating a plan and determining actions\nto take at each step of the execution loop.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam early_stopping_method: str = 'force'\u00b6\nThe method to use for early stopping if the agent never\nreturns AgentFinish. Either \u2018force\u2019 or \u2018generate\u2019.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html"} {"id": "9b6d3a6f4223-1", "text": "returns AgentFinish. Either \u2018force\u2019 or \u2018generate\u2019.\n\u201cforce\u201d returns a string saying that it stopped because it met a time or iteration limit.\n\u201cgenerate\u201d calls the agent\u2019s LLM Chain one final time to generate a final answer based on the previous steps.\nparam handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False\u00b6\nHow to handle errors raised by the agent\u2019s output parser. Defaults to False, which raises the error.\nIf true, the error will be sent back to the LLM as an observation.\nIf a string, the string itself will be sent to the LLM as an observation.\nIf a callable function, the function will be called with the exception\nas an argument, and the result of that function will be passed to the agent as an observation.\nparam max_execution_time: Optional[float] = None\u00b6\nThe maximum amount of wall clock time to spend in the execution\nloop.\nparam max_iterations: Optional[int] = 15\u00b6\nThe maximum number of steps to take before ending the execution\nloop.\nSetting to \u2018None\u2019 could lead to an infinite loop.\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. 
Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html"} {"id": "9b6d3a6f4223-2", "text": "You can use these to eg identify a specific instance of a chain with its use case.\nparam return_intermediate_steps: bool = False\u00b6\nWhether to return the agent\u2019s trajectory of intermediate steps\nat the end in addition to the final output.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam tools: Sequence[BaseTool] [Required]\u00b6\nThe valid tools the agent can call.\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html"} {"id": "9b6d3a6f4223-3", "text": "addition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. 
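Taken together, the parameters above control the executor loop, and __call__ accepts the inputs dictionary described here. A minimal, hedged sketch, assuming an OpenAI model and the built-in llm-math tool; none of these choices are prescribed by this API:
.. code-block:: python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
executor = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    max_iterations=5,
    early_stopping_method="generate",
    handle_parsing_errors=True,  # send parser errors back to the LLM as observations
)
result = executor({"input": "What is 7 raised to the 0.43 power?"}, return_only_outputs=True)
print(result["output"])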
Should contain all outputs specified in Chain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html"} {"id": "9b6d3a6f4223-4", "text": "these runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html"} {"id": "9b6d3a6f4223-5", "text": "these runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n.. code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\"_type\": \"foo\", \"verbose\": False, ...}\nclassmethod from_agent_and_tools(agent: Union[BaseSingleActionAgent, BaseMultiActionAgent], tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, **kwargs: Any) \u2192 AgentExecutor[source]\u00b6\nCreate from agent and tools.\nlookup_tool(name: str) \u2192 BaseTool[source]\u00b6\nLookup tool by name.\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html"} {"id": "9b6d3a6f4223-6", "text": "Parameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. 
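A short sketch tying the constructors and helpers above together; the agent and tools objects are assumed to exist already, and the tool name "Calculator" is illustrative:
.. code-block:: python
from langchain.agents import AgentExecutor

# 'agent' and 'tools' as constructed elsewhere; names are illustrative
executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
calculator = executor.lookup_tool("Calculator")  # fetch a tool by name
executor.save_agent("agent.yaml")  # save() itself raises for agent executors (documented below)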
If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html"} {"id": "9b6d3a6f4223-7", "text": "sole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None[source]\u00b6\nRaise error - saving not supported for Agent Executors.\nsave_agent(file_path: Union[Path, str]) \u2192 None[source]\u00b6\nSave the underlying agent.\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_return_direct_tool\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that tools are compatible with agent.\nvalidator validate_tools\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that tools are compatible with agent.\nproperty lc_attributes: Dict\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html"} {"id": "9b6d3a6f4223-8", "text": "Validate that tools are compatible with agent.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. 
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html"} {"id": "b96e749d3aaa-0", "text": "langchain.agents.loading.load_agent_from_config\u00b6\nlangchain.agents.loading.load_agent_from_config(config: dict, llm: Optional[BaseLanguageModel] = None, tools: Optional[List[Tool]] = None, **kwargs: Any) \u2192 Union[BaseSingleActionAgent, BaseMultiActionAgent][source]\u00b6\nLoad agent from Config Dict.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.loading.load_agent_from_config.html"} {"id": "ffda9b597835-0", "text": "langchain.agents.agent_toolkits.powerbi.chat_base.create_pbi_chat_agent\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.powerbi.chat_base.create_pbi_chat_agent.html"} {"id": "ffda9b597835-1", "text": "langchain.agents.agent_toolkits.powerbi.chat_base.create_pbi_chat_agent(llm: BaseChatModel, toolkit: Optional[PowerBIToolkit] = None, powerbi: Optional[PowerBIDataset] = None, callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, prefix: str = 'Assistant is a large language model built to help users interact with a PowerBI Dataset.\\n\\nAssistant should try to create a correct and complete answer to the question from the user. If the user asks a question not related to the dataset it should return \"This does not appear to be part of this dataset.\" as the answer. The user might make a mistake with the spelling of certain values, if you think that is the case, ask the user to confirm the spelling of the value and then run the query again. Unless the user specifies a specific number of examples they wish to obtain, and the results are too large, limit your query to at most {top_k} results, but make it clear when answering which field was used for the filtering. The user has access to these tables: {{tables}}.\\n\\nThe answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readable format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. \\n', suffix: str = \"TOOLS\\n------\\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. 
The tools the human can use are:\\n\\n{{tools}}\\n\\n{format_instructions}\\n\\nUSER'S INPUT\\n--------------------\\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\\n\\n{{{{input}}}}\\n\",", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.powerbi.chat_base.create_pbi_chat_agent.html"} {"id": "ffda9b597835-2", "text": "blob with a single action, and NOTHING else):\\n\\n{{{{input}}}}\\n\", examples: Optional[str] = None, input_variables: Optional[List[str]] = None, memory: Optional[BaseChatMemory] = None, top_k: int = 10, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 AgentExecutor[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.powerbi.chat_base.create_pbi_chat_agent.html"} {"id": "ffda9b597835-3", "text": "Construct a Power BI agent from a Chat LLM and tools.\nIf you supply only a toolkit and no Power BI dataset, the same LLM is used for both.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.powerbi.chat_base.create_pbi_chat_agent.html"} {"id": "7f8e3bf817cc-0", "text": "langchain.agents.agent.BaseMultiActionAgent\u00b6\nclass langchain.agents.agent.BaseMultiActionAgent[source]\u00b6\nBases: BaseModel\nBase Agent class.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nabstract async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[List[AgentAction], AgentFinish][source]\u00b6\nGiven input, decided what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nActions specifying what tool to use.\ndict(**kwargs: Any) \u2192 Dict[source]\u00b6\nReturn dictionary representation of agent.\nget_allowed_tools() \u2192 Optional[List[str]][source]\u00b6\nabstract plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[List[AgentAction], AgentFinish][source]\u00b6\nGiven input, decided what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nActions specifying what tool to use.\nreturn_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) \u2192 AgentFinish[source]\u00b6\nReturn response when agent has been stopped due to max iterations.\nsave(file_path: Union[Path, str]) \u2192 None[source]\u00b6\nSave the agent.\nParameters\nfile_path \u2013 Path to file to save the agent to.\nExample:", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.BaseMultiActionAgent.html"} {"id": "7f8e3bf817cc-1", "text": "Parameters\nfile_path \u2013 Path to file to save the agent to.\nExample:\n.. 
code-block:: python\n# If working with agent executor\nagent.agent.save(file_path=\"path/agent.yaml\")\ntool_run_logging_kwargs() \u2192 Dict[source]\u00b6\nproperty return_values: List[str]\u00b6\nReturn values of the agent.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.BaseMultiActionAgent.html"} {"id": "d6ef572d72d2-0", "text": "langchain.agents.agent_toolkits.gmail.toolkit.GmailToolkit\u00b6\nclass langchain.agents.agent_toolkits.gmail.toolkit.GmailToolkit(*, api_resource: Resource = None)[source]\u00b6\nBases: BaseToolkit\nToolkit for interacting with Gmail.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_resource: Resource [Optional]\u00b6\nget_tools() \u2192 List[BaseTool][source]\u00b6\nGet the tools in the toolkit.\nmodel Config[source]\u00b6\nBases: object\nPydantic config.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.gmail.toolkit.GmailToolkit.html"} {"id": "33f1b67208aa-0", "text": "langchain.agents.mrkl.base.MRKLChain\u00b6\nclass langchain.agents.mrkl.base.MRKLChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, agent: Union[BaseSingleActionAgent, BaseMultiActionAgent], tools: Sequence[BaseTool], return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False)[source]\u00b6\nBases: AgentExecutor\nChain that implements the MRKL system.\nExample\nfrom langchain import OpenAI, MRKLChain\nfrom langchain.agents.mrkl.base import ChainConfig\nllm = OpenAI(temperature=0)\nchains = [...]\nmrkl = MRKLChain.from_chains(llm=llm, chains=chains)\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam agent: Union[BaseSingleActionAgent, BaseMultiActionAgent] [Required]\u00b6\nThe agent to run for creating a plan and determining actions\nto take at each step of the execution loop.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.mrkl.base.MRKLChain.html"} {"id": "33f1b67208aa-1", "text": "Callback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam early_stopping_method: str = 'force'\u00b6\nThe method to use for early stopping if the agent never\nreturns AgentFinish. 
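Stepping back to the GmailToolkit entry above: a hedged construction sketch. It assumes Gmail API credentials are already configured locally; the agent type is one reasonable choice, not a requirement:
.. code-block:: python
from langchain.agents import AgentType, initialize_agent
from langchain.agents.agent_toolkits import GmailToolkit
from langchain.llms import OpenAI

toolkit = GmailToolkit()  # builds a default Gmail API resource from local credentials
agent = initialize_agent(
    toolkit.get_tools(),
    OpenAI(temperature=0),
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
)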
Either \u2018force\u2019 or \u2018generate\u2019.\n\u201cforce\u201d returns a string saying that it stopped because it met a time or iteration limit.\n\u201cgenerate\u201d calls the agent\u2019s LLM Chain one final time to generate a final answer based on the previous steps.\nparam handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False\u00b6\nHow to handle errors raised by the agent\u2019s output parser. Defaults to False, which raises the error.\nIf true, the error will be sent back to the LLM as an observation.\nIf a string, the string itself will be sent to the LLM as an observation.\nIf a callable function, the function will be called with the exception\nas an argument, and the result of that function will be passed to the agent as an observation.\nparam max_execution_time: Optional[float] = None\u00b6\nThe maximum amount of wall clock time to spend in the execution\nloop.\nparam max_iterations: Optional[int] = 15\u00b6\nThe maximum number of steps to take before ending the execution\nloop.\nSetting to \u2018None\u2019 could lead to an infinite loop.\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.mrkl.base.MRKLChain.html"} {"id": "33f1b67208aa-2", "text": "them along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam return_intermediate_steps: bool = False\u00b6\nWhether to return the agent\u2019s trajectory of intermediate steps\nat the end in addition to the final output.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam tools: Sequence[BaseTool] [Required]\u00b6\nThe valid tools the agent can call.\nparam verbose: bool [Optional]\u00b6\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. 
Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.mrkl.base.MRKLChain.html"} {"id": "33f1b67208aa-3", "text": "Chain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified inChain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.mrkl.base.MRKLChain.html"} {"id": "33f1b67208aa-4", "text": "chain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. 
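A hedged sketch of the asynchronous entry point described here; it mirrors the synchronous __call__ and assumes an mrkl chain built via MRKLChain.from_chains (see the example below):
.. code-block:: python
import asyncio

async def main() -> None:
    # 'mrkl' as built via MRKLChain.from_chains(...); name is illustrative
    result = await mrkl.acall({"input": "What is 25% of 300?"})
    print(result["output"])

asyncio.run(main())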
Should contain all outputs specified in Chain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.mrkl.base.MRKLChain.html"} {"id": "33f1b67208aa-5", "text": "sole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n.. code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\"_type\": \"foo\", \"verbose\": False, ...}\nclassmethod from_agent_and_tools(agent: Union[BaseSingleActionAgent, BaseMultiActionAgent], tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, **kwargs: Any) \u2192 AgentExecutor\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.mrkl.base.MRKLChain.html"} {"id": "33f1b67208aa-6", "text": "Create from agent and tools.\nclassmethod from_chains(llm: BaseLanguageModel, chains: List[ChainConfig], **kwargs: Any) \u2192 AgentExecutor[source]\u00b6\nUser friendly way to initialize the MRKL chain.\nThis is intended to be an easy way to get up and running with the\nMRKL chain.\nParameters\nllm \u2013 The LLM to use as the agent 
LLM.\nchains \u2013 The chains the MRKL system has access to.\n**kwargs \u2013 parameters to be passed to initialization.\nReturns\nAn initialized MRKL chain.\nExample\nfrom langchain import LLMMathChain, OpenAI, SerpAPIWrapper, MRKLChain\nfrom langchain.agents.mrkl.base import ChainConfig\nllm = OpenAI(temperature=0)\nsearch = SerpAPIWrapper()\nllm_math_chain = LLMMathChain(llm=llm)\nchains = [\n ChainConfig(\n action_name = \"Search\",\n action=search.search,\n action_description=\"useful for searching\"\n ),\n ChainConfig(\n action_name=\"Calculator\",\n action=llm_math_chain.run,\n action_description=\"useful for doing math\"\n )\n]\nmrkl = MRKLChain.from_chains(llm, chains)\nlookup_tool(name: str) \u2192 BaseTool\u00b6\nLookup tool by name.\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.mrkl.base.MRKLChain.html"} {"id": "33f1b67208aa-7", "text": "Returns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.mrkl.base.MRKLChain.html"} {"id": "33f1b67208aa-8", "text": "these runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nRaise error - saving not supported for Agent Executors.\nsave_agent(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the underlying agent.\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_return_direct_tool\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate that tools are compatible with agent.\nvalidator validate_tools\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate that tools are compatible with agent.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.mrkl.base.MRKLChain.html"} {"id": "33f1b67208aa-9", "text": "constructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.mrkl.base.MRKLChain.html"} {"id": "7f2b2c51325e-0", "text": "langchain.agents.agent_toolkits.csv.base.create_csv_agent\u00b6\nlangchain.agents.agent_toolkits.csv.base.create_csv_agent(llm: BaseLanguageModel, path: Union[str, List[str]], pandas_kwargs: Optional[dict] = None, **kwargs: Any) \u2192 AgentExecutor[source]\u00b6\nCreate csv agent by loading to a dataframe and using pandas agent.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.csv.base.create_csv_agent.html"} {"id": "f11d6bac41c8-0", "text": "langchain.agents.agent_toolkits.openapi.planner.RequestsDeleteToolWithParsing\u00b6\nclass langchain.agents.agent_toolkits.openapi.planner.RequestsDeleteToolWithParsing(*, name: str = 'requests_delete', description: str = 'ONLY USE THIS TOOL WHEN THE USER HAS SPECIFICALLY REQUESTED TO DELETE CONTENT FROM A WEBSITE.\\nInput to the tool should be a json string with 2 keys: \"url\", and \"output_instructions\".\\nThe value of \"url\" should be a string.\\nThe value of \"output_instructions\" should be instructions on what information to extract from the response, for example the id(s) for a resource(s) that the DELETE request creates.\\nAlways use double quotes for strings in the json string.\\nONLY USE THIS TOOL IF THE USER HAS SPECIFICALLY REQUESTED TO DELETE SOMETHING.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, requests_wrapper: TextRequestsWrapper, response_length: Optional[int] = 5000, llm_chain: LLMChain = None)[source]\u00b6\nBases: BaseRequestsTool, BaseTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.planner.RequestsDeleteToolWithParsing.html"} {"id": "f11d6bac41c8-1", "text": "Deprecated. 
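Returning to create_csv_agent above: a minimal, hedged sketch. The file name is illustrative, and pandas_kwargs are forwarded to pandas when the file is loaded:
.. code-block:: python
from langchain.agents.agent_toolkits import create_csv_agent
from langchain.llms import OpenAI

# "titanic.csv" is a hypothetical path; pandas_kwargs go to pandas.read_csv
agent = create_csv_agent(OpenAI(temperature=0), "titanic.csv", pandas_kwargs={"sep": ","})
agent.run("How many rows are there?")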
Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'ONLY USE THIS TOOL WHEN THE USER HAS SPECIFICALLY REQUESTED TO DELETE CONTENT FROM A WEBSITE.\\nInput to the tool should be a json string with 2 keys: \"url\", and \"output_instructions\".\\nThe value of \"url\" should be a string.\\nThe value of \"output_instructions\" should be instructions on what information to extract from the response, for example the id(s) for a resource(s) that the DELETE request creates.\\nAlways use double quotes for strings in the json string.\\nONLY USE THIS TOOL IF THE USER HAS SPECIFICALLY REQUESTED TO DELETE SOMETHING.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam llm_chain: langchain.chains.llm.LLMChain [Optional]\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'requests_delete'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam requests_wrapper: TextRequestsWrapper [Required]\u00b6\nparam response_length: Optional[int] = 5000\u00b6\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.planner.RequestsDeleteToolWithParsing.html"} {"id": "f11d6bac41c8-2", "text": "that after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
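To make the expected input shape above concrete, a hedged sketch; the URL and instructions are illustrative, and tool stands for a configured RequestsDeleteToolWithParsing instance:
.. code-block:: python
tool_input = (
    '{"url": "https://example.com/api/notes/42", '
    '"output_instructions": "Return the id of the deleted note."}'
)
# result = tool.run(tool_input)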
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.planner.RequestsDeleteToolWithParsing.html"} {"id": "4abefdbd4c59-0", "text": "langchain.agents.agent_toolkits.azure_cognitive_services.toolkit.AzureCognitiveServicesToolkit\u00b6\nclass langchain.agents.agent_toolkits.azure_cognitive_services.toolkit.AzureCognitiveServicesToolkit[source]\u00b6\nBases: BaseToolkit\nToolkit for Azure Cognitive Services.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nget_tools() \u2192 List[BaseTool][source]\u00b6\nGet the tools in the toolkit.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.azure_cognitive_services.toolkit.AzureCognitiveServicesToolkit.html"} {"id": "c9e57fdb0d6f-0", "text": "langchain.agents.agent_toolkits.zapier.toolkit.ZapierToolkit\u00b6\nclass langchain.agents.agent_toolkits.zapier.toolkit.ZapierToolkit(*, tools: List[BaseTool] = [])[source]\u00b6\nBases: BaseToolkit\nZapier Toolkit.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam tools: List[langchain.tools.base.BaseTool] = []\u00b6\nasync classmethod async_from_zapier_nla_wrapper(zapier_nla_wrapper: ZapierNLAWrapper) \u2192 ZapierToolkit[source]\u00b6\nCreate a toolkit from a ZapierNLAWrapper.\nclassmethod from_zapier_nla_wrapper(zapier_nla_wrapper: ZapierNLAWrapper) \u2192 ZapierToolkit[source]\u00b6\nCreate a toolkit from a ZapierNLAWrapper.\nget_tools() \u2192 List[BaseTool][source]\u00b6\nGet the tools in the toolkit.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.zapier.toolkit.ZapierToolkit.html"} {"id": "b87a28926367-0", "text": "langchain.agents.react.base.ReActChain\u00b6\nclass langchain.agents.react.base.ReActChain(llm: BaseLanguageModel, 
docstore: Docstore, *, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, agent: Union[BaseSingleActionAgent, BaseMultiActionAgent], tools: Sequence[BaseTool], return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False)[source]\u00b6\nBases: AgentExecutor\nChain that implements the ReAct paper.\nExample\nfrom langchain import ReActChain, OpenAI\nfrom langchain.docstore import Wikipedia\nreact = ReActChain(llm=OpenAI(), docstore=Wikipedia())\nInitialize with the LLM and a docstore.\nparam agent: Union[BaseSingleActionAgent, BaseMultiActionAgent] [Required]\u00b6\nThe agent to run for creating a plan and determining actions\nto take at each step of the execution loop.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam early_stopping_method: str = 'force'\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.react.base.ReActChain.html"} {"id": "b87a28926367-1", "text": "for full details.\nparam early_stopping_method: str = 'force'\u00b6\nThe method to use for early stopping if the agent never\nreturns AgentFinish. Either \u2018force\u2019 or \u2018generate\u2019.\n\u201cforce\u201d returns a string saying that it stopped because it met a time or iteration limit.\n\u201cgenerate\u201d calls the agent\u2019s LLM Chain one final time to generate a final answer based on the previous steps.\nparam handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False\u00b6\nHow to handle errors raised by the agent\u2019s output parser. Defaults to False, which raises the error.\nIf true, the error will be sent back to the LLM as an observation.\nIf a string, the string itself will be sent to the LLM as an observation.\nIf a callable function, the function will be called with the exception\nas an argument, and the result of that function will be passed to the agent as an observation. A short illustrative sketch of these three forms appears below.\nparam max_execution_time: Optional[float] = None\u00b6\nThe maximum amount of wall clock time to spend in the execution\nloop.\nparam max_iterations: Optional[int] = 15\u00b6\nThe maximum number of steps to take before ending the execution\nloop.\nSetting to \u2018None\u2019 could lead to an infinite loop.\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. 
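The three accepted forms of handle_parsing_errors can be sketched as follows. This is an illustration rather than part of the reference; it assumes agent and tools are already defined, and the messages are placeholders.
.. code-block:: python
# Hedged sketch of the three handle_parsing_errors forms accepted by
# AgentExecutor subclasses such as ReActChain; `agent` and `tools` are
# assumed to exist already.
from langchain.agents import AgentExecutor
# bool: True sends the parsing error text back to the LLM as an observation.
executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, handle_parsing_errors=True)
# str: this fixed string is sent to the LLM as the observation instead.
executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, handle_parsing_errors="Check your output and try again.")
# callable: receives the OutputParserException and returns the observation.
executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, handle_parsing_errors=lambda e: str(e)[:200])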
Defaults to None\nThis metadata will be associated with each call to this chain,", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.react.base.ReActChain.html"} {"id": "b87a28926367-2", "text": "This metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam return_intermediate_steps: bool = False\u00b6\nWhether to return the agent\u2019s trajectory of intermediate steps\nat the end in addition to the final output.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam tools: Sequence[BaseTool] [Required]\u00b6\nThe valid tools the agent can call.\nparam verbose: bool [Optional]\u00b6\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.react.base.ReActChain.html"} {"id": "b87a28926367-3", "text": "chain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. 
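As an aside, a hedged sketch of passing the runtime callbacks, tags, and metadata described here to a single call; the handler choice, input key, and metadata values are illustrative assumptions.
.. code-block:: python
# Hedged sketch: per-call callbacks, tags, and metadata. `react` is the
# chain from the Example above; the "input" key and values are assumptions.
from langchain.callbacks import StdOutCallbackHandler
result = react(
    {"input": "What is the capital of France?"},
    callbacks=[StdOutCallbackHandler()],
    tags=["react-demo"],
    metadata={"user_id": "example-user"},
)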
If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.react.base.ReActChain.html"} {"id": "b87a28926367-4", "text": "tags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.react.base.ReActChain.html"} {"id": "b87a28926367-5", "text": "these runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n.. code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nclassmethod from_agent_and_tools(agent: Union[BaseSingleActionAgent, BaseMultiActionAgent], tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, **kwargs: Any) \u2192 AgentExecutor\u00b6\nCreate from agent and tools.\nlookup_tool(name: str) \u2192 BaseTool\u00b6\nLookup tool by name.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.react.base.ReActChain.html"} {"id": "b87a28926367-6", "text": "lookup_tool(name: str) \u2192 BaseTool\u00b6\nLookup tool by name.\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. 
If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.react.base.ReActChain.html"} {"id": "b87a28926367-7", "text": "The other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nRaise error - saving not supported for Agent Executors.\nsave_agent(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the underlying agent.\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.react.base.ReActChain.html"} {"id": "b87a28926367-8", "text": "to_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_return_direct_tool\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate that tools are compatible with agent.\nvalidator validate_tools\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate that tools are compatible with agent.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.react.base.ReActChain.html"} {"id": "5490c7ec5e37-0", "text": "langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit\u00b6\nclass langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit(*, json_agent: AgentExecutor, requests_wrapper: TextRequestsWrapper)[source]\u00b6\nBases: BaseToolkit\nToolkit for interacting with an OpenAPI API.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam json_agent: langchain.agents.agent.AgentExecutor [Required]\u00b6\nparam requests_wrapper: langchain.requests.TextRequestsWrapper [Required]\u00b6\nclassmethod from_llm(llm: BaseLanguageModel, json_spec: JsonSpec, requests_wrapper: TextRequestsWrapper, **kwargs: Any) \u2192 OpenAPIToolkit[source]\u00b6\nCreate json agent from llm, then initialize.\nget_tools() \u2192 List[BaseTool][source]\u00b6\nGet the tools in the toolkit.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit.html"} {"id": "287ecf2d1dd2-0", "text": "langchain.agents.tools.InvalidTool\u00b6\nclass langchain.agents.tools.InvalidTool(*, name: str = 'invalid_tool', description: str = 'Called when tool name is invalid.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False)[source]\u00b6\nBases: BaseTool\nTool that is run when invalid tool name is encountered by agent.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Called when tool name is invalid.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. 
Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.tools.InvalidTool.html"} {"id": "287ecf2d1dd2-1", "text": "and passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'invalid_tool'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.tools.InvalidTool.html"} {"id": "287ecf2d1dd2-2", "text": "Raise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.tools.InvalidTool.html"} {"id": "5ee4777a338d-0", "text": "langchain.agents.agent_toolkits.vectorstore.base.create_vectorstore_router_agent\u00b6\nlangchain.agents.agent_toolkits.vectorstore.base.create_vectorstore_router_agent(llm: BaseLanguageModel, toolkit: VectorStoreRouterToolkit, callback_manager: Optional[BaseCallbackManager] = None, prefix: str = 'You are an agent designed to answer questions.\\nYou have access to tools for interacting with different sources, and the inputs to the tools are questions.\\nYour main task is to decide which of the tools is relevant for answering question at hand.\\nFor complex questions, you can break the question down into sub questions and use tools to answers the sub questions.\\n', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 AgentExecutor[source]\u00b6\nConstruct a vectorstore router agent from an LLM and 
tools.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.vectorstore.base.create_vectorstore_router_agent.html"} {"id": "165023ce28a4-0", "text": "langchain.agents.agent_toolkits.spark.base.create_spark_dataframe_agent\u00b6\nlangchain.agents.agent_toolkits.spark.base.create_spark_dataframe_agent(llm: BaseLLM, df: Any, callback_manager: Optional[BaseCallbackManager] = None, prefix: str = '\\nYou are working with a spark dataframe in Python. The name of the dataframe is `df`.\\nYou should use the tools below to answer the question posed of you:', suffix: str = '\\nThis is the result of `print(df.first())`:\\n{df}\\n\\nBegin!\\nQuestion: {input}\\n{agent_scratchpad}', input_variables: Optional[List[str]] = None, verbose: bool = False, return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 AgentExecutor[source]\u00b6\nConstruct a spark agent from an LLM and dataframe.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.spark.base.create_spark_dataframe_agent.html"} {"id": "3b239ff7f38f-0", "text": "langchain.agents.load_tools.get_all_tool_names\u00b6\nlangchain.agents.load_tools.get_all_tool_names() \u2192 List[str][source]\u00b6\nGet a list of all possible tool names.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.load_tools.get_all_tool_names.html"} {"id": "cf926bdfd6ab-0", "text": "langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit\u00b6\nclass langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit(*, vectorstores: List[VectorStoreInfo], llm: BaseLanguageModel = None)[source]\u00b6\nBases: BaseToolkit\nToolkit for routing between vector stores.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam llm: langchain.schema.language_model.BaseLanguageModel [Optional]\u00b6\nparam vectorstores: List[langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo] [Required]\u00b6\nget_tools() \u2192 List[BaseTool][source]\u00b6\nGet the tools in the toolkit.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit.html"} {"id": "de5caaaa6870-0", "text": "langchain.agents.load_tools.load_huggingface_tool\u00b6\nlangchain.agents.load_tools.load_huggingface_tool(task_or_repo_id: str, model_repo_id: Optional[str] = None, token: Optional[str] = None, remote: bool = False, **kwargs: Any) \u2192 BaseTool[source]\u00b6\nLoads a tool from the HuggingFace Hub.\nParameters\ntask_or_repo_id \u2013 Task or model repo id.\nmodel_repo_id \u2013 Optional model repo id.\ntoken \u2013 Optional token.\nremote \u2013 Optional remote. 
Defaults to False.\n**kwargs \u2013 \nReturns\nA tool.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.load_tools.load_huggingface_tool.html"} {"id": "ac0f7b428977-0", "text": "langchain.agents.agent_toolkits.openapi.toolkit.RequestsToolkit\u00b6\nclass langchain.agents.agent_toolkits.openapi.toolkit.RequestsToolkit(*, requests_wrapper: TextRequestsWrapper)[source]\u00b6\nBases: BaseToolkit\nToolkit for making requests.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam requests_wrapper: langchain.requests.TextRequestsWrapper [Required]\u00b6\nget_tools() \u2192 List[BaseTool][source]\u00b6\nReturn a list of tools.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.toolkit.RequestsToolkit.html"} {"id": "86a1b0cef578-0", "text": "langchain.agents.react.output_parser.ReActOutputParser\u00b6\nclass langchain.agents.react.output_parser.ReActOutputParser[source]\u00b6\nBases: AgentOutputParser\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str\u00b6\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 Union[AgentAction, AgentFinish][source]\u00b6\nParse text into agent action/finish.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.react.output_parser.ReActOutputParser.html"} {"id": "86a1b0cef578-1", "text": "property lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.react.output_parser.ReActOutputParser.html"} {"id": "b7b50251ea93-0", "text": "langchain.agents.mrkl.base.ZeroShotAgent\u00b6\nclass langchain.agents.mrkl.base.ZeroShotAgent(*, llm_chain: LLMChain, output_parser: AgentOutputParser = None, allowed_tools: Optional[List[str]] = None)[source]\u00b6\nBases: Agent\nAgent for the MRKL chain.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam allowed_tools: Optional[List[str]] = None\u00b6\nparam llm_chain: langchain.chains.llm.LLMChain [Required]\u00b6\nparam output_parser: langchain.agents.agent.AgentOutputParser [Optional]\u00b6\nasync aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[AgentAction, AgentFinish]\u00b6\nGiven input, decided what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.mrkl.base.ZeroShotAgent.html"} {"id": "b7b50251ea93-1", "text": "**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nclassmethod create_prompt(tools: Sequence[BaseTool], prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', suffix: str = 'Begin!\\n\\nQuestion: {input}\\nThought:{agent_scratchpad}', format_instructions: str = 'Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None) \u2192 PromptTemplate[source]\u00b6\nCreate prompt in the style of the zero shot agent.\nParameters\ntools \u2013 List of tools the agent will have access to, used to format the\nprompt.\nprefix \u2013 String to put before the list of tools.\nsuffix \u2013 String to put after the list of tools.\ninput_variables \u2013 List of input variables the final prompt will expect.\nReturns\nA PromptTemplate with the template assembled from the pieces here.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of agent.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.mrkl.base.ZeroShotAgent.html"} {"id": "b7b50251ea93-2", "text": "dict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of agent.\nclassmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, prefix: str = 'Answer the following questions as best you can. 
You have access to the following tools:', suffix: str = 'Begin!\\n\\nQuestion: {input}\\nThought:{agent_scratchpad}', format_instructions: str = 'Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, **kwargs: Any) \u2192 Agent[source]\u00b6\nConstruct an agent from an LLM and tools.\nget_allowed_tools() \u2192 Optional[List[str]]\u00b6\nget_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) \u2192 Dict[str, Any]\u00b6\nCreate the full inputs for the LLMChain from intermediate steps.\nplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[AgentAction, AgentFinish]\u00b6\nGiven input, decided what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.mrkl.base.ZeroShotAgent.html"} {"id": "b7b50251ea93-3", "text": "**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nreturn_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) \u2192 AgentFinish\u00b6\nReturn response when agent has been stopped due to max iterations.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the agent.\nParameters\nfile_path \u2013 Path to file to save the agent to.\nExample:\n.. 
code-block:: python\n# If working with agent executor\nagent.agent.save(file_path=\"path/agent.yaml\")\ntool_run_logging_kwargs() \u2192 Dict\u00b6\nvalidator validate_prompt\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate that prompt matches format.\nproperty llm_prefix: str\u00b6\nPrefix to append the LLM call with.\nproperty observation_prefix: str\u00b6\nPrefix to append the observation with.\nproperty return_values: List[str]\u00b6\nReturn values of the agent.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.mrkl.base.ZeroShotAgent.html"} {"id": "09b8575262ae-0", "text": "langchain.agents.agent.ExceptionTool\u00b6\nclass langchain.agents.agent.ExceptionTool(*, name: str = '_Exception', description: str = 'Exception tool', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False)[source]\u00b6\nBases: BaseTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Exception tool'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = '_Exception'\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.ExceptionTool.html"} {"id": "09b8575262ae-1", "text": "param name: str = '_Exception'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
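A minimal sketch of constructing the ZeroShotAgent documented above via from_llm_and_tools and wrapping it in an AgentExecutor; the tool selection ("llm-math") and the question are illustrative choices.
.. code-block:: python
# Hedged sketch: build a ZeroShotAgent by hand, then wrap it in an
# AgentExecutor so it can run a tool-use loop.
from langchain.agents import AgentExecutor, ZeroShotAgent, load_tools
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
agent = ZeroShotAgent.from_llm_and_tools(llm=llm, tools=tools)
executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
executor.run("What is 3 to the power of 11?")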
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.ExceptionTool.html"} {"id": "09b8575262ae-2", "text": "Run the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.ExceptionTool.html"} {"id": "7675a569a340-0", "text": "langchain.agents.self_ask_with_search.output_parser.SelfAskOutputParser\u00b6\nclass langchain.agents.self_ask_with_search.output_parser.SelfAskOutputParser(*, followups: Sequence[str] = ('Follow up:', 'Followup:'), finish_string: str = 'So the final answer is: ')[source]\u00b6\nBases: AgentOutputParser\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam finish_string: str = 'So the final answer is: '\u00b6\nparam followups: Sequence[str] = ('Follow up:', 'Followup:')\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str\u00b6\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 Union[AgentAction, AgentFinish][source]\u00b6\nParse text into agent action/finish.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. 
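A short illustrative sketch of SelfAskOutputParser.parse, showing how the followups and finish_string fields documented above drive the split between AgentAction and AgentFinish; the example strings are assumptions.
.. code-block:: python
# Hedged sketch of SelfAskOutputParser.parse behavior; the inputs are
# illustrative LLM outputs, not fixtures from the reference.
from langchain.agents.self_ask_with_search.output_parser import SelfAskOutputParser
parser = SelfAskOutputParser()
# A trailing "Follow up:" line is parsed into an AgentAction whose tool
# input is the follow-up question.
action = parser.parse("Are follow up questions needed here: Yes.\nFollow up: Who won the 1998 World Cup?")
# A line containing the finish string is parsed into an AgentFinish
# carrying the final answer.
finish = parser.parse("So the final answer is: France")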
The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.self_ask_with_search.output_parser.SelfAskOutputParser.html"} {"id": "7675a569a340-1", "text": "Returns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.self_ask_with_search.output_parser.SelfAskOutputParser.html"} {"id": "5df7fe15cbf4-0", "text": "langchain.agents.react.base.ReActTextWorldAgent\u00b6\nclass langchain.agents.react.base.ReActTextWorldAgent(*, llm_chain: LLMChain, output_parser: AgentOutputParser = None, allowed_tools: Optional[List[str]] = None)[source]\u00b6\nBases: ReActDocstoreAgent\nAgent for the ReAct TextWorld chain.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam allowed_tools: Optional[List[str]] = None\u00b6\nparam llm_chain: LLMChain [Required]\u00b6\nparam output_parser: langchain.agents.agent.AgentOutputParser [Optional]\u00b6\nasync aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[AgentAction, AgentFinish]\u00b6\nGiven input, decided what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nclassmethod create_prompt(tools: Sequence[BaseTool]) \u2192 BasePromptTemplate[source]\u00b6\nReturn default prompt.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of agent.\nclassmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, **kwargs: Any) \u2192 Agent\u00b6\nConstruct an agent from an LLM and tools.\nget_allowed_tools() \u2192 Optional[List[str]]\u00b6\nget_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) \u2192 Dict[str, Any]\u00b6", "source": 
"https://api.python.langchain.com/en/latest/agents/langchain.agents.react.base.ReActTextWorldAgent.html"} {"id": "5df7fe15cbf4-1", "text": "Create the full inputs for the LLMChain from intermediate steps.\nplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[AgentAction, AgentFinish]\u00b6\nGiven input, decided what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nreturn_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) \u2192 AgentFinish\u00b6\nReturn response when agent has been stopped due to max iterations.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the agent.\nParameters\nfile_path \u2013 Path to file to save the agent to.\nExample:\n.. code-block:: python\n# If working with agent executor\nagent.agent.save(file_path=\u201dpath/agent.yaml\u201d)\ntool_run_logging_kwargs() \u2192 Dict\u00b6\nvalidator validate_prompt\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate that prompt matches format.\nproperty llm_prefix: str\u00b6\nPrefix to append the LLM call with.\nproperty observation_prefix: str\u00b6\nPrefix to append the observation with.\nproperty return_values: List[str]\u00b6\nReturn values of the agent.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.react.base.ReActTextWorldAgent.html"} {"id": "c55912d42f8b-0", "text": "langchain.agents.agent_toolkits.openapi.spec.reduce_openapi_spec\u00b6\nlangchain.agents.agent_toolkits.openapi.spec.reduce_openapi_spec(spec: dict, dereference: bool = True) \u2192 ReducedOpenAPISpec[source]\u00b6\nSimplify/distill/minify a spec somehow.\nI want a smaller target for retrieval and (more importantly)\nI want smaller results from retrieval.\nI was hoping https://openapi.tools/ would have some useful bits\nto this end, but doesn\u2019t seem so.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.spec.reduce_openapi_spec.html"} {"id": "d915495a7b6e-0", "text": "langchain.agents.agent_toolkits.vectorstore.base.create_vectorstore_agent\u00b6\nlangchain.agents.agent_toolkits.vectorstore.base.create_vectorstore_agent(llm: BaseLanguageModel, toolkit: VectorStoreToolkit, callback_manager: Optional[BaseCallbackManager] = None, prefix: str = 'You are an agent designed to answer questions about sets of documents.\\nYou have access to tools for interacting with the documents, and the inputs to the tools are questions.\\nSometimes, you will be asked to provide sources for your questions, in which case you should use the appropriate tool to do so.\\nIf the question does not seem relevant to any of the tools provided, just return \"I don\\'t know\" as the answer.\\n', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 AgentExecutor[source]\u00b6\nConstruct a vectorstore agent from an LLM and tools.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.vectorstore.base.create_vectorstore_agent.html"} {"id": "3f8893a9a374-0", "text": "langchain.agents.agent_toolkits.json.base.create_json_agent\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.json.base.create_json_agent.html"} {"id": 
"3f8893a9a374-1", "text": "langchain.agents.agent_toolkits.json.base.create_json_agent(llm: BaseLanguageModel, toolkit: JsonToolkit, callback_manager: Optional[BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with JSON.\\nYour goal is to return a final answer by interacting with the JSON.\\nYou have access to the following tools which help you learn more about the JSON you are interacting with.\\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\\nDo not make up any information that is not contained in the JSON.\\nYour input to the tools should be in the form of `data[\"key\"][0]` where `data` is the JSON blob you are interacting with, and the syntax used is Python. \\nYou should only use keys that you know for a fact exist. You must validate that a key exists by seeing it previously when calling `json_spec_list_keys`. \\nIf you have not seen a key in one of those responses, you cannot use it.\\nYou should only add one key at a time to the path. You cannot add multiple keys at once.\\nIf you encounter a \"KeyError\", go back to the previous key, look at the available keys, and try again.\\n\\nIf the question does not seem to be related to the JSON, just return \"I don\\'t know\" as the answer.\\nAlways begin your interaction with the `json_spec_list_keys` tool with input \"data\" to see what keys exist in the JSON.\\n\\nNote that sometimes the value at a given path is large. In this case, you will get an error \"Value is a large dictionary, should explore its keys directly\".\\nIn this case, you should ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys exist at that path.\\nDo not simply refer the user to the JSON or", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.json.base.create_json_agent.html"} {"id": "3f8893a9a374-2", "text": "to see what keys exist at that path.\\nDo not simply refer the user to the JSON or a section of the JSON, as this is not a valid answer. Keep digging until you find the answer and explicitly return it.\\n', suffix: str = 'Begin!\"\\n\\nQuestion: {input}\\nThought: I should look at the keys that exist in data to see what I have access to\\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... 
(this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 AgentExecutor[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.json.base.create_json_agent.html"} {"id": "3f8893a9a374-3", "text": "Construct a json agent from an LLM and tools.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.json.base.create_json_agent.html"} {"id": "686f40accd6e-0", "text": "langchain.agents.conversational.output_parser.ConvoOutputParser\u00b6\nclass langchain.agents.conversational.output_parser.ConvoOutputParser(*, ai_prefix: str = 'AI')[source]\u00b6\nBases: AgentOutputParser\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam ai_prefix: str = 'AI'\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str[source]\u00b6\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 Union[AgentAction, AgentFinish][source]\u00b6\nParse text into agent action/finish.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.conversational.output_parser.ConvoOutputParser.html"} {"id": "686f40accd6e-1", "text": "serialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.conversational.output_parser.ConvoOutputParser.html"} {"id": "3cdfa002bf09-0", "text": "langchain.agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit\u00b6\nclass langchain.agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit(*, db: SparkSQL, llm: BaseLanguageModel)[source]\u00b6\nBases: BaseToolkit\nToolkit for interacting with Spark SQL.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam db: langchain.utilities.spark_sql.SparkSQL [Required]\u00b6\nparam llm: langchain.schema.language_model.BaseLanguageModel [Required]\u00b6\nget_tools() \u2192 List[BaseTool][source]\u00b6\nGet the tools in the toolkit.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit.html"} {"id": "7483f0d652af-0", "text": "langchain.agents.agent.LLMSingleActionAgent\u00b6\nclass langchain.agents.agent.LLMSingleActionAgent(*, llm_chain: LLMChain, output_parser: AgentOutputParser, stop: List[str])[source]\u00b6\nBases: BaseSingleActionAgent\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam llm_chain: langchain.chains.llm.LLMChain [Required]\u00b6\nparam output_parser: langchain.agents.agent.AgentOutputParser [Required]\u00b6\nparam stop: List[str] [Required]\u00b6\nasync aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[AgentAction, AgentFinish][source]\u00b6\nGiven input, decided what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\ndict(**kwargs: Any) \u2192 Dict[source]\u00b6\nReturn dictionary representation of agent.\nclassmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, **kwargs: Any) \u2192 BaseSingleActionAgent\u00b6\nget_allowed_tools() \u2192 Optional[List[str]]\u00b6\nplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[AgentAction, AgentFinish][source]\u00b6\nGiven input, decided what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.LLMSingleActionAgent.html"} {"id": "7483f0d652af-1", "text": "Parameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nreturn_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) \u2192 AgentFinish\u00b6\nReturn response when agent has been 
stopped due to max iterations.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the agent.\nParameters\nfile_path \u2013 Path to file to save the agent to.\nExample:\n.. code-block:: python\n# If working with agent executor\nagent.agent.save(file_path=\"path/agent.yaml\")\ntool_run_logging_kwargs() \u2192 Dict[source]\u00b6\nproperty return_values: List[str]\u00b6\nReturn values of the agent.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.LLMSingleActionAgent.html"} {"id": "9d80eb51867e-0", "text": "langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit\u00b6\nclass langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit(*, powerbi: PowerBIDataset, llm: Union[BaseLanguageModel, BaseChatModel], examples: Optional[str] = None, max_iterations: int = 5, callback_manager: Optional[BaseCallbackManager] = None, output_token_limit: Optional[int] = None, tiktoken_model_name: Optional[str] = None)[source]\u00b6\nBases: BaseToolkit\nToolkit for interacting with PowerBI dataset.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None\u00b6\nparam examples: Optional[str] = None\u00b6\nparam llm: Union[langchain.schema.language_model.BaseLanguageModel, langchain.chat_models.base.BaseChatModel] [Required]\u00b6\nparam max_iterations: int = 5\u00b6\nparam output_token_limit: Optional[int] = None\u00b6\nparam powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]\u00b6\nparam tiktoken_model_name: Optional[str] = None\u00b6\nget_tools() \u2192 List[BaseTool][source]\u00b6\nGet the tools in the toolkit.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit.html"} {"id": "80c3b4894d1f-0", "text": "langchain.agents.conversational_chat.base.ConversationalChatAgent\u00b6\nclass langchain.agents.conversational_chat.base.ConversationalChatAgent(*, llm_chain: LLMChain, output_parser: AgentOutputParser = None, allowed_tools: Optional[List[str]] = None, template_tool_response: str = \"TOOL RESPONSE: \\n---------------------\\n{observation}\\n\\nUSER'S INPUT\\n--------------------\\n\\nOkay, so what is the response to my last comment? 
If using information obtained from the tools you must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else.\"\u00b6\nasync aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[AgentAction, AgentFinish]\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.conversational_chat.base.ConversationalChatAgent.html"} {"id": "80c3b4894d1f-1", "text": "Given input, decided what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.conversational_chat.base.ConversationalChatAgent.html"} {"id": "80c3b4894d1f-2", "text": "**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nclassmethod create_prompt(tools: Sequence[BaseTool], system_message: str = 'Assistant is a large language model trained by OpenAI.\\n\\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\\n\\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\\n\\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.', human_message: str = \"TOOLS\\n------\\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. 
The tools the human can use are:\\n\\n{{tools}}\\n\\n{format_instructions}\\n\\nUSER'S INPUT\\n--------------------\\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\\n\\n{{{{input}}}}\", input_variables: Optional[List[str]] = None, output_parser: Optional[BaseOutputParser] = None) \u2192 BasePromptTemplate[source]\u00b6\nCreate a prompt for this class.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.conversational_chat.base.ConversationalChatAgent.html"} {"id": "80c3b4894d1f-3", "text": "Create a prompt for this class.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of agent.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.conversational_chat.base.ConversationalChatAgent.html"} {"id": "80c3b4894d1f-4", "text": "classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, system_message: str = 'Assistant is a large language model trained by OpenAI.\\n\\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\\n\\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\\n\\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.', human_message: str = \"TOOLS\\n------\\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. 
The tools the human can use are:\\n\\n{{tools}}\\n\\n{format_instructions}\\n\\nUSER'S INPUT\\n--------------------\\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\\n\\n{{{{input}}}}\", input_variables: Optional[List[str]] = None, **kwargs: Any) \u2192 Agent[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.conversational_chat.base.ConversationalChatAgent.html"} {"id": "80c3b4894d1f-5", "text": "Construct an agent from an LLM and tools.\nget_allowed_tools() \u2192 Optional[List[str]]\u00b6\nget_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) \u2192 Dict[str, Any]\u00b6\nCreate the full inputs for the LLMChain from intermediate steps.\nplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[AgentAction, AgentFinish]\u00b6\nGiven input, decided what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nreturn_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) \u2192 AgentFinish\u00b6\nReturn response when agent has been stopped due to max iterations.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the agent.\nParameters\nfile_path \u2013 Path to file to save the agent to.\nExample:\n.. code-block:: python\n# If working with agent executor\nagent.agent.save(file_path=\u201dpath/agent.yaml\u201d)\ntool_run_logging_kwargs() \u2192 Dict\u00b6\nvalidator validate_prompt\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate that prompt matches format.\nproperty llm_prefix: str\u00b6\nPrefix to append the llm call with.\nproperty observation_prefix: str\u00b6\nPrefix to append the observation with.\nproperty return_values: List[str]\u00b6\nReturn values of the agent.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.conversational_chat.base.ConversationalChatAgent.html"} {"id": "6f7c99a7163c-0", "text": "langchain.agents.agent_toolkits.json.toolkit.JsonToolkit\u00b6\nclass langchain.agents.agent_toolkits.json.toolkit.JsonToolkit(*, spec: JsonSpec)[source]\u00b6\nBases: BaseToolkit\nToolkit for interacting with a JSON spec.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam spec: langchain.tools.json.tool.JsonSpec [Required]\u00b6\nget_tools() \u2192 List[BaseTool][source]\u00b6\nGet the tools in the toolkit.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.json.toolkit.JsonToolkit.html"} {"id": "e3d40ea66860-0", "text": "langchain.agents.agent_toolkits.playwright.toolkit.PlayWrightBrowserToolkit\u00b6\nclass langchain.agents.agent_toolkits.playwright.toolkit.PlayWrightBrowserToolkit(*, sync_browser: Optional['SyncBrowser'] = None, async_browser: Optional['AsyncBrowser'] = None)[source]\u00b6\nBases: BaseToolkit\nToolkit for web browser tools.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam async_browser: Optional['AsyncBrowser'] = None\u00b6\nparam sync_browser: Optional['SyncBrowser'] = 
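ConversationalChatAgent is most often reached through initialize_agent with the CHAT_CONVERSATIONAL_REACT_DESCRIPTION agent type rather than constructed directly. A sketch, assuming an OpenAI API key is configured:

.. code-block:: python

    from langchain.agents import AgentType, initialize_agent, load_tools
    from langchain.chat_models import ChatOpenAI
    from langchain.memory import ConversationBufferMemory

    llm = ChatOpenAI(temperature=0)
    # This agent type expects message-shaped memory under the key "chat_history".
    memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
    agent = initialize_agent(
        load_tools(["llm-math"], llm=llm),
        llm,
        agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
        memory=memory,
        verbose=True,
    )
    agent.run("What is 7 raised to the 0.5 power?")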
None\u00b6\nclassmethod from_browser(sync_browser: Optional[SyncBrowser] = None, async_browser: Optional[AsyncBrowser] = None) \u2192 PlayWrightBrowserToolkit[source]\u00b6\nInstantiate the toolkit.\nget_tools() \u2192 List[BaseTool][source]\u00b6\nGet the tools in the toolkit.\nvalidator validate_imports_and_browser_provided\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nCheck that the arguments are valid.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.playwright.toolkit.PlayWrightBrowserToolkit.html"} {"id": "37c18fc01e22-0", "text": "langchain.agents.schema.AgentScratchPadChatPromptTemplate\u00b6\nclass langchain.agents.schema.AgentScratchPadChatPromptTemplate(*, input_variables: List[str], output_parser: Optional[BaseOutputParser] = None, partial_variables: Mapping[str, Union[str, Callable[[], str]]] = None, messages: List[Union[BaseMessagePromptTemplate, BaseMessage]])[source]\u00b6\nBases: ChatPromptTemplate\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam input_variables: List[str] [Required]\u00b6\nA list of the names of the variables the prompt template expects.\nparam messages: List[Union[BaseMessagePromptTemplate, BaseMessage]] [Required]\u00b6\nparam output_parser: Optional[BaseOutputParser] = None\u00b6\nHow to parse the output of calling an LLM on this formatted prompt.\nparam partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of prompt.\nformat(**kwargs: Any) \u2192 str\u00b6\nFormat the prompt with the inputs.\nParameters\nkwargs \u2013 Any arguments to be passed to the prompt template.\nReturns\nA formatted string.\nExample:\nprompt.format(variable1=\"foo\")\nformat_messages(**kwargs: Any) \u2192 List[BaseMessage]\u00b6\nFormat kwargs into a list of messages.\nformat_prompt(**kwargs: Any) \u2192 PromptValue\u00b6\nCreate Chat Messages.\nclassmethod from_messages(messages: Sequence[Union[BaseMessagePromptTemplate, BaseMessage]]) \u2192 ChatPromptTemplate\u00b6\nclassmethod from_role_strings(string_messages: List[Tuple[str, str]]) \u2192 ChatPromptTemplate\u00b6\nclassmethod from_strings(string_messages: List[Tuple[Type[BaseMessagePromptTemplate], str]]) \u2192 ChatPromptTemplate\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.schema.AgentScratchPadChatPromptTemplate.html"} {"id": "37c18fc01e22-1", "text": "classmethod from_template(template: str, **kwargs: Any) \u2192 ChatPromptTemplate\u00b6\npartial(**kwargs: Union[str, Callable[[], str]]) \u2192 BasePromptTemplate\u00b6\nReturn a partial of the prompt template.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the prompt.\nParameters\nfile_path \u2013 Path to directory to save prompt to.\nExample:\n.. 
code-block:: python\nprompt.save(file_path=\u201dpath/prompt.yaml\u201d)\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_input_variables\u00a0 \u00bb\u00a0 all fields\u00b6\nvalidator validate_variable_names\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate variable names do not include restricted names.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.schema.AgentScratchPadChatPromptTemplate.html"} {"id": "2f0355a10fdb-0", "text": "langchain.agents.agent_toolkits.powerbi.base.create_pbi_agent\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.powerbi.base.create_pbi_agent.html"} {"id": "2f0355a10fdb-1", "text": "langchain.agents.agent_toolkits.powerbi.base.create_pbi_agent(llm: BaseLanguageModel, toolkit: Optional[PowerBIToolkit] = None, powerbi: Optional[PowerBIDataset] = None, callback_manager: Optional[BaseCallbackManager] = None, prefix: str = 'You are an agent designed to help users interact with a PowerBI Dataset.\\n\\nAgent has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, return \"This does not appear to be part of this dataset.\" as the answer.\\n\\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readable format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\\n', suffix: str = 'Begin!\\n\\nQuestion: {input}\\nThought: I can first ask which tables I have, then how each table is defined and then ask the query tool the question I need, and finally create a nice sentence that answers the question.\\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n...", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.powerbi.base.create_pbi_agent.html"} {"id": "2f0355a10fdb-2", "text": "Input: the input to the action\\nObservation: the result of the action\\n... 
(this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', examples: Optional[str] = None, input_variables: Optional[List[str]] = None, top_k: int = 10, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 AgentExecutor[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.powerbi.base.create_pbi_agent.html"} {"id": "2f0355a10fdb-3", "text": "Construct a pbi agent from an LLM and tools.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.powerbi.base.create_pbi_agent.html"} {"id": "b37c24e85f19-0", "text": "langchain.agents.agent_toolkits.nla.tool.NLATool\u00b6\nclass langchain.agents.agent_toolkits.nla.tool.NLATool(name: str, func: Callable, description: str, *, args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, coroutine: Optional[Callable[[...], Awaitable[str]]] = None)[source]\u00b6\nBases: Tool\nNatural Language API Tool.\nInitialize tool.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam coroutine: Optional[Callable[..., Awaitable[str]]] = None\u00b6\nThe asynchronous version of the function.\nparam description: str = ''\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam func: Callable[..., str] [Required]\u00b6\nThe function to run when the tool is called.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.nla.tool.NLATool.html"} {"id": "b37c24e85f19-1", "text": "Optional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str [Required]\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
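A sketch of standing up the PowerBI agent via create_pbi_agent. The dataset id and table name are placeholders, and the PowerBIDataset field names and azure-identity authentication shown here are assumptions to verify against your installed version:

.. code-block:: python

    from azure.identity import DefaultAzureCredential
    from langchain.agents.agent_toolkits import create_pbi_agent
    from langchain.chat_models import ChatOpenAI
    from langchain.utilities.powerbi import PowerBIDataset

    # Placeholder dataset id and table name; credentials come from azure-identity.
    dataset = PowerBIDataset(
        dataset_id="<dataset-guid>",
        table_names=["Sales"],
        credential=DefaultAzureCredential(),
    )
    agent = create_pbi_agent(llm=ChatOpenAI(temperature=0), powerbi=dataset, verbose=True)
    agent.run("How many rows are in the Sales table?")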
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nclassmethod from_function(func: Callable, name: str, description: str, return_direct: bool = False, args_schema: Optional[Type[BaseModel]] = None, **kwargs: Any) \u2192 Tool\u00b6\nInitialize tool from a function.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.nla.tool.NLATool.html"} {"id": "b37c24e85f19-2", "text": "Initialize tool from a function.\nclassmethod from_llm_and_method(llm: BaseLanguageModel, path: str, method: str, spec: OpenAPISpec, requests: Optional[Requests] = None, verbose: bool = False, return_intermediate_steps: bool = False, **kwargs: Any) \u2192 NLATool[source]\u00b6\nInstantiate the tool from the specified path and method.\nclassmethod from_open_api_endpoint_chain(chain: OpenAPIEndpointChain, api_title: str) \u2192 NLATool[source]\u00b6\nConvert an endpoint chain to an API endpoint tool.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nThe tool\u2019s input arguments.\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.nla.tool.NLATool.html"} {"id": "16a2e7b94ed6-0", "text": "langchain.agents.react.base.ReActDocstoreAgent\u00b6\nclass langchain.agents.react.base.ReActDocstoreAgent(*, llm_chain: LLMChain, output_parser: AgentOutputParser = None, allowed_tools: Optional[List[str]] = None)[source]\u00b6\nBases: Agent\nAgent for the ReAct chain.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam allowed_tools: Optional[List[str]] = None\u00b6\nparam llm_chain: LLMChain [Required]\u00b6\nparam output_parser: langchain.agents.agent.AgentOutputParser [Optional]\u00b6\nasync aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[AgentAction, AgentFinish]\u00b6\nGiven input, decided what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with 
observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nclassmethod create_prompt(tools: Sequence[BaseTool]) \u2192 BasePromptTemplate[source]\u00b6\nReturn default prompt.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of agent.\nclassmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, **kwargs: Any) \u2192 Agent\u00b6\nConstruct an agent from an LLM and tools.\nget_allowed_tools() \u2192 Optional[List[str]]\u00b6\nget_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) \u2192 Dict[str, Any]\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.react.base.ReActDocstoreAgent.html"} {"id": "16a2e7b94ed6-1", "text": "Create the full inputs for the LLMChain from intermediate steps.\nplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[AgentAction, AgentFinish]\u00b6\nGiven input, decide what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nreturn_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) \u2192 AgentFinish\u00b6\nReturn response when agent has been stopped due to max iterations.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the agent.\nParameters\nfile_path \u2013 Path to file to save the agent to.\nExample:\n.. code-block:: python\n# If working with agent executor\nagent.agent.save(file_path=\"path/agent.yaml\")\ntool_run_logging_kwargs() \u2192 Dict\u00b6\nvalidator validate_prompt\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate that prompt matches format.\nproperty llm_prefix: str\u00b6\nPrefix to append the LLM call with.\nproperty observation_prefix: str\u00b6\nPrefix to append the observation with.\nproperty return_values: List[str]\u00b6\nReturn values of the agent.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.react.base.ReActDocstoreAgent.html"} {"id": "5e7ba8ddd7c7-0", "text": "langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit\u00b6\nclass langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit(*, nla_tools: Sequence[NLATool])[source]\u00b6\nBases: BaseToolkit\nNatural Language API Toolkit Definition.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam nla_tools: Sequence[langchain.agents.agent_toolkits.nla.tool.NLATool] [Required]\u00b6\nList of API Endpoint Tools.\nclassmethod from_llm_and_ai_plugin(llm: BaseLanguageModel, ai_plugin: AIPlugin, requests: Optional[Requests] = None, verbose: bool = False, **kwargs: Any) \u2192 NLAToolkit[source]\u00b6\nInstantiate the toolkit from an AI plugin.\nclassmethod from_llm_and_ai_plugin_url(llm: BaseLanguageModel, ai_plugin_url: str, requests: Optional[Requests] = None, verbose: bool = False, **kwargs: Any) \u2192 NLAToolkit[source]\u00b6\nInstantiate the toolkit from an AI plugin URL.\nclassmethod from_llm_and_spec(llm: BaseLanguageModel, spec: OpenAPISpec, requests: Optional[Requests] = None, verbose: bool = False, **kwargs: Any)
\u2192 NLAToolkit[source]\u00b6\nInstantiate the toolkit by creating tools for each operation.\nclassmethod from_llm_and_url(llm: BaseLanguageModel, open_api_url: str, requests: Optional[Requests] = None, verbose: bool = False, **kwargs: Any) \u2192 NLAToolkit[source]\u00b6\nInstantiate the toolkit from an OpenAPI Spec URL\nget_tools() \u2192 List[BaseTool][source]\u00b6\nGet the tools for all the API operations.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit.html"} {"id": "2bee225f9652-0", "text": "langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit\u00b6\nclass langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit(*, db: SQLDatabase, llm: BaseLanguageModel)[source]\u00b6\nBases: BaseToolkit\nToolkit for interacting with SQL databases.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam db: langchain.sql_database.SQLDatabase [Required]\u00b6\nparam llm: langchain.schema.language_model.BaseLanguageModel [Required]\u00b6\nget_tools() \u2192 List[BaseTool][source]\u00b6\nGet the tools in the toolkit.\nproperty dialect: str\u00b6\nReturn string representation of dialect to use.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit.html"} {"id": "7e8a39339c77-0", "text": "langchain.agents.agent_toolkits.openapi.planner.RequestsPostToolWithParsing\u00b6\nclass langchain.agents.agent_toolkits.openapi.planner.RequestsPostToolWithParsing(*, name: str = 'requests_post', description: str = 'Use this when you want to POST to a website.\\nInput to the tool should be a json string with 3 keys: \"url\", \"data\", and \"output_instructions\".\\nThe value of \"url\" should be a string.\\nThe value of \"data\" should be a dictionary of key-value pairs you want to POST to the url.\\nThe value of \"output_instructions\" should be instructions on what information to extract from the response, for example the id(s) for a resource(s) that the POST request creates.\\nAlways use double quotes for strings in the json string.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, requests_wrapper: TextRequestsWrapper, response_length: Optional[int] = 5000, llm_chain: LLMChain = None)[source]\u00b6\nBases: BaseRequestsTool, BaseTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.planner.RequestsPostToolWithParsing.html"} {"id": "7e8a39339c77-1", "text": "Deprecated. 
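A sketch of building an NLAToolkit, which creates one NLATool per operation found in the spec. The spec URL is illustrative and an OpenAI API key is assumed:

.. code-block:: python

    from langchain.agents.agent_toolkits import NLAToolkit
    from langchain.llms import OpenAI

    llm = OpenAI(temperature=0)
    # Builds one NLATool per operation in the OpenAPI spec (URL is illustrative).
    toolkit = NLAToolkit.from_llm_and_url(llm, "https://api.speak.com/openapi.yaml")
    tools = toolkit.get_tools()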
Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Use this when you want to POST to a website.\\nInput to the tool should be a json string with 3 keys: \"url\", \"data\", and \"output_instructions\".\\nThe value of \"url\" should be a string.\\nThe value of \"data\" should be a dictionary of key-value pairs you want to POST to the url.\\nThe value of \"output_instructions\" should be instructions on what information to extract from the response, for example the id(s) for a resource(s) that the POST request creates.\\nAlways use double quotes for strings in the json string.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam llm_chain: langchain.chains.llm.LLMChain [Optional]\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'requests_post'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam requests_wrapper: TextRequestsWrapper [Required]\u00b6\nparam response_length: Optional[int] = 5000\u00b6\nparam return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.planner.RequestsPostToolWithParsing.html"} {"id": "7e8a39339c77-2", "text": "that after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
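Returning to SQLDatabaseToolkit above: a sketch pairing it with create_sql_agent, assuming a local SQLite database at the hypothetical path example.db and an OpenAI API key:

.. code-block:: python

    from langchain.agents import create_sql_agent
    from langchain.agents.agent_toolkits import SQLDatabaseToolkit
    from langchain.llms import OpenAI
    from langchain.sql_database import SQLDatabase

    db = SQLDatabase.from_uri("sqlite:///example.db")  # any SQLAlchemy URI works
    llm = OpenAI(temperature=0)
    toolkit = SQLDatabaseToolkit(db=db, llm=llm)
    agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
    agent.run("How many tables are there?")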
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.planner.RequestsPostToolWithParsing.html"} {"id": "78557b85d4fd-0", "text": "langchain.agents.self_ask_with_search.base.SelfAskWithSearchAgent\u00b6\nclass langchain.agents.self_ask_with_search.base.SelfAskWithSearchAgent(*, llm_chain: LLMChain, output_parser: AgentOutputParser = None, allowed_tools: Optional[List[str]] = None)[source]\u00b6\nBases: Agent\nAgent for the self-ask-with-search paper.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam allowed_tools: Optional[List[str]] = None\u00b6\nparam llm_chain: LLMChain [Required]\u00b6\nparam output_parser: langchain.agents.agent.AgentOutputParser [Optional]\u00b6\nasync aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[AgentAction, AgentFinish]\u00b6\nGiven input, decided what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nclassmethod create_prompt(tools: Sequence[BaseTool]) \u2192 BasePromptTemplate[source]\u00b6\nPrompt does not depend on tools.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of agent.\nclassmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, **kwargs: Any) \u2192 Agent\u00b6\nConstruct an agent from an LLM and tools.\nget_allowed_tools() \u2192 Optional[List[str]]\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.self_ask_with_search.base.SelfAskWithSearchAgent.html"} {"id": "78557b85d4fd-1", "text": "get_allowed_tools() 
\u2192 Optional[List[str]]\u00b6\nget_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) \u2192 Dict[str, Any]\u00b6\nCreate the full inputs for the LLMChain from intermediate steps.\nplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[AgentAction, AgentFinish]\u00b6\nGiven input, decided what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nreturn_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) \u2192 AgentFinish\u00b6\nReturn response when agent has been stopped due to max iterations.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the agent.\nParameters\nfile_path \u2013 Path to file to save the agent to.\nExample:\n.. code-block:: python\n# If working with agent executor\nagent.agent.save(file_path=\u201dpath/agent.yaml\u201d)\ntool_run_logging_kwargs() \u2192 Dict\u00b6\nvalidator validate_prompt\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate that prompt matches format.\nproperty llm_prefix: str\u00b6\nPrefix to append the LLM call with.\nproperty observation_prefix: str\u00b6\nPrefix to append the observation with.\nproperty return_values: List[str]\u00b6\nReturn values of the agent.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.self_ask_with_search.base.SelfAskWithSearchAgent.html"} {"id": "0aeaf9e57a6b-0", "text": "langchain.agents.agent_toolkits.pandas.base.create_pandas_dataframe_agent\u00b6\nlangchain.agents.agent_toolkits.pandas.base.create_pandas_dataframe_agent(llm: BaseLanguageModel, df: Any, agent_type: AgentType = AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager: Optional[BaseCallbackManager] = None, prefix: Optional[str] = None, suffix: Optional[str] = None, input_variables: Optional[List[str]] = None, verbose: bool = False, return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', agent_executor_kwargs: Optional[Dict[str, Any]] = None, include_df_in_prompt: Optional[bool] = True, number_of_head_rows: int = 5, **kwargs: Dict[str, Any]) \u2192 AgentExecutor[source]\u00b6\nConstruct a pandas agent from an LLM and dataframe.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.pandas.base.create_pandas_dataframe_agent.html"} {"id": "16092e37d3f3-0", "text": "langchain.agents.conversational.base.ConversationalAgent\u00b6\nclass langchain.agents.conversational.base.ConversationalAgent(*, llm_chain: LLMChain, output_parser: AgentOutputParser = None, allowed_tools: Optional[List[str]] = None, ai_prefix: str = 'AI')[source]\u00b6\nBases: Agent\nAn agent designed to hold a conversation in addition to using tools.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam ai_prefix: str = 'AI'\u00b6\nparam allowed_tools: Optional[List[str]] = None\u00b6\nparam llm_chain: langchain.chains.llm.LLMChain [Required]\u00b6\nparam output_parser: langchain.agents.agent.AgentOutputParser [Optional]\u00b6\nasync aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], 
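A sketch of create_pandas_dataframe_agent on a toy dataframe (the data is purely illustrative), assuming an OpenAI API key is configured:

.. code-block:: python

    import pandas as pd
    from langchain.agents import create_pandas_dataframe_agent
    from langchain.llms import OpenAI

    # Toy dataframe purely for illustration.
    df = pd.DataFrame({"city": ["Paris", "Oslo"], "population": [2_100_000, 700_000]})
    agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
    agent.run("Which city has the larger population?")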
BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[AgentAction, AgentFinish]\u00b6\nGiven input, decided what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.conversational.base.ConversationalAgent.html"} {"id": "16092e37d3f3-1", "text": "classmethod create_prompt(tools: Sequence[BaseTool], prefix: str = 'Assistant is a large language model trained by OpenAI.\\n\\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\\n\\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\\n\\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\\n\\nTOOLS:\\n------\\n\\nAssistant has access to the following tools:', suffix: str = 'Begin!\\n\\nPrevious conversation history:\\n{chat_history}\\n\\nNew input: {input}\\n{agent_scratchpad}', format_instructions: str = 'To use a tool, please use the following format:\\n\\n```\\nThought: Do I need to use a tool? Yes\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n```\\n\\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.conversational.base.ConversationalAgent.html"} {"id": "16092e37d3f3-2", "text": "say to the Human, or if you do not need to use a tool, you MUST use the format:\\n\\n```\\nThought: Do I need to use a tool? 
No\\n{ai_prefix}: [your response here]\\n```', ai_prefix: str = 'AI', human_prefix: str = 'Human', input_variables: Optional[List[str]] = None) \u2192 PromptTemplate[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.conversational.base.ConversationalAgent.html"} {"id": "16092e37d3f3-3", "text": "Create prompt in the style of the zero shot agent.\nParameters\ntools \u2013 List of tools the agent will have access to, used to format the\nprompt.\nprefix \u2013 String to put before the list of tools.\nsuffix \u2013 String to put after the list of tools.\nai_prefix \u2013 String to use before AI output.\nhuman_prefix \u2013 String to use before human output.\ninput_variables \u2013 List of input variables the final prompt will expect.\nReturns\nA PromptTemplate with the template assembled from the pieces here.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of agent.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.conversational.base.ConversationalAgent.html"} {"id": "16092e37d3f3-4", "text": "classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, prefix: str = 'Assistant is a large language model trained by OpenAI.\\n\\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\\n\\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\\n\\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\\n\\nTOOLS:\\n------\\n\\nAssistant has access to the following tools:', suffix: str = 'Begin!\\n\\nPrevious conversation history:\\n{chat_history}\\n\\nNew input: {input}\\n{agent_scratchpad}', format_instructions: str = 'To use a tool, please use the following format:\\n\\n```\\nThought: Do I need to use a tool? Yes\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.conversational.base.ConversationalAgent.html"} {"id": "16092e37d3f3-5", "text": "Input: the input to the action\\nObservation: the result of the action\\n```\\n\\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:\\n\\n```\\nThought: Do I need to use a tool? 
No\n{ai_prefix}: [your response here]\n```', ai_prefix: str = 'AI', human_prefix: str = 'Human', input_variables: Optional[List[str]] = None, **kwargs: Any) \u2192 Agent[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.conversational.base.ConversationalAgent.html"} {"id": "16092e37d3f3-6", "text": "Construct an agent from an LLM and tools.\nget_allowed_tools() \u2192 Optional[List[str]]\u00b6\nget_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) \u2192 Dict[str, Any]\u00b6\nCreate the full inputs for the LLMChain from intermediate steps.\nplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[AgentAction, AgentFinish]\u00b6\nGiven input, decide what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nreturn_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) \u2192 AgentFinish\u00b6\nReturn response when agent has been stopped due to max iterations.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the agent.\nParameters\nfile_path \u2013 Path to file to save the agent to.\nExample:\n.. code-block:: python\n# If working with agent executor\nagent.agent.save(file_path=\"path/agent.yaml\")\ntool_run_logging_kwargs() \u2192 Dict\u00b6\nvalidator validate_prompt\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate that prompt matches format.\nproperty llm_prefix: str\u00b6\nPrefix to append the LLM call with.\nproperty observation_prefix: str\u00b6\nPrefix to append the observation with.\nproperty return_values: List[str]\u00b6\nReturn values of the agent.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.conversational.base.ConversationalAgent.html"} {"id": "2be9d2d80788-0", "text": "langchain.agents.chat.output_parser.ChatOutputParser\u00b6\nclass langchain.agents.chat.output_parser.ChatOutputParser[source]\u00b6\nBases: AgentOutputParser\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str[source]\u00b6\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 Union[AgentAction, AgentFinish][source]\u00b6\nParse text into agent action/finish.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed.
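ConversationalAgent is typically constructed through initialize_agent with the CONVERSATIONAL_REACT_DESCRIPTION agent type and a memory that supplies the {chat_history} variable. A sketch, assuming an OpenAI API key is configured:

.. code-block:: python

    from langchain.agents import AgentType, initialize_agent, load_tools
    from langchain.llms import OpenAI
    from langchain.memory import ConversationBufferMemory

    llm = OpenAI(temperature=0)
    agent = initialize_agent(
        load_tools(["llm-math"], llm=llm),
        llm,
        agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
        # Supplies the {chat_history} variable used in the default prompt suffix.
        memory=ConversationBufferMemory(memory_key="chat_history"),
        verbose=True,
    )
    agent.run("Hi, I'm Bob. What is 4 * 7?")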
The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.chat.output_parser.ChatOutputParser.html"} {"id": "2be9d2d80788-1", "text": "property lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\ne.g. [\"langchain\", \"llms\", \"openai\"]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\ne.g. {\"openai_api_key\": \"OPENAI_API_KEY\"}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.chat.output_parser.ChatOutputParser.html"} {"id": "5409d5208804-0", "text": "langchain.agents.load_tools.load_tools\u00b6\nlangchain.agents.load_tools.load_tools(tool_names: List[str], llm: Optional[BaseLanguageModel] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 List[BaseTool][source]\u00b6\nLoad tools based on their name.\nParameters\ntool_names \u2013 names of the tools to load.\nllm \u2013 Optional language model, may be needed to initialize certain tools.\ncallbacks \u2013 Optional callback manager or list of callback handlers.\nIf not provided, the default global callback manager will be used.\nReturns\nList of tools.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.load_tools.load_tools.html"} {"id": "83512d141b07-0", "text": "langchain.agents.agent_toolkits.file_management.toolkit.FileManagementToolkit\u00b6\nclass langchain.agents.agent_toolkits.file_management.toolkit.FileManagementToolkit(*, root_dir: Optional[str] = None, selected_tools: Optional[List[str]] = None)[source]\u00b6\nBases: BaseToolkit\nToolkit for interacting with local files.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam root_dir: Optional[str] = None\u00b6\nIf specified, all file operations are made relative to root_dir.\nparam selected_tools: Optional[List[str]] = None\u00b6\nIf provided, only the selected tools are exposed.
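A sketch combining load_tools with FileManagementToolkit; the root directory and tool selection are illustrative, and an OpenAI API key is assumed:

.. code-block:: python

    from langchain.agents import load_tools
    from langchain.agents.agent_toolkits import FileManagementToolkit
    from langchain.llms import OpenAI

    llm = OpenAI(temperature=0)
    tools = load_tools(["llm-math", "requests_all"], llm=llm)

    # Scope file operations to a scratch directory and a subset of tools.
    tools += FileManagementToolkit(
        root_dir="/tmp/agent_scratch",
        selected_tools=["read_file", "write_file", "list_directory"],
    ).get_tools()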
Defaults to all.\nget_tools() \u2192 List[BaseTool][source]\u00b6\nGet the tools in the toolkit.\nvalidator validate_tools\u00a0 \u00bb\u00a0 all fields[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.file_management.toolkit.FileManagementToolkit.html"} {"id": "eadb62fb6a84-0", "text": "langchain.agents.agent_toolkits.spark_sql.base.create_spark_sql_agent\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.spark_sql.base.create_spark_sql_agent.html"} {"id": "eadb62fb6a84-1", "text": "langchain.agents.agent_toolkits.spark_sql.base.create_spark_sql_agent(llm: BaseLanguageModel, toolkit: SparkSQLToolkit, callback_manager: Optional[BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with Spark SQL.\\nGiven an input question, create a syntactically correct Spark SQL query to run, then look at the results of the query and return the answer.\\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\\nYou can order the results by a relevant column to return the most interesting examples in the database.\\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\\nYou have access to tools for interacting with the database.\\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\\n\\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\\n\\nIf the question does not seem related to the database, just return \"I don\\'t know\" as the answer.\\n', suffix: str = 'Begin!\\n\\nQuestion: {input}\\nThought: I should look at the tables in the database to see what I can query.\\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... 
(this Thought/Action/Action Input/Observation can repeat N times)\nThought: I", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.spark_sql.base.create_spark_sql_agent.html"} {"id": "eadb62fb6a84-2", "text": "(this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 AgentExecutor[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.spark_sql.base.create_spark_sql_agent.html"} {"id": "eadb62fb6a84-3", "text": "Construct a Spark SQL agent from an LLM and tools.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.spark_sql.base.create_spark_sql_agent.html"} {"id": "1f2bdb2d0b31-0", "text": "langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo\u00b6\nclass langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo(*, vectorstore: VectorStore, name: str, description: str)[source]\u00b6\nBases: BaseModel\nInformation about a vectorstore.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam description: str [Required]\u00b6\nparam name: str [Required]\u00b6\nparam vectorstore: langchain.vectorstores.base.VectorStore [Required]\u00b6\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo.html"} {"id": "d99fbed6906a-0", "text": "langchain.agents.conversational_chat.output_parser.ConvoOutputParser\u00b6\nclass langchain.agents.conversational_chat.output_parser.ConvoOutputParser[source]\u00b6\nBases: AgentOutputParser\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str[source]\u00b6\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 Union[AgentAction, AgentFinish][source]\u00b6\nParse text into agent action/finish.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed.
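A sketch of the Spark SQL agent pieces above (SparkSQLToolkit plus create_spark_sql_agent), assuming an active Spark session, an existing schema name (here langchain_example, which is illustrative), and an OpenAI API key:

.. code-block:: python

    from langchain.agents import create_spark_sql_agent
    from langchain.agents.agent_toolkits import SparkSQLToolkit
    from langchain.chat_models import ChatOpenAI
    from langchain.utilities.spark_sql import SparkSQL

    spark_sql = SparkSQL(schema="langchain_example")  # uses the active Spark session
    llm = ChatOpenAI(temperature=0)
    toolkit = SparkSQLToolkit(db=spark_sql, llm=llm)
    agent = create_spark_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
    agent.run("Describe the tables in the schema.")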
The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.conversational_chat.output_parser.ConvoOutputParser.html"} {"id": "d99fbed6906a-1", "text": "property lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.conversational_chat.output_parser.ConvoOutputParser.html"} {"id": "74baee74d2c2-0", "text": "langchain.agents.agent_types.AgentType\u00b6\nclass langchain.agents.agent_types.AgentType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]\u00b6\nBases: str, Enum\nEnumerator with the Agent types.\nMethods\n__init__(*args,\u00a0**kwds)\ncapitalize()\nReturn a capitalized version of the string.\ncasefold()\nReturn a version of the string suitable for caseless comparisons.\ncenter(width[,\u00a0fillchar])\nReturn a centered string of length width.\ncount(sub[,\u00a0start[,\u00a0end]])\nReturn the number of non-overlapping occurrences of substring sub in string S[start:end].\nencode([encoding,\u00a0errors])\nEncode the string using the codec registered for encoding.\nendswith(suffix[,\u00a0start[,\u00a0end]])\nReturn True if S ends with the specified suffix, False otherwise.\nexpandtabs([tabsize])\nReturn a copy where all tab characters are expanded using spaces.\nfind(sub[,\u00a0start[,\u00a0end]])\nReturn the lowest index in S where substring sub is found, such that sub is contained within S[start:end].\nformat(*args,\u00a0**kwargs)\nReturn a formatted version of S, using substitutions from args and kwargs.\nformat_map(mapping)\nReturn a formatted version of S, using substitutions from mapping.\nindex(sub[,\u00a0start[,\u00a0end]])\nReturn the lowest index in S where substring sub is found, such that sub is contained within S[start:end].\nisalnum()\nReturn True if the string is an alpha-numeric string, False otherwise.\nisalpha()\nReturn True if the string is an alphabetic string, False otherwise.\nisascii()\nReturn True if all characters in the string are ASCII, False otherwise.\nisdecimal()\nReturn True if the string is a decimal string, False otherwise.\nisdigit()", "source": 
"https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_types.AgentType.html"} {"id": "74baee74d2c2-1", "text": "Return True if the string is a decimal string, False otherwise.\nisdigit()\nReturn True if the string is a digit string, False otherwise.\nisidentifier()\nReturn True if the string is a valid Python identifier, False otherwise.\nislower()\nReturn True if the string is a lowercase string, False otherwise.\nisnumeric()\nReturn True if the string is a numeric string, False otherwise.\nisprintable()\nReturn True if the string is printable, False otherwise.\nisspace()\nReturn True if the string is a whitespace string, False otherwise.\nistitle()\nReturn True if the string is a title-cased string, False otherwise.\nisupper()\nReturn True if the string is an uppercase string, False otherwise.\njoin(iterable,\u00a0/)\nConcatenate any number of strings.\nljust(width[,\u00a0fillchar])\nReturn a left-justified string of length width.\nlower()\nReturn a copy of the string converted to lowercase.\nlstrip([chars])\nReturn a copy of the string with leading whitespace removed.\nmaketrans\nReturn a translation table usable for str.translate().\npartition(sep,\u00a0/)\nPartition the string into three parts using the given separator.\nremoveprefix(prefix,\u00a0/)\nReturn a str with the given prefix string removed if present.\nremovesuffix(suffix,\u00a0/)\nReturn a str with the given suffix string removed if present.\nreplace(old,\u00a0new[,\u00a0count])\nReturn a copy with all occurrences of substring old replaced by new.\nrfind(sub[,\u00a0start[,\u00a0end]])\nReturn the highest index in S where substring sub is found, such that sub is contained within S[start:end].\nrindex(sub[,\u00a0start[,\u00a0end]])\nReturn the highest index in S where substring sub is found, such that sub is contained within S[start:end].", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_types.AgentType.html"} {"id": "74baee74d2c2-2", "text": "rjust(width[,\u00a0fillchar])\nReturn a right-justified string of length width.\nrpartition(sep,\u00a0/)\nPartition the string into three parts using the given separator.\nrsplit([sep,\u00a0maxsplit])\nReturn a list of the substrings in the string, using sep as the separator string.\nrstrip([chars])\nReturn a copy of the string with trailing whitespace removed.\nsplit([sep,\u00a0maxsplit])\nReturn a list of the substrings in the string, using sep as the separator string.\nsplitlines([keepends])\nReturn a list of the lines in the string, breaking at line boundaries.\nstartswith(prefix[,\u00a0start[,\u00a0end]])\nReturn True if S starts with the specified prefix, False otherwise.\nstrip([chars])\nReturn a copy of the string with leading and trailing whitespace removed.\nswapcase()\nConvert uppercase characters to lowercase and lowercase characters to uppercase.\ntitle()\nReturn a version of the string where each word is titlecased.\ntranslate(table,\u00a0/)\nReplace each character in the string using the given translation table.\nupper()\nReturn a copy of the string converted to uppercase.\nzfill(width,\u00a0/)\nPad a numeric string with zeros on the left, to fill a field of the given width.\nAttributes\nZERO_SHOT_REACT_DESCRIPTION\nREACT_DOCSTORE\nSELF_ASK_WITH_SEARCH\nCONVERSATIONAL_REACT_DESCRIPTION\nCHAT_ZERO_SHOT_REACT_DESCRIPTION\nCHAT_CONVERSATIONAL_REACT_DESCRIPTION\nSTRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION\nOPENAI_FUNCTIONS\nOPENAI_MULTI_FUNCTIONS\ncapitalize()\u00b6\nReturn a capitalized version of the string.\nMore 
specifically, make the first character have upper case and the rest lower\ncase.\ncasefold()\u00b6\nReturn a version of the string suitable for caseless comparisons.\ncenter(width, fillchar=' ', /)\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_types.AgentType.html"} {"id": "74baee74d2c2-3", "text": "center(width, fillchar=' ', /)\u00b6\nReturn a centered string of length width.\nPadding is done using the specified fill character (default is a space).\ncount(sub[, start[, end]]) \u2192 int\u00b6\nReturn the number of non-overlapping occurrences of substring sub in\nstring S[start:end]. Optional arguments start and end are\ninterpreted as in slice notation.\nencode(encoding='utf-8', errors='strict')\u00b6\nEncode the string using the codec registered for encoding.\nencodingThe encoding in which to encode the string.\nerrorsThe error handling scheme to use for encoding errors.\nThe default is \u2018strict\u2019 meaning that encoding errors raise a\nUnicodeEncodeError. Other possible values are \u2018ignore\u2019, \u2018replace\u2019 and\n\u2018xmlcharrefreplace\u2019 as well as any other name registered with\ncodecs.register_error that can handle UnicodeEncodeErrors.\nendswith(suffix[, start[, end]]) \u2192 bool\u00b6\nReturn True if S ends with the specified suffix, False otherwise.\nWith optional start, test S beginning at that position.\nWith optional end, stop comparing S at that position.\nsuffix can also be a tuple of strings to try.\nexpandtabs(tabsize=8)\u00b6\nReturn a copy where all tab characters are expanded using spaces.\nIf tabsize is not given, a tab size of 8 characters is assumed.\nfind(sub[, start[, end]]) \u2192 int\u00b6\nReturn the lowest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nReturn -1 on failure.\nformat(*args, **kwargs) \u2192 str\u00b6\nReturn a formatted version of S, using substitutions from args and kwargs.\nThe substitutions are identified by braces (\u2018{\u2019 and \u2018}\u2019).\nformat_map(mapping) \u2192 str\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_types.AgentType.html"} {"id": "74baee74d2c2-4", "text": "format_map(mapping) \u2192 str\u00b6\nReturn a formatted version of S, using substitutions from mapping.\nThe substitutions are identified by braces (\u2018{\u2019 and \u2018}\u2019).\nindex(sub[, start[, end]]) \u2192 int\u00b6\nReturn the lowest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. 
Optional\narguments start and end are interpreted as in slice notation.\nRaises ValueError when the substring is not found.\nisalnum()\u00b6\nReturn True if the string is an alpha-numeric string, False otherwise.\nA string is alpha-numeric if all characters in the string are alpha-numeric and\nthere is at least one character in the string.\nisalpha()\u00b6\nReturn True if the string is an alphabetic string, False otherwise.\nA string is alphabetic if all characters in the string are alphabetic and there\nis at least one character in the string.\nisascii()\u00b6\nReturn True if all characters in the string are ASCII, False otherwise.\nASCII characters have code points in the range U+0000-U+007F.\nEmpty string is ASCII too.\nisdecimal()\u00b6\nReturn True if the string is a decimal string, False otherwise.\nA string is a decimal string if all characters in the string are decimal and\nthere is at least one character in the string.\nisdigit()\u00b6\nReturn True if the string is a digit string, False otherwise.\nA string is a digit string if all characters in the string are digits and there\nis at least one character in the string.\nisidentifier()\u00b6\nReturn True if the string is a valid Python identifier, False otherwise.\nCall keyword.iskeyword(s) to test whether string s is a reserved identifier,\nsuch as \u201cdef\u201d or \u201cclass\u201d.\nislower()\u00b6\nReturn True if the string is a lowercase string, False otherwise.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_types.AgentType.html"} {"id": "74baee74d2c2-5", "text": "islower()\u00b6\nReturn True if the string is a lowercase string, False otherwise.\nA string is lowercase if all cased characters in the string are lowercase and\nthere is at least one cased character in the string.\nisnumeric()\u00b6\nReturn True if the string is a numeric string, False otherwise.\nA string is numeric if all characters in the string are numeric and there is at\nleast one character in the string.\nisprintable()\u00b6\nReturn True if the string is printable, False otherwise.\nA string is printable if all of its characters are considered printable in\nrepr() or if it is empty.\nisspace()\u00b6\nReturn True if the string is a whitespace string, False otherwise.\nA string is whitespace if all characters in the string are whitespace and there\nis at least one character in the string.\nistitle()\u00b6\nReturn True if the string is a title-cased string, False otherwise.\nIn a title-cased string, upper- and title-case characters may only\nfollow uncased characters and lowercase characters only cased ones.\nisupper()\u00b6\nReturn True if the string is an uppercase string, False otherwise.\nA string is uppercase if all cased characters in the string are uppercase and\nthere is at least one cased character in the string.\njoin(iterable, /)\u00b6\nConcatenate any number of strings.\nThe string whose method is called is inserted in between each given string.\nThe result is returned as a new string.\nExample: \u2018.\u2019.join([\u2018ab\u2019, \u2018pq\u2019, \u2018rs\u2019]) -> \u2018ab.pq.rs\u2019\nljust(width, fillchar=' ', /)\u00b6\nReturn a left-justified string of length width.\nPadding is done using the specified fill character (default is a space).\nlower()\u00b6\nReturn a copy of the string converted to lowercase.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_types.AgentType.html"} {"id": "74baee74d2c2-6", "text": "lower()\u00b6\nReturn a copy of the string converted to 
lowercase.\nlstrip(chars=None, /)\u00b6\nReturn a copy of the string with leading whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nstatic maketrans()\u00b6\nReturn a translation table usable for str.translate().\nIf there is only one argument, it must be a dictionary mapping Unicode\nordinals (integers) or characters to Unicode ordinals, strings or None.\nCharacter keys will be then converted to ordinals.\nIf there are two arguments, they must be strings of equal length, and\nin the resulting dictionary, each character in x will be mapped to the\ncharacter at the same position in y. If there is a third argument, it\nmust be a string, whose characters will be mapped to None in the result.\npartition(sep, /)\u00b6\nPartition the string into three parts using the given separator.\nThis will search for the separator in the string. If the separator is found,\nreturns a 3-tuple containing the part before the separator, the separator\nitself, and the part after it.\nIf the separator is not found, returns a 3-tuple containing the original string\nand two empty strings.\nremoveprefix(prefix, /)\u00b6\nReturn a str with the given prefix string removed if present.\nIf the string starts with the prefix string, return string[len(prefix):].\nOtherwise, return a copy of the original string.\nremovesuffix(suffix, /)\u00b6\nReturn a str with the given suffix string removed if present.\nIf the string ends with the suffix string and that suffix is not empty,\nreturn string[:-len(suffix)]. Otherwise, return a copy of the original\nstring.\nreplace(old, new, count=- 1, /)\u00b6\nReturn a copy with all occurrences of substring old replaced by new.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_types.AgentType.html"} {"id": "74baee74d2c2-7", "text": "Return a copy with all occurrences of substring old replaced by new.\ncountMaximum number of occurrences to replace.\n-1 (the default value) means replace all occurrences.\nIf the optional argument count is given, only the first count occurrences are\nreplaced.\nrfind(sub[, start[, end]]) \u2192 int\u00b6\nReturn the highest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nReturn -1 on failure.\nrindex(sub[, start[, end]]) \u2192 int\u00b6\nReturn the highest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nRaises ValueError when the substring is not found.\nrjust(width, fillchar=' ', /)\u00b6\nReturn a right-justified string of length width.\nPadding is done using the specified fill character (default is a space).\nrpartition(sep, /)\u00b6\nPartition the string into three parts using the given separator.\nThis will search for the separator in the string, starting at the end. 
If\nthe separator is found, returns a 3-tuple containing the part before the\nseparator, the separator itself, and the part after it.\nIf the separator is not found, returns a 3-tuple containing two empty strings\nand the original string.\nrsplit(sep=None, maxsplit=- 1)\u00b6\nReturn a list of the substrings in the string, using sep as the separator string.\nsepThe separator used to split the string.\nWhen set to None (the default value), will split on any whitespace\ncharacter (including \\n \\r \\t \\f and spaces) and will discard\nempty strings from the result.\nmaxsplitMaximum number of splits (starting from the left).", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_types.AgentType.html"} {"id": "74baee74d2c2-8", "text": "empty strings from the result.\nmaxsplitMaximum number of splits (starting from the left).\n-1 (the default value) means no limit.\nSplitting starts at the end of the string and works to the front.\nrstrip(chars=None, /)\u00b6\nReturn a copy of the string with trailing whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nsplit(sep=None, maxsplit=- 1)\u00b6\nReturn a list of the substrings in the string, using sep as the separator string.\nsepThe separator used to split the string.\nWhen set to None (the default value), will split on any whitespace\ncharacter (including \\n \\r \\t \\f and spaces) and will discard\nempty strings from the result.\nmaxsplitMaximum number of splits (starting from the left).\n-1 (the default value) means no limit.\nNote, str.split() is mainly useful for data that has been intentionally\ndelimited. With natural text that includes punctuation, consider using\nthe regular expression module.\nsplitlines(keepends=False)\u00b6\nReturn a list of the lines in the string, breaking at line boundaries.\nLine breaks are not included in the resulting list unless keepends is given and\ntrue.\nstartswith(prefix[, start[, end]]) \u2192 bool\u00b6\nReturn True if S starts with the specified prefix, False otherwise.\nWith optional start, test S beginning at that position.\nWith optional end, stop comparing S at that position.\nprefix can also be a tuple of strings to try.\nstrip(chars=None, /)\u00b6\nReturn a copy of the string with leading and trailing whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nswapcase()\u00b6\nConvert uppercase characters to lowercase and lowercase characters to uppercase.\ntitle()\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_types.AgentType.html"} {"id": "74baee74d2c2-9", "text": "Convert uppercase characters to lowercase and lowercase characters to uppercase.\ntitle()\u00b6\nReturn a version of the string where each word is titlecased.\nMore specifically, words start with uppercased characters and all remaining\ncased characters have lower case.\ntranslate(table, /)\u00b6\nReplace each character in the string using the given translation table.\ntableTranslation table, which must be a mapping of Unicode ordinals to\nUnicode ordinals, strings, or None.\nThe table must implement lookup/indexing via __getitem__, for instance a\ndictionary or list. If this operation raises LookupError, the character is\nleft untouched. 
Characters mapped to None are deleted.\nupper()\u00b6\nReturn a copy of the string converted to uppercase.\nzfill(width, /)\u00b6\nPad a numeric string with zeros on the left, to fill a field of the given width.\nThe string is never truncated.\nCHAT_CONVERSATIONAL_REACT_DESCRIPTION = 'chat-conversational-react-description'\u00b6\nCHAT_ZERO_SHOT_REACT_DESCRIPTION = 'chat-zero-shot-react-description'\u00b6\nCONVERSATIONAL_REACT_DESCRIPTION = 'conversational-react-description'\u00b6\nOPENAI_FUNCTIONS = 'openai-functions'\u00b6\nOPENAI_MULTI_FUNCTIONS = 'openai-multi-functions'\u00b6\nREACT_DOCSTORE = 'react-docstore'\u00b6\nSELF_ASK_WITH_SEARCH = 'self-ask-with-search'\u00b6\nSTRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION = 'structured-chat-zero-shot-react-description'\u00b6\nZERO_SHOT_REACT_DESCRIPTION = 'zero-shot-react-description'\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_types.AgentType.html"}
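Example: a minimal sketch of selecting one of these enum values when building an agent with initialize_agent (assumes an OpenAI API key is configured; "llm-math" is one of the built-in tool names accepted by load_tools):

.. code-block:: python

    from langchain.agents import AgentType, initialize_agent, load_tools
    from langchain.llms import OpenAI

    llm = OpenAI(temperature=0)
    tools = load_tools(["llm-math"], llm=llm)

    # The agent= argument takes an AgentType member; because AgentType is a
    # str subclass, the plain string 'zero-shot-react-description' also works.
    agent = initialize_agent(
        tools,
        llm,
        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
        verbose=True,
    )
    agent.run("What is 2 raised to the 10th power?")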
{"id": "21471585af4a-0", "text": "langchain.agents.openai_functions_agent.base.OpenAIFunctionsAgent\u00b6\nclass langchain.agents.openai_functions_agent.base.OpenAIFunctionsAgent(*, llm: BaseLanguageModel, tools: Sequence[BaseTool], prompt: BasePromptTemplate)[source]\u00b6\nBases: BaseSingleActionAgent\nAn Agent driven by OpenAI's function-powered API.\nParameters\nllm \u2013 This should be an instance of ChatOpenAI, specifically a model\nthat supports using functions.\ntools \u2013 The tools this agent has access to.\nprompt \u2013 The prompt for this agent, should support agent_scratchpad as one\nof the variables. For an easy way to construct this prompt, use\nOpenAIFunctionsAgent.create_prompt(\u2026)\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam llm: langchain.schema.language_model.BaseLanguageModel [Required]\u00b6\nparam prompt: langchain.schema.prompt_template.BasePromptTemplate [Required]\u00b6\nparam tools: Sequence[langchain.tools.base.BaseTool] [Required]\u00b6\nasync aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[AgentAction, AgentFinish][source]\u00b6\nGiven input, decide what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nclassmethod create_prompt(system_message: Optional[SystemMessage] = SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}), extra_prompt_messages: Optional[List[BaseMessagePromptTemplate]] = None) \u2192 BasePromptTemplate[source]\u00b6\nCreate prompt for this agent.\nParameters", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.openai_functions_agent.base.OpenAIFunctionsAgent.html"} {"id": "21471585af4a-1", "text": "Create prompt for this agent.\nParameters\nsystem_message \u2013 Message to use as the system message that will be the\nfirst in the prompt.\nextra_prompt_messages \u2013 Prompt messages that will be placed between the\nsystem message and the new human input.\nReturns\nA prompt template to pass into this agent.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of agent.\nclassmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, extra_prompt_messages: Optional[List[BaseMessagePromptTemplate]] = None, system_message: Optional[SystemMessage] = SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}), **kwargs: Any) \u2192 BaseSingleActionAgent[source]\u00b6\nConstruct an agent from an LLM and tools.\nget_allowed_tools() \u2192 List[str][source]\u00b6\nGet allowed tools.\nplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[AgentAction, AgentFinish][source]\u00b6\nGiven input, decide what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date, along with observations\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nreturn_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) \u2192 AgentFinish\u00b6\nReturn response when agent has been stopped due to max iterations.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the agent.\nParameters\nfile_path \u2013 Path to file to save the agent to.\nExample:\n.. code-block:: python\n# If working with agent executor\nagent.agent.save(file_path=\"path/agent.yaml\")", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.openai_functions_agent.base.OpenAIFunctionsAgent.html"} {"id": "21471585af4a-2", "text": "# If working with agent executor\nagent.agent.save(file_path=\"path/agent.yaml\")\ntool_run_logging_kwargs() \u2192 Dict\u00b6\nvalidator validate_llm\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nvalidator validate_prompt\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nproperty functions: List[dict]\u00b6\nproperty input_keys: List[str]\u00b6\nGet input keys. Input refers to user input here.\nproperty return_values: List[str]\u00b6\nReturn values of the agent.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.openai_functions_agent.base.OpenAIFunctionsAgent.html"} {"id": "b86067ce39a4-0", "text": "langchain.agents.self_ask_with_search.base.SelfAskWithSearchChain\u00b6\nclass langchain.agents.self_ask_with_search.base.SelfAskWithSearchChain(llm: BaseLanguageModel, search_chain: Union[GoogleSerperAPIWrapper, SerpAPIWrapper], *, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, agent: Union[BaseSingleActionAgent, BaseMultiActionAgent], tools: Sequence[BaseTool], return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False)[source]\u00b6\nBases: AgentExecutor\nChain that does self-ask with search.\nExample\nfrom langchain import SelfAskWithSearchChain, OpenAI, GoogleSerperAPIWrapper\nsearch_chain = GoogleSerperAPIWrapper()\nself_ask = SelfAskWithSearchChain(llm=OpenAI(), search_chain=search_chain)\nInitialize with just an LLM and a search chain.\nparam agent: Union[BaseSingleActionAgent, BaseMultiActionAgent] [Required]\u00b6\nThe agent to run for creating a plan and determining actions\nto take at each step of the execution loop.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional
list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.self_ask_with_search.base.SelfAskWithSearchChain.html"} {"id": "b86067ce39a4-1", "text": "Callback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam early_stopping_method: str = 'force'\u00b6\nThe method to use for early stopping if the agent never\nreturns AgentFinish. Either \u2018force\u2019 or \u2018generate\u2019.\n\u201cforce\u201d returns a string saying that it stopped because it met a time or iteration limit.\n\u201cgenerate\u201d calls the agent\u2019s LLM Chain one final time to generate a final answer based on the previous steps.\nparam handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False\u00b6\nHow to handle errors raised by the agent\u2019s output parser. Defaults to False, which raises the error.\nIf true, the error will be sent back to the LLM as an observation.\nIf a string, the string itself will be sent to the LLM as an observation.\nIf a callable function, the function will be called with the exception\nas an argument, and the result of that function will be passed to the agent as an observation.\nparam max_execution_time: Optional[float] = None\u00b6\nThe maximum amount of wall clock time to spend in the execution\nloop.\nparam max_iterations: Optional[int] = 15\u00b6\nThe maximum number of steps to take before ending the execution\nloop.\nSetting to \u2018None\u2019 could lead to an infinite loop.\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.self_ask_with_search.base.SelfAskWithSearchChain.html"} {"id": "b86067ce39a4-2", "text": "them along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam return_intermediate_steps: bool = False\u00b6\nWhether to return the agent\u2019s trajectory of intermediate steps\nat the end in addition to the final output.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam tools: Sequence[BaseTool] [Required]\u00b6\nThe valid tools the agent can call.\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. 
Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.self_ask_with_search.base.SelfAskWithSearchChain.html"} {"id": "b86067ce39a4-3", "text": "Chain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified inChain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.self_ask_with_search.base.SelfAskWithSearchChain.html"} {"id": "b86067ce39a4-4", "text": "chain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. 
Should contain all outputs specified in Chain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.self_ask_with_search.base.SelfAskWithSearchChain.html"} {"id": "b86067ce39a4-5", "text": "sole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n.. code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nclassmethod from_agent_and_tools(agent: Union[BaseSingleActionAgent, BaseMultiActionAgent], tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, **kwargs: Any) \u2192 AgentExecutor\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.self_ask_with_search.base.SelfAskWithSearchChain.html"} {"id": "b86067ce39a4-6", "text": "Create from agent and tools.\nlookup_tool(name: str) \u2192 BaseTool\u00b6\nLookup tool by name.\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 
Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.self_ask_with_search.base.SelfAskWithSearchChain.html"} {"id": "b86067ce39a4-7", "text": "info along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nRaise error - saving not supported for Agent Executors.\nsave_agent(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the underlying agent.\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.self_ask_with_search.base.SelfAskWithSearchChain.html"} {"id": "b86067ce39a4-8", "text": "Set the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_return_direct_tool\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate that tools are compatible with agent.\nvalidator validate_tools\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate that tools are compatible with agent.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.self_ask_with_search.base.SelfAskWithSearchChain.html"} {"id": "9d07275df4ca-0", "text": "langchain.agents.agent_toolkits.openapi.spec.dereference_refs\u00b6\nlangchain.agents.agent_toolkits.openapi.spec.dereference_refs(spec_obj: dict, full_spec: dict) \u2192 Union[dict, list][source]\u00b6\nTry to substitute $refs.\nThe goal is to get the complete docs for each endpoint in context for now.\nIn the few OpenAPI specs I studied, $refs referenced models\n(or in OpenAPI terms, components) and could be nested. 
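Example: a hypothetical sketch of resolving a $ref against a toy spec (the Pet component here is illustrative, not taken from any real OpenAPI document):

.. code-block:: python

    from langchain.agents.agent_toolkits.openapi.spec import dereference_refs

    full_spec = {
        "components": {
            "schemas": {
                "Pet": {"type": "object", "properties": {"name": {"type": "string"}}}
            }
        }
    }
    # spec_obj points at the Pet component instead of inlining it.
    spec_obj = {"$ref": "#/components/schemas/Pet"}

    # The $ref should be substituted with the Pet schema looked up in full_spec.
    resolved = dereference_refs(spec_obj, full_spec)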
This code most\nlikely misses lots of cases.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.spec.dereference_refs.html"} {"id": "36a4b6c6f80f-0", "text": "langchain.agents.agent.AgentOutputParser\u00b6\nclass langchain.agents.agent.AgentOutputParser[source]\u00b6\nBases: BaseOutputParser\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str\u00b6\nInstructions on how the LLM output should be formatted.\nabstract parse(text: str) \u2192 Union[AgentAction, AgentFinish][source]\u00b6\nParse text into agent action/finish.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentOutputParser.html"} {"id": "36a4b6c6f80f-1", "text": "property lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentOutputParser.html"} {"id": "9b9fcac49b7e-0", "text": "langchain.agents.agent_toolkits.python.base.create_python_agent\u00b6\nlangchain.agents.agent_toolkits.python.base.create_python_agent(llm: BaseLanguageModel, tool: PythonREPLTool, agent_type: AgentType = AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = False, prefix: str = 'You are an agent designed to write and execute python code to answer questions.\\nYou have access to a python REPL, which you can use to execute python code.\\nIf you get an error, debug your code and try again.\\nOnly use the output of your code to answer the question. 
\\nYou might know the answer without running any code, but you should still run the code to get the answer.\\nIf it does not seem like you can write code to answer the question, just return \"I don\\'t know\" as the answer.\\n', agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) \u2192 AgentExecutor[source]\u00b6\nConstruct a python agent from an LLM and tool.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.python.base.create_python_agent.html"} {"id": "e67dcf13e0bd-0", "text": "langchain.agents.agent.Agent\u00b6\nclass langchain.agents.agent.Agent(*, llm_chain: LLMChain, output_parser: AgentOutputParser, allowed_tools: Optional[List[str]] = None)[source]\u00b6\nBases: BaseSingleActionAgent\nClass responsible for calling the language model and deciding the action.\nThis is driven by an LLMChain. The prompt in the LLMChain MUST include\na variable called \u201cagent_scratchpad\u201d where the agent can put its\nintermediary work.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam allowed_tools: Optional[List[str]] = None\u00b6\nparam llm_chain: langchain.chains.llm.LLMChain [Required]\u00b6\nparam output_parser: langchain.agents.agent.AgentOutputParser [Required]\u00b6\nasync aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[AgentAction, AgentFinish][source]\u00b6\nGiven input, decided what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nabstract classmethod create_prompt(tools: Sequence[BaseTool]) \u2192 BasePromptTemplate[source]\u00b6\nCreate a prompt for this class.\ndict(**kwargs: Any) \u2192 Dict[source]\u00b6\nReturn dictionary representation of agent.\nclassmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, **kwargs: Any) \u2192 Agent[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.Agent.html"} {"id": "e67dcf13e0bd-1", "text": "Construct an agent from an LLM and tools.\nget_allowed_tools() \u2192 Optional[List[str]][source]\u00b6\nget_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) \u2192 Dict[str, Any][source]\u00b6\nCreate the full inputs for the LLMChain from intermediate steps.\nplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[AgentAction, AgentFinish][source]\u00b6\nGiven input, decided what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nreturn_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) \u2192 AgentFinish[source]\u00b6\nReturn response when agent has been stopped due to max iterations.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the agent.\nParameters\nfile_path \u2013 Path to file to save the agent to.\nExample:\n.. 
code-block:: python\n# If working with agent executor\nagent.agent.save(file_path=\u201dpath/agent.yaml\u201d)\ntool_run_logging_kwargs() \u2192 Dict[source]\u00b6\nvalidator validate_prompt\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that prompt matches format.\nabstract property llm_prefix: str\u00b6\nPrefix to append the LLM call with.\nabstract property observation_prefix: str\u00b6\nPrefix to append the observation with.\nproperty return_values: List[str]\u00b6\nReturn values of the agent.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.Agent.html"} {"id": "06f4a8c84cfb-0", "text": "langchain.agents.agent_toolkits.base.BaseToolkit\u00b6\nclass langchain.agents.agent_toolkits.base.BaseToolkit[source]\u00b6\nBases: BaseModel, ABC\nClass representing a collection of related tools.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nabstract get_tools() \u2192 List[BaseTool][source]\u00b6\nGet the tools in the toolkit.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.base.BaseToolkit.html"} {"id": "72f815608243-0", "text": "langchain.agents.utils.validate_tools_single_input\u00b6\nlangchain.agents.utils.validate_tools_single_input(class_name: str, tools: Sequence[BaseTool]) \u2192 None[source]\u00b6\nValidate tools for single input.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.utils.validate_tools_single_input.html"} {"id": "7d91faf452c8-0", "text": "langchain.agents.agent_toolkits.openapi.planner.RequestsPatchToolWithParsing\u00b6\nclass langchain.agents.agent_toolkits.openapi.planner.RequestsPatchToolWithParsing(*, name: str = 'requests_patch', description: str = 'Use this when you want to PATCH content on a website.\\nInput to the tool should be a json string with 3 keys: \"url\", \"data\", and \"output_instructions\".\\nThe value of \"url\" should be a string.\\nThe value of \"data\" should be a dictionary of key-value pairs of the body params available in the OpenAPI spec you want to PATCH the content with at the url.\\nThe value of \"output_instructions\" should be instructions on what information to extract from the response, for example the id(s) for a resource(s) that the PATCH request creates.\\nAlways use double quotes for strings in the json string.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, requests_wrapper: TextRequestsWrapper, response_length: Optional[int] = 5000, llm_chain: LLMChain = None)[source]\u00b6\nBases: BaseRequestsTool, BaseTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.planner.RequestsPatchToolWithParsing.html"} {"id": "7d91faf452c8-1", "text": "param 
callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Use this when you want to PATCH content on a website.\\nInput to the tool should be a json string with 3 keys: \"url\", \"data\", and \"output_instructions\".\\nThe value of \"url\" should be a string.\\nThe value of \"data\" should be a dictionary of key-value pairs of the body params available in the OpenAPI spec you want to PATCH the content with at the url.\\nThe value of \"output_instructions\" should be instructions on what information to extract from the response, for example the id(s) for a resource(s) that the PATCH request creates.\\nAlways use double quotes for strings in the json string.'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam llm_chain: langchain.chains.llm.LLMChain [Optional]\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'requests_patch'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam requests_wrapper: TextRequestsWrapper [Required]\u00b6\nparam response_length: Optional[int] = 5000\u00b6\nparam return_direct: bool = False\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.planner.RequestsPatchToolWithParsing.html"} {"id": "7d91faf452c8-2", "text": "param return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. 
Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.planner.RequestsPatchToolWithParsing.html"} {"id": "7d91faf452c8-3", "text": "Whether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.planner.RequestsPatchToolWithParsing.html"} {"id": "d238d95e0e47-0", "text": "langchain.agents.chat.base.ChatAgent\u00b6\nclass langchain.agents.chat.base.ChatAgent(*, llm_chain: LLMChain, output_parser: AgentOutputParser = None, allowed_tools: Optional[List[str]] = None)[source]\u00b6\nBases: Agent\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam allowed_tools: Optional[List[str]] = None\u00b6\nparam llm_chain: langchain.chains.llm.LLMChain [Required]\u00b6\nparam output_parser: langchain.agents.agent.AgentOutputParser [Optional]\u00b6\nasync aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[AgentAction, AgentFinish]\u00b6\nGiven input, decided what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.chat.base.ChatAgent.html"} {"id": "d238d95e0e47-1", "text": "**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nclassmethod create_prompt(tools: Sequence[BaseTool], system_message_prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', system_message_suffix: str = 'Begin! 
Reminder to always use the exact characters `Final Answer` when responding.', human_message: str = '{input}\\n\\n{agent_scratchpad}', format_instructions: str = 'The way you use the tools is by specifying a json blob.\\nSpecifically, this json should have a `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here).\\n\\nThe only values that should be in the \"action\" field are: {tool_names}\\n\\nThe $JSON_BLOB should only contain a SINGLE action, do NOT return a list of multiple actions. Here is an example of a valid $JSON_BLOB:\\n\\n```\\n{{{{\\n\u00a0 \"action\": $TOOL_NAME,\\n\u00a0 \"action_input\": $INPUT\\n}}}}\\n```\\n\\nALWAYS use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction:\\n```\\n$JSON_BLOB\\n```\\nObservation: the result of the action\\n... (this Thought/Action/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None) \u2192 BasePromptTemplate[source]\u00b6\nCreate a prompt for this class.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of agent.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.chat.base.ChatAgent.html"} {"id": "d238d95e0e47-2", "text": "dict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of agent.\nclassmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, system_message_prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', system_message_suffix: str = 'Begin! Reminder to always use the exact characters `Final Answer` when responding.', human_message: str = '{input}\\n\\n{agent_scratchpad}', format_instructions: str = 'The way you use the tools is by specifying a json blob.\\nSpecifically, this json should have a `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here).\\n\\nThe only values that should be in the \"action\" field are: {tool_names}\\n\\nThe $JSON_BLOB should only contain a SINGLE action, do NOT return a list of multiple actions. Here is an example of a valid $JSON_BLOB:\\n\\n```\\n{{{{\\n\u00a0 \"action\": $TOOL_NAME,\\n\u00a0 \"action_input\": $INPUT\\n}}}}\\n```\\n\\nALWAYS use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction:\\n```\\n$JSON_BLOB\\n```\\nObservation: the result of the action\\n... 
(this Thought/Action/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, **kwargs: Any) \u2192 Agent[source]\u00b6\nConstruct an agent from an LLM and tools.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.chat.base.ChatAgent.html"} {"id": "d238d95e0e47-3", "text": "Construct an agent from an LLM and tools.\nget_allowed_tools() \u2192 Optional[List[str]]\u00b6\nget_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) \u2192 Dict[str, Any]\u00b6\nCreate the full inputs for the LLMChain from intermediate steps.\nplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[AgentAction, AgentFinish]\u00b6\nGiven input, decide what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nreturn_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) \u2192 AgentFinish\u00b6\nReturn response when agent has been stopped due to max iterations.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the agent.\nParameters\nfile_path \u2013 Path to file to save the agent to.\nExample:\n.. code-block:: python\n# If working with agent executor\nagent.agent.save(file_path=\"path/agent.yaml\")\ntool_run_logging_kwargs() \u2192 Dict\u00b6\nvalidator validate_prompt\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate that prompt matches format.\nproperty llm_prefix: str\u00b6\nPrefix to append the llm call with.\nproperty observation_prefix: str\u00b6\nPrefix to append the observation with.\nproperty return_values: List[str]\u00b6\nReturn values of the agent.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.chat.base.ChatAgent.html"} {"id": "cf2402667f67-0", "text": "langchain.agents.loading.load_agent\u00b6\nlangchain.agents.loading.load_agent(path: Union[str, Path], **kwargs: Any) \u2192 Union[BaseSingleActionAgent, BaseMultiActionAgent][source]\u00b6\nUnified method for loading an agent from LangChainHub or the local filesystem.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.loading.load_agent.html"} {"id": "5cec0de6005c-0", "text": "langchain.agents.agent.BaseSingleActionAgent\u00b6\nclass langchain.agents.agent.BaseSingleActionAgent[source]\u00b6\nBases: BaseModel\nBase Agent class.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nabstract async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[AgentAction, AgentFinish][source]\u00b6\nGiven input, decide what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\ndict(**kwargs: Any) \u2192 Dict[source]\u00b6\nReturn dictionary representation of agent.\nclassmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, **kwargs: Any) 
\u2192 BaseSingleActionAgent[source]\u00b6\nget_allowed_tools() \u2192 Optional[List[str]][source]\u00b6\nabstract plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[AgentAction, AgentFinish][source]\u00b6\nGiven input, decide what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nreturn_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) \u2192 AgentFinish[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.BaseSingleActionAgent.html"} {"id": "5cec0de6005c-1", "text": "Return response when agent has been stopped due to max iterations.\nsave(file_path: Union[Path, str]) \u2192 None[source]\u00b6\nSave the agent.\nParameters\nfile_path \u2013 Path to file to save the agent to.\nExample:\n.. code-block:: python\n# If working with agent executor\nagent.agent.save(file_path=\"path/agent.yaml\")\ntool_run_logging_kwargs() \u2192 Dict[source]\u00b6\nproperty return_values: List[str]\u00b6\nReturn values of the agent.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.BaseSingleActionAgent.html"} {"id": "5f71b654d8ad-0", "text": "langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit\u00b6\nclass langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit(*, vectorstore_info: VectorStoreInfo, llm: BaseLanguageModel = None)[source]\u00b6\nBases: BaseToolkit\nToolkit for interacting with a vector store.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam llm: langchain.schema.language_model.BaseLanguageModel [Optional]\u00b6\nparam vectorstore_info: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo [Required]\u00b6\nget_tools() \u2192 List[BaseTool][source]\u00b6\nGet the tools in the toolkit.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit.html"} {"id": "cf9309376fc5-0", "text": "langchain.agents.openai_functions_multi_agent.base.OpenAIMultiFunctionsAgent\u00b6\nclass langchain.agents.openai_functions_multi_agent.base.OpenAIMultiFunctionsAgent(*, llm: BaseLanguageModel, tools: Sequence[BaseTool], prompt: BasePromptTemplate)[source]\u00b6\nBases: BaseMultiActionAgent\nAn Agent driven by OpenAI's function-powered API.\nParameters\nllm \u2013 This should be an instance of ChatOpenAI, specifically a model\nthat supports using functions.\ntools \u2013 The tools this agent has access to.\nprompt \u2013 The prompt for this agent, which should support agent_scratchpad as one\nof the variables. 
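The plan/aplan contract of BaseSingleActionAgent, documented above, is easiest to see in a toy subclass. The following is a minimal illustrative sketch, not part of the reference: the FirstToolAgent class, its tool_name field, and its one-step policy are all invented for the example.

```python
from typing import Any, List, Tuple, Union

from langchain.agents.agent import BaseSingleActionAgent
from langchain.schema import AgentAction, AgentFinish


class FirstToolAgent(BaseSingleActionAgent):
    """Toy agent: call one fixed tool, then finish with its observation."""

    tool_name: str = "search"  # hypothetical tool name for the example

    @property
    def input_keys(self) -> List[str]:
        return ["input"]

    def plan(
        self,
        intermediate_steps: List[Tuple[AgentAction, str]],
        callbacks: Any = None,
        **kwargs: Any,
    ) -> Union[AgentAction, AgentFinish]:
        if intermediate_steps:
            # The tool already ran once; surface its observation as the answer.
            _, observation = intermediate_steps[-1]
            return AgentFinish({"output": observation}, log="finished")
        # No steps taken yet: ask the executor to run the fixed tool.
        return AgentAction(tool=self.tool_name, tool_input=kwargs["input"], log="")

    async def aplan(
        self,
        intermediate_steps: List[Tuple[AgentAction, str]],
        callbacks: Any = None,
        **kwargs: Any,
    ) -> Union[AgentAction, AgentFinish]:
        return self.plan(intermediate_steps, callbacks=callbacks, **kwargs)
```

Such an agent is then typically wrapped in an AgentExecutor together with the matching tools, which loops on plan() until an AgentFinish is returned.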
For an easy way to construct this prompt, use\nOpenAIMultiFunctionsAgent.create_prompt(\u2026)\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam llm: langchain.schema.language_model.BaseLanguageModel [Required]\u00b6\nparam prompt: langchain.schema.prompt_template.BasePromptTemplate [Required]\u00b6\nparam tools: Sequence[langchain.tools.base.BaseTool] [Required]\u00b6\nasync aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[List[AgentAction], AgentFinish][source]\u00b6\nGiven input, decide what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date,\nalong with observations\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nclassmethod create_prompt(system_message: Optional[SystemMessage] = SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}), extra_prompt_messages: Optional[List[BaseMessagePromptTemplate]] = None) \u2192 BasePromptTemplate[source]\u00b6\nCreate prompt for this agent.\nParameters", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.openai_functions_multi_agent.base.OpenAIMultiFunctionsAgent.html"} {"id": "cf9309376fc5-1", "text": "Create prompt for this agent.\nParameters\nsystem_message \u2013 Message to use as the system message that will be the\nfirst in the prompt.\nextra_prompt_messages \u2013 Prompt messages that will be placed between the\nsystem message and the new human input.\nReturns\nA prompt template to pass into this agent.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of agent.\nclassmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, extra_prompt_messages: Optional[List[BaseMessagePromptTemplate]] = None, system_message: Optional[SystemMessage] = SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}), **kwargs: Any) \u2192 BaseMultiActionAgent[source]\u00b6\nConstruct an agent from an LLM and tools.\nget_allowed_tools() \u2192 List[str][source]\u00b6\nGet allowed tools.\nplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[List[AgentAction], AgentFinish][source]\u00b6\nGiven input, decide what to do.\nParameters\nintermediate_steps \u2013 Steps the LLM has taken to date, along with observations\n**kwargs \u2013 User inputs.\nReturns\nAction specifying what tool to use.\nreturn_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) \u2192 AgentFinish\u00b6\nReturn response when agent has been stopped due to max iterations.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the agent.\nParameters\nfile_path \u2013 Path to file to save the agent to.\nExample:\n.. 
code-block:: python\n# If working with agent executor\nagent.agent.save(file_path=\"path/agent.yaml\")", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.openai_functions_multi_agent.base.OpenAIMultiFunctionsAgent.html"} {"id": "cf9309376fc5-2", "text": "# If working with agent executor\nagent.agent.save(file_path=\"path/agent.yaml\")\ntool_run_logging_kwargs() \u2192 Dict\u00b6\nvalidator validate_llm\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nvalidator validate_prompt\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nproperty functions: List[dict]\u00b6\nproperty input_keys: List[str]\u00b6\nGet input keys. Input refers to user input here.\nproperty return_values: List[str]\u00b6\nReturn values of the agent.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.openai_functions_multi_agent.base.OpenAIMultiFunctionsAgent.html"} {"id": "471b53b0a0ff-0", "text": "langchain.agents.agent_toolkits.openapi.planner.RequestsGetToolWithParsing\u00b6\nclass langchain.agents.agent_toolkits.openapi.planner.RequestsGetToolWithParsing(*, name: str = 'requests_get', description: str = 'Use this to GET content from a website.\\nInput to the tool should be a json string with 3 keys: \"url\", \"params\" and \"output_instructions\".\\nThe value of \"url\" should be a string. \\nThe value of \"params\" should be a dict of the needed and available parameters from the OpenAPI spec related to the endpoint. \\nIf parameters are not needed, or not available, leave it empty.\\nThe value of \"output_instructions\" should be instructions on what information to extract from the response, \\nfor example the id(s) for a resource(s) that the GET request fetches.\\n', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, requests_wrapper: TextRequestsWrapper, response_length: Optional[int] = 5000, llm_chain: LLMChain = None)[source]\u00b6\nBases: BaseRequestsTool, BaseTool\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam args_schema: Optional[Type[BaseModel]] = None\u00b6\nPydantic model class to validate and parse the tool\u2019s input arguments.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.planner.RequestsGetToolWithParsing.html"} {"id": "471b53b0a0ff-1", "text": "param callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated. Please use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nCallbacks to be called during tool execution.\nparam description: str = 'Use this to GET content from a website.\\nInput to the tool should be a json string with 3 keys: \"url\", \"params\" and \"output_instructions\".\\nThe value of \"url\" should be a string. \\nThe value of \"params\" should be a dict of the needed and available parameters from the OpenAPI spec related to the endpoint. 
\\nIf parameters are not needed, or not available, leave it empty.\\nThe value of \"output_instructions\" should be instructions on what information to extract from the response, \\nfor example the id(s) for a resource(s) that the GET request fetches.\\n'\u00b6\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nparam handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False\u00b6\nHandle the content of the ToolException thrown.\nparam llm_chain: langchain.chains.llm.LLMChain [Optional]\u00b6\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the tool. Defaults to None\nThis metadata will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam name: str = 'requests_get'\u00b6\nThe unique name of the tool that clearly communicates its purpose.\nparam requests_wrapper: TextRequestsWrapper [Required]\u00b6\nparam response_length: Optional[int] = 5000\u00b6\nparam return_direct: bool = False\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.planner.RequestsGetToolWithParsing.html"} {"id": "471b53b0a0ff-2", "text": "param return_direct: bool = False\u00b6\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the tool. Defaults to None\nThese tags will be associated with each call to this tool,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a tool with its use case.\nparam verbose: bool = False\u00b6\nWhether to log the tool\u2019s progress.\n__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 str\u00b6\nMake tool callable.\nasync arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool asynchronously.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 Any\u00b6\nRun the tool.\nproperty args: dict\u00b6\nproperty is_single_input: bool\u00b6\nWhether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.planner.RequestsGetToolWithParsing.html"} {"id": "471b53b0a0ff-3", "text": "Whether the tool only accepts a single input.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": 
"https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.planner.RequestsGetToolWithParsing.html"} {"id": "98e27f32e30d-0", "text": "langchain.agents.mrkl.base.ChainConfig\u00b6\nclass langchain.agents.mrkl.base.ChainConfig(action_name: str, action: Callable, action_description: str)[source]\u00b6\nBases: NamedTuple\nConfiguration for chain to use in MRKL system.\nParameters\naction_name \u2013 Name of the action.\naction \u2013 Action function to call.\naction_description \u2013 Description of the action.\nCreate new instance of ChainConfig(action_name, action, action_description)\nMethods\n__init__()\ncount(value,\u00a0/)\nReturn number of occurrences of value.\nindex(value[,\u00a0start,\u00a0stop])\nReturn first index of value.\nAttributes\naction\nAlias for field number 1\naction_description\nAlias for field number 2\naction_name\nAlias for field number 0\ncount(value, /)\u00b6\nReturn number of occurrences of value.\nindex(value, start=0, stop=9223372036854775807, /)\u00b6\nReturn first index of value.\nRaises ValueError if the value is not present.\naction: Callable\u00b6\nAlias for field number 1\naction_description: str\u00b6\nAlias for field number 2\naction_name: str\u00b6\nAlias for field number 0", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.mrkl.base.ChainConfig.html"} {"id": "6087e3a2c427-0", "text": "langchain.agents.structured_chat.output_parser.StructuredChatOutputParserWithRetries\u00b6\nclass langchain.agents.structured_chat.output_parser.StructuredChatOutputParserWithRetries(*, base_parser: AgentOutputParser = None, output_fixing_parser: Optional[OutputFixingParser] = None)[source]\u00b6\nBases: AgentOutputParser\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam base_parser: langchain.agents.agent.AgentOutputParser [Optional]\u00b6\nparam output_fixing_parser: Optional[langchain.output_parsers.fix.OutputFixingParser] = None\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nclassmethod from_llm(llm: Optional[BaseLanguageModel] = None, base_parser: Optional[StructuredChatOutputParser] = None) \u2192 StructuredChatOutputParserWithRetries[source]\u00b6\nget_format_instructions() \u2192 str[source]\u00b6\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 Union[AgentAction, AgentFinish][source]\u00b6\nParse text into agent action/finish.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, whichis assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. 
The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.structured_chat.output_parser.StructuredChatOutputParserWithRetries.html"} {"id": "6087e3a2c427-1", "text": "The prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.structured_chat.output_parser.StructuredChatOutputParserWithRetries.html"} {"id": "38ce7d9f9fde-0", "text": "langchain.agents.initialize.initialize_agent\u00b6\nlangchain.agents.initialize.initialize_agent(tools: Sequence[BaseTool], llm: BaseLanguageModel, agent: Optional[AgentType] = None, callback_manager: Optional[BaseCallbackManager] = None, agent_path: Optional[str] = None, agent_kwargs: Optional[dict] = None, *, tags: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 AgentExecutor[source]\u00b6\nLoad an agent executor given tools and LLM.\nParameters\ntools \u2013 List of tools this agent has access to.\nllm \u2013 Language model to use as the agent.\nagent \u2013 Agent type to use. If None and agent_path is also None, will default to\nAgentType.ZERO_SHOT_REACT_DESCRIPTION.\ncallback_manager \u2013 CallbackManager to use. Global callback manager is used if\nnot provided. 
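To make the StructuredChatOutputParserWithRetries behaviour documented above concrete, here is a minimal sketch. The tool name and action blob are invented for the example, and the code fence around the blob is built programmatically only to keep this snippet readable; calling from_llm() without an LLM falls back to the plain base parser, while passing an LLM enables OutputFixingParser-based retries on malformed output.

```python
from langchain.agents.structured_chat.output_parser import (
    StructuredChatOutputParserWithRetries,
)
from langchain.schema import AgentAction

# No LLM given, so this wraps the default StructuredChatOutputParser.
parser = StructuredChatOutputParserWithRetries.from_llm(llm=None)

# Build the fenced JSON action blob the structured-chat prompt asks the model for.
fence = "`" * 3
blob = '{"action": "search", "action_input": "weather in Boise"}'
text = "Action:\n" + fence + "\n" + blob + "\n" + fence

result = parser.parse(text)
# An "action" other than "Final Answer" should come back as an AgentAction.
assert isinstance(result, AgentAction) and result.tool == "search"
```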
Defaults to None.\nagent_path \u2013 Path to serialized agent to use.\nagent_kwargs \u2013 Additional keyword arguments to pass to the underlying agent.\ntags \u2013 Tags to apply to the traced runs.\n**kwargs \u2013 Additional keyword arguments passed to the agent executor.\nReturns\nAn agent executor.", "source": "https://api.python.langchain.com/en/latest/agents/langchain.agents.initialize.initialize_agent.html"} {"id": "70a143dc14ce-0", "text": "langchain.client.runner_utils.run_llm\u00b6\nlangchain.client.runner_utils.run_llm(llm: BaseLanguageModel, inputs: Dict[str, Any], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]], *, tags: Optional[List[str]] = None, input_mapper: Optional[Callable[[Dict], Any]] = None) \u2192 Union[LLMResult, ChatResult][source]\u00b6\nRun the language model on the example.\nParameters\nllm \u2013 The language model to run.\ninputs \u2013 The input dictionary.\ncallbacks \u2013 The callbacks to use during the run.\ntags \u2013 Optional tags to add to the run.\ninput_mapper \u2013 A function to map to the inputs dictionary from an Example.\nReturns\nThe LLMResult or ChatResult.\nRaises\nValueError \u2013 If the LLM type is unsupported.\nInputFormatError \u2013 If the input format is invalid.", "source": "https://api.python.langchain.com/en/latest/client/langchain.client.runner_utils.run_llm.html"} {"id": "4f619c3dade9-0", "text": "langchain.client.runner_utils.InputFormatError\u00b6\nclass langchain.client.runner_utils.InputFormatError[source]\u00b6\nBases: Exception\nRaised when the input format is invalid.\nadd_note()\u00b6\nException.add_note(note) \u2013\nadd a note to the exception\nwith_traceback()\u00b6\nException.with_traceback(tb) \u2013\nset self.__traceback__ to tb and return self.\nargs\u00b6", "source": "https://api.python.langchain.com/en/latest/client/langchain.client.runner_utils.InputFormatError.html"} {"id": "2e4240fc90ae-0", "text": "langchain.client.runner_utils.run_llm_or_chain\u00b6\nlangchain.client.runner_utils.run_llm_or_chain(example: Example, llm_or_chain_factory: Union[Callable[[], Chain], BaseLanguageModel], n_repetitions: int, *, tags: Optional[List[str]] = None, callbacks: Optional[List[BaseCallbackHandler]] = None, input_mapper: Optional[Callable[[Dict], Any]] = None) \u2192 Union[List[dict], List[str], List[LLMResult], List[ChatResult]][source]\u00b6\nRun the Chain or language model synchronously.\nParameters\nexample \u2013 The example to run.\nllm_or_chain_factory \u2013 The Chain or language model constructor to run.\nn_repetitions \u2013 The number of times to run the model on each example.\ntags \u2013 Optional tags to add to the run.\ncallbacks \u2013 Optional callbacks to use during the run.\nReturns\nThe outputs of the model or chain.\nReturn type\nUnion[List[dict], List[str], List[LLMResult], List[ChatResult]]", "source": "https://api.python.langchain.com/en/latest/client/langchain.client.runner_utils.run_llm_or_chain.html"} {"id": "f4eb8e4002cf-0", "text": "langchain.client.runner_utils.run_on_examples\u00b6\nlangchain.client.runner_utils.run_on_examples(examples: Iterator[Example], llm_or_chain_factory: Union[Callable[[], Chain], BaseLanguageModel], *, num_repetitions: int = 1, project_name: Optional[str] = None, verbose: bool = False, client: Optional[LangChainPlusClient] = None, tags: Optional[List[str]] = None, run_evaluators: Optional[Sequence[RunEvaluator]] = None, input_mapper: Optional[Callable[[Dict], Any]] = None) \u2192 Dict[str, Any][source]\u00b6\nRun the Chain or language model on 
examples and store\ntraces to the specified project name.\nParameters\nexamples \u2013 Examples to run the model or chain over.\nllm_or_chain_factory \u2013 Language model or Chain constructor to run\nover the dataset. The Chain constructor is used to permit\nindependent calls on each example without carrying over state.\nnum_repetitions \u2013 Number of times to run the model on each example.\nThis is useful when testing success rates or generating confidence\nintervals.\nproject_name \u2013 Name of the project to store the traces in.\nDefaults to {dataset_name}-{chain class name}-{datetime}.\nverbose \u2013 Whether to print progress.\nclient \u2013 Client to use to access the dataset. If None, a new client\nwill be created using the credentials in the environment.\ntags \u2013 Tags to add to each run in the project.\nrun_evaluators \u2013 Evaluators to run on the results of the chain.\ninput_mapper \u2013 A function to map to the inputs dictionary from an Example\nto the format expected by the model to be evaluated. This is useful if\nyour model needs to deserialize more complex schema or if your dataset\nhas inputs with keys that differ from what is expected by your chain\nor agent.\nReturns\nA dictionary mapping example ids to the model outputs.", "source": "https://api.python.langchain.com/en/latest/client/langchain.client.runner_utils.run_on_examples.html"} {"id": "9a5c3b331b4e-0", "text": "langchain.client.runner_utils.run_on_dataset\u00b6\nlangchain.client.runner_utils.run_on_dataset(dataset_name: str, llm_or_chain_factory: Union[Callable[[], Chain], BaseLanguageModel], *, num_repetitions: int = 1, project_name: Optional[str] = None, verbose: bool = False, client: Optional[LangChainPlusClient] = None, tags: Optional[List[str]] = None, run_evaluators: Optional[Sequence[RunEvaluator]] = None, input_mapper: Optional[Callable[[Dict], Any]] = None) \u2192 Dict[str, Any][source]\u00b6\nRun the Chain or language model on a dataset and store traces\nto the specified project name.\nParameters\ndataset_name \u2013 Name of the dataset to run the chain on.\nllm_or_chain_factory \u2013 Language model or Chain constructor to run\nover the dataset. The Chain constructor is used to permit\nindependent calls on each example without carrying over state.\nnum_repetitions \u2013 Number of times to run the model on each example.\nThis is useful when testing success rates or generating confidence\nintervals.\nproject_name \u2013 Name of the project to store the traces in.\nDefaults to {dataset_name}-{chain class name}-{datetime}.\nverbose \u2013 Whether to print progress.\nclient \u2013 Client to use to access the dataset. If None,\na new client will be created using the credentials in the environment.\ntags \u2013 Tags to add to each run in the project.\nrun_evaluators \u2013 Evaluators to run on the results of the chain.\ninput_mapper \u2013 A function to map to the inputs dictionary from an Example\nto the format expected by the model to be evaluated. 
This is useful if\nyour model needs to deserialize more complex schema or if your dataset\nhas inputs with keys that differ from what is expected by your chain\nor agent.\nReturns", "source": "https://api.python.langchain.com/en/latest/client/langchain.client.runner_utils.run_on_dataset.html"} {"id": "9a5c3b331b4e-1", "text": "has inputs with keys that differ from what is expected by your chain\nor agent.\nReturns\nA dictionary containing the run\u2019s project name and the resulting model outputs.", "source": "https://api.python.langchain.com/en/latest/client/langchain.client.runner_utils.run_on_dataset.html"} {"id": "9e22ac04a16c-0", "text": "langchain.load.serializable.BaseSerialized\u00b6\nclass langchain.load.serializable.BaseSerialized[source]\u00b6\nBases: TypedDict\nBase class for serialized objects.\nMethods\n__init__(*args,\u00a0**kwargs)\nclear()\ncopy()\nfromkeys([value])\nCreate a new dictionary with keys from iterable and values set to value.\nget(key[,\u00a0default])\nReturn the value for key if key is in the dictionary, else default.\nitems()\nkeys()\npop(k[,d])\nIf the key is not found, return the default if given; otherwise, raise a KeyError.\npopitem()\nRemove and return a (key, value) pair as a 2-tuple.\nsetdefault(key[,\u00a0default])\nInsert key with a value of default if key is not in the dictionary.\nupdate([E,\u00a0]**F)\nIf E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]\nvalues()\nAttributes\nlc\nid\nclear() \u2192 None.\u00a0 Remove all items from D.\u00b6\ncopy() \u2192 a shallow copy of D\u00b6\nfromkeys(value=None, /)\u00b6\nCreate a new dictionary with keys from iterable and values set to value.\nget(key, default=None, /)\u00b6\nReturn the value for key if key is in the dictionary, else default.\nitems() \u2192 a set-like object providing a view on D's items\u00b6\nkeys() \u2192 a set-like object providing a view on D's keys\u00b6\npop(k[, d]) \u2192 v, remove specified key and return the corresponding value.\u00b6", "source": "https://api.python.langchain.com/en/latest/load/langchain.load.serializable.BaseSerialized.html"} {"id": "9e22ac04a16c-1", "text": "pop(k[, d]) \u2192 v, remove specified key and return the corresponding value.\u00b6\nIf the key is not found, return the default if given; otherwise,\nraise a KeyError.\npopitem()\u00b6\nRemove and return a (key, value) pair as a 2-tuple.\nPairs are returned in LIFO (last-in, first-out) order.\nRaises KeyError if the dict is empty.\nsetdefault(key, default=None, /)\u00b6\nInsert key with a value of default if key is not in the dictionary.\nReturn the value for key if key is in the dictionary, else default.\nupdate([E, ]**F) \u2192 None.\u00a0 Update D from dict/iterable E and F.\u00b6\nIf E is present and has a .keys() method, then does: for k in E: D[k] = E[k]\nIf E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v\nIn either case, this is followed by: for k in F: D[k] = F[k]\nvalues() \u2192 an object providing a view on D's values\u00b6\nid: List[str]\u00b6\nlc: int\u00b6", "source": "https://api.python.langchain.com/en/latest/load/langchain.load.serializable.BaseSerialized.html"} {"id": "4e7240462bd7-0", "text": "langchain.load.dump.default\u00b6\nlangchain.load.dump.default(obj: Any) \u2192 Any[source]\u00b6\nReturn a default value for a Serializable object or\na SerializedNotImplemented object.", "source": 
"https://api.python.langchain.com/en/latest/load/langchain.load.dump.default.html"} {"id": "1719f11faf76-0", "text": "langchain.load.serializable.SerializedSecret\u00b6\nclass langchain.load.serializable.SerializedSecret[source]\u00b6\nBases: dict\nSerialized secret.\nMethods\n__init__(*args,\u00a0**kwargs)\nclear()\ncopy()\nfromkeys([value])\nCreate a new dictionary with keys from iterable and values set to value.\nget(key[,\u00a0default])\nReturn the value for key if key is in the dictionary, else default.\nitems()\nkeys()\npop(k[,d])\nIf the key is not found, return the default if given; otherwise, raise a KeyError.\npopitem()\nRemove and return a (key, value) pair as a 2-tuple.\nsetdefault(key[,\u00a0default])\nInsert key with a value of default if key is not in the dictionary.\nupdate([E,\u00a0]**F)\nIf E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]\nvalues()\nAttributes\ntype\nclear() \u2192 None.\u00a0 Remove all items from D.\u00b6\ncopy() \u2192 a shallow copy of D\u00b6\nfromkeys(value=None, /)\u00b6\nCreate a new dictionary with keys from iterable and values set to value.\nget(key, default=None, /)\u00b6\nReturn the value for key if key is in the dictionary, else default.\nitems() \u2192 a set-like object providing a view on D's items\u00b6\nkeys() \u2192 a set-like object providing a view on D's keys\u00b6\npop(k[, d]) \u2192 v, remove specified key and return the corresponding value.\u00b6", "source": "https://api.python.langchain.com/en/latest/load/langchain.load.serializable.SerializedSecret.html"} {"id": "1719f11faf76-1", "text": "pop(k[, d]) \u2192 v, remove specified key and return the corresponding value.\u00b6\nIf the key is not found, return the default if given; otherwise,\nraise a KeyError.\npopitem()\u00b6\nRemove and return a (key, value) pair as a 2-tuple.\nPairs are returned in LIFO (last-in, first-out) order.\nRaises KeyError if the dict is empty.\nsetdefault(key, default=None, /)\u00b6\nInsert key with a value of default if key is not in the dictionary.\nReturn the value for key if key is in the dictionary, else default.\nupdate([E, ]**F) \u2192 None.\u00a0 Update D from dict/iterable E and F.\u00b6\nIf E is present and has a .keys() method, then does: for k in E: D[k] = E[k]\nIf E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v\nIn either case, this is followed by: for k in F: D[k] = F[k]\nvalues() \u2192 an object providing a view on D's values\u00b6\nid: List[str]\u00b6\nlc: int\u00b6\ntype: Literal['secret']\u00b6", "source": "https://api.python.langchain.com/en/latest/load/langchain.load.serializable.SerializedSecret.html"} {"id": "660e62b125e7-0", "text": "langchain.load.serializable.to_json_not_implemented\u00b6\nlangchain.load.serializable.to_json_not_implemented(obj: object) \u2192 SerializedNotImplemented[source]\u00b6\nSerialize a \u201cnot implemented\u201d object.\nParameters\nobj \u2013 object to serialize\nReturns\nSerializedNotImplemented", "source": "https://api.python.langchain.com/en/latest/load/langchain.load.serializable.to_json_not_implemented.html"} {"id": "4045857e1647-0", "text": "langchain.load.serializable.Serializable\u00b6\nclass langchain.load.serializable.Serializable[source]\u00b6\nBases: BaseModel, ABC\nSerializable base class.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the 
input data cannot be parsed to form a valid model.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented][source]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented[source]\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config[source]\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/load/langchain.load.serializable.Serializable.html"} {"id": "9335c66b81d7-0", "text": "langchain.load.dump.dumps\u00b6\nlangchain.load.dump.dumps(obj: Any, *, pretty: bool = False) \u2192 str[source]\u00b6\nReturn a json string representation of an object.", "source": "https://api.python.langchain.com/en/latest/load/langchain.load.dump.dumps.html"} {"id": "2c908ef7fa99-0", "text": "langchain.load.serializable.SerializedNotImplemented\u00b6\nclass langchain.load.serializable.SerializedNotImplemented[source]\u00b6\nBases: dict\nSerialized not implemented.\nMethods\n__init__(*args,\u00a0**kwargs)\nclear()\ncopy()\nfromkeys([value])\nCreate a new dictionary with keys from iterable and values set to value.\nget(key[,\u00a0default])\nReturn the value for key if key is in the dictionary, else default.\nitems()\nkeys()\npop(k[,d])\nIf the key is not found, return the default if given; otherwise, raise a KeyError.\npopitem()\nRemove and return a (key, value) pair as a 2-tuple.\nsetdefault(key[,\u00a0default])\nInsert key with a value of default if key is not in the dictionary.\nupdate([E,\u00a0]**F)\nIf E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]\nvalues()\nAttributes\ntype\nclear() \u2192 None.\u00a0 Remove all items from D.\u00b6\ncopy() \u2192 a shallow copy of D\u00b6\nfromkeys(value=None, /)\u00b6\nCreate a new dictionary with keys from iterable and values set to value.\nget(key, default=None, /)\u00b6\nReturn the value for key if key is in the dictionary, else default.\nitems() \u2192 a set-like object providing a view on D's items\u00b6\nkeys() \u2192 a set-like object providing a view on D's keys\u00b6\npop(k[, d]) \u2192 v, remove specified key and return the corresponding value.\u00b6", "source": "https://api.python.langchain.com/en/latest/load/langchain.load.serializable.SerializedNotImplemented.html"} {"id": "2c908ef7fa99-1", "text": "pop(k[, d]) \u2192 v, remove specified key and return the corresponding value.\u00b6\nIf the key is not found, return the default if given; otherwise,\nraise a KeyError.\npopitem()\u00b6\nRemove and return a (key, value) pair as a 2-tuple.\nPairs are returned in LIFO (last-in, first-out) order.\nRaises KeyError if the dict is empty.\nsetdefault(key, default=None, /)\u00b6\nInsert key with a value of default if key is not in the dictionary.\nReturn the value for key if key is in the dictionary, else default.\nupdate([E, ]**F) \u2192 None.\u00a0 Update D from dict/iterable E and F.\u00b6\nIf E 
is present and has a .keys() method, then does: for k in E: D[k] = E[k]\nIf E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v\nIn either case, this is followed by: for k in F: D[k] = F[k]\nvalues() \u2192 an object providing a view on D's values\u00b6\nid: List[str]\u00b6\nlc: int\u00b6\ntype: Literal['not_implemented']\u00b6", "source": "https://api.python.langchain.com/en/latest/load/langchain.load.serializable.SerializedNotImplemented.html"} {"id": "0c5de3cd3ea7-0", "text": "langchain.load.serializable.SerializedConstructor\u00b6\nclass langchain.load.serializable.SerializedConstructor[source]\u00b6\nBases: dict\nSerialized constructor.\nMethods\n__init__(*args,\u00a0**kwargs)\nclear()\ncopy()\nfromkeys([value])\nCreate a new dictionary with keys from iterable and values set to value.\nget(key[,\u00a0default])\nReturn the value for key if key is in the dictionary, else default.\nitems()\nkeys()\npop(k[,d])\nIf the key is not found, return the default if given; otherwise, raise a KeyError.\npopitem()\nRemove and return a (key, value) pair as a 2-tuple.\nsetdefault(key[,\u00a0default])\nInsert key with a value of default if key is not in the dictionary.\nupdate([E,\u00a0]**F)\nIf E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]\nvalues()\nAttributes\ntype\nkwargs\nclear() \u2192 None.\u00a0 Remove all items from D.\u00b6\ncopy() \u2192 a shallow copy of D\u00b6\nfromkeys(value=None, /)\u00b6\nCreate a new dictionary with keys from iterable and values set to value.\nget(key, default=None, /)\u00b6\nReturn the value for key if key is in the dictionary, else default.\nitems() \u2192 a set-like object providing a view on D's items\u00b6\nkeys() \u2192 a set-like object providing a view on D's keys\u00b6\npop(k[, d]) \u2192 v, remove specified key and return the corresponding value.\u00b6", "source": "https://api.python.langchain.com/en/latest/load/langchain.load.serializable.SerializedConstructor.html"} {"id": "0c5de3cd3ea7-1", "text": "pop(k[, d]) \u2192 v, remove specified key and return the corresponding value.\u00b6\nIf the key is not found, return the default if given; otherwise,\nraise a KeyError.\npopitem()\u00b6\nRemove and return a (key, value) pair as a 2-tuple.\nPairs are returned in LIFO (last-in, first-out) order.\nRaises KeyError if the dict is empty.\nsetdefault(key, default=None, /)\u00b6\nInsert key with a value of default if key is not in the dictionary.\nReturn the value for key if key is in the dictionary, else default.\nupdate([E, ]**F) \u2192 None.\u00a0 Update D from dict/iterable E and F.\u00b6\nIf E is present and has a .keys() method, then does: for k in E: D[k] = E[k]\nIf E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v\nIn either case, this is followed by: for k in F: D[k] = F[k]\nvalues() \u2192 an object providing a view on D's values\u00b6\nid: List[str]\u00b6\nkwargs: Dict[str, Any]\u00b6\nlc: int\u00b6\ntype: Literal['constructor']\u00b6", "source": "https://api.python.langchain.com/en/latest/load/langchain.load.serializable.SerializedConstructor.html"} {"id": "cb82926bcf70-0", "text": "langchain.load.load.loads\u00b6\nlangchain.load.load.loads(text: str, *, secrets_map: Optional[Dict[str, str]] = None) \u2192 Any[source]\u00b6\nLoad a JSON object from a string.\nParameters\ntext \u2013 The string to load.\nsecrets_map \u2013 A map of secrets to 
load.\nReturns:", "source": "https://api.python.langchain.com/en/latest/load/langchain.load.load.loads.html"} {"id": "55b5eb3a49cc-0", "text": "langchain.load.dump.dumpd\u00b6\nlangchain.load.dump.dumpd(obj: Any) \u2192 Dict[str, Any][source]\u00b6\nReturn a json dict representation of an object.", "source": "https://api.python.langchain.com/en/latest/load/langchain.load.dump.dumpd.html"} {"id": "5cfcd80a8760-0", "text": "langchain.evaluation.run_evaluators.string_run_evaluator.ChainStringRunMapper\u00b6\nclass langchain.evaluation.run_evaluators.string_run_evaluator.ChainStringRunMapper(*, input_key: str, prediction_key: str)[source]\u00b6\nBases: StringRunMapper\nExtract items to evaluate from the run object from a chain.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam input_key: str [Required]\u00b6\nThe key from the model Run\u2019s inputs to use as the eval input.\nparam prediction_key: str [Required]\u00b6\nThe key from the model Run\u2019s outputs to use as the eval prediction.\n__call__(run: Run) \u2192 Dict[str, str]\u00b6\nMaps the Run to a dictionary.\nclassmethod from_chain(model: Chain, input_key: Optional[str] = None, prediction_key: Optional[str] = None) \u2192 ChainStringRunMapper[source]\u00b6\nCreate a RunMapper from a chain.\nmap(run: Run) \u2192 Dict[str, str][source]\u00b6\nMaps the Run to a dictionary.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.string_run_evaluator.ChainStringRunMapper.html"} {"id": "5cfcd80a8760-1", "text": "property lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty output_keys: List[str]\u00b6\nThe keys to extract from the run.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.string_run_evaluator.ChainStringRunMapper.html"} {"id": "1d48acb93719-0", "text": "langchain.evaluation.run_evaluators.string_run_evaluator.StringExampleMapper\u00b6\nclass langchain.evaluation.run_evaluators.string_run_evaluator.StringExampleMapper(*, reference_key: Optional[str] = None)[source]\u00b6\nBases: Serializable\nMap an example, or row in the dataset, to the inputs of an evaluation.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam reference_key: Optional[str] = None\u00b6\n__call__(example: Example) \u2192 Dict[str, str][source]\u00b6\nMaps the Run and Example to a dictionary.\nmap(example: Example) \u2192 Dict[str, str][source]\u00b6\nMaps the Example, or dataset row to a dictionary.\nserialize_chat_messages(messages: List[Dict]) \u2192 str[source]\u00b6\nExtract the input messages from the run.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty output_keys: List[str]\u00b6\nThe keys to extract from the run.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.string_run_evaluator.StringExampleMapper.html"} {"id": "62d98df8383f-0", "text": "langchain.evaluation.embedding_distance.base.EmbeddingDistanceEvalChain\u00b6\nclass langchain.evaluation.embedding_distance.base.EmbeddingDistanceEvalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, embeddings: Embeddings = None, distance_metric: EmbeddingDistance = EmbeddingDistance.COSINE)[source]\u00b6\nBases: _EmbeddingDistanceChainMixin, StringEvaluator\nUse embedding distances to score semantic difference between\na prediction and reference.\nExamples\n>>> chain = EmbeddingDistanceEvalChain()\n>>> result = chain.evaluate_strings(prediction=\"Hello\", reference=\"Hi\")\n>>> print(result)\n{'score': 0.5}\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam distance_metric: langchain.evaluation.embedding_distance.base.EmbeddingDistance = EmbeddingDistance.COSINE\u00b6\nparam embeddings: langchain.embeddings.base.Embeddings [Optional]\u00b6\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.embedding_distance.base.EmbeddingDistanceEvalChain.html"} {"id": "62d98df8383f-1", "text": "Optional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. 
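As a usage sketch for the EmbeddingDistanceEvalChain documented here: the snippet below assumes an OpenAI API key is available in the environment so that OpenAIEmbeddings works, and it picks a non-default distance metric to show the distance_metric field; any other Embeddings implementation can be substituted.

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.evaluation.embedding_distance.base import (
    EmbeddingDistance,
    EmbeddingDistanceEvalChain,
)

chain = EmbeddingDistanceEvalChain(
    embeddings=OpenAIEmbeddings(),          # assumes OPENAI_API_KEY is set
    distance_metric=EmbeddingDistance.EUCLIDEAN,
)

result = chain.evaluate_strings(prediction="Hello", reference="Hi")
print(result["score"])  # lower scores mean the two strings are semantically closer
```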
In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.embedding_distance.base.EmbeddingDistanceEvalChain.html"} {"id": "62d98df8383f-2", "text": "memory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.embedding_distance.base.EmbeddingDistanceEvalChain.html"} {"id": "62d98df8383f-3", "text": "callbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. 
Should contain all outputs specified in Chain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.embedding_distance.base.EmbeddingDistanceEvalChain.html"} {"id": "62d98df8383f-4", "text": "sole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n.. code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\"_type\": \"foo\", \"verbose\": False, \u2026}\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.embedding_distance.base.EmbeddingDistanceEvalChain.html"} {"id": "62d98df8383f-5", "text": "Parameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. 
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain's
memory.
Returns
A dictionary of all inputs, including those added by the chain's memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields¶
Raise a deprecation warning if callback_manager is used.
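Example (illustrative sketch)
A hedged look at the prep_inputs/prep_outputs round trip that __call__
performs internally; key names are assumed, as above.
inputs = chain.prep_inputs({"prediction": "Hello", "reference": "Hi"})
# inputs now also contains any variables loaded from memory (none by default)
outputs = {"score": 0.05}  # stand-in for the raw outputs of the chain body
final = chain.prep_outputs(inputs, outputs, return_only_outputs=True)
# -> {"score": 0.05}; with return_only_outputs=False the inputs are merged in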
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there's a single string output.
This is the synchronous counterpart of arun; the parameters, restrictions,
and return value described there apply here as well.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects the Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property evaluation_name: str¶
property input_keys: List[str]¶
Return the input keys of the chain.
Returns
The input keys.
Return type
List[str]
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property output_keys: List[str]¶
Return the output keys of the chain.
Returns
The output keys.
Return type
List[str]
property requires_reference: bool¶
Return whether the chain requires a reference.
Returns
True if a reference is required, False otherwise.
Return type
bool
model Config¶
Bases: object
Permit embeddings to go unvalidated.
arbitrary_types_allowed: bool = True¶
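Example (illustrative sketch)
An end-to-end sketch under the assumption that EmbeddingDistanceEvalChain
implements the evaluate_strings interface documented for the other string
evaluators in this section; the score key shown is likewise an assumption.
from langchain.evaluation.embedding_distance.base import EmbeddingDistanceEvalChain

evaluator = EmbeddingDistanceEvalChain()
result = evaluator.evaluate_strings(
    prediction="I shall go",
    reference="I will go",
)
print(result)  # e.g. {"score": 0.03}; smaller embedding distance, closer meaning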
langchain.evaluation.qa.eval_chain.QAEvalChain¶
class langchain.evaluation.qa.eval_chain.QAEvalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, prompt: BasePromptTemplate, llm: BaseLanguageModel, output_key: str = 'text', output_parser: BaseLLMOutputParser = None, return_final_only: bool = True, llm_kwargs: dict = None)[source]¶
Bases: LLMChain, StringEvaluator, LLMEvalChain
LLM chain specifically for evaluating question answering.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated; use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start and ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods; see the
Callback docs for full details.
param llm: BaseLanguageModel [Required]¶
Language model to call.
param llm_kwargs: dict [Optional]¶
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory; please see the memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
param output_key: str = 'text'¶
param output_parser: BaseLLMOutputParser [Optional]¶
Output parser to use.
Defaults to one that takes the most likely string but does not change it
otherwise.
param prompt: BasePromptTemplate [Required]¶
Prompt object to use.
param return_final_only: bool = True¶
Whether to return only the final parsed result. Defaults to True.
If False, will return extra information about the generation.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain. Parameters and return value are the same as for
EmbeddingDistanceEvalChain.__call__ above.
async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Utilize the LLM generate method for speed gains.
async aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶
Call apply and then parse the results.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain. Parameters and return value are the same
as for __call__.
async aevaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) → dict¶
Asynchronously evaluate Chain or LLM output, based on optional input and label.
Parameters
prediction (str) – the LLM or chain prediction to evaluate.
reference (Optional[str], optional) – the reference label
to evaluate against.
input (Optional[str], optional) – the input to consider during evaluation.
**kwargs – additional keyword arguments, including callbacks, tags, etc.
Returns
The evaluation results containing the score or value.
Return type
dict
async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → LLMResult¶
Generate LLM result from inputs.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Utilize the LLM generate method for speed gains.
apply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶
Call apply and then parse the results.
async apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain.
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = await llm_chain.apredict(adjective="funny")
async apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, str]]¶
Call apredict and then parse the results.
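Example (illustrative sketch)
Grading a QA prediction asynchronously with aevaluate_strings; eval_chain is
assumed to be a QAEvalChain instance, e.g. built with from_llm as documented
below.
import asyncio

async def grade():
    return await eval_chain.aevaluate_strings(
        prediction="There are four apples.",
        reference="There are 3 apples",
        input="How many apples are there?",
    )

result = asyncio.run(grade())
print(result)  # a dict containing the score or value, per the docs above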
async aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶
Prepare prompts from inputs.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there's a single string output.
Parameters, restrictions, and examples are the same as for arun on
EmbeddingDistanceEvalChain above.
create_outputs(llm_result: LLMResult) → List[Dict[str, Any]]¶
Create outputs from response.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain; see dict above.
evaluate(examples: Sequence[dict], predictions: Sequence[dict], question_key: str = 'query', answer_key: str = 'answer', prediction_key: str = 'result', *, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[dict][source]¶
Evaluate question answering examples and predictions.
evaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) → dict¶
Evaluate Chain or LLM output, based on optional input and label.
Parameters
prediction (str) – the LLM or chain prediction to evaluate.
reference (Optional[str], optional) – the reference label
to evaluate against.
input (Optional[str], optional) – the input to consider during evaluation.
**kwargs – additional keyword arguments, including callbacks, tags, etc.
Returns
The evaluation results containing the score or value.
Return type
dict
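Example (illustrative sketch)
Batch-grading with evaluate, using the documented default key names
(question_key='query', answer_key='answer', prediction_key='result');
eval_chain is assumed to be a QAEvalChain instance built via from_llm below.
examples = [
    {"query": "How many apples are there?", "answer": "There are 3 apples"},
]
predictions = [
    {"result": "There are four apples."},
]
graded = eval_chain.evaluate(examples, predictions)
# -> one result dict per example, containing the grader's verdict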
type
dict
classmethod from_llm(llm: BaseLanguageModel, prompt: PromptTemplate = PromptTemplate(input_variables=['query', 'result', 'answer'], output_parser=None, partial_variables={}, template="You are a teacher grading a quiz.\nYou are given a question, the student's answer, and the true answer, and are asked to score the student answer as either CORRECT or INCORRECT.\n\nExample Format:\nQUESTION: question here\nSTUDENT ANSWER: student's answer here\nTRUE ANSWER: true answer here\nGRADE: CORRECT or INCORRECT here\n\nGrade the student answers based ONLY on their factual accuracy. Ignore differences in punctuation and phrasing between the student answer and true answer. It is OK if the student answer contains more information than the true answer, as long as it does not contain any conflicting statements. Begin! \n\nQUESTION: {query}\nSTUDENT ANSWER: {result}\nTRUE ANSWER: {answer}\nGRADE:", template_format='f-string', validate_template=True), **kwargs: Any) → QAEvalChain[source]¶
Load QA Eval Chain from LLM.
Parameters
llm (BaseLanguageModel) – the base language model to use.
prompt (PromptTemplate) – a prompt template containing the input_variables
'query', 'result', and 'answer', used as the grading prompt for the
evaluation. Defaults to the PROMPT shown above.
**kwargs – additional keyword arguments.
Returns
the loaded QA eval chain.
Return type
QAEvalChain
classmethod from_string(llm: BaseLanguageModel, template: str) → LLMChain¶
Create LLMChain from LLM and template.
generate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → LLMResult¶
Generate LLM result from inputs.
predict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain.
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm_chain.predict(adjective="funny")
predict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, Any]]¶
Call predict and then parse the results.
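Example (illustrative sketch)
Building the chain with from_llm and the default grading prompt, then scoring
a single prediction; ChatOpenAI is one possible grader model, not a
requirement.
from langchain.chat_models import ChatOpenAI
from langchain.evaluation.qa.eval_chain import QAEvalChain

llm = ChatOpenAI(temperature=0)
eval_chain = QAEvalChain.from_llm(llm=llm)

graded = eval_chain.evaluate_strings(
    prediction="There are four apples.",
    reference="There are 3 apples",
    input="How many apples are there?",
)
print(graded)  # the evaluation results containing the score or value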
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory;
see prep_inputs above.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory;
see prep_outputs above.
prep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶
Prepare prompts from inputs.
validator raise_callback_manager_deprecation » all fields¶
Raise a deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there's a single string output.
Parameters, restrictions, and examples are the same as for run on
EmbeddingDistanceEvalChain above.
save(file_path: Union[Path, str]) → None¶
Save the chain; see save above.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property evaluation_name: str¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property requires_input: bool¶
Whether this evaluator requires an input string.
property requires_reference: bool¶
Whether this evaluator requires a reference label.
model Config[source]¶
Bases: object
Configuration for the QAEvalChain.
extra = 'ignore'¶
langchain.evaluation.string_distance.base.StringDistance¶
class langchain.evaluation.string_distance.base.StringDistance(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Bases: str, Enum
Distance metric to use.
Methods
__init__(*args, **kwds)
capitalize()
Return a capitalized version of the string.
casefold()
Return a version of the string suitable for caseless comparisons.
center(width[, fillchar])
Return a centered string of length width.
count(sub[, start[, end]])
Return the number of non-overlapping occurrences of substring sub in string S[start:end].
encode([encoding, errors])
Encode the string using the codec registered for encoding.
endswith(suffix[, start[, end]])
Return True if S ends with the specified suffix, False otherwise.
expandtabs([tabsize])
Return a copy where all tab characters are expanded using spaces.
find(sub[, start[, end]])
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end].
format(*args, **kwargs)
Return a formatted version of S, using substitutions from args and kwargs.
format_map(mapping)
Return a formatted version of S, using substitutions from mapping.
index(sub[, start[, end]])
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end].
isalnum()
Return True if the string is an alpha-numeric string, False otherwise.
isalpha()
Return True if the string is an alphabetic string, False otherwise.
isascii()
Return True if all characters in the string are ASCII, False otherwise.
isdecimal()
Return True if the string is a decimal string, False otherwise.
isdigit()
Return True if the string is a digit string, False otherwise.
isidentifier()
Return True if the string is a valid Python identifier, False otherwise.
islower()
Return True if the string is a lowercase string, False otherwise.
isnumeric()
Return True if the string is a numeric string, False otherwise.
isprintable()
Return True if the string is printable, False otherwise.
isspace()
Return True if the string is a whitespace string, False otherwise.
istitle()
Return True if the string is a title-cased string, False otherwise.
isupper()
Return True if the string is an uppercase string, False otherwise.
join(iterable, /)
Concatenate any number of strings.
ljust(width[, fillchar])
Return a left-justified string of length width.
lower()
Return a copy of the string converted to lowercase.
lstrip([chars])
Return a copy of the string with leading whitespace removed.
maketrans
Return a translation table usable for str.translate().
partition(sep, /)
Partition the string into three parts using the given separator.
removeprefix(prefix, /)
Return a str with the given prefix string removed if present.
removesuffix(suffix, /)
Return a str with the given suffix string removed if present.
replace(old, new[, count])
Return a copy with all occurrences of substring old replaced by new.
rfind(sub[, start[, end]])
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end].
rindex(sub[, start[, end]])
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end].
rjust(width[, fillchar])
Return a right-justified string of length width.
rpartition(sep, /)
Partition the string into three parts using the given separator.
rsplit([sep, maxsplit])
Return a list of the substrings in the string, using sep as the separator string.
rstrip([chars])
Return a copy of the string with trailing whitespace removed.
split([sep, maxsplit])
Return a list of the substrings in the string, using sep as the separator string.
splitlines([keepends])
Return a list of the lines in the string, breaking at line boundaries.
startswith(prefix[, start[, end]])
Return True if S starts with the specified prefix, False otherwise.
strip([chars])
Return a copy of the string with leading and trailing whitespace removed.
swapcase()
Convert uppercase characters to lowercase and lowercase characters to uppercase.
title()
Return a version of the string where each word is titlecased.
translate(table, /)
Replace each character in the string using the given translation table.
upper()
Return a copy of the string converted to uppercase.
zfill(width, /)
Pad a numeric string with zeros on the left, to fill a field of the given width.
Attributes
DAMERAU_LEVENSHTEIN
LEVENSHTEIN
JARO
JARO_WINKLER
capitalize()¶
Return a capitalized version of the string. More specifically, make the first
character have upper case and the rest lower case.
casefold()¶
Return a version of the string suitable for caseless comparisons.
center(width, fillchar=' ', /)¶
Return a centered string of length width. Padding is done using the specified
fill character (default is a space).
count(sub[, start[, end]]) → int¶
Return the number of non-overlapping occurrences of substring sub in string
S[start:end]. Optional arguments start and end are interpreted as in slice
notation.
encode(encoding='utf-8', errors='strict')¶
Encode the string using the codec registered for encoding.
encoding – The encoding in which to encode the string.
errors – The error handling scheme to use for encoding errors. The default is
'strict', meaning that encoding errors raise a UnicodeEncodeError. Other
possible values are 'ignore', 'replace' and 'xmlcharrefreplace', as well as
any other name registered with codecs.register_error that can handle
UnicodeEncodeErrors.
endswith(suffix[, start[, end]]) → bool¶
Return True if S ends with the specified suffix, False otherwise. With
optional start, test S beginning at that position. With optional end, stop
comparing S at that position. suffix can also be a tuple of strings to try.
expandtabs(tabsize=8)¶
Return a copy where all tab characters are expanded using spaces. If tabsize
is not given, a tab size of 8 characters is assumed.
find(sub[, start[, end]]) → int¶
Return the lowest index in S where substring sub is found, such that sub is
contained within S[start:end]. Optional arguments start and end are
interpreted as in slice notation. Return -1 on failure.
format(*args, **kwargs) → str¶
Return a formatted version of S, using substitutions from args and kwargs.
The substitutions are identified by braces ('{' and '}').
format_map(mapping) → str¶
Return a formatted version of S, using substitutions from mapping. The
substitutions are identified by braces ('{' and '}').
index(sub[, start[, end]]) → int¶
Return the lowest index in S where substring sub is found, such that sub is
contained within S[start:end]. Optional arguments start and end are
interpreted as in slice notation. Raises ValueError when the substring is not
found.
isalnum()¶
Return True if the string is an alpha-numeric string, False otherwise. A
string is alpha-numeric if all characters in the string are alpha-numeric and
there is at least one character in the string.
isalpha()¶
Return True if the string is an alphabetic string, False otherwise. A string
is alphabetic if all characters in the string are alphabetic and there is at
least one character in the string.
isascii()¶
Return True if all characters in the string are ASCII, False otherwise. ASCII
characters have code points in the range U+0000-U+007F. The empty string is
ASCII too.
isdecimal()¶
Return True if the string is a decimal string, False otherwise. A string is a
decimal string if all characters in the string are decimal and there is at
least one character in the string.
isdigit()¶
Return True if the string is a digit string, False otherwise. A string is a
digit string if all characters in the string are digits and there is at least
one character in the string.
isidentifier()¶
Return True if the string is a valid Python identifier, False otherwise. Call
keyword.iskeyword(s) to test whether string s is a reserved identifier, such
as "def" or "class".
islower()¶
Return True if the string is a lowercase string, False otherwise. A string is
lowercase if all cased characters in the string are lowercase and there is at
least one cased character in the string.
isnumeric()¶
Return True if the string is a numeric string, False otherwise. A string is
numeric if all characters in the string are numeric and there is at least one
character in the string.
isprintable()¶
Return True if the string is printable, False otherwise. A string is
printable if all of its characters are considered printable in repr() or if
it is empty.
isspace()¶
Return True if the string is a whitespace string, False otherwise. A string
is whitespace if all characters in the string are whitespace and there is at
least one character in the string.
istitle()¶
Return True if the string is a title-cased string, False otherwise. In a
title-cased string, upper- and title-case characters may only follow uncased
characters, and lowercase characters only cased ones.
isupper()¶
Return True if the string is an uppercase string, False otherwise. A string
is uppercase if all cased characters in the string are uppercase and there is
at least one cased character in the string.
join(iterable, /)¶
Concatenate any number of strings. The string whose method is called is
inserted in between each given string. The result is returned as a new
string. Example: '.'.join(['ab', 'pq', 'rs']) -> 'ab.pq.rs'
ljust(width, fillchar=' ', /)¶
Return a left-justified string of length width. Padding is done using the
specified fill character (default is a space).
lower()¶
Return a copy of the string converted to lowercase.
lstrip(chars=None, /)¶
Return a copy of the string with leading whitespace removed. If chars is
given and not None, remove characters in chars instead.
static maketrans()¶
Return a translation table usable for str.translate(). If there is only one
argument, it must be a dictionary mapping Unicode ordinals (integers) or
characters to Unicode ordinals, strings or None. Character keys will then be
converted to ordinals. If there are two arguments, they must be strings of
equal length, and in the resulting dictionary, each character in x will be
mapped to the character at the same position in y. If there is a third
argument, it must be a string, whose characters will be mapped to None in the
result.
partition(sep, /)¶
Partition the string into three parts using the given separator. This will
search for the separator in the string. If the separator is found, returns a
3-tuple containing the part before the separator, the separator itself, and
the part after it. If the separator is not found, returns a 3-tuple
containing the original string and two empty strings.
removeprefix(prefix, /)¶
Return a str with the given prefix string removed if present. If the string
starts with the prefix string, return string[len(prefix):]. Otherwise, return
a copy of the original string.
removesuffix(suffix, /)¶
Return a str with the given suffix string removed if present. If the string
ends with the suffix string and that suffix is not empty, return
string[:-len(suffix)]. Otherwise, return a copy of the original string.
replace(old, new, count=-1, /)¶
Return a copy with all occurrences of substring old replaced by new.
count – Maximum number of occurrences to replace. -1 (the default value)
means replace all occurrences. If the optional argument count is given, only
the first count occurrences are replaced.
rfind(sub[, start[, end]]) → int¶
Return the highest index in S where substring sub is found, such that sub is
contained within S[start:end]. Optional arguments start and end are
interpreted as in slice notation. Return -1 on failure.
rindex(sub[, start[, end]]) → int¶
Return the highest index in S where substring sub is found, such that sub is
contained within S[start:end]. Optional arguments start and end are
interpreted as in slice notation. Raises ValueError when the substring is not
found.
rjust(width, fillchar=' ', /)¶
Return a right-justified string of length width. Padding is done using the
specified fill character (default is a space).
rpartition(sep, /)¶
Partition the string into three parts using the given separator. This will
search for the separator in the string, starting at the end. If the separator
is found, returns a 3-tuple containing the part before the separator, the
separator itself, and the part after it. If the separator is not found,
returns a 3-tuple containing two empty strings and the original string.
rsplit(sep=None, maxsplit=-1)¶
Return a list of the substrings in the string, using sep as the separator
string.
sep – The separator used to split the string. When set to None (the default
value), will split on any whitespace character (including \n \r \t \f and
spaces) and will discard empty strings from the result.
maxsplit – Maximum number of splits. -1 (the default value) means no limit.
Splitting starts at the end of the string and works to the front.
rstrip(chars=None, /)¶
Return a copy of the string with trailing whitespace removed. If chars is
given and not None, remove characters in chars instead.
split(sep=None, maxsplit=-1)¶
Return a list of the substrings in the string, using sep as the separator
string.
sep – The separator used to split the string. When set to None (the default
value), will split on any whitespace character (including \n \r \t \f and
spaces) and will discard empty strings from the result.
maxsplit – Maximum number of splits (starting from the left). -1 (the default
value) means no limit.
Note: str.split() is mainly useful for data that has been intentionally
delimited. With natural text that includes punctuation, consider using the
regular expression module.
splitlines(keepends=False)¶
Return a list of the lines in the string, breaking at line boundaries. Line
breaks are not included in the resulting list unless keepends is given and
true.
startswith(prefix[, start[, end]]) → bool¶
Return True if S starts with the specified prefix, False otherwise. With
optional start, test S beginning at that position. With optional end, stop
comparing S at that position. prefix can also be a tuple of strings to try.
strip(chars=None, /)¶
Return a copy of the string with leading and trailing whitespace removed. If
chars is given and not None, remove characters in chars instead.
swapcase()¶
Convert uppercase characters to lowercase and lowercase characters to
uppercase.
title()¶
Return a version of the string where each word is titlecased. More
specifically, words start with uppercased characters and all remaining cased
characters have lower case.
translate(table, /)¶
Replace each character in the string using the given translation table.
table – Translation table, which must be a mapping of Unicode ordinals to
Unicode ordinals, strings, or None. The table must implement lookup/indexing
via __getitem__, for instance a dictionary or list. If this operation raises
LookupError, the character is left untouched. Characters mapped to None are
deleted.
upper()¶
Return a copy of the string converted to uppercase.
zfill(width, /)¶
Pad a numeric string with zeros on the left, to fill a field of the given
width. The string is never truncated.
DAMERAU_LEVENSHTEIN = 'damerau_levenshtein'¶
JARO = 'jaro'¶
JARO_WINKLER = 'jaro_winkler'¶
LEVENSHTEIN = 'levenshtein'¶
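Example (illustrative sketch)
Because StringDistance subclasses str, its members compare equal to their
plain-string values; the commented evaluator-loading line is an assumption
based on the load_evaluators function documented later in this section.
from langchain.evaluation.string_distance.base import StringDistance

metric = StringDistance.JARO_WINKLER
assert metric == "jaro_winkler"          # str-valued enum member
assert metric.upper() == "JARO_WINKLER"  # inherited str methods still work

# Hedged: pass the metric when loading a string-distance evaluator,
# assuming the evaluator accepts a `distance` keyword argument.
# evaluator = load_evaluators(["string_distance"], distance=metric)[0]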
langchain.evaluation.run_evaluators.implementations.TrajectoryRunEvalOutputParser¶
class langchain.evaluation.run_evaluators.implementations.TrajectoryRunEvalOutputParser(*, eval_chain_output_key: str = 'text', evaluation_name: str = 'Agent Trajectory', evaluator_info: dict = None)[source]¶
Bases: RunEvaluatorOutputParser, TrajectoryOutputParser
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param eval_chain_output_key: str = 'text'¶
param evaluation_name: str = 'Agent Trajectory'¶
The name assigned to the evaluation feedback.
param evaluator_info: dict [Optional]¶
Additional information to log as feedback metadata.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of output parser.
get_format_instructions() → str¶
Instructions on how the LLM output should be formatted.
parse(text: str) → TrajectoryEval¶
Parse the output text and extract the score and reasoning.
Parameters
text (str) – The output text to parse.
Returns
A named tuple containing the score and reasoning.
Return type
TrajectoryEval
Raises
OutputParserException – If the score is not found in the output text or
if the score is not a digit in the range 1-5.
parse_chain_output(output: Dict[str, Any]) → EvaluationResult[source]¶
Parse the output of a run.
parse_result(result: List[Generation]) → T¶
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which
is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
parse_with_prompt(completion: str, prompt: PromptValue) → Any¶
Parse the output of an LLM call with the input prompt for context.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – String output of language model.
prompt – Input PromptValue.
Returns
Structured output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
extra = 'ignore'¶
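Example (illustrative sketch)
Per the parse documentation above, an OutputParserException is raised when no
digit score in the range 1-5 can be found in the grader's text; the exception
import path shown is an assumption about the standard langchain.schema
location.
from langchain.schema import OutputParserException
from langchain.evaluation.run_evaluators.implementations import (
    TrajectoryRunEvalOutputParser,
)

parser = TrajectoryRunEvalOutputParser()
try:
    parser.parse("The agent wandered off topic.")  # no score present
except OutputParserException as err:
    print(f"could not grade trajectory: {err}")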
langchain.evaluation.criteria.eval_chain.CriteriaResultOutputParser¶
class langchain.evaluation.criteria.eval_chain.CriteriaResultOutputParser[source]¶
Bases: BaseOutputParser[dict]
A parser for the output of the CriteriaEvalChain.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of output parser.
get_format_instructions() → str¶
Instructions on how the LLM output should be formatted.
parse(text: str) → Any[source]¶
Parse the output text.
Parameters
text (str) – The output text to parse.
Returns
The parsed output.
Return type
Any
parse_result(result: List[Generation]) → T¶
Parse a list of candidate model Generations into a specific format; see
parse_result above.
parse_with_prompt(completion: str, prompt: PromptValue) → Any¶
Parse the output of an LLM call with the input prompt for context; see
parse_with_prompt above.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
extra = 'ignore'¶
langchain.evaluation.run_evaluators.implementations.StringRunEvaluatorInputMapper¶
class langchain.evaluation.run_evaluators.implementations.StringRunEvaluatorInputMapper(*, prediction_map: Dict[str, str], input_map: Dict[str, str], answer_map: Optional[Dict[str, str]] = None)[source]¶
Bases: RunEvaluatorInputMapper, BaseModel
Maps the Run and Optional[Example] to a dictionary.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param answer_map: Optional[Dict[str, str]] = None¶
Map from example outputs to the evaluation inputs.
param input_map: Dict[str, str] [Required]¶
Map from run inputs to the evaluation inputs.
param prediction_map: Dict[str, str] [Required]¶
Map from run outputs to the evaluation inputs.
__call__(run: Run, example: Optional[Example] = None) → Any¶
Maps the Run and Optional[Example] to a dictionary.
map(run: Run, example: Optional[Example] = None) → Dict[str, Any][source]¶
Maps the Run and Optional[Example] to a dictionary.
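Example (illustrative sketch)
Wiring a chain's run keys into evaluator keys; the key names here are
placeholders, not fixed by the API.
from langchain.evaluation.run_evaluators.implementations import (
    StringRunEvaluatorInputMapper,
)

mapper = StringRunEvaluatorInputMapper(
    input_map={"query": "input"},             # run input  -> evaluator input
    prediction_map={"result": "prediction"},  # run output -> evaluator prediction
    answer_map={"answer": "reference"},       # example output -> reference label
)
# Calling mapper.map(run, example) then yields the evaluator's input dict.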
"https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.loading.load_dataset.html"} {"id": "98450e004ac9-0", "text": "langchain.evaluation.loading.load_evaluators\u00b6\nlangchain.evaluation.loading.load_evaluators(evaluators: Sequence[EvaluatorType], *, llm: Optional[BaseLanguageModel] = None, config: Optional[dict] = None, **kwargs: Any) \u2192 List[Chain][source]\u00b6\nLoad evaluators specified by a list of evaluator types.\nParameters\nevaluators (Sequence[EvaluatorType]) \u2013 The list of evaluator types to load.\nllm (BaseLanguageModel, optional) \u2013 The language model to use for evaluation, if none is provided, a default\nChatOpenAI gpt-4 model will be used.\nconfig (dict, optional) \u2013 A dictionary mapping evaluator types to additional keyword arguments,\nby default None\n**kwargs (Any) \u2013 Additional keyword arguments to pass to all evaluators.\nReturns\nThe loaded evaluators.\nReturn type\nList[Chain]\nExamples\n>>> from langchain.evaluation import load_evaluators, EvaluatorType\n>>> evaluators = [EvaluatorType.QA, EvaluatorType.CRITERIA]\n>>> loaded_evaluators = load_evaluators(evaluators, criteria=\"helpfulness\")", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.loading.load_evaluators.html"} {"id": "7e27a1af7a6c-0", "text": "langchain.evaluation.run_evaluators.loading.load_run_evaluators_for_model\u00b6\nlangchain.evaluation.run_evaluators.loading.load_run_evaluators_for_model(evaluators: Sequence[EvaluatorType], model: Union[Chain, BaseLanguageModel, Tool], *, input_key: Optional[str] = None, prediction_key: Optional[str] = None, reference_key: Optional[str] = None, eval_llm: Optional[BaseLanguageModel] = None, config: Optional[dict] = None, **kwargs: Any) \u2192 List[RunEvaluator][source]\u00b6\nLoad evaluators specified by a list of evaluator types.\nParameters\nevaluators (Sequence[EvaluatorType]) \u2013 The list of evaluator types to load.\nmodel (Union[Chain, BaseLanguageModel, Tool]) \u2013 The model to evaluate. 
langchain.evaluation.run_evaluators.loading.load_run_evaluators_for_model¶
langchain.evaluation.run_evaluators.loading.load_run_evaluators_for_model(evaluators: Sequence[EvaluatorType], model: Union[Chain, BaseLanguageModel, Tool], *, input_key: Optional[str] = None, prediction_key: Optional[str] = None, reference_key: Optional[str] = None, eval_llm: Optional[BaseLanguageModel] = None, config: Optional[dict] = None, **kwargs: Any) → List[RunEvaluator][source]¶
Load evaluators specified by a list of evaluator types.
Parameters
evaluators (Sequence[EvaluatorType]) – The list of evaluator types to load.
model (Union[Chain, BaseLanguageModel, Tool]) – The model to evaluate.
Used to infer how to parse the run.
input_key (Optional[str]) – a chain run's input key to map to the
evaluator's input.
prediction_key (Optional[str]) – the key in the run's outputs that
represents the Chain prediction.
reference_key (Optional[str]) – the key in the dataset example (row)
outputs that represents the reference, or ground-truth label.
eval_llm (BaseLanguageModel, optional) – The language model to use for
evaluation. If none is provided, a default ChatOpenAI gpt-4 model will be
used.
config (dict, optional) – A dictionary mapping evaluator types to additional
keyword arguments, by default None.
**kwargs (Any) – Additional keyword arguments to pass to all evaluators.
Returns
The loaded Run evaluators.
Return type
List[RunEvaluator]
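Example (illustrative sketch)
The page gives no example for load_run_evaluators_for_model, so here is a
hedged call; my_chain is a placeholder for your own chain, and the key names
must match that chain's inputs and outputs.
from langchain.chat_models import ChatOpenAI
from langchain.evaluation import EvaluatorType
from langchain.evaluation.run_evaluators.loading import load_run_evaluators_for_model

run_evaluators = load_run_evaluators_for_model(
    [EvaluatorType.QA],
    model=my_chain,            # placeholder: the chain being evaluated
    input_key="query",
    prediction_key="result",
    reference_key="answer",
    eval_llm=ChatOpenAI(temperature=0),
)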
The submission suggests an aquamarine-colored ice cream flavor which is creative but may or may not be considered the most amazing idea ever conceived. There are many possible amazing ideas and this one ice cream flavor suggestion may or may not rise to that level for every person. \\n\\nN',\n 'value': 'N',\n 'score': 0,\n}\n>>> from langchain.chat_models import ChatOpenAI\n>>> from langchain.evaluation.criteria import CriteriaEvalChain\n>>> llm = ChatOpenAI(model=\"gpt-4\", temperature=0)\n>>> criteria = \"correctness\"\n>>> evaluator = CriteriaEvalChain.from_llm(\n... llm=llm,\n... criteria=criteria,\n... requires_reference=True,\n... )\n>>> evaluator.evaluate_strings(\n... prediction=\"The answer is 4\",\n... input=\"How many apples are there?\",\n... reference=\"There are 3 apples\",\n... )\n{\n 'score': 0,", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html"} {"id": "edc9d0ce53f5-2", "text": "... )\n{\n 'score': 0,\n 'reasoning': 'The criterion for this task is the correctness of the submission. The submission states that there are 4 apples, but the reference indicates that there are actually 3 apples. Therefore, the submission is not correct, accurate, or factual according to the given criterion.\\n\\nN',\n 'value': 'N',\n}\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam criteria_names: List[str] [Optional]\u00b6\nThe names of the criteria being evaluated.\nparam llm: BaseLanguageModel [Required]\u00b6\nLanguage model to call.\nparam llm_kwargs: dict [Optional]\u00b6\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam output_parser: BaseOutputParser [Optional]\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html"} {"id": "edc9d0ce53f5-3", "text": "param output_parser: BaseOutputParser [Optional]\u00b6\nThe parser to use to map the output to a structured result.\nparam prompt: BasePromptTemplate [Required]\u00b6\nPrompt object to use.\nparam return_final_only: bool = True\u00b6\nWhether to return only the final parsed result. Defaults to True.\nIf false, will return a bunch of extra information about the generation.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. 
Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html"} {"id": "edc9d0ce53f5-4", "text": "addition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified inChain.output_keys.\nasync aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nUtilize the LLM generate method for speed gains.\nasync aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Union[str, List[str], Dict[str, str]]]\u00b6\nCall apply and then parse the results.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html"} {"id": "edc9d0ce53f5-5", "text": "response. If True, only new keys generated by this chain will be\nreturned. 
If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None.\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nasync aevaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) \u2192 dict\u00b6\nAsynchronously evaluate Chain or LLM output, based on optional input and label.\nParameters\nprediction (str) \u2013 the LLM or chain prediction to evaluate.\nreference (Optional[str], optional) \u2013 the reference label\nto evaluate against.\ninput (Optional[str], optional) \u2013 the input to consider during evaluation\n**kwargs \u2013 additional keyword arguments, including callbacks, tags, etc.\nReturns\nThe evaluation results containing the score or value.\nReturn type\ndict\nasync agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) \u2192 LLMResult\u00b6\nGenerate LLM result from inputs.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html"} {"id": "edc9d0ce53f5-6", "text": "Utilize the LLM generate method for speed gains.\napply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Union[str, List[str], Dict[str, str]]]\u00b6\nCall apply and then parse the results.\nasync apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 str\u00b6\nFormat prompt with kwargs and pass to LLM.\nParameters\ncallbacks \u2013 Callbacks to pass to LLMChain\n**kwargs \u2013 Keys to pass to prompt template.\nReturns\nCompletion from LLM.\nExample\ncompletion = await chain.apredict(adjective=\"funny\")\nasync apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[str, List[str], Dict[str, str]]\u00b6\nCall apredict and then parse the results.\nasync aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) \u2192 Tuple[List[PromptValue], Optional[List[str]]]\u00b6\nPrepare prompts from inputs.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. 
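aevaluate_strings mirrors the synchronous evaluate_strings but must be awaited. A minimal sketch, assuming an OpenAI key in the environment; the criterion name comes from the documented defaults and the model choice is illustrative:

import asyncio

from langchain.chat_models import ChatOpenAI
from langchain.evaluation.criteria import CriteriaEvalChain

async def main() -> None:
    llm = ChatOpenAI(temperature=0)
    evaluator = CriteriaEvalChain.from_llm(llm=llm, criteria="conciseness")
    # Keyword-only arguments, same as evaluate_strings.
    result = await evaluator.aevaluate_strings(
        prediction="The capital of France is Paris.",
        input="What is the capital of France?",
    )
    print(result)  # per the docs: a dict with 'reasoning', 'value', 'score'

asyncio.run(main())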
If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html"} {"id": "edc9d0ce53f5-7", "text": "info along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ncreate_outputs(llm_result: LLMResult) \u2192 List[Dict[str, Any]]\u00b6\nCreate outputs from response.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html"} {"id": "edc9d0ce53f5-8", "text": "Parameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n.. code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nevaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) \u2192 dict\u00b6\nEvaluate Chain or LLM output, based on optional input and label.\nParameters\nprediction (str) \u2013 the LLM or chain prediction to evaluate.\nreference (Optional[str], optional) \u2013 the reference label\nto evaluate against.\ninput (Optional[str], optional) \u2013 the input to consider during evaluation\n**kwargs \u2013 additional keyword arguments, including callbacks, tags, etc.\nReturns\nThe evaluation results containing the score or value.\nReturn type\ndict\nclassmethod from_llm(llm: BaseLanguageModel, criteria: Optional[Union[Mapping[str, str], Sequence[str], Sequence[ConstitutionalPrinciple], str, ConstitutionalPrinciple]] = None, *, prompt: Optional[BasePromptTemplate] = None, requires_reference: bool = False, **kwargs: Any) \u2192 CriteriaEvalChain[source]\u00b6\nCreate a CriteriaEvalChain instance from an llm and 
criteria.\nParameters\nllm (BaseLanguageModel) \u2013 The language model to use for evaluation.\ncriteria (CRITERIA_TYPE - default=None for \"helpfulness\") \u2013 \nThe criteria to evaluate the runs against. It can be:\na mapping of criterion names to descriptions\na sequence of criterion names\na single criterion name present in one of the default criteria\na sequence of ConstitutionalPrinciple instances\na single ConstitutionalPrinciple instance\nprompt (Optional[BasePromptTemplate], default=None) \u2013 The prompt template to use for generating prompts. If not provided,", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html"} {"id": "edc9d0ce53f5-9", "text": "a default prompt template will be used based on the value of\nrequires_reference.\nrequires_reference (bool, default=False) \u2013 Whether the evaluation requires a reference text. If True, the\nPROMPT_WITH_REFERENCES template will be used for generating\nprompts. If False, the PROMPT template will be used.\n**kwargs (Any) \u2013 Additional keyword arguments to pass to the LLMChain\nconstructor.\nReturns\nAn instance of the CriteriaEvalChain class.\nReturn type\nCriteriaEvalChain\nExamples\n>>> from langchain.llms import OpenAI\n>>> from langchain.evaluation.criteria import CriteriaEvalChain\n>>> llm = OpenAI()\n>>> criteria = {\n \"hallucination\": (\n \"Does this submission contain information\"\n \" not present in the input or reference?\"\n ),\n }\n>>> chain = CriteriaEvalChain.from_llm(\n llm=llm,\n criteria=criteria,\n requires_reference=True,\n )\nclassmethod from_string(llm: BaseLanguageModel, template: str) \u2192 LLMChain\u00b6\nCreate LLMChain from LLM and template.\ngenerate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) \u2192 LLMResult\u00b6\nGenerate LLM result from inputs.\nstatic get_supported_default_criteria() \u2192 List[str][source]\u00b6\nGet the list of supported default criteria.\nReturns\nThe list of supported default criteria.\nReturn type\nList[str]\nExamples\n>>> CriteriaEvalChain.supported_default_criteria()\n['conciseness', 'relevance', 'coherence', 'harmfulness',\n 'maliciousness', 'helpfulness',\n 'controversiality', 'mysogyny', 'criminality', 'insensitive']", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html"} {"id": "edc9d0ce53f5-10", "text": "predict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 str\u00b6\nFormat prompt with kwargs and pass to LLM.\nParameters\ncallbacks \u2013 Callbacks to pass to LLMChain\n**kwargs \u2013 Keys to pass to prompt template.\nReturns\nCompletion from LLM.\nExample\ncompletion = llm.predict(adjective=\"funny\")\npredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[str, List[str], Dict[str, Any]]\u00b6\nCall predict and then parse the results.\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. 
Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) \u2192 Tuple[List[PromptValue], Optional[List[str]]]\u00b6\nPrepare prompts from inputs.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html"} {"id": "edc9d0ce53f5-11", "text": "Prepare prompts from inputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nclassmethod resolve_criteria(criteria: Optional[Union[Mapping[str, str], Sequence[str], Sequence[ConstitutionalPrinciple], str, ConstitutionalPrinciple]]) \u2192 Dict[str, str][source]\u00b6\nResolve the criteria to evaluate.\nParameters\ncriteria (CRITERIA_TYPE) \u2013 \nThe criteria to evaluate the runs against. It can be:\na mapping of criterion names to descriptions\na sequence of criterion names\na single criterion name present in one of the default criteria\na sequence of ConstitutionalPrinciple instances\na single ConstitutionalPrinciple instance\nReturns\nA dictionary mapping criterion names to descriptions.\nReturn type\nDict[str, str]\nExamples\n>>> criteria = [\"relevance\", \"coherence\"]\n>>> CriteriaEvalChain.resolve_criteria(criteria)\n{'relevance': 'Is the submission referring to a real quote from the text?',\n 'coherence': 'Is the submission coherent, well-structured, and organized?'}\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html"} {"id": "edc9d0ce53f5-12", "text": "a single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. 
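The two criteria helpers documented above can be combined to inspect exactly what a chain will be prompted with before building it; a short sketch using only names from the documented default list:

from langchain.evaluation.criteria import CriteriaEvalChain

# Enumerate the built-in criterion names (get_supported_default_criteria).
print(CriteriaEvalChain.get_supported_default_criteria())

# Expand a list of names into the name -> description mapping that the
# evaluation prompt is built from, as in the resolve_criteria example above.
print(CriteriaEvalChain.resolve_criteria(["relevance", "coherence"]))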
These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to benull.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html"} {"id": "edc9d0ce53f5-13", "text": "to_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty evaluation_name: str\u00b6\nGet the name of the evaluation.\nReturns\nThe name of the evaluation.\nReturn type\nstr\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty requires_input: bool\u00b6\nWhether this evaluator requires an input string.\nproperty requires_reference: bool\u00b6\nWhether the evaluation requires a reference text.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for the CriteriaEvalChain.\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html"} {"id": "45f5183c3673-0", "text": "langchain.evaluation.schema.StringEvaluator\u00b6\nclass langchain.evaluation.schema.StringEvaluator[source]\u00b6\nBases: _EvalArgsMixin, ABC\nGrade, tag, or otherwise evaluate predictions relative to their inputs\nand/or reference labels.\nMethods\n__init__()\naevaluate_strings(*,\u00a0prediction[,\u00a0...])\nAsynchronously evaluate Chain or LLM output, based on optional input and label.\nevaluate_strings(*,\u00a0prediction[,\u00a0reference,\u00a0...])\nEvaluate Chain or LLM output, based on optional input and label.\nAttributes\nevaluation_name\nrequires_input\nWhether this evaluator requires an input string.\nrequires_reference\nWhether this evaluator requires a reference label.\nasync aevaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) \u2192 dict[source]\u00b6\nAsynchronously evaluate Chain or LLM output, based on optional input and label.\nParameters\nprediction (str) \u2013 the LLM or chain prediction to evaluate.\nreference (Optional[str], optional) \u2013 the reference label\nto evaluate against.\ninput (Optional[str], optional) \u2013 the input to consider during evaluation\n**kwargs \u2013 additional keyword arguments, including callbacks, tags, etc.\nReturns\nThe evaluation results containing the score or value.\nReturn type\ndict\nevaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) \u2192 dict[source]\u00b6\nEvaluate Chain or LLM output, based on optional input and label.\nParameters\nprediction (str) \u2013 the LLM or chain prediction to evaluate.\nreference (Optional[str], optional) \u2013 the reference label\nto evaluate against.\ninput (Optional[str], optional) \u2013 the input to consider during evaluation\n**kwargs \u2013 additional keyword arguments, including callbacks, tags, etc.\nReturns\nThe evaluation results containing the score or value.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.schema.StringEvaluator.html"} {"id": "45f5183c3673-1", "text": "Returns\nThe evaluation results containing the score or value.\nReturn type\ndict\nproperty evaluation_name: str\u00b6\nproperty requires_input: bool\u00b6\nWhether this evaluator requires an input string.\nproperty requires_reference: bool\u00b6\nWhether this evaluator requires a reference label.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.schema.StringEvaluator.html"} {"id": "242fe07808a2-0", "text": "langchain.evaluation.run_evaluators.base.RunEvaluatorChain\u00b6\nclass langchain.evaluation.run_evaluators.base.RunEvaluatorChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, input_mapper: RunEvaluatorInputMapper, 
eval_chain: Chain, output_parser: RunEvaluatorOutputParser)[source]\u00b6\nBases: Chain, RunEvaluator\nEvaluate Run and optional examples.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam eval_chain: Chain [Required]\u00b6\nThe evaluation chain.\nparam input_mapper: RunEvaluatorInputMapper [Required]\u00b6\nMaps the Run and Optional example to a dictionary for the eval chain.\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.base.RunEvaluatorChain.html"} {"id": "242fe07808a2-1", "text": "There are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam output_parser: RunEvaluatorOutputParser [Required]\u00b6\nParse the output of the eval chain into feedback.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.base.RunEvaluatorChain.html"} {"id": "242fe07808a2-2", "text": "chain will be returned. 
Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified inChain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.base.RunEvaluatorChain.html"} {"id": "242fe07808a2-3", "text": "tags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified inChain.output_keys.\nasync aevaluate_run(run: Run, example: Optional[Example] = None) \u2192 EvaluationResult[source]\u00b6\nEvaluate an example.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. 
If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.base.RunEvaluatorChain.html"} {"id": "242fe07808a2-4", "text": "callbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to benull.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n..code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nevaluate_run(run: Run, example: Optional[Example] = None) \u2192 EvaluationResult[source]\u00b6\nEvaluate an example.\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.base.RunEvaluatorChain.html"} {"id": "242fe07808a2-5", "text": "Validate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. 
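evaluate_run above pairs naturally with load_run_evaluators_for_model from the start of this section. A sketch of the evaluation side of that loop; the runs list is a placeholder for traced Run objects fetched from a tracing client, and reference_key="answer" assumes the dataset rows store labels under that key:

from langchain.chat_models import ChatOpenAI
from langchain.evaluation import EvaluatorType
from langchain.evaluation.run_evaluators.loading import (
    load_run_evaluators_for_model,
)

model = ChatOpenAI(temperature=0)  # the model whose runs are being judged
run_evaluators = load_run_evaluators_for_model(
    [EvaluatorType.QA], model=model, reference_key="answer"
)
runs: list = []  # placeholder: fill with Run objects from your tracing backend
for run in runs:
    for evaluator in run_evaluators:
        print(evaluator.evaluate_run(run))  # -> EvaluationResult feedback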
If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.base.RunEvaluatorChain.html"} {"id": "242fe07808a2-6", "text": "a single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to benull.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.base.RunEvaluatorChain.html"} {"id": "242fe07808a2-7", "text": "to_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty input_keys: List[str]\u00b6\nReturn the keys expected to be in the chain input.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. 
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty output_keys: List[str]\u00b6\nReturn the keys expected to be in the chain output.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.base.RunEvaluatorChain.html"} {"id": "344611d155d8-0", "text": "langchain.evaluation.embedding_distance.base.PairwiseEmbeddingDistanceEvalChain\u00b6\nclass langchain.evaluation.embedding_distance.base.PairwiseEmbeddingDistanceEvalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, embeddings: Embeddings = None, distance_metric: EmbeddingDistance = EmbeddingDistance.COSINE)[source]\u00b6\nBases: _EmbeddingDistanceChainMixin, PairwiseStringEvaluator\nUse embedding distances to score semantic difference between two predictions.\nExamples\n>>> chain = PairwiseEmbeddingDistanceEvalChain()\n>>> result = chain.evaluate_string_pairs(prediction=\"Hello\", prediction_b=\"Hi\")\n>>> print(result)\n{'score': 0.5}\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam distance_metric: langchain.evaluation.embedding_distance.base.EmbeddingDistance = EmbeddingDistance.COSINE\u00b6\nparam embeddings: langchain.embeddings.base.Embeddings [Optional]\u00b6\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.embedding_distance.base.PairwiseEmbeddingDistanceEvalChain.html"} {"id": "344611d155d8-1", "text": "Optional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. 
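Both constructor fields of PairwiseEmbeddingDistanceEvalChain can be overridden. A sketch, assuming the EmbeddingDistance enum exposes a EUCLIDEAN member alongside the documented COSINE default, and using OpenAIEmbeddings as a stand-in for any Embeddings implementation:

from langchain.embeddings import OpenAIEmbeddings
from langchain.evaluation.embedding_distance.base import (
    EmbeddingDistance,
    PairwiseEmbeddingDistanceEvalChain,
)

chain = PairwiseEmbeddingDistanceEvalChain(
    embeddings=OpenAIEmbeddings(),  # a default is constructed if omitted
    distance_metric=EmbeddingDistance.EUCLIDEAN,  # documented default: COSINE
)
result = chain.evaluate_string_pairs(prediction="Hello", prediction_b="Hi")
print(result["score"])  # lower distance means more semantically similar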
Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.embedding_distance.base.PairwiseEmbeddingDistanceEvalChain.html"} {"id": "344611d155d8-2", "text": "memory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified inChain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. 
These will be called in", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.embedding_distance.base.PairwiseEmbeddingDistanceEvalChain.html"} {"id": "344611d155d8-3", "text": "callbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified inChain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.embedding_distance.base.PairwiseEmbeddingDistanceEvalChain.html"} {"id": "344611d155d8-4", "text": "sole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to benull.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n..code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.embedding_distance.base.PairwiseEmbeddingDistanceEvalChain.html"} {"id": "344611d155d8-5", "text": "Parameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. 
If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.embedding_distance.base.PairwiseEmbeddingDistanceEvalChain.html"} {"id": "344611d155d8-6", "text": "sole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to benull.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty evaluation_name: str\u00b6\nproperty input_keys: List[str]\u00b6\nReturn the input keys of the chain.\nReturns\nThe input keys.\nReturn type", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.embedding_distance.base.PairwiseEmbeddingDistanceEvalChain.html"} {"id": "344611d155d8-7", "text": "Return the input keys of the chain.\nReturns\nThe input keys.\nReturn type\nList[str]\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty output_keys: List[str]\u00b6\nReturn the output keys of the chain.\nReturns\nThe output keys.\nReturn type\nList[str]\nmodel Config\u00b6\nBases: object\nPermit embeddings to go unvalidated.\narbitrary_types_allowed: bool = True\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.embedding_distance.base.PairwiseEmbeddingDistanceEvalChain.html"} {"id": "9cf8345bfaab-0", "text": "langchain.evaluation.run_evaluators.string_run_evaluator.StringRunEvaluatorChain\u00b6\nclass langchain.evaluation.run_evaluators.string_run_evaluator.StringRunEvaluatorChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_mapper: StringRunMapper, example_mapper: Optional[StringExampleMapper] = None, name: str, string_evaluator: StringEvaluator)[source]\u00b6\nBases: Chain, RunEvaluator\nEvaluate Run and optional examples.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam example_mapper: Optional[StringExampleMapper] = None\u00b6\nMaps the Example (dataset row) to a dictionary\nwith a \u2018reference\u2019 string.\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.string_run_evaluator.StringRunEvaluatorChain.html"} {"id": "9cf8345bfaab-1", "text": "There are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam name: str [Required]\u00b6\nThe name of the evaluation metric.\nparam run_mapper: StringRunMapper [Required]\u00b6\nMaps the Run to a dictionary with \u2018input\u2019 and \u2018prediction\u2019 strings.\nparam string_evaluator: StringEvaluator [Required]\u00b6\nThe evaluation chain.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. 
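The string_evaluator field above accepts any StringEvaluator (see the langchain.evaluation.schema.StringEvaluator entry earlier in this section). A minimal custom implementation sketch; it assumes, as in recent langchain versions, that the abstract hook is a private _evaluate_strings method which the public evaluate_strings wraps:

from typing import Any, Optional

from langchain.evaluation.schema import StringEvaluator

class ExactMatchEvaluator(StringEvaluator):
    """Score 1 if the prediction equals the reference exactly, else 0."""

    @property
    def requires_reference(self) -> bool:
        return True  # this evaluator is meaningless without a reference label

    def _evaluate_strings(
        self,
        *,
        prediction: str,
        reference: Optional[str] = None,
        input: Optional[str] = None,
        **kwargs: Any,
    ) -> dict:
        # Assumption: a bare {'score': ...} dict is an acceptable result shape.
        return {"score": int(prediction.strip() == (reference or "").strip())}

# e.g. ExactMatchEvaluator().evaluate_strings(prediction="4", reference="4")
# would then return {'score': 1}.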
langchain.evaluation.run_evaluators.string_run_evaluator.StringRunEvaluatorChain¶
class langchain.evaluation.run_evaluators.string_run_evaluator.StringRunEvaluatorChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_mapper: StringRunMapper, example_mapper: Optional[StringExampleMapper] = None, name: str, string_evaluator: StringEvaluator)[source]¶
Bases: Chain, RunEvaluator
Evaluate a Run and optional examples.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param example_mapper: Optional[StringExampleMapper] = None¶
Maps the Example (dataset row) to a dictionary
with a ‘reference’ string.
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to e.g. identify a specific instance of a chain with its use case.
param name: str [Required]¶
The name of the evaluation metric.
param run_mapper: StringRunMapper [Required]¶
Maps the Run to a dictionary with ‘input’ and ‘prediction’ strings.
param string_evaluator: StringEvaluator [Required]¶
The evaluation chain.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to e.g. identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
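To make the __call__ contract concrete, here is a small illustrative sketch; the single ‘question’ input key and the chain instance are assumptions, not part of this page.

# __call__ takes one dictionary holding every key in Chain.input_keys,
# unlike run(), which takes positional or keyword arguments.
outputs = chain(
    {"question": "What's the temperature in Boise, Idaho?"},
    return_only_outputs=True,   # return only newly generated keys
    tags=["nightly-eval"],      # runtime-only tags, forwarded to callbacks
    include_run_info=False,
)
# outputs is a dict containing the keys listed in Chain.output_keys.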
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async aevaluate_run(run: Run, example: Optional[Example] = None) → EvaluationResult[source]¶
Evaluate a run against an optional example.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks.
These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
.. code-block:: python
chain.dict(exclude_unset=True)
# -> {“_type”: “foo”, “verbose”: False, …}
evaluate_run(run: Run, example: Optional[Example] = None) → EvaluationResult[source]¶
Evaluate a run against an optional example.
classmethod from_model_and_evaluator(model: Union[Chain, BaseLanguageModel, Tool], evaluator: StringEvaluator, input_key: Optional[str] = None, prediction_key: Optional[str] = None, reference_key: Optional[str] = None) → StringRunEvaluatorChain[source]¶
Create a StringRunEvaluatorChain from a model and evaluator.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs.
If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation  »  all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose  »  verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property input_keys: List[str]¶
Return the keys expected to be in the chain input.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs.
These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property output_keys: List[str]¶
Return the keys expected to be in the chain output.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
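A hedged sketch of wiring this class up via from_model_and_evaluator follows; the QA evaluator choice and the ChatOpenAI model are illustrative, and the commented-out call assumes a traced Run object fetched elsewhere.

# Sketch: wrap a string evaluator so it can grade traced runs.
from langchain.chat_models import ChatOpenAI
from langchain.evaluation import EvaluatorType, load_evaluator
from langchain.evaluation.run_evaluators.string_run_evaluator import (
    StringRunEvaluatorChain,
)

llm = ChatOpenAI(temperature=0)
string_evaluator = load_evaluator(EvaluatorType.QA, llm=llm)  # a StringEvaluator
run_evaluator = StringRunEvaluatorChain.from_model_and_evaluator(
    model=llm,                   # the traced model whose runs will be graded
    evaluator=string_evaluator,
)
# With `run` a Run from your tracing project (and optionally an Example):
# feedback = run_evaluator.evaluate_run(run)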
langchain.evaluation.run_evaluators.implementations.ChoicesOutputParser¶
class langchain.evaluation.run_evaluators.implementations.ChoicesOutputParser(*, eval_chain_output_key: str = 'text', evaluation_name: str, choices_map: Optional[Dict[str, int]] = None)[source]¶
Bases: RunEvaluatorOutputParser
Parse a feedback run with optional choices.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param choices_map: Optional[Dict[str, int]] = None¶
param eval_chain_output_key: str = 'text'¶
param evaluation_name: str [Required]¶
dict(**kwargs: Any) → Dict¶
Return dictionary representation of output parser.
get_format_instructions() → str¶
Instructions on how the LLM output should be formatted.
parse(text: str) → EvaluationResult[source]¶
Parse the last line of the text and return an evaluation result.
parse_chain_output(output: Dict[str, Any]) → EvaluationResult¶
Parse the output of a run.
parse_result(result: List[Generation]) → T¶
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which
is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
parse_with_prompt(completion: str, prompt: PromptValue) → Any¶
Parse the output of an LLM call with the input prompt for context.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – String output of language model.
prompt – Input PromptValue.
Returns
Structured output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
extra = 'ignore'¶
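Since parse reads only the last line of the text, a small hedged sketch can show how choices_map turns a verdict into a score; the field values below are illustrative.

# Sketch: map a final "Y"/"N" verdict line to a numeric score.
from langchain.evaluation.run_evaluators.implementations import (
    ChoicesOutputParser,
)

parser = ChoicesOutputParser(
    evaluation_name="helpfulness",
    choices_map={"Y": 1, "N": 0},  # optional; omit to keep the raw choice
)
result = parser.parse("The answer looks correct.\nY")
# Only the last line ("Y") is parsed, so `result` should be an
# EvaluationResult for "helpfulness" carrying the mapped score 1.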
langchain.evaluation.run_evaluators.string_run_evaluator.StringRunMapper¶
class langchain.evaluation.run_evaluators.string_run_evaluator.StringRunMapper[source]¶
Bases: Serializable
Extract items to evaluate from the run object.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
__call__(run: Run) → Dict[str, str][source]¶
Maps the Run to a dictionary.
abstract map(run: Run) → Dict[str, str][source]¶
Maps the Run to a dictionary.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property output_keys: List[str]¶
The keys to extract from the run.
model Config¶
Bases: object
extra = 'ignore'¶
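Because map is abstract, concrete mappers subclass StringRunMapper; the sketch below is a hypothetical subclass, and it assumes the Run object carries populated inputs and outputs dicts.

# Sketch: a custom mapper that pulls the first input/output values of a Run.
from typing import Dict, List

from langchain.evaluation.run_evaluators.string_run_evaluator import (
    StringRunMapper,
)

class FirstValueRunMapper(StringRunMapper):
    """Map a Run's first input and output values to evaluator strings."""

    @property
    def output_keys(self) -> List[str]:
        return ["input", "prediction"]

    def map(self, run) -> Dict[str, str]:
        # Assumes run.inputs and run.outputs are non-empty dicts.
        return {
            "input": str(next(iter(run.inputs.values()))),
            "prediction": str(next(iter(run.outputs.values()))),
        }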
langchain.evaluation.comparison.eval_chain.PairwiseStringResultOutputParser¶
class langchain.evaluation.comparison.eval_chain.PairwiseStringResultOutputParser[source]¶
Bases: BaseOutputParser[dict]
A parser for the output of the PairwiseStringEvalChain.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of output parser.
get_format_instructions() → str¶
Instructions on how the LLM output should be formatted.
parse(text: str) → Any[source]¶
Parse the output text.
Parameters
text (str) – The output text to parse.
Returns
The parsed output.
Return type
Any
parse_result(result: List[Generation]) → T¶
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which
is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
parse_with_prompt(completion: str, prompt: PromptValue) → Any¶
Parse the output of an LLM call with the input prompt for context.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – String output of language model.
prompt – Input PromptValue.
Returns
Structured output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
extra = 'ignore'¶
langchain.evaluation.schema.LLMEvalChain¶
class langchain.evaluation.schema.LLMEvalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None)[source]¶
Bases: Chain
A base class for evaluators that use an LLM.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param memory: Optional[langchain.schema.memory.BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to e.g. identify a specific instance of a chain with its use case.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to e.g. identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run.
These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks.
These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
.. code-block:: python
chain.dict(exclude_unset=True)
# -> {“_type”: “foo”, “verbose”: False, …}
abstract classmethod from_llm(llm: BaseLanguageModel, **kwargs: Any) → LLMEvalChain[source]¶
Create a new evaluator from an LLM.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation  »  all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output.
If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose  »  verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
abstract property input_keys: List[str]¶
Return the keys expected to be in the chain input.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
abstract property output_keys: List[str]¶
Return the keys expected to be in the chain output.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
langchain.evaluation.loading.load_evaluator¶
langchain.evaluation.loading.load_evaluator(evaluator: EvaluatorType, *, llm: Optional[BaseLanguageModel] = None, **kwargs: Any) → Chain[source]¶
Load the requested evaluation chain specified by a string.
Parameters
evaluator (EvaluatorType) – The type of evaluator to load.
llm (BaseLanguageModel, optional) – The language model to use for evaluation, by default None.
**kwargs (Any) – Additional keyword arguments to pass to the evaluator.
Returns
The loaded evaluation chain.
Return type
Chain
Examples
>>> from langchain.evaluation import load_evaluator, EvaluatorType
>>> evaluator = load_evaluator(EvaluatorType.QA)
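The llm argument and the forwarded **kwargs can be combined; the criteria example below is an assumption about one concrete evaluator rather than something this signature guarantees.

>>> from langchain.chat_models import ChatOpenAI
>>> llm = ChatOpenAI(temperature=0)
>>> qa_evaluator = load_evaluator(EvaluatorType.QA, llm=llm)
>>> # **kwargs are passed through to the underlying chain, e.g. (assumed):
>>> # load_evaluator(EvaluatorType.CRITERIA, llm=llm, criteria="conciseness")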
langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain¶
class langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, agent_tools: Optional[List[BaseTool]] = None, eval_chain: LLMChain, output_parser: TrajectoryOutputParser = None, return_reasoning: bool = False)[source]¶
Bases: AgentTrajectoryEvaluator, LLMEvalChain
A chain for evaluating ReAct style agents.
This chain is used to evaluate ReAct style agents by reasoning about
the sequence of actions taken and their outcomes.
Example:
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.evaluation import TrajectoryEvalChain
from langchain.tools import tool

@tool
def geography_answers(country: str, question: str) -> str:
    """Very helpful answers to geography questions."""
    return f"{country}? IDK - We may never know {question}."

llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
agent = initialize_agent(
    tools=[geography_answers],
    llm=llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    return_intermediate_steps=True,
)

question = "How many dwell in the largest minor region in Argentina?"
response = agent(question)

eval_chain = TrajectoryEvalChain.from_llm(
    llm=llm, agent_tools=[geography_answers], return_reasoning=True
)
result = eval_chain.evaluate_agent_trajectory(
    input=question,
    agent_trajectory=response["intermediate_steps"],
    prediction=response["output"],
    reference="Paris",
)
print(result["score"])
# 0
param agent_tools: Optional[List[langchain.tools.base.BaseTool]] = None¶
A list of tools available to the agent.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param eval_chain: langchain.chains.llm.LLMChain [Required]¶
The language model chain used for evaluation.
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to e.g. identify a specific instance of a chain with its use case.
param output_parser: langchain.evaluation.agents.trajectory_eval_chain.TrajectoryOutputParser [Optional]¶
The output parser used to parse the output.
param return_reasoning: bool = False¶
Whether to return the reasoning along with the score.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to e.g. identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console.
Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs.
Should contain all outputs specified in Chain.output_keys.
async aevaluate_agent_trajectory(*, prediction: str, agent_trajectory: Sequence[Tuple[AgentAction, str]], input: str, reference: Optional[str] = None, **kwargs: Any) → dict¶
Asynchronously evaluate a trajectory.
Parameters
prediction (str) – The final predicted response.
agent_trajectory (List[Tuple[AgentAction, str]]) – The intermediate steps forming the agent trajectory.
input (str) – The input to the agent.
reference (Optional[str]) – The reference answer.
Returns
The evaluation result.
Return type
dict
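A hedged async counterpart to the class example above; eval_chain, response, and question are assumed to exist exactly as constructed there.

# Sketch: grade a captured agent trajectory without blocking.
import asyncio

async def grade() -> dict:
    return await eval_chain.aevaluate_agent_trajectory(
        prediction=response["output"],
        agent_trajectory=response["intermediate_steps"],
        input=question,
        reference="Paris",  # optional reference answer
    )

result = asyncio.run(grade())
print(result["score"])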
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
.. code-block:: python
chain.dict(exclude_unset=True)
# -> {“_type”: “foo”, “verbose”: False, …}
evaluate_agent_trajectory(*, prediction: str, agent_trajectory: Sequence[Tuple[AgentAction, str]], input: str, reference: Optional[str] = None, **kwargs: Any) → dict¶
Evaluate a trajectory.
Parameters
prediction (str) – The final predicted response.
agent_trajectory (List[Tuple[AgentAction, str]]) – The intermediate steps forming the agent trajectory.
input (str) – The input to the agent.
reference (Optional[str]) – The reference answer.
Returns
The evaluation result.
Return type
dict
classmethod from_llm(llm: BaseLanguageModel, agent_tools: Optional[Sequence[BaseTool]] = None, output_parser: Optional[TrajectoryOutputParser] = None, return_reasoning: bool = False, **kwargs: Any) → TrajectoryEvalChain[source]¶
Create a TrajectoryEvalChain object from a language model.
Parameters
llm (BaseLanguageModel) – The language model to use for evaluation.
agent_tools (Optional[Sequence[BaseTool]]) – A list of tools
available to the agent.
output_parser (Optional[TrajectoryOutputParser]) – The output parser
used to parse the chain output into a score.
return_reasoning (bool) – Whether to return the
reasoning along with the score.
Returns
The TrajectoryEvalChain object.
Return type
TrajectoryEvalChain
static get_agent_trajectory(steps: Union[str, Sequence[Tuple[AgentAction, str]]]) → str[source]¶
Get the agent trajectory as a formatted string.
Parameters
steps (Union[str, List[Tuple[AgentAction, str]]]) – The agent trajectory.
Returns
The formatted agent trajectory.
Return type
str
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str][source]¶
Validate and prep inputs.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
"text": "Validate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html"} {"id": "21f4a9fdddf7-8", "text": "these runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to benull.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty input_keys: List[str]\u00b6\nGet the input keys for the chain.\nReturns\nThe input keys.\nReturn type\nList[str]\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. 
These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property output_keys: List[str]¶
Get the output keys for the chain.
Returns
The output keys.
Return type
List[str]
property requires_input: bool¶
Whether this evaluator requires an input string.
property requires_reference: bool¶
Whether this evaluator requires a reference label.
model Config[source]¶
Bases: object
Configuration for the TrajectoryEvalChain.
extra = 'ignore'¶
langchain.evaluation.embedding_distance.base.EmbeddingDistance¶
class langchain.evaluation.embedding_distance.base.EmbeddingDistance(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Bases: str, Enum
Embedding Distance Metric.
COSINE¶
Cosine distance metric.
EUCLIDEAN¶
Euclidean distance metric.
MANHATTAN¶
Manhattan distance metric.
CHEBYSHEV¶
Chebyshev distance metric.
HAMMING¶
Hamming distance metric.
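A hedged sketch of selecting a metric follows. The PAIRWISE_EMBEDDING_DISTANCE evaluator type and the distance_metric keyword are assumptions about the loader, not confirmed by this page; load_evaluator simply forwards extra kwargs to the chain.

# Sketch: pick a distance metric for an embedding-distance evaluator.
from langchain.evaluation import EvaluatorType, load_evaluator
from langchain.evaluation.embedding_distance.base import EmbeddingDistance

evaluator = load_evaluator(
    EvaluatorType.PAIRWISE_EMBEDDING_DISTANCE,  # assumed member name
    distance_metric=EmbeddingDistance.COSINE,   # assumed kwarg name
)
# Because the enum subclasses str, members can be used anywhere a plain
# string metric name is expected.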
Methods
__init__(*args, **kwds)
capitalize()
Return a capitalized version of the string.
casefold()
Return a version of the string suitable for caseless comparisons.
center(width[, fillchar])
Return a centered string of length width.
count(sub[, start[, end]])
Return the number of non-overlapping occurrences of substring sub in string S[start:end].
encode([encoding, errors])
Encode the string using the codec registered for encoding.
endswith(suffix[, start[, end]])
Return True if S ends with the specified suffix, False otherwise.
expandtabs([tabsize])
Return a copy where all tab characters are expanded using spaces.
find(sub[, start[, end]])
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end].
format(*args, **kwargs)
Return a formatted version of S, using substitutions from args and kwargs.
format_map(mapping)
Return a formatted version of S, using substitutions from mapping.
index(sub[, start[, end]])
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end].
isalnum()
Return True if the string is an alpha-numeric string, False otherwise.
isalpha()
Return True if the string is an alphabetic string, False otherwise.
isascii()
Return True if all characters in the string are ASCII, False otherwise.
isdecimal()
Return True if the string is a decimal string, False otherwise.
isdigit()
Return True if the string is a digit string, False otherwise.
isidentifier()
Return True if the string is a valid Python identifier, False otherwise.
islower()
Return True if the string is a lowercase string, False otherwise.
isnumeric()
Return True if the string is a numeric string, False otherwise.
isprintable()
Return True if the string is printable, False otherwise.
isspace()
Return True if the string is a whitespace string, False otherwise.
istitle()
Return True if the string is a title-cased string, False otherwise.
isupper()
Return True if the string is an uppercase string, False otherwise.
join(iterable, /)
Concatenate any number of strings.
ljust(width[, fillchar])
Return a left-justified string of length width.
lower()
Return a copy of the string converted to lowercase.
lstrip([chars])
Return a copy of the string with leading whitespace removed.
maketrans
Return a translation table usable for str.translate().
partition(sep, /)
Partition the string into three parts using the given separator.
removeprefix(prefix, /)
Return a str with the given prefix string removed if present.
removesuffix(suffix, /)
Return a str with the given suffix string removed if present.
replace(old, new[, count])
Return a copy with all occurrences of substring old replaced by new.
rfind(sub[, start[, end]])
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end].
rindex(sub[, start[, end]])
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end].
rjust(width[, fillchar])
Return a right-justified string of length width.
rpartition(sep, /)
Partition the string into three parts using the given separator.
rsplit([sep, maxsplit])
Return a list of the substrings in the string, using sep as the separator string.
rstrip([chars])
Return a copy of the string with trailing whitespace removed.
split([sep, maxsplit])
Return a list of the substrings in the string, using sep as the separator string.
splitlines([keepends])
Return a list of the lines in the string, breaking at line boundaries.
startswith(prefix[, start[, end]])
Return True if S starts with the specified prefix, False otherwise.
strip([chars])
Return a copy of the string with leading and trailing whitespace removed.
swapcase()
Convert uppercase characters to lowercase and lowercase characters to uppercase.
title()
Return a version of the string where each word is titlecased.
translate(table, /)
Replace each character in the string using the given translation table.
upper()
Return a copy of the string converted to uppercase.
zfill(width, /)
Pad a numeric string with zeros on the left, to fill a field of the given width.
Attributes
COSINE
EUCLIDEAN
MANHATTAN
CHEBYSHEV
HAMMING
capitalize()¶
Return a capitalized version of the string.
More specifically, make the first character have upper case and the rest lower
case.
casefold()¶
Return a version of the string suitable for caseless comparisons.
/)\u00b6\nReturn a centered string of length width.\nPadding is done using the specified fill character (default is a space).\ncount(sub[, start[, end]]) \u2192 int\u00b6\nReturn the number of non-overlapping occurrences of substring sub in\nstring S[start:end]. Optional arguments start and end are\ninterpreted as in slice notation.\nencode(encoding='utf-8', errors='strict')\u00b6\nEncode the string using the codec registered for encoding.\nencodingThe encoding in which to encode the string.\nerrorsThe error handling scheme to use for encoding errors.\nThe default is \u2018strict\u2019 meaning that encoding errors raise a\nUnicodeEncodeError. Other possible values are \u2018ignore\u2019, \u2018replace\u2019 and\n\u2018xmlcharrefreplace\u2019 as well as any other name registered with\ncodecs.register_error that can handle UnicodeEncodeErrors.\nendswith(suffix[, start[, end]]) \u2192 bool\u00b6\nReturn True if S ends with the specified suffix, False otherwise.\nWith optional start, test S beginning at that position.\nWith optional end, stop comparing S at that position.\nsuffix can also be a tuple of strings to try.\nexpandtabs(tabsize=8)\u00b6\nReturn a copy where all tab characters are expanded using spaces.\nIf tabsize is not given, a tab size of 8 characters is assumed.\nfind(sub[, start[, end]]) \u2192 int\u00b6\nReturn the lowest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nReturn -1 on failure.\nformat(*args, **kwargs) \u2192 str\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.embedding_distance.base.EmbeddingDistance.html"} {"id": "d4ca4a0e7d91-4", "text": "Return -1 on failure.\nformat(*args, **kwargs) \u2192 str\u00b6\nReturn a formatted version of S, using substitutions from args and kwargs.\nThe substitutions are identified by braces (\u2018{\u2019 and \u2018}\u2019).\nformat_map(mapping) \u2192 str\u00b6\nReturn a formatted version of S, using substitutions from mapping.\nThe substitutions are identified by braces (\u2018{\u2019 and \u2018}\u2019).\nindex(sub[, start[, end]]) \u2192 int\u00b6\nReturn the lowest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. 
Optional\narguments start and end are interpreted as in slice notation.\nRaises ValueError when the substring is not found.\nisalnum()\u00b6\nReturn True if the string is an alpha-numeric string, False otherwise.\nA string is alpha-numeric if all characters in the string are alpha-numeric and\nthere is at least one character in the string.\nisalpha()\u00b6\nReturn True if the string is an alphabetic string, False otherwise.\nA string is alphabetic if all characters in the string are alphabetic and there\nis at least one character in the string.\nisascii()\u00b6\nReturn True if all characters in the string are ASCII, False otherwise.\nASCII characters have code points in the range U+0000-U+007F.\nEmpty string is ASCII too.\nisdecimal()\u00b6\nReturn True if the string is a decimal string, False otherwise.\nA string is a decimal string if all characters in the string are decimal and\nthere is at least one character in the string.\nisdigit()\u00b6\nReturn True if the string is a digit string, False otherwise.\nA string is a digit string if all characters in the string are digits and there\nis at least one character in the string.\nisidentifier()\u00b6\nReturn True if the string is a valid Python identifier, False otherwise.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.embedding_distance.base.EmbeddingDistance.html"} {"id": "d4ca4a0e7d91-5", "text": "isidentifier()\u00b6\nReturn True if the string is a valid Python identifier, False otherwise.\nCall keyword.iskeyword(s) to test whether string s is a reserved identifier,\nsuch as \u201cdef\u201d or \u201cclass\u201d.\nislower()\u00b6\nReturn True if the string is a lowercase string, False otherwise.\nA string is lowercase if all cased characters in the string are lowercase and\nthere is at least one cased character in the string.\nisnumeric()\u00b6\nReturn True if the string is a numeric string, False otherwise.\nA string is numeric if all characters in the string are numeric and there is at\nleast one character in the string.\nisprintable()\u00b6\nReturn True if the string is printable, False otherwise.\nA string is printable if all of its characters are considered printable in\nrepr() or if it is empty.\nisspace()\u00b6\nReturn True if the string is a whitespace string, False otherwise.\nA string is whitespace if all characters in the string are whitespace and there\nis at least one character in the string.\nistitle()\u00b6\nReturn True if the string is a title-cased string, False otherwise.\nIn a title-cased string, upper- and title-case characters may only\nfollow uncased characters and lowercase characters only cased ones.\nisupper()\u00b6\nReturn True if the string is an uppercase string, False otherwise.\nA string is uppercase if all cased characters in the string are uppercase and\nthere is at least one cased character in the string.\njoin(iterable, /)\u00b6\nConcatenate any number of strings.\nThe string whose method is called is inserted in between each given string.\nThe result is returned as a new string.\nExample: \u2018.\u2019.join([\u2018ab\u2019, \u2018pq\u2019, \u2018rs\u2019]) -> \u2018ab.pq.rs\u2019\nljust(width, fillchar=' ', /)\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.embedding_distance.base.EmbeddingDistance.html"} {"id": "d4ca4a0e7d91-6", "text": "ljust(width, fillchar=' ', /)\u00b6\nReturn a left-justified string of length width.\nPadding is done using the specified fill character (default is a space).\nlower()\u00b6\nReturn a 
copy of the string converted to lowercase.\nlstrip(chars=None, /)\u00b6\nReturn a copy of the string with leading whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nstatic maketrans()\u00b6\nReturn a translation table usable for str.translate().\nIf there is only one argument, it must be a dictionary mapping Unicode\nordinals (integers) or characters to Unicode ordinals, strings or None.\nCharacter keys will be then converted to ordinals.\nIf there are two arguments, they must be strings of equal length, and\nin the resulting dictionary, each character in x will be mapped to the\ncharacter at the same position in y. If there is a third argument, it\nmust be a string, whose characters will be mapped to None in the result.\npartition(sep, /)\u00b6\nPartition the string into three parts using the given separator.\nThis will search for the separator in the string. If the separator is found,\nreturns a 3-tuple containing the part before the separator, the separator\nitself, and the part after it.\nIf the separator is not found, returns a 3-tuple containing the original string\nand two empty strings.\nremoveprefix(prefix, /)\u00b6\nReturn a str with the given prefix string removed if present.\nIf the string starts with the prefix string, return string[len(prefix):].\nOtherwise, return a copy of the original string.\nremovesuffix(suffix, /)\u00b6\nReturn a str with the given suffix string removed if present.\nIf the string ends with the suffix string and that suffix is not empty,", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.embedding_distance.base.EmbeddingDistance.html"} {"id": "d4ca4a0e7d91-7", "text": "If the string ends with the suffix string and that suffix is not empty,\nreturn string[:-len(suffix)]. Otherwise, return a copy of the original\nstring.\nreplace(old, new, count=- 1, /)\u00b6\nReturn a copy with all occurrences of substring old replaced by new.\ncountMaximum number of occurrences to replace.\n-1 (the default value) means replace all occurrences.\nIf the optional argument count is given, only the first count occurrences are\nreplaced.\nrfind(sub[, start[, end]]) \u2192 int\u00b6\nReturn the highest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nReturn -1 on failure.\nrindex(sub[, start[, end]]) \u2192 int\u00b6\nReturn the highest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nRaises ValueError when the substring is not found.\nrjust(width, fillchar=' ', /)\u00b6\nReturn a right-justified string of length width.\nPadding is done using the specified fill character (default is a space).\nrpartition(sep, /)\u00b6\nPartition the string into three parts using the given separator.\nThis will search for the separator in the string, starting at the end. 
If\nthe separator is found, returns a 3-tuple containing the part before the\nseparator, the separator itself, and the part after it.\nIf the separator is not found, returns a 3-tuple containing two empty strings\nand the original string.\nrsplit(sep=None, maxsplit=- 1)\u00b6\nReturn a list of the substrings in the string, using sep as the separator string.\nsepThe separator used to split the string.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.embedding_distance.base.EmbeddingDistance.html"} {"id": "d4ca4a0e7d91-8", "text": "sepThe separator used to split the string.\nWhen set to None (the default value), will split on any whitespace\ncharacter (including \\n \\r \\t \\f and spaces) and will discard\nempty strings from the result.\nmaxsplitMaximum number of splits (starting from the left).\n-1 (the default value) means no limit.\nSplitting starts at the end of the string and works to the front.\nrstrip(chars=None, /)\u00b6\nReturn a copy of the string with trailing whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nsplit(sep=None, maxsplit=- 1)\u00b6\nReturn a list of the substrings in the string, using sep as the separator string.\nsepThe separator used to split the string.\nWhen set to None (the default value), will split on any whitespace\ncharacter (including \\n \\r \\t \\f and spaces) and will discard\nempty strings from the result.\nmaxsplitMaximum number of splits (starting from the left).\n-1 (the default value) means no limit.\nNote, str.split() is mainly useful for data that has been intentionally\ndelimited. With natural text that includes punctuation, consider using\nthe regular expression module.\nsplitlines(keepends=False)\u00b6\nReturn a list of the lines in the string, breaking at line boundaries.\nLine breaks are not included in the resulting list unless keepends is given and\ntrue.\nstartswith(prefix[, start[, end]]) \u2192 bool\u00b6\nReturn True if S starts with the specified prefix, False otherwise.\nWith optional start, test S beginning at that position.\nWith optional end, stop comparing S at that position.\nprefix can also be a tuple of strings to try.\nstrip(chars=None, /)\u00b6\nReturn a copy of the string with leading and trailing whitespace removed.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.embedding_distance.base.EmbeddingDistance.html"} {"id": "d4ca4a0e7d91-9", "text": "Return a copy of the string with leading and trailing whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nswapcase()\u00b6\nConvert uppercase characters to lowercase and lowercase characters to uppercase.\ntitle()\u00b6\nReturn a version of the string where each word is titlecased.\nMore specifically, words start with uppercased characters and all remaining\ncased characters have lower case.\ntranslate(table, /)\u00b6\nReplace each character in the string using the given translation table.\ntableTranslation table, which must be a mapping of Unicode ordinals to\nUnicode ordinals, strings, or None.\nThe table must implement lookup/indexing via __getitem__, for instance a\ndictionary or list. If this operation raises LookupError, the character is\nleft untouched. 
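Example
EmbeddingDistance subclasses str, so each member compares equal to its lowercase value (EmbeddingDistance.COSINE == "cosine") and can be passed anywhere a plain string is accepted. A minimal sketch of selecting a metric for the embedding-distance evaluator; it assumes the load_evaluator factory forwards keyword arguments to the underlying chain and that OpenAI credentials are configured, since the default embeddings backend is OpenAI's:
from langchain.evaluation import load_evaluator
from langchain.evaluation.embedding_distance.base import EmbeddingDistance

evaluator = load_evaluator("embedding_distance", distance=EmbeddingDistance.COSINE)
result = evaluator.evaluate_strings(prediction="I shall go", reference="I will go")
# result is a dict with a "score" key; smaller distances mean closer meanings.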
langchain.evaluation.run_evaluators.implementations.TrajectoryInputMapper
class langchain.evaluation.run_evaluators.implementations.TrajectoryInputMapper(*, agent_input_key: str = 'input', agent_output_key: str = 'output', tool_input_key: str = 'input', tool_output_key: str = 'output', reference_output_key: Optional[str] = None)
Bases: RunEvaluatorInputMapper, BaseModel
Maps the Run and Optional[Example] to a dictionary.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param agent_input_key: str = 'input'
The key to load from the agent executor's run input dictionary.
param agent_output_key: str = 'output'
The key to load from the agent executor's run output dictionary.
param reference_output_key: Optional[str] = None
The key to use for selecting the reference answer.
param tool_input_key: str = 'input'
The key to load from the tool executor's run input dictionary.
param tool_output_key: str = 'output'
The key to load from the tool executor's run output dictionary.
__call__(run: Run, example: Optional[Example] = None) → Any
Maps the Run and Optional[Example] to a dictionary.
map(run: Run, example: Optional[Example] = None) → Dict[str, str]
Maps the Run and Optional[Example] to a dictionary.
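Example
A minimal sketch of configuring the mapper; the Run and Example records ordinarily come from a LangSmith client rather than being constructed by hand, and the key names below are illustrative assumptions about how the traced agent run and dataset are laid out:
from langchain.evaluation.run_evaluators.implementations import TrajectoryInputMapper

mapper = TrajectoryInputMapper(
    agent_input_key="input",        # key holding the question in the agent run's inputs
    agent_output_key="output",      # key holding the final answer in the run's outputs
    reference_output_key="answer",  # dataset column holding the reference answer
)
# chain_inputs = mapper.map(run, example)
# The resulting dictionary can be fed to TrajectoryEvalChain.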
langchain.evaluation.qa.generate_chain.QAGenerateChain
class langchain.evaluation.qa.generate_chain.QAGenerateChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, prompt: BasePromptTemplate, llm: BaseLanguageModel, output_key: str = 'text', output_parser: BaseLLMOutputParser = None, return_final_only: bool = True, llm_kwargs: dict = None)
Bases: LLMChain
LLM Chain specifically for generating examples for question answering.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None
Deprecated, use callbacks instead.
param callbacks: Callbacks = None
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods; see the Callback docs for full details.
param llm: BaseLanguageModel [Required]
Language model to call.
param llm_kwargs: dict [Optional]
param memory: Optional[BaseMemory] = None
Optional memory object. Defaults to None.
Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory; please see the memory docs for the full catalog.
param metadata: Optional[Dict[str, Any]] = None
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case.
param output_key: str = 'text'
param output_parser: BaseLLMOutputParser [Optional]
Output parser to use.
Defaults to one that takes the most likely string but does not change it otherwise.
param prompt: BasePromptTemplate [Required]
Prompt object to use.
param return_final_only: bool = True
Whether to return only the final parsed result. Defaults to True.
If false, will return a bunch of extra information about the generation.
param tags: Optional[List[str]] = None
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case.
param verbose: bool [Optional]
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]
Utilize the LLM generate method for speed gains.
async aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]
Call apply and then parse the results.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → LLMResult
Generate LLM result from inputs.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]
Utilize the LLM generate method for speed gains.
apply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]
Call apply and then parse the results.
async apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain.
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm.predict(adjective="funny")
async apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, str]]
Call apredict and then parse the results.
async aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]
Prepare prompts from inputs.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
create_outputs(llm_result: LLMResult) → List[Dict[str, Any]]
Create outputs from response.
dict(**kwargs: Any) → Dict
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_llm(llm: BaseLanguageModel, **kwargs: Any) → QAGenerateChain
Load QA Generate Chain from LLM.
classmethod from_string(llm: BaseLanguageModel, template: str) → LLMChain
Create LLMChain from LLM and template.
generate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → LLMResult
Generate LLM result from inputs.
predict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain.
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm.predict(adjective="funny")
predict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, Any]]
Call predict and then parse the results.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
Returns
A dictionary of all inputs, including those added by the chain's memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
prep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]
Prepare prompts from inputs.
validator raise_callback_manager_deprecation » all fields
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True
extra = 'forbid'
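Example
A short sketch of the typical use: generating question/answer pairs from raw documents to build an evaluation set. The "doc" input key is an assumption based on the chain's default prompt; if you supply a custom prompt, use its input variable instead:
from langchain.chat_models import ChatOpenAI
from langchain.evaluation.qa import QAGenerateChain

example_gen_chain = QAGenerateChain.from_llm(ChatOpenAI(temperature=0))
docs = ["LangChain is a framework for developing applications powered by LLMs."]
examples = example_gen_chain.apply_and_parse([{"doc": d} for d in docs])
# Each parsed result holds a generated question and its answer.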
langchain.evaluation.string_distance.base.StringDistanceEvalChain
class langchain.evaluation.string_distance.base.StringDistanceEvalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, distance: StringDistance = StringDistance.LEVENSHTEIN)
Bases: _RapidFuzzChainMixin, StringEvaluator
Compute string distances between the prediction and the reference.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None
Deprecated, use callbacks instead.
param callbacks: Callbacks = None
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods; see the Callback docs for full details.
param distance: langchain.evaluation.string_distance.base.StringDistance = StringDistance.LEVENSHTEIN
param memory: Optional[BaseMemory] = None
Optional memory object. Defaults to None.
Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory; please see the memory docs for the full catalog.
param metadata: Optional[Dict[str, Any]] = None
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case.
param tags: Optional[List[str]] = None
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case.
param verbose: bool [Optional]
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
Returns
A dictionary of all inputs, including those added by the chain's memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing chain when there's a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
validator validate_dependencies » all fields
Validate that the rapidfuzz library is installed.
Parameters
values (Dict[str, Any]) – The input values.
Returns
The validated values.
Return type
Dict[str, Any]
property evaluation_name: str
property input_keys: List[str]
Get the input keys.
Returns
The input keys.
Return type
List[str]
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
property metric: Callable
Get the distance metric function.
Returns
The distance metric function.
Return type
Callable
property output_keys: List[str]
Get the output keys.
Returns
The output keys.
Return type
List[str]
property requires_input: bool
Check if input is required.
Returns
True if input is required, False otherwise.
Return type
bool
property requires_reference: bool
Check if reference is required.
Returns
True if reference is required, False otherwise.
Return type
bool
model Config
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True
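Example
A minimal sketch, assuming the rapidfuzz package is installed (the validate_dependencies validator above checks for it at construction time):
from langchain.evaluation.string_distance.base import (
    StringDistance,
    StringDistanceEvalChain,
)

chain = StringDistanceEvalChain(distance=StringDistance.LEVENSHTEIN)
result = chain.evaluate_strings(
    prediction="The cat sat on the mat.",
    reference="The cat sat on the hat.",
)
# result is a dict with a "score" key; lower scores mean the strings are closer.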
langchain.evaluation.schema.EvaluatorType
class langchain.evaluation.schema.EvaluatorType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases: str, Enum
The types of the evaluators.
Methods
EvaluatorType inherits the standard str methods (capitalize() through zfill()); they behave exactly as documented above for EmbeddingDistance.
Attributes
QA
Question answering evaluator, which grades answers to questions directly using an LLM.
COT_QA
Chain of thought question answering evaluator, which grades answers to questions using chain of thought "reasoning".
CONTEXT_QA
Question answering evaluator that incorporates "context" in the response.
PAIRWISE_STRING
The pairwise string evaluator, which compares the output of two models.
AGENT_TRAJECTORY
The agent trajectory evaluator, which grades the agent's intermediate steps.
CRITERIA
The criteria evaluator, which evaluates a model based on a custom set of criteria.
STRING_DISTANCE
Compare predictions to a reference answer using string edit distances.
PAIRWISE_STRING_DISTANCE
Compare predictions based on string edit distances.
EMBEDDING_DISTANCE
Compare a prediction to a reference label using embedding distance.
PAIRWISE_EMBEDDING_DISTANCE
Compare two predictions using embedding distance.
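Example
Because EvaluatorType also subclasses str, a member and its string value are interchangeable when loading an evaluator. A minimal sketch, assuming the load_evaluator factory and an OpenAI API key for the grading LLM:
from langchain.chat_models import ChatOpenAI
from langchain.evaluation import load_evaluator
from langchain.evaluation.schema import EvaluatorType

llm = ChatOpenAI(temperature=0)
evaluator = load_evaluator(EvaluatorType.QA, llm=llm)  # same as load_evaluator("qa", llm=llm)
result = evaluator.evaluate_strings(
    input="What is the capital of France?",
    prediction="The capital of France is Paris.",
    reference="Paris",
)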
Optional\narguments start and end are interpreted as in slice notation.\nRaises ValueError when the substring is not found.\nisalnum()\u00b6\nReturn True if the string is an alpha-numeric string, False otherwise.\nA string is alpha-numeric if all characters in the string are alpha-numeric and\nthere is at least one character in the string.\nisalpha()\u00b6\nReturn True if the string is an alphabetic string, False otherwise.\nA string is alphabetic if all characters in the string are alphabetic and there\nis at least one character in the string.\nisascii()\u00b6\nReturn True if all characters in the string are ASCII, False otherwise.\nASCII characters have code points in the range U+0000-U+007F.\nEmpty string is ASCII too.\nisdecimal()\u00b6\nReturn True if the string is a decimal string, False otherwise.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.schema.EvaluatorType.html"} {"id": "4178916e2dda-5", "text": "isdecimal()\u00b6\nReturn True if the string is a decimal string, False otherwise.\nA string is a decimal string if all characters in the string are decimal and\nthere is at least one character in the string.\nisdigit()\u00b6\nReturn True if the string is a digit string, False otherwise.\nA string is a digit string if all characters in the string are digits and there\nis at least one character in the string.\nisidentifier()\u00b6\nReturn True if the string is a valid Python identifier, False otherwise.\nCall keyword.iskeyword(s) to test whether string s is a reserved identifier,\nsuch as \u201cdef\u201d or \u201cclass\u201d.\nislower()\u00b6\nReturn True if the string is a lowercase string, False otherwise.\nA string is lowercase if all cased characters in the string are lowercase and\nthere is at least one cased character in the string.\nisnumeric()\u00b6\nReturn True if the string is a numeric string, False otherwise.\nA string is numeric if all characters in the string are numeric and there is at\nleast one character in the string.\nisprintable()\u00b6\nReturn True if the string is printable, False otherwise.\nA string is printable if all of its characters are considered printable in\nrepr() or if it is empty.\nisspace()\u00b6\nReturn True if the string is a whitespace string, False otherwise.\nA string is whitespace if all characters in the string are whitespace and there\nis at least one character in the string.\nistitle()\u00b6\nReturn True if the string is a title-cased string, False otherwise.\nIn a title-cased string, upper- and title-case characters may only\nfollow uncased characters and lowercase characters only cased ones.\nisupper()\u00b6\nReturn True if the string is an uppercase string, False otherwise.\nA string is uppercase if all cased characters in the string are uppercase and", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.schema.EvaluatorType.html"} {"id": "4178916e2dda-6", "text": "A string is uppercase if all cased characters in the string are uppercase and\nthere is at least one cased character in the string.\njoin(iterable, /)\u00b6\nConcatenate any number of strings.\nThe string whose method is called is inserted in between each given string.\nThe result is returned as a new string.\nExample: \u2018.\u2019.join([\u2018ab\u2019, \u2018pq\u2019, \u2018rs\u2019]) -> \u2018ab.pq.rs\u2019\nljust(width, fillchar=' ', /)\u00b6\nReturn a left-justified string of length width.\nPadding is done using the specified fill character (default is a space).\nlower()\u00b6\nReturn a copy of the 
string converted to lowercase.\nlstrip(chars=None, /)\u00b6\nReturn a copy of the string with leading whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nstatic maketrans()\u00b6\nReturn a translation table usable for str.translate().\nIf there is only one argument, it must be a dictionary mapping Unicode\nordinals (integers) or characters to Unicode ordinals, strings or None.\nCharacter keys will then be converted to ordinals.\nIf there are two arguments, they must be strings of equal length, and\nin the resulting dictionary, each character in x will be mapped to the\ncharacter at the same position in y. If there is a third argument, it\nmust be a string, whose characters will be mapped to None in the result.\npartition(sep, /)\u00b6\nPartition the string into three parts using the given separator.\nThis will search for the separator in the string. If the separator is found,\nreturns a 3-tuple containing the part before the separator, the separator\nitself, and the part after it.\nIf the separator is not found, returns a 3-tuple containing the original string\nand two empty strings.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.schema.EvaluatorType.html"} {"id": "4178916e2dda-7", "text": "and two empty strings.\nremoveprefix(prefix, /)\u00b6\nReturn a str with the given prefix string removed if present.\nIf the string starts with the prefix string, return string[len(prefix):].\nOtherwise, return a copy of the original string.\nremovesuffix(suffix, /)\u00b6\nReturn a str with the given suffix string removed if present.\nIf the string ends with the suffix string and that suffix is not empty,\nreturn string[:-len(suffix)]. Otherwise, return a copy of the original\nstring.\nreplace(old, new, count=-1, /)\u00b6\nReturn a copy with all occurrences of substring old replaced by new.\ncount \u2013 Maximum number of occurrences to replace.\n-1 (the default value) means replace all occurrences.\nIf the optional argument count is given, only the first count occurrences are\nreplaced.\nrfind(sub[, start[, end]]) \u2192 int\u00b6\nReturn the highest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nReturn -1 on failure.\nrindex(sub[, start[, end]]) \u2192 int\u00b6\nReturn the highest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nRaises ValueError when the substring is not found.\nrjust(width, fillchar=' ', /)\u00b6\nReturn a right-justified string of length width.\nPadding is done using the specified fill character (default is a space).\nrpartition(sep, /)\u00b6\nPartition the string into three parts using the given separator.\nThis will search for the separator in the string, starting at the end. 
If\nthe separator is found, returns a 3-tuple containing the part before the", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.schema.EvaluatorType.html"} {"id": "4178916e2dda-8", "text": "the separator is found, returns a 3-tuple containing the part before the\nseparator, the separator itself, and the part after it.\nIf the separator is not found, returns a 3-tuple containing two empty strings\nand the original string.\nrsplit(sep=None, maxsplit=-1)\u00b6\nReturn a list of the substrings in the string, using sep as the separator string.\nsep \u2013 The separator used to split the string.\nWhen set to None (the default value), will split on any whitespace\ncharacter (including \\n \\r \\t \\f and spaces) and will discard\nempty strings from the result.\nmaxsplit \u2013 Maximum number of splits (starting from the left).\n-1 (the default value) means no limit.\nSplitting starts at the end of the string and works to the front.\nrstrip(chars=None, /)\u00b6\nReturn a copy of the string with trailing whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nsplit(sep=None, maxsplit=-1)\u00b6\nReturn a list of the substrings in the string, using sep as the separator string.\nsep \u2013 The separator used to split the string.\nWhen set to None (the default value), will split on any whitespace\ncharacter (including \\n \\r \\t \\f and spaces) and will discard\nempty strings from the result.\nmaxsplit \u2013 Maximum number of splits (starting from the left).\n-1 (the default value) means no limit.\nNote, str.split() is mainly useful for data that has been intentionally\ndelimited. With natural text that includes punctuation, consider using\nthe regular expression module.\nsplitlines(keepends=False)\u00b6\nReturn a list of the lines in the string, breaking at line boundaries.\nLine breaks are not included in the resulting list unless keepends is given and\ntrue.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.schema.EvaluatorType.html"} {"id": "4178916e2dda-9", "text": "Line breaks are not included in the resulting list unless keepends is given and\ntrue.\nstartswith(prefix[, start[, end]]) \u2192 bool\u00b6\nReturn True if S starts with the specified prefix, False otherwise.\nWith optional start, test S beginning at that position.\nWith optional end, stop comparing S at that position.\nprefix can also be a tuple of strings to try.\nstrip(chars=None, /)\u00b6\nReturn a copy of the string with leading and trailing whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nswapcase()\u00b6\nConvert uppercase characters to lowercase and lowercase characters to uppercase.\ntitle()\u00b6\nReturn a version of the string where each word is titlecased.\nMore specifically, words start with uppercased characters and all remaining\ncased characters have lower case.\ntranslate(table, /)\u00b6\nReplace each character in the string using the given translation table.\ntable \u2013 Translation table, which must be a mapping of Unicode ordinals to\nUnicode ordinals, strings, or None.\nThe table must implement lookup/indexing via __getitem__, for instance a\ndictionary or list. If this operation raises LookupError, the character is\nleft untouched. 
Characters mapped to None are deleted.\nupper()\u00b6\nReturn a copy of the string converted to uppercase.\nzfill(width, /)\u00b6\nPad a numeric string with zeros on the left, to fill a field of the given width.\nThe string is never truncated.\nAGENT_TRAJECTORY = 'trajectory'\u00b6\nThe agent trajectory evaluator, which grades the agent\u2019s intermediate steps.\nCONTEXT_QA = 'context_qa'\u00b6\nQuestion answering evaluator that incorporates \u2018context\u2019 in the response.\nCOT_QA = 'cot_qa'\u00b6\nChain of thought question answering evaluator, which grades\nanswers to questions using\nchain of thought \u2018reasoning\u2019.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.schema.EvaluatorType.html"} {"id": "4178916e2dda-10", "text": "answers to questions using\nchain of thought \u2018reasoning\u2019.\nCRITERIA = 'criteria'\u00b6\nThe criteria evaluator, which evaluates a model based on a\ncustom set of criteria.\nEMBEDDING_DISTANCE = 'embedding_distance'\u00b6\nCompare a prediction to a reference label using embedding distance.\nPAIRWISE_EMBEDDING_DISTANCE = 'pairwise_embedding_distance'\u00b6\nCompare two predictions using embedding distance.\nPAIRWISE_STRING = 'pairwise_string'\u00b6\nThe pairwise string evaluator, which compares the output of two models.\nPAIRWISE_STRING_DISTANCE = 'pairwise_string_distance'\u00b6\nCompare predictions based on string edit distances.\nQA = 'qa'\u00b6\nQuestion answering evaluator, which grades answers to questions\ndirectly using an LLM.\nSTRING_DISTANCE = 'string_distance'\u00b6\nCompare predictions to a reference answer using string edit distances.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.schema.EvaluatorType.html"} {"id": "36a5b4392f60-0", "text": "langchain.evaluation.string_distance.base.PairwiseStringDistanceEvalChain\u00b6\nclass langchain.evaluation.string_distance.base.PairwiseStringDistanceEvalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, distance: StringDistance = StringDistance.LEVENSHTEIN)[source]\u00b6\nBases: _RapidFuzzChainMixin, PairwiseStringEvaluator\nCompute string edit distances between two predictions.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam distance: langchain.evaluation.string_distance.base.StringDistance = StringDistance.LEVENSHTEIN\u00b6\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. 
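EvaluatorType subclasses str, so the string methods documented above are available directly on its members. An illustrative doctest (only the enum values listed above are assumed):
>>> from langchain.evaluation.schema import EvaluatorType
>>> EvaluatorType.QA == "qa"  # members compare equal to their string values
True
>>> EvaluatorType.PAIRWISE_STRING.startswith("pairwise")  # inherited str method
True
>>> EvaluatorType.AGENT_TRAJECTORY.upper()
'TRAJECTORY'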
At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.string_distance.base.PairwiseStringDistanceEvalChain.html"} {"id": "36a5b4392f60-1", "text": "for the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.string_distance.base.PairwiseStringDistanceEvalChain.html"} {"id": "36a5b4392f60-2", "text": "addition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. 
Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.string_distance.base.PairwiseStringDistanceEvalChain.html"} {"id": "36a5b4392f60-3", "text": "these runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.string_distance.base.PairwiseStringDistanceEvalChain.html"} {"id": "36a5b4392f60-4", "text": "these runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\nchain.dict(exclude_unset=True)\n# -> {\"_type\": \"foo\", \"verbose\": False, ...}\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.string_distance.base.PairwiseStringDistanceEvalChain.html"} {"id": "36a5b4392f60-5", "text": "Validate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. 
If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.string_distance.base.PairwiseStringDistanceEvalChain.html"} {"id": "36a5b4392f60-6", "text": "these runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nvalidator validate_dependencies\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate that the rapidfuzz library is installed.\nParameters\nvalues (Dict[str, Any]) \u2013 The input values.\nReturns\nThe validated values.\nReturn type\nDict[str, Any]\nproperty evaluation_name: str\u00b6\nproperty input_keys: List[str]\u00b6\nGet the input keys.\nReturns\nThe input keys.\nReturn type\nList[str]\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.string_distance.base.PairwiseStringDistanceEvalChain.html"} {"id": "36a5b4392f60-7", "text": "property lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty metric: Callable\u00b6\nGet the distance metric function.\nReturns\nThe distance metric function.\nReturn type\nCallable\nproperty output_keys: List[str]\u00b6\nGet the output keys.\nReturns\nThe output keys.\nReturn type\nList[str]\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.string_distance.base.PairwiseStringDistanceEvalChain.html"} {"id": "59521b4371ba-0", "text": "langchain.evaluation.run_evaluators.base.RunEvaluatorOutputParser\u00b6\nclass langchain.evaluation.run_evaluators.base.RunEvaluatorOutputParser(*, eval_chain_output_key: str = 'text')[source]\u00b6\nBases: BaseOutputParser[EvaluationResult]\nParse the output of a run.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam eval_chain_output_key: str = 'text'\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str\u00b6\nInstructions on how the LLM output should be formatted.\nabstract parse(text: str) \u2192 T\u00b6\nParse a single string model output into some structure.\nParameters\ntext \u2013 String output of language model.\nReturns\nStructured output.\nparse_chain_output(output: Dict[str, Any]) \u2192 EvaluationResult[source]\u00b6\nParse the output of a run.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.base.RunEvaluatorOutputParser.html"} {"id": "59521b4371ba-1", "text": "prompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
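A minimal usage sketch for PairwiseStringDistanceEvalChain, assuming the rapidfuzz dependency checked by validate_dependencies above is installed; the example strings are illustrative and the result dict is assumed to carry a score entry:
>>> from langchain.evaluation.string_distance.base import (
...     PairwiseStringDistanceEvalChain, StringDistance
... )
>>> chain = PairwiseStringDistanceEvalChain(distance=StringDistance.LEVENSHTEIN)
>>> result = chain.evaluate_string_pairs(
...     prediction="The temperature in Boise is 75F.",
...     prediction_b="It is 75 degrees in Boise today.",
... )
>>> result["score"]  # an edit-distance-based score computed via rapidfuzz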
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.base.RunEvaluatorOutputParser.html"} {"id": "b99c90fe2dc2-0", "text": "langchain.evaluation.run_evaluators.implementations.get_qa_evaluator\u00b6\nlangchain.evaluation.run_evaluators.implementations.get_qa_evaluator(llm: BaseLanguageModel, *, prompt: Union[PromptTemplate, str] = PromptTemplate(input_variables=['query', 'result', 'answer'], output_parser=None, partial_variables={}, template=\"You are a teacher grading a quiz.\\nYou are given a question, the student's answer, and the true answer, and are asked to score the student answer as either CORRECT or INCORRECT.\\n\\nExample Format:\\nQUESTION: question here\\nSTUDENT ANSWER: student's answer here\\nTRUE ANSWER: true answer here\\nGRADE: CORRECT or INCORRECT here\\n\\nGrade the student answers based ONLY on their factual accuracy. Ignore differences in punctuation and phrasing between the student answer and true answer. It is OK if the student answer contains more information than the true answer, as long as it does not contain any conflicting statements. Begin! \\n\\nQUESTION: {query}\\nSTUDENT ANSWER: {result}\\nTRUE ANSWER: {answer}\\nGRADE:\", template_format='f-string', validate_template=True), input_key: str = 'input', prediction_key: str = 'output', answer_key: str = 'output', evaluation_name: Optional[str] = None, **kwargs: Any) \u2192 RunEvaluatorChain[source]\u00b6\nGet an eval chain that compares response against ground truth.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.get_qa_evaluator.html"} {"id": "2269254574df-0", "text": "langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEval\u00b6\nclass langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEval(score, reasoning)[source]\u00b6\nBases: NamedTuple\nCreate new instance of TrajectoryEval(score, reasoning)\nMethods\n__init__()\ncount(value,\u00a0/)\nReturn number of occurrences of value.\nindex(value[,\u00a0start,\u00a0stop])\nReturn first index of value.\nAttributes\nreasoning\nAlias for field number 1\nscore\nAlias for field number 0\ncount(value, /)\u00b6\nReturn number of occurrences of value.\nindex(value, start=0, stop=9223372036854775807, /)\u00b6\nReturn first index of value.\nRaises ValueError if the value is not present.\nreasoning: str\u00b6\nAlias for field number 1\nscore: int\u00b6\nAlias for field number 0", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEval.html"} {"id": "1e3a9b298b1d-0", "text": "langchain.evaluation.run_evaluators.loading.load_run_evaluator_for_model\u00b6\nlangchain.evaluation.run_evaluators.loading.load_run_evaluator_for_model(evaluator: EvaluatorType, model: Union[Chain, BaseLanguageModel, Tool], *, input_key: Optional[str] = None, prediction_key: Optional[str] = None, reference_key: Optional[str] = None, eval_llm: Optional[BaseLanguageModel] = None, **kwargs: Any) \u2192 List[RunEvaluator][source]\u00b6\nLoad evaluators specified by a list of evaluator types.\nParameters\nevaluator (EvaluatorType) \u2013 The evaluator type to load.\nmodel (Union[Chain, BaseLanguageModel, Tool]) \u2013 The model to evaluate. 
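Because TrajectoryEval is a plain NamedTuple, attribute access, indexing, and the count/index methods above behave exactly as for any tuple; the values below are illustrative:
>>> from langchain.evaluation.agents.trajectory_eval_chain import TrajectoryEval
>>> result = TrajectoryEval(score=1, reasoning="The agent chose the correct tool.")
>>> (result.score, result[0])
(1, 1)
>>> result.count("The agent chose the correct tool.")
1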
Used to infer how to parse the run.\ninput_key (Optional[str]) \u2013 a chain run's input key to map to the evaluator\u2019s input\nprediction_key (Optional[str]) \u2013 the key in the run's outputs to represent the Chain prediction\nreference_key (Optional[str]) \u2013 the key in the dataset example (row) outputs to represent the reference, or ground-truth label\neval_llm (BaseLanguageModel, optional) \u2013 The language model to use for evaluation. If none is provided, a default\nChatOpenAI gpt-4 model will be used.\n**kwargs (Any) \u2013 Additional keyword arguments to pass to all evaluators.\nReturns\nThe loaded run evaluators.\nReturn type\nList[RunEvaluator]", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.loading.load_run_evaluator_for_model.html"} {"id": "c21025ae5e4e-0", "text": "langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain\u00b6\nclass langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, prompt: BasePromptTemplate, llm: BaseLanguageModel, output_key: str = 'text', output_parser: BaseOutputParser = None, return_final_only: bool = True, llm_kwargs: dict = None)[source]\u00b6\nBases: PairwiseStringEvaluator, LLMEvalChain, LLMChain\nA chain for comparing two outputs, such as the outputs\nof two models, prompts, or outputs of a single model on similar inputs.\nExample:\n>>> from langchain.chat_models import ChatOpenAI\n>>> from langchain.evaluation.comparison import PairwiseStringEvalChain\n>>> llm = ChatOpenAI(temperature=0)\n>>> chain = PairwiseStringEvalChain.from_llm(llm=llm)\n>>> result = chain.evaluate_string_pairs(\n...     input = \"What is the chemical formula for water?\",\n...     prediction = \"H2O\",\n...     prediction_b = (\n...         \"The chemical formula for water is H2O, which means\"\n...         \" there are two hydrogen atoms and one oxygen atom.\"\n...     ),\n...     reference = \"The chemical formula for water is H2O.\",\n... )\n>>> print(result[\"text\"])\n# {\n#     \"value\": \"B\",\n#     \"comment\": \"Both responses accurately state\"\n#     \" that the chemical formula for water is H2O.\"", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html"} {"id": "c21025ae5e4e-1", "text": "#     \" that the chemical formula for water is H2O.\"\n#     \" However, Response B provides additional information\"\n#     \" by explaining what the formula means. [[B]]\"\n# }\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam llm: BaseLanguageModel [Required]\u00b6\nLanguage model to call.\nparam llm_kwargs: dict [Optional]\u00b6\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. 
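A sketch of calling load_run_evaluator_for_model per the signature above; the ChatOpenAI model is illustrative, and an OpenAI API key is assumed for the default gpt-4 eval LLM:
>>> from langchain.chat_models import ChatOpenAI
>>> from langchain.evaluation.run_evaluators.loading import load_run_evaluator_for_model
>>> from langchain.evaluation.schema import EvaluatorType
>>> model = ChatOpenAI(temperature=0)
>>> evaluators = load_run_evaluator_for_model(EvaluatorType.QA, model)  # -> List[RunEvaluator]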
Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam output_parser: BaseOutputParser [Optional]\u00b6\nOutput parser to use.\nDefaults to one that takes the most likely string but does not change it\notherwise.\nparam prompt: BasePromptTemplate [Required]\u00b6\nPrompt object to use.\nparam return_final_only: bool = True\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html"} {"id": "c21025ae5e4e-2", "text": "Prompt object to use.\nparam return_final_only: bool = True\u00b6\nWhether to return only the final parsed result. Defaults to True.\nIf false, will return a bunch of extra information about the generation.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html"} {"id": "c21025ae5e4e-3", "text": "tags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. 
Should contain all outputs specified in Chain.output_keys.\nasync aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nUtilize the LLM generate method for speed gains.\nasync aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Union[str, List[str], Dict[str, str]]]\u00b6\nCall apply and then parse the results.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html"} {"id": "c21025ae5e4e-4", "text": "chain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nasync aevaluate_string_pairs(*, prediction: str, prediction_b: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) \u2192 dict\u00b6\nEvaluate the output string pairs.\nParameters\nprediction (str) \u2013 The output string from the first model.\nprediction_b (str) \u2013 The output string from the second model.\nreference (str, optional) \u2013 The expected output / reference\nstring. Defaults to None.\ninput (str, optional) \u2013 The input string. 
Defaults to None.\n**kwargs (Any) \u2013 Additional keyword arguments, such\nas callbacks and optional reference strings.\nReturns\nA dictionary containing the preference, scores, and/or other information.\nReturn type\ndict\nasync agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) \u2192 LLMResult\u00b6\nGenerate LLM result from inputs.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nUtilize the LLM generate method for speed gains.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html"} {"id": "c21025ae5e4e-5", "text": "Utilize the LLM generate method for speed gains.\napply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Union[str, List[str], Dict[str, str]]]\u00b6\nCall apply and then parse the results.\nasync apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 str\u00b6\nFormat prompt with kwargs and pass to LLM.\nParameters\ncallbacks \u2013 Callbacks to pass to LLMChain\n**kwargs \u2013 Keys to pass to prompt template.\nReturns\nCompletion from LLM.\nExample\ncompletion = await chain.apredict(adjective=\"funny\")\nasync apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[str, List[str], Dict[str, str]]\u00b6\nCall apredict and then parse the results.\nasync aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) \u2192 Tuple[List[PromptValue], Optional[List[str]]]\u00b6\nPrepare prompts from inputs.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html"} {"id": "c21025ae5e4e-6", "text": "info along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ncreate_outputs(llm_result: LLMResult) \u2192 List[Dict[str, Any]]\u00b6\nCreate outputs from response.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html"} {"id": "c21025ae5e4e-7", "text": "Parameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\nchain.dict(exclude_unset=True)\n# -> {\"_type\": \"foo\", \"verbose\": False, ...}\nevaluate_string_pairs(*, prediction: str, prediction_b: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) \u2192 dict\u00b6\nEvaluate the output string pairs.\nParameters\nprediction (str) \u2013 The output string from the first model.\nprediction_b (str) \u2013 The output string from the second model.\nreference (str, optional) \u2013 The expected output / reference\nstring. Defaults to None.\ninput (str, optional) \u2013 The input string. Defaults to None.\n**kwargs (Any) \u2013 Additional keyword arguments, such\nas callbacks and optional reference strings.\nReturns\nA dictionary containing the preference, scores, and/or other information.\nReturn type\ndict\nclassmethod from_llm(llm: BaseLanguageModel, *, prompt: Optional[PromptTemplate] = None, requires_reference: bool = False, **kwargs: Any) \u2192 PairwiseStringEvalChain[source]\u00b6\nInitialize the PairwiseStringEvalChain from an LLM.\nParameters\nllm (BaseLanguageModel) \u2013 The LLM to use.\nprompt (PromptTemplate, optional) \u2013 The prompt to use.\nrequires_reference (bool, optional) \u2013 Whether to require a reference\nstring. 
Defaults to False.\n**kwargs (Any) \u2013 Additional keyword arguments.\nReturns\nThe initialized PairwiseStringEvalChain.\nReturn type\nPairwiseStringEvalChain\nclassmethod from_string(llm: BaseLanguageModel, template: str) \u2192 LLMChain\u00b6\nCreate LLMChain from LLM and template.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html"} {"id": "c21025ae5e4e-8", "text": "Create LLMChain from LLM and template.\ngenerate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) \u2192 LLMResult\u00b6\nGenerate LLM result from inputs.\npredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 str\u00b6\nFormat prompt with kwargs and pass to LLM.\nParameters\ncallbacks \u2013 Callbacks to pass to LLMChain\n**kwargs \u2013 Keys to pass to prompt template.\nReturns\nCompletion from LLM.\nExample\ncompletion = chain.predict(adjective=\"funny\")\npredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[str, List[str], Dict[str, Any]]\u00b6\nCall predict and then parse the results.\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html"} {"id": "c21025ae5e4e-9", "text": "Returns\nA dict of the final chain outputs.\nprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) \u2192 Tuple[List[PromptValue], Optional[List[str]]]\u00b6\nPrepare prompts from inputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. 
If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html"} {"id": "c21025ae5e4e-10", "text": "directly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty requires_input: bool\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html"} {"id": "c21025ae5e4e-11", "text": "Return whether or not the class is serializable.\nproperty requires_input: bool\u00b6\nWhether this evaluator requires an input string.\nproperty requires_reference: bool\u00b6\nWhether this evaluator requires a reference label.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for the QAEvalChain.\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html"} {"id": "225a7b9cdeb6-0", "text": "langchain.evaluation.run_evaluators.implementations.get_criteria_evaluator\u00b6\nlangchain.evaluation.run_evaluators.implementations.get_criteria_evaluator(llm: BaseLanguageModel, criteria: Union[Mapping[str, str], Sequence[str], str], *, input_key: str = 'input', prediction_key: str = 'output', prompt: Optional[BasePromptTemplate] = None, evaluation_name: Optional[str] = None, requires_reference: bool = False, **kwargs: Any) \u2192 RunEvaluatorChain[source]\u00b6\nGet an eval chain for grading a model\u2019s response against a map of criteria.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.get_criteria_evaluator.html"} {"id": "ac6cd548b4e7-0", "text": "langchain.evaluation.qa.eval_chain.CotQAEvalChain\u00b6\nclass langchain.evaluation.qa.eval_chain.CotQAEvalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, prompt: BasePromptTemplate, llm: BaseLanguageModel, output_key: str = 'text', output_parser: BaseLLMOutputParser = None, return_final_only: bool = True, llm_kwargs: dict = None)[source]\u00b6\nBases: ContextQAEvalChain\nLLM Chain specifically for evaluating QA using chain of thought reasoning.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam llm: BaseLanguageModel [Required]\u00b6\nLanguage model to call.\nparam llm_kwargs: dict [Optional]\u00b6\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. 
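A sketch for get_criteria_evaluator following the signature above; the criteria mapping is illustrative rather than part of the documented API, and the ChatOpenAI eval LLM assumes a configured OpenAI key:
>>> from langchain.chat_models import ChatOpenAI
>>> from langchain.evaluation.run_evaluators.implementations import get_criteria_evaluator
>>> evaluator = get_criteria_evaluator(
...     ChatOpenAI(temperature=0),
...     {"conciseness": "Is the response short and to the point?"},
... )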
At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html"} {"id": "ac6cd548b4e7-1", "text": "for the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam output_key: str = 'text'\u00b6\nparam output_parser: BaseLLMOutputParser [Optional]\u00b6\nOutput parser to use.\nDefaults to one that takes the most likely string but does not change it\notherwise.\nparam prompt: BasePromptTemplate [Required]\u00b6\nPrompt object to use.\nparam return_final_only: bool = True\u00b6\nWhether to return only the final parsed result. Defaults to True.\nIf false, will return a bunch of extra information about the generation.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html"} {"id": "ac6cd548b4e7-2", "text": "only one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. 
Should contain all outputs specified in Chain.output_keys.\nasync aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nUtilize the LLM generate method for speed gains.\nasync aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Union[str, List[str], Dict[str, str]]]\u00b6\nCall apply and then parse the results.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html"} {"id": "ac6cd548b4e7-3", "text": "Call apply and then parse the results.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.
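The calling convention above can be made concrete with a short sketch. It assumes `eval_chain` is an already-constructed CotQAEvalChain and that the input keys match its prompt variables; the data is invented.

```python
# Minimal sketch of invoking an eval chain via __call__. Assumes `eval_chain`
# is a CotQAEvalChain whose prompt variables are 'query', 'context', 'result'.
result = eval_chain(
    {
        "query": "What color is the sky?",
        "context": "On a clear day the sky appears blue.",
        "result": "Blue.",
    },
    return_only_outputs=True,  # keep only keys generated by the chain
)
print(result["text"])  # output_key defaults to 'text'
```

The async acall accepts the same arguments and returns the same dict shape.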
async aevaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) \u2192 dict\u00b6\nAsynchronously evaluate Chain or LLM output, based on optional input and label.\nParameters\nprediction (str) \u2013 the LLM or chain prediction to evaluate.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html"} {"id": "ac6cd548b4e7-4", "text": "Parameters\nprediction (str) \u2013 the LLM or chain prediction to evaluate.\nreference (Optional[str], optional) \u2013 the reference label\nto evaluate against.\ninput (Optional[str], optional) \u2013 the input to consider during evaluation\n**kwargs \u2013 additional keyword arguments, including callbacks, tags, etc.\nReturns\nThe evaluation results containing the score or value.\nReturn type\ndict\nasync agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) \u2192 LLMResult\u00b6\nGenerate LLM result from inputs.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nUtilize the LLM generate method for speed gains.\napply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Union[str, List[str], Dict[str, str]]]\u00b6\nCall apply and then parse the results.\nasync apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 str\u00b6\nFormat prompt with kwargs and pass to LLM.\nParameters\ncallbacks \u2013 Callbacks to pass to LLMChain\n**kwargs \u2013 Keys to pass to prompt template.\nReturns\nCompletion from LLM.\nExample\ncompletion = await chain.apredict(adjective=\"funny\")\nasync apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[str, List[str], Dict[str, str]]\u00b6\nCall apredict and then parse the results.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html"} {"id": "ac6cd548b4e7-5", "text": "Call apredict and then parse the results.\nasync aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) \u2192 Tuple[List[PromptValue], Optional[List[str]]]\u00b6\nPrepare prompts from inputs.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. 
These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html"} {"id": "ac6cd548b4e7-6", "text": "Example\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ncreate_outputs(llm_result: LLMResult) \u2192 List[Dict[str, Any]]\u00b6\nCreate outputs from response.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to benull.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n..code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nevaluate(examples: List[dict], predictions: List[dict], question_key: str = 'query', context_key: str = 'context', prediction_key: str = 'result', *, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[dict]\u00b6\nEvaluate question answering examples and predictions.\nevaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) \u2192 dict\u00b6\nEvaluate Chain or LLM output, based on optional input and label.\nParameters\nprediction (str) \u2013 the LLM or chain prediction to evaluate.\nreference (Optional[str], optional) \u2013 the reference label", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html"} {"id": "ac6cd548b4e7-7", "text": "reference (Optional[str], optional) \u2013 the reference label\nto evaluate against.\ninput (Optional[str], optional) \u2013 the input to consider during evaluation\n**kwargs \u2013 additional keyword arguments, including callbacks, tags, etc.\nReturns\nThe evaluation results containing the score or value.\nReturn type\ndict\nclassmethod from_llm(llm: BaseLanguageModel, prompt: PromptTemplate = PromptTemplate(input_variables=['query', 'context', 'result'], output_parser=None, partial_variables={}, template=\"You are a teacher grading a quiz.\\nYou are given a question, the context the question is about, and the student's answer. You are asked to score the student's answer as either CORRECT or INCORRECT, based on the context.\\nWrite out in a step by step manner your reasoning to be sure that your conclusion is correct. 
Avoid simply stating the correct answer at the outset.\\n\\nExample Format:\\nQUESTION: question here\\nCONTEXT: context the question is about here\\nSTUDENT ANSWER: student's answer here\\nEXPLANATION: step by step reasoning here\\nGRADE: CORRECT or INCORRECT here\\n\\nGrade the student answers based ONLY on their factual accuracy. Ignore differences in punctuation and phrasing between the student answer and true answer. It is OK if the student answer contains more information than the true answer, as long as it does not contain any conflicting statements. Begin! \\n\\nQUESTION: {query}\\nCONTEXT: {context}\\nSTUDENT ANSWER: {result}\\nEXPLANATION:\", template_format='f-string', validate_template=True), **kwargs: Any) \u2192 CotQAEvalChain[source]\u00b6\nLoad QA Eval Chain from LLM.\nParameters\nllm (BaseLanguageModel) \u2013 the base language model to use.\nprompt (PromptTemplate, optional) \u2013 A prompt template containing the input_variables 'query', 'context' and 'result' that will be used as the prompt for evaluation. Defaults to PROMPT.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html"} {"id": "ac6cd548b4e7-8", "text": "**kwargs \u2013 additional keyword arguments.\nReturns\nthe loaded QA eval chain.\nReturn type\nCotQAEvalChain\nclassmethod from_string(llm: BaseLanguageModel, template: str) \u2192 LLMChain\u00b6\nCreate LLMChain from LLM and template.\ngenerate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) \u2192 LLMResult\u00b6\nGenerate LLM result from inputs.\npredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 str\u00b6\nFormat prompt with kwargs and pass to LLM.\nParameters\ncallbacks \u2013 Callbacks to pass to LLMChain\n**kwargs \u2013 Keys to pass to prompt template.\nReturns\nCompletion from LLM.\nExample\ncompletion = chain.predict(adjective=\"funny\")\npredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[str, List[str], Dict[str, Any]]\u00b6\nCall predict and then parse the results.\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6
If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) \u2192 Tuple[List[PromptValue], Optional[List[str]]]\u00b6\nPrepare prompts from inputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html"} {"id": "ac6cd548b4e7-10", "text": "these runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to benull.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty evaluation_name: str\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. 
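Putting from_llm and the string-evaluator interface together, the following is a hedged end-to-end sketch. The ChatOpenAI model and the QA triple are illustrative assumptions.

```python
# Hedged end-to-end sketch for CotQAEvalChain, using the default grading PROMPT.
from langchain.chat_models import ChatOpenAI
from langchain.evaluation.qa.eval_chain import CotQAEvalChain

eval_chain = CotQAEvalChain.from_llm(llm=ChatOpenAI(temperature=0))
graded = eval_chain.evaluate_strings(
    input="Who wrote Hamlet?",                               # the 'query'
    reference="Hamlet was written by William Shakespeare.",  # the 'context'
    prediction="Shakespeare wrote Hamlet.",                  # the 'result'
)
print(graded)  # dict containing the grade, e.g. CORRECT/INCORRECT
```

The asynchronous variant, aevaluate_strings, accepts the same keyword arguments.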
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html"} {"id": "ac6cd548b4e7-11", "text": "property lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty requires_input: bool\u00b6\nWhether the chain requires an input string.\nproperty requires_reference: bool\u00b6\nWhether the chain requires a reference string.\nmodel Config\u00b6\nBases: object\nConfiguration for the QAEvalChain.\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html"} {"id": "f3645a464ad6-0", "text": "langchain.evaluation.run_evaluators.string_run_evaluator.LLMStringRunMapper\u00b6\nclass langchain.evaluation.run_evaluators.string_run_evaluator.LLMStringRunMapper[source]\u00b6\nBases: StringRunMapper\nExtract items to evaluate from the run object.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\n__call__(run: Run) \u2192 Dict[str, str]\u00b6\nMaps the Run to a dictionary.\nmap(run: Run) \u2192 Dict[str, str][source]\u00b6\nMaps the Run to a dictionary.\nserialize_chat_messages(messages: List[Dict]) \u2192 str[source]\u00b6\nExtract the input messages from the run.\nserialize_inputs(inputs: Dict) \u2192 str[source]\u00b6\nserialize_outputs(outputs: Dict) \u2192 str[source]\u00b6\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty output_keys: List[str]\u00b6\nThe keys to extract from the run.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.string_run_evaluator.LLMStringRunMapper.html"} {"id": "4592ab1602e8-0", "text": "langchain.evaluation.qa.eval_chain.ContextQAEvalChain\u00b6\nclass langchain.evaluation.qa.eval_chain.ContextQAEvalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, prompt: BasePromptTemplate, llm: BaseLanguageModel, output_key: str = 'text', output_parser: BaseLLMOutputParser = None, return_final_only: bool = True, llm_kwargs: dict = None)[source]\u00b6\nBases: LLMChain, StringEvaluator, LLMEvalChain\nLLM Chain specifically for evaluating QA w/o GT based on context\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam llm: BaseLanguageModel [Required]\u00b6\nLanguage model to call.\nparam llm_kwargs: dict [Optional]\u00b6\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html"} {"id": "4592ab1602e8-1", "text": "There are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam output_key: str = 'text'\u00b6\nparam output_parser: BaseLLMOutputParser [Optional]\u00b6\nOutput parser to use.\nDefaults to one that takes the most likely string but does not change it\notherwise.\nparam prompt: BasePromptTemplate [Required]\u00b6\nPrompt object to use.\nparam return_final_only: bool = True\u00b6\nWhether to return only the final parsed result. Defaults to True.\nIf false, will return a bunch of extra information about the generation.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. 
Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html"} {"id": "4592ab1602e8-2", "text": "Execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified inChain.output_keys.\nasync aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nUtilize the LLM generate method for speed gains.\nasync aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Union[str, List[str], Dict[str, str]]]\u00b6\nCall apply and then parse the results.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html"} {"id": "4592ab1602e8-3", "text": "Call apply and then parse the results.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. 
If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified inChain.output_keys.\nasync aevaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) \u2192 dict\u00b6\nAsynchronously evaluate Chain or LLM output, based on optionalinput and label.\nParameters\nprediction (str) \u2013 the LLM or chain prediction to evaluate.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html"} {"id": "4592ab1602e8-4", "text": "Parameters\nprediction (str) \u2013 the LLM or chain prediction to evaluate.\nreference (Optional[str], optional) \u2013 the reference label\nto evaluate against.\ninput (Optional[str], optional) \u2013 the input to consider during evaluation\n**kwargs \u2013 additional keyword arguments, including callbacks, tags, etc.\nReturns\nThe evaluation results containing the score or value.\nReturn type\ndict\nasync agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) \u2192 LLMResult\u00b6\nGenerate LLM result from inputs.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nUtilize the LLM generate method for speed gains.\napply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Union[str, List[str], Dict[str, str]]]\u00b6\nCall apply and then parse the results.\nasync apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 str\u00b6\nFormat prompt with kwargs and pass to LLM.\nParameters\ncallbacks \u2013 Callbacks to pass to LLMChain\n**kwargs \u2013 Keys to pass to prompt template.\nReturns\nCompletion from LLM.\nExample\ncompletion = llm.predict(adjective=\"funny\")\nasync apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[str, List[str], Dict[str, str]]\u00b6\nCall apredict and then parse the results.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html"} {"id": "4592ab1602e8-5", "text": "Call apredict and then parse the results.\nasync aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) \u2192 Tuple[List[PromptValue], Optional[List[str]]]\u00b6\nPrepare prompts from inputs.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a 
single string output.\nThe main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html"} {"id": "4592ab1602e8-6", "text": "Example\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ncreate_outputs(llm_result: LLMResult) \u2192 List[Dict[str, Any]]\u00b6\nCreate outputs from response.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to benull.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n..code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nevaluate(examples: List[dict], predictions: List[dict], question_key: str = 'query', context_key: str = 'context', prediction_key: str = 'result', *, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[dict][source]\u00b6\nEvaluate question answering examples and predictions.\nevaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) \u2192 dict\u00b6\nEvaluate Chain or LLM output, based on optional input and label.\nParameters\nprediction (str) \u2013 the LLM or chain prediction to evaluate.\nreference (Optional[str], optional) \u2013 the reference label", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html"} {"id": "4592ab1602e8-7", "text": "reference (Optional[str], optional) \u2013 the reference label\nto evaluate against.\ninput (Optional[str], optional) \u2013 the input to consider during evaluation\n**kwargs \u2013 additional keyword arguments, including 
callbacks, tags, etc.\nReturns\nThe evaluation results containing the score or value.\nReturn type\ndict\nclassmethod from_llm(llm: BaseLanguageModel, prompt: PromptTemplate = PromptTemplate(input_variables=['query', 'context', 'result'], output_parser=None, partial_variables={}, template=\"You are a teacher grading a quiz.\\nYou are given a question, the context the question is about, and the student's answer. You are asked to score the student's answer as either CORRECT or INCORRECT, based on the context.\\n\\nExample Format:\\nQUESTION: question here\\nCONTEXT: context the question is about here\\nSTUDENT ANSWER: student's answer here\\nGRADE: CORRECT or INCORRECT here\\n\\nGrade the student answers based ONLY on their factual accuracy. Ignore differences in punctuation and phrasing between the student answer and true answer. It is OK if the student answer contains more information than the true answer, as long as it does not contain any conflicting statements. Begin! \\n\\nQUESTION: {query}\\nCONTEXT: {context}\\nSTUDENT ANSWER: {result}\\nGRADE:\", template_format='f-string', validate_template=True), **kwargs: Any) \u2192 ContextQAEvalChain[source]\u00b6\nLoad QA Eval Chain from LLM.\nParameters\nllm (BaseLanguageModel) \u2013 the base language model to use.\nprompt (PromptTemplate, optional) \u2013 A prompt template containing the input_variables 'query', 'context' and 'result' that will be used as the prompt for evaluation. Defaults to PROMPT.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html"} {"id": "4592ab1602e8-8", "text": "**kwargs \u2013 additional keyword arguments.\nReturns\nthe loaded QA eval chain.\nReturn type\nContextQAEvalChain\nclassmethod from_string(llm: BaseLanguageModel, template: str) \u2192 LLMChain\u00b6\nCreate LLMChain from LLM and template.\ngenerate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) \u2192 LLMResult\u00b6\nGenerate LLM result from inputs.\npredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 str\u00b6\nFormat prompt with kwargs and pass to LLM.\nParameters\ncallbacks \u2013 Callbacks to pass to LLMChain\n**kwargs \u2013 Keys to pass to prompt template.\nReturns\nCompletion from LLM.\nExample\ncompletion = chain.predict(adjective=\"funny\")\npredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[str, List[str], Dict[str, Any]]\u00b6\nCall predict and then parse the results.\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. 
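The evaluate method documented above grades whole batches. Here is a hedged sketch using the default key names; the model choice and the example data are invented for illustration.

```python
# Hedged sketch of batch grading with ContextQAEvalChain.evaluate.
from langchain.chat_models import ChatOpenAI
from langchain.evaluation.qa.eval_chain import ContextQAEvalChain

chain = ContextQAEvalChain.from_llm(llm=ChatOpenAI(temperature=0))
examples = [
    {"query": "What is 2 + 2?", "context": "Basic arithmetic: 2 + 2 equals 4."},
]
predictions = [{"result": "4"}]
graded = chain.evaluate(
    examples,
    predictions,
    question_key="query",
    context_key="context",
    prediction_key="result",
)
print(graded)  # one grading dict per example
```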
Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html"} {"id": "4592ab1602e8-9", "text": "Parameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) \u2192 Tuple[List[PromptValue], Optional[List[str]]]\u00b6\nPrepare prompts from inputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html"} {"id": "4592ab1602e8-10", "text": "tags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to benull.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty evaluation_name: str\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html"} {"id": "4592ab1602e8-11", "text": "eg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty requires_input: bool\u00b6\nWhether the chain requires an input string.\nproperty requires_reference: bool\u00b6\nWhether the chain requires a reference string.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for the QAEvalChain.\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html"} {"id": "417f9266f706-0", "text": "langchain.evaluation.run_evaluators.implementations.get_trajectory_evaluator\u00b6\nlangchain.evaluation.run_evaluators.implementations.get_trajectory_evaluator(llm: BaseChatModel, agent_tools: Sequence[BaseTool], *, input_key: str = 'input', prediction_key: str = 'output', tool_input_key: str = 'input', tool_output_key: str = 'output', reference_output_key: Optional[str] = None, evaluation_name: str = 'Agent Trajectory', **kwargs: Any) \u2192 RunEvaluatorChain[source]\u00b6\nGet an eval chain for grading a model\u2019s response against a map of criteria.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.get_trajectory_evaluator.html"} {"id": "decf8b425551-0", "text": "langchain.evaluation.schema.PairwiseStringEvaluator\u00b6\nclass langchain.evaluation.schema.PairwiseStringEvaluator[source]\u00b6\nBases: _EvalArgsMixin, ABC\nCompare the output of two models (or two outputs of the same model).\nMethods\n__init__()\naevaluate_string_pairs(*,\u00a0prediction,\u00a0...[,\u00a0...])\nEvaluate the output string pairs.\nevaluate_string_pairs(*,\u00a0prediction,\u00a0...[,\u00a0...])\nEvaluate the output string pairs.\nAttributes\nrequires_input\nWhether this evaluator requires an input string.\nrequires_reference\nWhether this evaluator requires a reference label.\nasync aevaluate_string_pairs(*, prediction: str, prediction_b: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) \u2192 dict[source]\u00b6\nEvaluate the output string pairs.\nParameters\nprediction (str) \u2013 The output string from the first model.\nprediction_b (str) \u2013 The output string from the second model.\nreference (str, optional) \u2013 The expected output / reference\nstring. Defaults to None.\ninput (str, optional) \u2013 The input string. Defaults to None.\n**kwargs (Any) \u2013 Additional keyword arguments, such\nas callbacks and optional reference strings.\nReturns\nA dictionary containing the preference, scores, and/orother information.\nReturn type\ndict\nevaluate_string_pairs(*, prediction: str, prediction_b: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) \u2192 dict[source]\u00b6\nEvaluate the output string pairs.\nParameters\nprediction (str) \u2013 The output string from the first model.\nprediction_b (str) \u2013 The output string from the second model.\nreference (str, optional) \u2013 The expected output / reference\nstring. Defaults to None.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.schema.PairwiseStringEvaluator.html"} {"id": "decf8b425551-1", "text": "reference (str, optional) \u2013 The expected output / reference\nstring. Defaults to None.\ninput (str, optional) \u2013 The input string. 
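To illustrate the PairwiseStringEvaluator interface above, here is a toy sketch of a concrete subclass. It assumes the two methods shown are the hooks a subclass overrides; the "shorter answer wins" rule is purely illustrative.

```python
# Toy sketch of a concrete PairwiseStringEvaluator subclass.
from typing import Any, Optional

from langchain.evaluation.schema import PairwiseStringEvaluator

class ShorterIsBetterEvaluator(PairwiseStringEvaluator):
    """Prefers the shorter of two predictions (illustrative only)."""

    def evaluate_string_pairs(
        self,
        *,
        prediction: str,
        prediction_b: str,
        reference: Optional[str] = None,
        input: Optional[str] = None,
        **kwargs: Any,
    ) -> dict:
        preferred = "A" if len(prediction) <= len(prediction_b) else "B"
        return {"preference": preferred}

    async def aevaluate_string_pairs(self, **kwargs: Any) -> dict:
        # Delegate to the sync implementation for this toy example.
        return self.evaluate_string_pairs(**kwargs)

evaluator = ShorterIsBetterEvaluator()
print(evaluator.evaluate_string_pairs(prediction="Yes.", prediction_b="Yes, I believe so."))
# -> {'preference': 'A'}
```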
Defaults to None.\n**kwargs (Any) \u2013 Additional keyword arguments, such\nas callbacks and optional reference strings.\nReturns\nA dictionary containing the preference, scores, and/orother information.\nReturn type\ndict\nproperty requires_input: bool\u00b6\nWhether this evaluator requires an input string.\nproperty requires_reference: bool\u00b6\nWhether this evaluator requires a reference label.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.schema.PairwiseStringEvaluator.html"} {"id": "31795defe6c3-0", "text": "langchain.evaluation.run_evaluators.string_run_evaluator.ToolStringRunMapper\u00b6\nclass langchain.evaluation.run_evaluators.string_run_evaluator.ToolStringRunMapper[source]\u00b6\nBases: StringRunMapper\nMap an input to the tool.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\n__call__(run: Run) \u2192 Dict[str, str]\u00b6\nMaps the Run to a dictionary.\nmap(run: Run) \u2192 Dict[str, str][source]\u00b6\nMaps the Run to a dictionary.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty output_keys: List[str]\u00b6\nThe keys to extract from the run.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.string_run_evaluator.ToolStringRunMapper.html"} {"id": "36cea55b931a-0", "text": "langchain.evaluation.run_evaluators.implementations.CriteriaOutputParser\u00b6\nclass langchain.evaluation.run_evaluators.implementations.CriteriaOutputParser(*, eval_chain_output_key: str = 'text', evaluation_name: str)[source]\u00b6\nBases: RunEvaluatorOutputParser\nParse a criteria results into an evaluation result.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam eval_chain_output_key: str = 'text'\u00b6\nparam evaluation_name: str [Required]\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str\u00b6\nInstructions on how the LLM output should be formatted.\nparse(parsed_output: Union[str, dict]) \u2192 EvaluationResult[source]\u00b6\nParse the last line of the text and return an evaluation result.\nparse_chain_output(output: Dict[str, Any]) \u2192 EvaluationResult\u00b6\nParse the output of a run.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, whichis assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. 
The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.CriteriaOutputParser.html"} {"id": "36cea55b931a-1", "text": "prompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.CriteriaOutputParser.html"} {"id": "2e25974f5777-0", "text": "langchain.evaluation.agents.trajectory_eval_chain.TrajectoryOutputParser\u00b6\nclass langchain.evaluation.agents.trajectory_eval_chain.TrajectoryOutputParser[source]\u00b6\nBases: BaseOutputParser\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str\u00b6\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 TrajectoryEval[source]\u00b6\nParse the output text and extract the score and reasoning.\nParameters\ntext (str) \u2013 The output text to parse.\nReturns\nA named tuple containing the score and reasoning.\nReturn type\nTrajectoryEval\nRaises\nOutputParserException \u2013 If the score is not found in the output text or\n if the score is not a digit in the range 1-5.\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, whichis assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. 
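A hedged sketch of feeding text to TrajectoryOutputParser.parse follows. The "Score: N" line format is an assumption inferred from the documented error conditions (a digit in the range 1-5 must be present); the reasoning text is invented.

```python
# Hedged sketch of TrajectoryOutputParser.parse; the input format is assumed.
from langchain.evaluation.agents.trajectory_eval_chain import (
    TrajectoryOutputParser,
)

parser = TrajectoryOutputParser()
sample_output = (
    "The agent answered with one well-chosen tool call and no detours.\n"
    "Score: 5"
)
evaluation = parser.parse(sample_output)  # TrajectoryEval named tuple
print(evaluation)  # contains the extracted score and reasoning
```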
The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryOutputParser.html"} {"id": "2e25974f5777-1", "text": "Returns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryOutputParser.html"} {"id": "cc4e6af13072-0", "text": "langchain.evaluation.schema.AgentTrajectoryEvaluator\u00b6\nclass langchain.evaluation.schema.AgentTrajectoryEvaluator[source]\u00b6\nBases: _EvalArgsMixin, ABC\nInterface for evaluating agent trajectories.\nMethods\n__init__()\naevaluate_agent_trajectory(*,\u00a0prediction,\u00a0...)\nAsynchronously evaluate a trajectory.\nevaluate_agent_trajectory(*,\u00a0prediction,\u00a0...)\nEvaluate a trajectory.\nAttributes\nrequires_input\nWhether this evaluator requires an input string.\nrequires_reference\nWhether this evaluator requires a reference label.\nasync aevaluate_agent_trajectory(*, prediction: str, agent_trajectory: Sequence[Tuple[AgentAction, str]], input: str, reference: Optional[str] = None, **kwargs: Any) \u2192 dict[source]\u00b6\nAsynchronously evaluate a trajectory.\nParameters\nprediction (str) \u2013 The final predicted response.\nagent_trajectory (List[Tuple[AgentAction, str]]) \u2013 The intermediate steps forming the agent trajectory.\ninput (str) \u2013 The input to the agent.\nreference (Optional[str]) \u2013 The reference answer.\nReturns\nThe evaluation result.\nReturn type\ndict\nevaluate_agent_trajectory(*, prediction: str, agent_trajectory: Sequence[Tuple[AgentAction, str]], input: str, reference: Optional[str] = None, **kwargs: Any) \u2192 dict[source]\u00b6\nEvaluate a trajectory.\nParameters\nprediction (str) \u2013 The final predicted response.\nagent_trajectory (List[Tuple[AgentAction, str]]) \u2013 The intermediate steps forming the agent trajectory.\ninput (str) \u2013 The input to the agent.\nreference (Optional[str]) \u2013 The reference answer.\nReturns\nThe evaluation result.\nReturn type\ndict\nproperty requires_input: bool\u00b6\nWhether this evaluator requires an input string.\nproperty 
requires_reference: bool\u00b6\nWhether this evaluator requires a reference label.", "source": "https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.schema.AgentTrajectoryEvaluator.html"} {"id": "24ced55f54e2-0", "text": "langchain.graphs.networkx_graph.get_entities\u00b6\nlangchain.graphs.networkx_graph.get_entities(entity_str: str) \u2192 List[str][source]\u00b6\nExtract entities from entity string.", "source": "https://api.python.langchain.com/en/latest/graphs/langchain.graphs.networkx_graph.get_entities.html"} {"id": "aa11b1d13048-0", "text": "langchain.graphs.networkx_graph.KnowledgeTriple\u00b6\nclass langchain.graphs.networkx_graph.KnowledgeTriple(subject: str, predicate: str, object_: str)[source]\u00b6\nBases: NamedTuple\nA triple in the graph.\nCreate new instance of KnowledgeTriple(subject, predicate, object_)\nMethods\n__init__()\ncount(value,\u00a0/)\nReturn number of occurrences of value.\nfrom_string(triple_string)\nCreate a KnowledgeTriple from a string.\nindex(value[,\u00a0start,\u00a0stop])\nReturn first index of value.\nAttributes\nobject_\nAlias for field number 2\npredicate\nAlias for field number 1\nsubject\nAlias for field number 0\ncount(value, /)\u00b6\nReturn number of occurrences of value.\nclassmethod from_string(triple_string: str) \u2192 KnowledgeTriple[source]\u00b6\nCreate a KnowledgeTriple from a string.\nindex(value, start=0, stop=9223372036854775807, /)\u00b6\nReturn first index of value.\nRaises ValueError if the value is not present.\nobject_: str\u00b6\nAlias for field number 2\npredicate: str\u00b6\nAlias for field number 1\nsubject: str\u00b6\nAlias for field number 0", "source": "https://api.python.langchain.com/en/latest/graphs/langchain.graphs.networkx_graph.KnowledgeTriple.html"} {"id": "7cfcb9b280ec-0", "text": "langchain.graphs.networkx_graph.parse_triples\u00b6\nlangchain.graphs.networkx_graph.parse_triples(knowledge_str: str) \u2192 List[KnowledgeTriple][source]\u00b6\nParse knowledge triples from the knowledge string.", "source": "https://api.python.langchain.com/en/latest/graphs/langchain.graphs.networkx_graph.parse_triples.html"} {"id": "c6075e1430f2-0", "text": "langchain.document_loaders.mediawikidump.MWDumpLoader\u00b6\nclass langchain.document_loaders.mediawikidump.MWDumpLoader(file_path: str, encoding: Optional[str] = 'utf8')[source]\u00b6\nBases: BaseLoader\nLoad MediaWiki dump from XML file.
Example\nfrom langchain.document_loaders import MWDumpLoader\nloader = MWDumpLoader(\n file_path=\"myWiki.xml\",\n encoding=\"utf8\"\n)\ndocs = loader.load()\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\ntext_splitter = RecursiveCharacterTextSplitter(\n chunk_size=1000, chunk_overlap=0\n)\ntexts = text_splitter.split_documents(docs)\nParameters\nfile_path (str) \u2013 XML local file path\nencoding (str, optional) \u2013 Charset encoding, defaults to \u201cutf8\u201d\nInitialize with a file path.\nParameters\nfile_path \u2013 XML local file path\nencoding \u2013 Charset encoding, defaults to \u201cutf8\u201d\nMethods\n__init__(file_path[,\u00a0encoding])\nInitialize with a file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad from a file path.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad from a file path.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mediawikidump.MWDumpLoader.html"} {"id": "208511476ad9-0", "text": "langchain.document_loaders.image.UnstructuredImageLoader\u00b6\nclass langchain.document_loaders.image.UnstructuredImageLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]\u00b6\nBases: UnstructuredFileLoader\nLoader that uses unstructured to load image files, such as PNGs and JPGs.\nInitialize with file path.\nMethods\n__init__(file_path[,\u00a0mode])\nInitialize with file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad file.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document]\u00b6\nLoad file.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.image.UnstructuredImageLoader.html"} {"id": "bebd9e312dfd-0", "text": "langchain.document_loaders.gcs_file.GCSFileLoader\u00b6\nclass langchain.document_loaders.gcs_file.GCSFileLoader(project_name: str, bucket: str, blob: str)[source]\u00b6\nBases: BaseLoader\nLoad Documents from a GCS file.\nInitialize with bucket and key name.\nParameters\nproject_name \u2013 The name of the project to load\nbucket \u2013 The name of the GCS bucket.\nblob \u2013 The name of the GCS blob to load.\nMethods\n__init__(project_name,\u00a0bucket,\u00a0blob)\nInitialize with bucket and key name.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad documents.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad documents.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gcs_file.GCSFileLoader.html"} {"id": "214687420e46-0", "text": "langchain.document_loaders.unstructured.UnstructuredAPIFileLoader\u00b6\nclass langchain.document_loaders.unstructured.UnstructuredAPIFileLoader(file_path: Union[str, List[str]] = '', mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]\u00b6\nBases: UnstructuredFileLoader\nUnstructuredAPIFileLoader uses the Unstructured API to load files.\nBy default, the loader makes a call to the hosted Unstructured API.\nIf you are running the unstructured API locally, you can change the\nAPI URL by passing in the url parameter when you initialize the loader.\nThe hosted Unstructured API requires an API key. See\nhttps://www.unstructured.io/api-key/ if you need to generate a key.\nYou can run the loader in one of two modes: \u201csingle\u201d and \u201celements\u201d.\nIf you use \u201csingle\u201d mode, the document will be returned as a single\nlangchain Document object. 
If you use \u201celements\u201d mode, the unstructured\nlibrary will split the document into elements such as Title and NarrativeText.\nYou can pass in additional unstructured kwargs after mode to apply\ndifferent unstructured settings.\nExamples\n```python\nfrom langchain.document_loaders import UnstructuredAPIFileLoader\nloader = UnstructuredAPIFileLoader(\n    \"example.pdf\", mode=\"elements\", strategy=\"fast\", api_key=\"MY_API_KEY\"\n)\ndocs = loader.load()\n```\nReferences\nhttps://unstructured-io.github.io/unstructured/bricks.html#partition\nhttps://www.unstructured.io/api-key/\nhttps://github.com/Unstructured-IO/unstructured-api\nInitialize with file path.\nMethods\n__init__([file_path,\u00a0mode,\u00a0url,\u00a0api_key])\nInitialize with file path.\nlazy_load()", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredAPIFileLoader.html"} {"id": "214687420e46-1", "text": "Initialize with file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad file.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document]\u00b6\nLoad file.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredAPIFileLoader.html"} {"id": "e26bc0d6b0d2-0", "text": "langchain.document_loaders.pdf.PDFMinerPDFasHTMLLoader\u00b6\nclass langchain.document_loaders.pdf.PDFMinerPDFasHTMLLoader(file_path: str)[source]\u00b6\nBases: BasePDFLoader\nLoader that uses PDFMiner to load PDF files as HTML content.\nInitialize with file path.\nMethods\n__init__(file_path)\nInitialize with file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad file.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nAttributes\nsource\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad file.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nproperty source: str\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PDFMinerPDFasHTMLLoader.html"} {"id": "b7c030137983-0", "text": "langchain.document_loaders.slack_directory.SlackDirectoryLoader\u00b6\nclass langchain.document_loaders.slack_directory.SlackDirectoryLoader(zip_path: str, workspace_url: Optional[str] = None)[source]\u00b6\nBases: BaseLoader\nLoader for loading documents from a Slack directory dump.\nInitialize the SlackDirectoryLoader.\nParameters\nzip_path (str) \u2013 The path to the Slack directory dump zip file.\nworkspace_url (Optional[str]) \u2013 The Slack workspace URL.\nIncluding the URL will turn\nsources into links. 
Defaults to None.\nMethods\n__init__(zip_path[,\u00a0workspace_url])\nInitialize the SlackDirectoryLoader.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad and return documents from the Slack directory dump.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad and return documents from the Slack directory dump.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.slack_directory.SlackDirectoryLoader.html"} {"id": "10578a18f472-0", "text": "langchain.document_loaders.web_base.WebBaseLoader\u00b6\nclass langchain.document_loaders.web_base.WebBaseLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify_ssl: Optional[bool] = True, proxies: Optional[dict] = None)[source]\u00b6\nBases: BaseLoader\nLoader that uses urllib and beautiful soup to load webpages.\nInitialize with webpage path.\nMethods\n__init__(web_path[,\u00a0header_template,\u00a0...])\nInitialize with webpage path.\naload()\nLoad text from the urls in web_path async into Documents.\nfetch_all(urls)\nFetch all urls concurrently with rate limiting.\nlazy_load()\nLazy load text from the url(s) in web_path.\nload()\nLoad text from the url(s) in web_path.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nscrape([parser])\nScrape data from webpage and return it in BeautifulSoup format.\nscrape_all(urls[,\u00a0parser])\nFetch all urls, then return soups for all results.\nAttributes\nbs_get_text_kwargs\nkwargs for beautifulsoup4 get_text\ndefault_parser\nDefault parser to use for BeautifulSoup.\nraise_for_status\nRaise an exception if http status code denotes an error.\nrequests_kwargs\nkwargs for requests\nrequests_per_second\nMax number of concurrent requests to make.\nweb_path\nweb_paths\naload() \u2192 List[Document][source]\u00b6\nLoad text from the urls in web_path async into Documents.\nasync fetch_all(urls: List[str]) \u2192 Any[source]\u00b6\nFetch all urls concurrently with rate limiting.\nlazy_load() \u2192 Iterator[Document][source]\u00b6\nLazy load text from the url(s) in web_path.\nload() \u2192 List[Document][source]\u00b6\nLoad text from the url(s) in web_path.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.web_base.WebBaseLoader.html"} {"id": "10578a18f472-1", "text": "Load text from the url(s) in web_path.
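A minimal usage sketch for WebBaseLoader (the URL is a placeholder; assumes the requests and beautifulsoup4 packages are installed):
```python
from langchain.document_loaders import WebBaseLoader

# Placeholder URL; any publicly reachable page works.
loader = WebBaseLoader("https://example.com")
loader.requests_per_second = 2  # throttle concurrent fetches (class default shown above)
docs = loader.load()            # one Document per URL
soup = loader.scrape()          # or inspect the raw BeautifulSoup object directly
```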
\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nscrape(parser: Optional[str] = None) \u2192 Any[source]\u00b6\nScrape data from webpage and return it in BeautifulSoup format.\nscrape_all(urls: List[str], parser: Optional[str] = None) \u2192 List[Any][source]\u00b6\nFetch all urls, then return soups for all results.\nbs_get_text_kwargs: Dict[str, Any] = {}\u00b6\nkwargs for beautifulsoup4 get_text\ndefault_parser: str = 'html.parser'\u00b6\nDefault parser to use for BeautifulSoup.\nraise_for_status: bool = False\u00b6\nRaise an exception if http status code denotes an error.\nrequests_kwargs: Dict[str, Any] = {}\u00b6\nkwargs for requests\nrequests_per_second: int = 2\u00b6\nMax number of concurrent requests to make.\nproperty web_path: str\u00b6\nweb_paths: List[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.web_base.WebBaseLoader.html"} {"id": "0a6e61e2b17b-0", "text": "langchain.document_loaders.reddit.RedditPostsLoader\u00b6\nclass langchain.document_loaders.reddit.RedditPostsLoader(client_id: str, client_secret: str, user_agent: str, search_queries: Sequence[str], mode: str, categories: Sequence[str] = ['new'], number_posts: Optional[int] = 10)[source]\u00b6\nBases: BaseLoader\nReddit posts loader.\nRead posts on a subreddit.\nFirst you need to go to\nhttps://www.reddit.com/prefs/apps/\nand create your application.\nMethods\n__init__(client_id,\u00a0client_secret,\u00a0...[,\u00a0...])\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad Reddit posts.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad Reddit posts.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.reddit.RedditPostsLoader.html"} {"id": "f079be66e89d-0", "text": "langchain.document_loaders.url_selenium.SeleniumURLLoader\u00b6\nclass langchain.document_loaders.url_selenium.SeleniumURLLoader(urls: List[str], continue_on_failure: bool = True, browser: Literal['chrome', 'firefox'] = 'chrome', binary_location: Optional[str] = None, executable_path: Optional[str] = None, headless: bool = True, arguments: List[str] = [])[source]\u00b6\nBases: BaseLoader\nLoader that uses Selenium to load a page and unstructured to load the HTML.\nThis is useful for loading pages that require JavaScript to render.\nurls\u00b6\nList of URLs to load.\nType\nList[str]\ncontinue_on_failure\u00b6\nIf True, continue loading other URLs on failure.\nType\nbool\nbrowser\u00b6\nThe browser to use, either \u2018chrome\u2019 or \u2018firefox\u2019.\nType\nstr\nbinary_location\u00b6\nThe location of the browser binary.\nType\nOptional[str]\nexecutable_path\u00b6\nThe path to the browser executable.\nType\nOptional[str]\nheadless\u00b6\nIf True, the browser will run in headless mode.\nType\nbool\narguments\u00b6\nList of arguments to pass to the browser.\nType\nList[str]\nLoad a list of URLs using Selenium and unstructured.\nMethods\n__init__(urls[,\u00a0continue_on_failure,\u00a0...])\nLoad a list of URLs using Selenium and unstructured.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad the specified URLs using Selenium and create Document instances.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad the specified URLs using Selenium and create Document instances.\nReturns\nA list of Document instances with loaded content.\nReturn type\nList[Document]", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url_selenium.SeleniumURLLoader.html"} {"id": "f079be66e89d-1", "text": "Returns\nA list of Document instances with loaded content.\nReturn type\nList[Document]\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
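A brief SeleniumURLLoader sketch (the URL is a placeholder; assumes the selenium and unstructured packages plus a matching Chrome driver on PATH):
```python
from langchain.document_loaders import SeleniumURLLoader

# Placeholder URL for a JavaScript-rendered page.
loader = SeleniumURLLoader(
    urls=["https://example.com/spa"],
    browser="chrome",
    headless=True,
    continue_on_failure=True,
)
docs = loader.load()  # each Document carries the rendered page text
```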
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url_selenium.SeleniumURLLoader.html"} {"id": "e4e29f75fcd7-0", "text": "langchain.document_loaders.base.BaseLoader\u00b6\nclass langchain.document_loaders.base.BaseLoader[source]\u00b6\nBases: ABC\nInterface for loading Documents.\nImplementations should implement the lazy-loading method using generators\nto avoid loading all Documents into memory at once.\nThe load method will remain as is for backwards compatibility, but its\nimplementation should be just list(self.lazy_load()).\nMethods\n__init__()\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad data into Document objects.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document][source]\u00b6\nA lazy loader for Documents.\nabstract load() \u2192 List[Document][source]\u00b6\nLoad data into Document objects.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document][source]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.base.BaseLoader.html"} {"id": "6e01fd9bbb0e-0", "text": "langchain.document_loaders.notebook.NotebookLoader\u00b6\nclass langchain.document_loaders.notebook.NotebookLoader(path: str, include_outputs: bool = False, max_output_length: int = 10, remove_newline: bool = False, traceback: bool = False)[source]\u00b6\nBases: BaseLoader\nLoader that loads .ipynb notebook files.\nInitialize with path.\nMethods\n__init__(path[,\u00a0include_outputs,\u00a0...])\nInitialize with path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad documents.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad documents.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
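The BaseLoader contract described above (generator-based lazy_load, with load reduced to list(self.lazy_load())) can be illustrated with a small custom loader; LineLoader is a hypothetical example, not part of the library:
```python
from typing import Iterator, List

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader


class LineLoader(BaseLoader):
    """Hypothetical loader: one Document per line of a text file."""

    def __init__(self, file_path: str):
        self.file_path = file_path

    def lazy_load(self) -> Iterator[Document]:
        # Generator keeps memory flat: Documents are yielded one at a time.
        with open(self.file_path, encoding="utf-8") as f:
            for i, line in enumerate(f):
                yield Document(
                    page_content=line,
                    metadata={"source": self.file_path, "line": i},
                )

    def load(self) -> List[Document]:
        # Exactly the pattern the interface recommends.
        return list(self.lazy_load())
```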
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notebook.NotebookLoader.html"} {"id": "d58f646102d1-0", "text": "langchain.document_loaders.parsers.pdf.PyMuPDFParser\u00b6\nclass langchain.document_loaders.parsers.pdf.PyMuPDFParser(text_kwargs: Optional[Mapping[str, Any]] = None)[source]\u00b6\nBases: BaseBlobParser\nParse PDFs with PyMuPDF.\nInitialize the parser.\nParameters\ntext_kwargs \u2013 Keyword arguments to pass to fitz.Page.get_text().\nMethods\n__init__([text_kwargs])\nInitialize the parser.\nlazy_parse(blob)\nLazily parse the blob.\nparse(blob)\nEagerly parse the blob into a document or documents.\nlazy_parse(blob: Blob) \u2192 Iterator[Document][source]\u00b6\nLazily parse the blob.\nparse(blob: Blob) \u2192 List[Document]\u00b6\nEagerly parse the blob into a document or documents.\nThis is a convenience method for interactive development environments.\nProduction applications should favor the lazy_parse method instead.\nSubclasses should generally not override this parse method.\nParameters\nblob \u2013 Blob instance\nReturns\nList of documents", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PyMuPDFParser.html"} {"id": "29e0d9c013e9-0", "text": "langchain.document_loaders.azure_blob_storage_file.AzureBlobStorageFileLoader\u00b6\nclass langchain.document_loaders.azure_blob_storage_file.AzureBlobStorageFileLoader(conn_str: str, container: str, blob_name: str)[source]\u00b6\nBases: BaseLoader\nLoading Documents from Azure Blob Storage.\nInitialize with connection string, container and blob name.\nMethods\n__init__(conn_str,\u00a0container,\u00a0blob_name)\nInitialize with connection string, container and blob name.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad documents.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nAttributes\nconn_str\nConnection string for Azure Blob Storage.\ncontainer\nContainer name.\nblob\nBlob name.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad documents.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nblob\u00b6\nBlob name.\nconn_str\u00b6\nConnection string for Azure Blob Storage.\ncontainer\u00b6\nContainer name.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azure_blob_storage_file.AzureBlobStorageFileLoader.html"} {"id": "524108c1ad59-0", "text": "langchain.document_loaders.excel.UnstructuredExcelLoader\u00b6\nclass langchain.document_loaders.excel.UnstructuredExcelLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]\u00b6\nBases: UnstructuredFileLoader\nLoader that uses unstructured to load Microsoft Excel files.\nParameters\nfile_path \u2013 The path to the Microsoft Excel file.\nmode \u2013 The mode to use when partitioning the file. See unstructured docs\nfor more info. Optional. 
Defaults to \u201csingle\u201d.\n**unstructured_kwargs \u2013 Keyword arguments to pass to unstructured.\nMethods\n__init__(file_path[,\u00a0mode])\nparam file_path\nThe path to the Microsoft Excel file.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad file.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document]\u00b6\nLoad file.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.excel.UnstructuredExcelLoader.html"} {"id": "551f166ab6b7-0", "text": "langchain.document_loaders.gcs_directory.GCSDirectoryLoader\u00b6\nclass langchain.document_loaders.gcs_directory.GCSDirectoryLoader(project_name: str, bucket: str, prefix: str = '')[source]\u00b6\nBases: BaseLoader\nLoads Documents from GCS.\nInitialize with bucket and key name.\nParameters\nproject_name \u2013 The name of the project for the GCS bucket.\nbucket \u2013 The name of the GCS bucket.\nprefix \u2013 The prefix of the GCS bucket.\nMethods\n__init__(project_name,\u00a0bucket[,\u00a0prefix])\nInitialize with bucket and key name.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad documents.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad documents.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
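A minimal GCSDirectoryLoader sketch (project, bucket, and prefix are placeholders; assumes the google-cloud-storage package and application default credentials):
```python
from langchain.document_loaders import GCSDirectoryLoader

# Placeholder identifiers; every blob under the prefix is loaded.
loader = GCSDirectoryLoader(
    project_name="my-project",
    bucket="my-bucket",
    prefix="reports/",
)
docs = loader.load()
```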
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gcs_directory.GCSDirectoryLoader.html"} {"id": "6400ac42f35e-0", "text": "langchain.document_loaders.unstructured.get_elements_from_api\u00b6\nlangchain.document_loaders.unstructured.get_elements_from_api(file_path: Optional[Union[str, List[str]]] = None, file: Optional[Union[IO, Sequence[IO]]] = None, api_url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any) \u2192 List[source]\u00b6\nRetrieves a list of elements from the Unstructured API.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.get_elements_from_api.html"} {"id": "7e87819da9c5-0", "text": "langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader\u00b6\nclass langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]\u00b6\nBases: UnstructuredFileLoader\nLoader that uses unstructured to load powerpoint files.\nInitialize with file path.\nMethods\n__init__(file_path[,\u00a0mode])\nInitialize with file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad file.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document]\u00b6\nLoad file.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader.html"} {"id": "93935b36c30c-0", "text": "langchain.document_loaders.college_confidential.CollegeConfidentialLoader\u00b6\nclass langchain.document_loaders.college_confidential.CollegeConfidentialLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify_ssl: Optional[bool] = True, proxies: Optional[dict] = None)[source]\u00b6\nBases: WebBaseLoader\nLoader that loads College Confidential webpages.\nInitialize with webpage path.\nMethods\n__init__(web_path[,\u00a0header_template,\u00a0...])\nInitialize with webpage path.\naload()\nLoad text from the urls in web_path async into Documents.\nfetch_all(urls)\nFetch all urls concurrently with rate limiting.\nlazy_load()\nLazy load text from the url(s) in web_path.\nload()\nLoad webpages as Documents.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nscrape([parser])\nScrape data from webpage and return it in BeautifulSoup format.\nscrape_all(urls[,\u00a0parser])\nFetch all urls, then return soups for all results.\nAttributes\nbs_get_text_kwargs\nkwargs for beautifulsoup4 get_text\ndefault_parser\nDefault parser to use for BeautifulSoup.\nraise_for_status\nRaise an exception if http status code denotes an error.\nrequests_kwargs\nkwargs for requests\nrequests_per_second\nMax number of concurrent requests to make.\nweb_path\naload() \u2192 List[Document]\u00b6\nLoad text from the urls in web_path async into Documents.\nasync fetch_all(urls: List[str]) \u2192 Any\u00b6\nFetch all urls concurrently with rate limiting.\nlazy_load() \u2192 Iterator[Document]\u00b6\nLazy load text from the url(s) in web_path.\nload() \u2192 List[Document][source]\u00b6\nLoad webpages as Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.college_confidential.CollegeConfidentialLoader.html"} {"id": "93935b36c30c-1", "text": "load() \u2192 List[Document][source]\u00b6\nLoad webpages as Documents.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nscrape(parser: Optional[str] = None) \u2192 Any\u00b6\nScrape data from webpage and return it in BeautifulSoup format.\nscrape_all(urls: List[str], parser: Optional[str] = None) \u2192 List[Any]\u00b6\nFetch all urls, then return soups for all results.\nbs_get_text_kwargs: Dict[str, Any] = {}\u00b6\nkwargs for beautifulsoup4 get_text\ndefault_parser: str = 'html.parser'\u00b6\nDefault parser to use for BeautifulSoup.\nraise_for_status: bool = False\u00b6\nRaise an exception if http status code denotes an error.\nrequests_kwargs: Dict[str, Any] = {}\u00b6\nkwargs for requests\nrequests_per_second: int = 2\u00b6\nMax number of concurrent requests to make.\nproperty web_path: str\u00b6\nweb_paths: List[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.college_confidential.CollegeConfidentialLoader.html"} {"id": "5d40c532de2f-0", "text": "langchain.document_loaders.org_mode.UnstructuredOrgModeLoader\u00b6\nclass langchain.document_loaders.org_mode.UnstructuredOrgModeLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]\u00b6\nBases: UnstructuredFileLoader\nLoader that uses unstructured to load Org-Mode files.\nInitialize with file path.\nMethods\n__init__(file_path[,\u00a0mode])\nInitialize with file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad file.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document]\u00b6\nLoad file.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.org_mode.UnstructuredOrgModeLoader.html"} {"id": "4eef859a18e4-0", "text": "langchain.document_loaders.parsers.grobid.GrobidParser\u00b6\nclass langchain.document_loaders.parsers.grobid.GrobidParser(segment_sentences: bool, grobid_server: str = 'http://localhost:8070/api/processFulltextDocument')[source]\u00b6\nBases: BaseBlobParser\nLoader that uses Grobid to load article PDF files.\nMethods\n__init__(segment_sentences[,\u00a0grobid_server])\nlazy_parse(blob)\nLazy parsing interface.\nparse(blob)\nEagerly parse the blob into a document or documents.\nprocess_xml(file_path,\u00a0xml_data,\u00a0...)\nProcess the XML file from Grobid.\nlazy_parse(blob: Blob) \u2192 Iterator[Document][source]\u00b6\nLazy parsing interface.\nSubclasses are required to implement this method.\nParameters\nblob \u2013 Blob instance\nReturns\nGenerator of documents\nparse(blob: Blob) \u2192 List[Document]\u00b6\nEagerly parse the blob into a document or documents.\nThis is a convenience method for interactive development environments.\nProduction applications should favor the lazy_parse method instead.\nSubclasses should generally not override this parse method.\nParameters\nblob \u2013 Blob instance\nReturns\nList of documents\nprocess_xml(file_path: str, xml_data: str, segment_sentences: bool) \u2192 Iterator[Document][source]\u00b6\nProcess the XML file from Grobid.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.grobid.GrobidParser.html"} {"id": "f187a22f2971-0", "text": "langchain.document_loaders.confluence.ContentFormat\u00b6\nclass langchain.document_loaders.confluence.ContentFormat(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]\u00b6\nBases: str, Enum\nEnumerator of the content formats of Confluence page.\nMethods\nget_content(page)\n__init__(*args,\u00a0**kwds)\ncapitalize()\nReturn a capitalized version of the string.\ncasefold()\nReturn a version of the string suitable for caseless comparisons.\ncenter(width[,\u00a0fillchar])\nReturn a centered string of length width.\ncount(sub[,\u00a0start[,\u00a0end]])\nReturn the number of non-overlapping occurrences of substring sub in string S[start:end].\nencode([encoding,\u00a0errors])\nEncode the string using the codec registered for encoding.\nendswith(suffix[,\u00a0start[,\u00a0end]])\nReturn True if S ends with the specified suffix, False otherwise.\nexpandtabs([tabsize])\nReturn a copy where all tab characters are expanded using spaces.\nfind(sub[,\u00a0start[,\u00a0end]])\nReturn the lowest index in S where substring sub is found, such that sub is contained within S[start:end].\nformat(*args,\u00a0**kwargs)\nReturn a formatted version of S, using substitutions from args and kwargs.\nformat_map(mapping)\nReturn a formatted version of S, using substitutions from mapping.\nindex(sub[,\u00a0start[,\u00a0end]])\nReturn the lowest index in S where substring sub is found, such that sub is contained within S[start:end].\nisalnum()\nReturn True if the string is an alpha-numeric string, False otherwise.\nisalpha()\nReturn True if the string is an alphabetic string, False otherwise.\nisascii()\nReturn True if all characters in the string are ASCII, False 
otherwise.\nisdecimal()", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ContentFormat.html"} {"id": "f187a22f2971-1", "text": "Return True if all characters in the string are ASCII, False otherwise.\nisdecimal()\nReturn True if the string is a decimal string, False otherwise.\nisdigit()\nReturn True if the string is a digit string, False otherwise.\nisidentifier()\nReturn True if the string is a valid Python identifier, False otherwise.\nislower()\nReturn True if the string is a lowercase string, False otherwise.\nisnumeric()\nReturn True if the string is a numeric string, False otherwise.\nisprintable()\nReturn True if the string is printable, False otherwise.\nisspace()\nReturn True if the string is a whitespace string, False otherwise.\nistitle()\nReturn True if the string is a title-cased string, False otherwise.\nisupper()\nReturn True if the string is an uppercase string, False otherwise.\njoin(iterable,\u00a0/)\nConcatenate any number of strings.\nljust(width[,\u00a0fillchar])\nReturn a left-justified string of length width.\nlower()\nReturn a copy of the string converted to lowercase.\nlstrip([chars])\nReturn a copy of the string with leading whitespace removed.\nmaketrans\nReturn a translation table usable for str.translate().\npartition(sep,\u00a0/)\nPartition the string into three parts using the given separator.\nremoveprefix(prefix,\u00a0/)\nReturn a str with the given prefix string removed if present.\nremovesuffix(suffix,\u00a0/)\nReturn a str with the given suffix string removed if present.\nreplace(old,\u00a0new[,\u00a0count])\nReturn a copy with all occurrences of substring old replaced by new.\nrfind(sub[,\u00a0start[,\u00a0end]])\nReturn the highest index in S where substring sub is found, such that sub is contained within S[start:end].\nrindex(sub[,\u00a0start[,\u00a0end]])", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ContentFormat.html"} {"id": "f187a22f2971-2", "text": "rindex(sub[,\u00a0start[,\u00a0end]])\nReturn the highest index in S where substring sub is found, such that sub is contained within S[start:end].\nrjust(width[,\u00a0fillchar])\nReturn a right-justified string of length width.\nrpartition(sep,\u00a0/)\nPartition the string into three parts using the given separator.\nrsplit([sep,\u00a0maxsplit])\nReturn a list of the substrings in the string, using sep as the separator string.\nrstrip([chars])\nReturn a copy of the string with trailing whitespace removed.\nsplit([sep,\u00a0maxsplit])\nReturn a list of the substrings in the string, using sep as the separator string.\nsplitlines([keepends])\nReturn a list of the lines in the string, breaking at line boundaries.\nstartswith(prefix[,\u00a0start[,\u00a0end]])\nReturn True if S starts with the specified prefix, False otherwise.\nstrip([chars])\nReturn a copy of the string with leading and trailing whitespace removed.\nswapcase()\nConvert uppercase characters to lowercase and lowercase characters to uppercase.\ntitle()\nReturn a version of the string where each word is titlecased.\ntranslate(table,\u00a0/)\nReplace each character in the string using the given translation table.\nupper()\nReturn a copy of the string converted to uppercase.\nzfill(width,\u00a0/)\nPad a numeric string with zeros on the left, to fill a field of the given width.\nAttributes\nSTORAGE\nVIEW\ncapitalize()\u00b6\nReturn a capitalized version of the string.\nMore specifically, make the first character have 
upper case and the rest lower\ncase.\ncasefold()\u00b6\nReturn a version of the string suitable for caseless comparisons.\ncenter(width, fillchar=' ', /)\u00b6\nReturn a centered string of length width.\nPadding is done using the specified fill character (default is a space).", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ContentFormat.html"} {"id": "f187a22f2971-3", "text": "Padding is done using the specified fill character (default is a space).\ncount(sub[, start[, end]]) \u2192 int\u00b6\nReturn the number of non-overlapping occurrences of substring sub in\nstring S[start:end]. Optional arguments start and end are\ninterpreted as in slice notation.\nencode(encoding='utf-8', errors='strict')\u00b6\nEncode the string using the codec registered for encoding.\nencoding \u2013 The encoding in which to encode the string.\nerrors \u2013 The error handling scheme to use for encoding errors.\nThe default is \u2018strict\u2019 meaning that encoding errors raise a\nUnicodeEncodeError. Other possible values are \u2018ignore\u2019, \u2018replace\u2019 and\n\u2018xmlcharrefreplace\u2019 as well as any other name registered with\ncodecs.register_error that can handle UnicodeEncodeErrors.\nendswith(suffix[, start[, end]]) \u2192 bool\u00b6\nReturn True if S ends with the specified suffix, False otherwise.\nWith optional start, test S beginning at that position.\nWith optional end, stop comparing S at that position.\nsuffix can also be a tuple of strings to try.\nexpandtabs(tabsize=8)\u00b6\nReturn a copy where all tab characters are expanded using spaces.\nIf tabsize is not given, a tab size of 8 characters is assumed.\nfind(sub[, start[, end]]) \u2192 int\u00b6\nReturn the lowest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nReturn -1 on failure.\nformat(*args, **kwargs) \u2192 str\u00b6\nReturn a formatted version of S, using substitutions from args and kwargs.\nThe substitutions are identified by braces (\u2018{\u2019 and \u2018}\u2019).\nformat_map(mapping) \u2192 str\u00b6\nReturn a formatted version of S, using substitutions from mapping.\nThe substitutions are identified by braces (\u2018{\u2019 and \u2018}\u2019).", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ContentFormat.html"} {"id": "f187a22f2971-4", "text": "The substitutions are identified by braces (\u2018{\u2019 and \u2018}\u2019).\nget_content(page: dict) \u2192 str[source]\u00b6\nindex(sub[, start[, end]]) \u2192 int\u00b6\nReturn the lowest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. 
Optional\narguments start and end are interpreted as in slice notation.\nRaises ValueError when the substring is not found.\nisalnum()\u00b6\nReturn True if the string is an alpha-numeric string, False otherwise.\nA string is alpha-numeric if all characters in the string are alpha-numeric and\nthere is at least one character in the string.\nisalpha()\u00b6\nReturn True if the string is an alphabetic string, False otherwise.\nA string is alphabetic if all characters in the string are alphabetic and there\nis at least one character in the string.\nisascii()\u00b6\nReturn True if all characters in the string are ASCII, False otherwise.\nASCII characters have code points in the range U+0000-U+007F.\nEmpty string is ASCII too.\nisdecimal()\u00b6\nReturn True if the string is a decimal string, False otherwise.\nA string is a decimal string if all characters in the string are decimal and\nthere is at least one character in the string.\nisdigit()\u00b6\nReturn True if the string is a digit string, False otherwise.\nA string is a digit string if all characters in the string are digits and there\nis at least one character in the string.\nisidentifier()\u00b6\nReturn True if the string is a valid Python identifier, False otherwise.\nCall keyword.iskeyword(s) to test whether string s is a reserved identifier,\nsuch as \u201cdef\u201d or \u201cclass\u201d.\nislower()\u00b6\nReturn True if the string is a lowercase string, False otherwise.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ContentFormat.html"} {"id": "f187a22f2971-5", "text": "islower()\u00b6\nReturn True if the string is a lowercase string, False otherwise.\nA string is lowercase if all cased characters in the string are lowercase and\nthere is at least one cased character in the string.\nisnumeric()\u00b6\nReturn True if the string is a numeric string, False otherwise.\nA string is numeric if all characters in the string are numeric and there is at\nleast one character in the string.\nisprintable()\u00b6\nReturn True if the string is printable, False otherwise.\nA string is printable if all of its characters are considered printable in\nrepr() or if it is empty.\nisspace()\u00b6\nReturn True if the string is a whitespace string, False otherwise.\nA string is whitespace if all characters in the string are whitespace and there\nis at least one character in the string.\nistitle()\u00b6\nReturn True if the string is a title-cased string, False otherwise.\nIn a title-cased string, upper- and title-case characters may only\nfollow uncased characters and lowercase characters only cased ones.\nisupper()\u00b6\nReturn True if the string is an uppercase string, False otherwise.\nA string is uppercase if all cased characters in the string are uppercase and\nthere is at least one cased character in the string.\njoin(iterable, /)\u00b6\nConcatenate any number of strings.\nThe string whose method is called is inserted in between each given string.\nThe result is returned as a new string.\nExample: \u2018.\u2019.join([\u2018ab\u2019, \u2018pq\u2019, \u2018rs\u2019]) -> \u2018ab.pq.rs\u2019\nljust(width, fillchar=' ', /)\u00b6\nReturn a left-justified string of length width.\nPadding is done using the specified fill character (default is a space).\nlower()\u00b6\nReturn a copy of the string converted to lowercase.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ContentFormat.html"} {"id": "f187a22f2971-6", "text": 
"lower()\u00b6\nReturn a copy of the string converted to lowercase.\nlstrip(chars=None, /)\u00b6\nReturn a copy of the string with leading whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nstatic maketrans()\u00b6\nReturn a translation table usable for str.translate().\nIf there is only one argument, it must be a dictionary mapping Unicode\nordinals (integers) or characters to Unicode ordinals, strings or None.\nCharacter keys will be then converted to ordinals.\nIf there are two arguments, they must be strings of equal length, and\nin the resulting dictionary, each character in x will be mapped to the\ncharacter at the same position in y. If there is a third argument, it\nmust be a string, whose characters will be mapped to None in the result.\npartition(sep, /)\u00b6\nPartition the string into three parts using the given separator.\nThis will search for the separator in the string. If the separator is found,\nreturns a 3-tuple containing the part before the separator, the separator\nitself, and the part after it.\nIf the separator is not found, returns a 3-tuple containing the original string\nand two empty strings.\nremoveprefix(prefix, /)\u00b6\nReturn a str with the given prefix string removed if present.\nIf the string starts with the prefix string, return string[len(prefix):].\nOtherwise, return a copy of the original string.\nremovesuffix(suffix, /)\u00b6\nReturn a str with the given suffix string removed if present.\nIf the string ends with the suffix string and that suffix is not empty,\nreturn string[:-len(suffix)]. Otherwise, return a copy of the original\nstring.\nreplace(old, new, count=- 1, /)\u00b6\nReturn a copy with all occurrences of substring old replaced by new.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ContentFormat.html"} {"id": "f187a22f2971-7", "text": "Return a copy with all occurrences of substring old replaced by new.\ncountMaximum number of occurrences to replace.\n-1 (the default value) means replace all occurrences.\nIf the optional argument count is given, only the first count occurrences are\nreplaced.\nrfind(sub[, start[, end]]) \u2192 int\u00b6\nReturn the highest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nReturn -1 on failure.\nrindex(sub[, start[, end]]) \u2192 int\u00b6\nReturn the highest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nRaises ValueError when the substring is not found.\nrjust(width, fillchar=' ', /)\u00b6\nReturn a right-justified string of length width.\nPadding is done using the specified fill character (default is a space).\nrpartition(sep, /)\u00b6\nPartition the string into three parts using the given separator.\nThis will search for the separator in the string, starting at the end. 
If\nthe separator is found, returns a 3-tuple containing the part before the\nseparator, the separator itself, and the part after it.\nIf the separator is not found, returns a 3-tuple containing two empty strings\nand the original string.\nrsplit(sep=None, maxsplit=-1)\u00b6\nReturn a list of the substrings in the string, using sep as the separator string.\nsep \u2013 The separator used to split the string.\nWhen set to None (the default value), will split on any whitespace\ncharacter (including \\n \\r \\t \\f and spaces) and will discard\nempty strings from the result.\nmaxsplit \u2013 Maximum number of splits (starting from the left).", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ContentFormat.html"} {"id": "f187a22f2971-8", "text": "empty strings from the result.\nmaxsplit \u2013 Maximum number of splits (starting from the left).\n-1 (the default value) means no limit.\nSplitting starts at the end of the string and works to the front.\nrstrip(chars=None, /)\u00b6\nReturn a copy of the string with trailing whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nsplit(sep=None, maxsplit=-1)\u00b6\nReturn a list of the substrings in the string, using sep as the separator string.\nsep \u2013 The separator used to split the string.\nWhen set to None (the default value), will split on any whitespace\ncharacter (including \\n \\r \\t \\f and spaces) and will discard\nempty strings from the result.\nmaxsplit \u2013 Maximum number of splits (starting from the left).\n-1 (the default value) means no limit.\nNote, str.split() is mainly useful for data that has been intentionally\ndelimited. With natural text that includes punctuation, consider using\nthe regular expression module.\nsplitlines(keepends=False)\u00b6\nReturn a list of the lines in the string, breaking at line boundaries.\nLine breaks are not included in the resulting list unless keepends is given and\ntrue.\nstartswith(prefix[, start[, end]]) \u2192 bool\u00b6\nReturn True if S starts with the specified prefix, False otherwise.\nWith optional start, test S beginning at that position.\nWith optional end, stop comparing S at that position.\nprefix can also be a tuple of strings to try.\nstrip(chars=None, /)\u00b6\nReturn a copy of the string with leading and trailing whitespace removed.\nIf chars is given and not None, remove characters in chars instead.\nswapcase()\u00b6\nConvert uppercase characters to lowercase and lowercase characters to uppercase.\ntitle()\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ContentFormat.html"} {"id": "f187a22f2971-9", "text": "Convert uppercase characters to lowercase and lowercase characters to uppercase.\ntitle()\u00b6\nReturn a version of the string where each word is titlecased.\nMore specifically, words start with uppercased characters and all remaining\ncased characters have lower case.\ntranslate(table, /)\u00b6\nReplace each character in the string using the given translation table.\ntable \u2013 Translation table, which must be a mapping of Unicode ordinals to\nUnicode ordinals, strings, or None.\nThe table must implement lookup/indexing via __getitem__, for instance a\ndictionary or list. If this operation raises LookupError, the character is\nleft untouched. 
Characters mapped to None are deleted.\nupper()\u00b6\nReturn a copy of the string converted to uppercase.\nzfill(width, /)\u00b6\nPad a numeric string with zeros on the left, to fill a field of the given width.\nThe string is never truncated.\nSTORAGE = 'body.storage'\u00b6\nVIEW = 'body.view'\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ContentFormat.html"} {"id": "30a0bb1467ea-0", "text": "langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader\u00b6\nclass langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader(url: str, exclude_dirs: Optional[str] = None)[source]\u00b6\nBases: BaseLoader\nLoader that loads all child links from a given url.\nInitialize with URL to crawl and any sub-directories to exclude.\nMethods\n__init__(url[,\u00a0exclude_dirs])\nInitialize with URL to crawl and any sub-directories to exclude.\nget_child_links_recursive(url[,\u00a0visited])\nRecursively get all child links starting with the path of the input URL.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad web pages.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nget_child_links_recursive(url: str, visited: Optional[Set[str]] = None) \u2192 Set[str][source]\u00b6\nRecursively get all child links starting with the path of the input URL.\nlazy_load() \u2192 Iterator[Document][source]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad web pages.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader.html"} {"id": "0278296b6263-0", "text": "langchain.document_loaders.pdf.PDFPlumberLoader\u00b6\nclass langchain.document_loaders.pdf.PDFPlumberLoader(file_path: str, text_kwargs: Optional[Mapping[str, Any]] = None)[source]\u00b6\nBases: BasePDFLoader\nLoader that uses pdfplumber to load PDF files.\nInitialize with file path.\nMethods\n__init__(file_path[,\u00a0text_kwargs])\nInitialize with file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad file.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nAttributes\nsource\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad file.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
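Returning to the RecursiveUrlLoader entry above, a brief sketch (the documentation URL is a placeholder):
```python
from langchain.document_loaders import RecursiveUrlLoader

# Placeholder URL; child links under this path are crawled and loaded.
loader = RecursiveUrlLoader(url="https://docs.example.com/en/latest/")
docs = loader.load()
```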
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nproperty source: str\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PDFPlumberLoader.html"} {"id": "e93cadf1d99c-0", "text": "langchain.document_loaders.max_compute.MaxComputeLoader\u00b6\nclass langchain.document_loaders.max_compute.MaxComputeLoader(query: str, api_wrapper: MaxComputeAPIWrapper, *, page_content_columns: Optional[Sequence[str]] = None, metadata_columns: Optional[Sequence[str]] = None)[source]\u00b6\nBases: BaseLoader\nLoads a query result from Alibaba Cloud MaxCompute table into documents.\nInitialize Alibaba Cloud MaxCompute document loader.\nParameters\nquery \u2013 SQL query to execute.\napi_wrapper \u2013 MaxCompute API wrapper.\npage_content_columns \u2013 The columns to write into the page_content of the\nDocument. If unspecified, all columns will be written to page_content.\nmetadata_columns \u2013 The columns to write into the metadata of the Document.\nIf unspecified, all columns not added to page_content will be written.\nMethods\n__init__(query,\u00a0api_wrapper,\u00a0*[,\u00a0...])\nInitialize Alibaba Cloud MaxCompute document loader.\nfrom_params(query,\u00a0endpoint,\u00a0project,\u00a0*[,\u00a0...])\nConvenience constructor that builds the MaxCompute API wrapper from given parameters.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad data into Document objects.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nclassmethod from_params(query: str, endpoint: str, project: str, *, access_id: Optional[str] = None, secret_access_key: Optional[str] = None, **kwargs: Any) \u2192 MaxComputeLoader[source]\u00b6\nConvenience constructor that builds the MaxCompute API wrapper from given parameters.\nParameters\nquery \u2013 SQL query to execute.\nendpoint \u2013 MaxCompute endpoint.\nproject \u2013 A project is a basic organizational unit of MaxCompute, which is\nsimilar to a database.\naccess_id \u2013 MaxCompute access ID. Should be passed in directly or set as the\nenvironment variable MAX_COMPUTE_ACCESS_ID.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.max_compute.MaxComputeLoader.html"} {"id": "e93cadf1d99c-1", "text": "environment variable MAX_COMPUTE_ACCESS_ID.\nsecret_access_key \u2013 MaxCompute secret access key. Should be passed in\ndirectly or set as the environment variable\nMAX_COMPUTE_SECRET_ACCESS_KEY.\nlazy_load() \u2192 Iterator[Document][source]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad data into Document objects.
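A hedged sketch of MaxComputeLoader.from_params (endpoint, project, table, and column names are placeholders; credentials may instead come from the environment variables noted above):
```python
from langchain.document_loaders import MaxComputeLoader

loader = MaxComputeLoader.from_params(
    query="SELECT id, content FROM my_table LIMIT 10",  # placeholder query
    endpoint="http://service.example.com/api",          # placeholder endpoint
    project="my_project",                               # placeholder project
    page_content_columns=["content"],  # forwarded to the loader's __init__
    metadata_columns=["id"],
)
docs = loader.load()
```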
\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.max_compute.MaxComputeLoader.html"} {"id": "ee3a4e877e1a-0", "text": "langchain.document_loaders.markdown.UnstructuredMarkdownLoader\u00b6\nclass langchain.document_loaders.markdown.UnstructuredMarkdownLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]\u00b6\nBases: UnstructuredFileLoader\nLoader that uses unstructured to load markdown files.\nInitialize with file path.\nMethods\n__init__(file_path[,\u00a0mode])\nInitialize with file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad file.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document]\u00b6\nLoad file.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.markdown.UnstructuredMarkdownLoader.html"} {"id": "61a41468e4c8-0", "text": "langchain.document_loaders.blockchain.BlockchainDocumentLoader\u00b6\nclass langchain.document_loaders.blockchain.BlockchainDocumentLoader(contract_address: str, blockchainType: BlockchainType = BlockchainType.ETH_MAINNET, api_key: str = 'docs-demo', startToken: str = '', get_all_tokens: bool = False, max_execution_time: Optional[int] = None)[source]\u00b6\nBases: BaseLoader\nLoads elements from a blockchain smart contract into Langchain documents.\nThe supported blockchains are: Ethereum mainnet, Ethereum Goerli testnet,\nPolygon mainnet, and Polygon Mumbai testnet.\nIf no BlockchainType is specified, the default is Ethereum mainnet.\nThe Loader uses the Alchemy API to interact with the blockchain.\nALCHEMY_API_KEY environment variable must be set to use this loader.\nThe API returns 100 NFTs per request and can be paginated using the\nstartToken parameter.\nIf get_all_tokens is set to True, the loader will get all tokens\non the contract. Note that for contracts with a large number of tokens,\nthis may take a long time (e.g. 10k tokens is 100 requests).\nDefault value is false for this reason.\nThe max_execution_time (sec) can be set to limit the execution time\nof the loader.\nFuture versions of this loader can:\nSupport additional Alchemy APIs (e.g. getTransactions, etc.)\nSupport additional blockchain APIs (e.g. 
Infura, Opensea, etc.)\nParameters\ncontract_address \u2013 The address of the smart contract.\nblockchainType \u2013 The blockchain type.\napi_key \u2013 The Alchemy API key.\nstartToken \u2013 The start token for pagination.\nget_all_tokens \u2013 Whether to get all tokens on the contract.\nmax_execution_time \u2013 The maximum execution time (sec).\nMethods", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blockchain.BlockchainDocumentLoader.html"} {"id": "61a41468e4c8-1", "text": "max_execution_time \u2013 The maximum execution time (sec).\nMethods\n__init__(contract_address[,\u00a0blockchainType,\u00a0...])\nparam contract_address\nThe address of the smart contract.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad data into Document objects.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad data into Document objects.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blockchain.BlockchainDocumentLoader.html"} {"id": "ef7a0a2657c7-0", "text": "langchain.document_loaders.pdf.PDFMinerLoader\u00b6\nclass langchain.document_loaders.pdf.PDFMinerLoader(file_path: str)[source]\u00b6\nBases: BasePDFLoader\nLoader that uses PDFMiner to load PDF files.\nInitialize with file path.\nMethods\n__init__(file_path)\nInitialize with file path.\nlazy_load()\nLazily load documents.\nload()\nEagerly load the content.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nAttributes\nsource\nlazy_load() \u2192 Iterator[Document][source]\u00b6\nLazily load documents.\nload() \u2192 List[Document][source]\u00b6\nEagerly load the content.
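A short sketch contrasting PDFMinerLoader's eager and lazy entry points (the file path is a placeholder; assumes the pdfminer.six package):
```python
from langchain.document_loaders import PDFMinerLoader

loader = PDFMinerLoader("example.pdf")  # placeholder path

docs = loader.load()               # eager: materializes every Document
for doc in loader.lazy_load():     # lazy: streams Documents one at a time
    print(doc.metadata["source"])
```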
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nproperty source: str\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PDFMinerLoader.html"} {"id": "53eb81e739f4-0", "text": "langchain.document_loaders.embaas.BaseEmbaasLoader\u00b6\nclass langchain.document_loaders.embaas.BaseEmbaasLoader(*, embaas_api_key: Optional[str] = None, api_url: str = 'https://api.embaas.io/v1/document/extract-text/bytes/', params: EmbaasDocumentExtractionParameters = {})[source]\u00b6\nBases: BaseModel\nBase class for embedding a model into an Embaas document extraction API.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_url: str = 'https://api.embaas.io/v1/document/extract-text/bytes/'\u00b6\nThe URL of the embaas document extraction API.\nparam embaas_api_key: Optional[str] = None\u00b6\nThe API key for the embaas document extraction API.\nparam params: langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters = {}\u00b6\nAdditional parameters to pass to the embaas document extraction API.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that api key and python package exist in the environment.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.BaseEmbaasLoader.html"} {"id": "a65798027a2c-0", "text": "langchain.document_loaders.evernote.EverNoteLoader\u00b6\nclass langchain.document_loaders.evernote.EverNoteLoader(file_path: str, load_single_document: bool = True)[source]\u00b6\nBases: BaseLoader\nEverNote Loader.\nLoads an EverNote notebook export file e.g. my_notebook.enex into Documents.\nInstructions on producing this file can be found at\nhttps://help.evernote.com/hc/en-us/articles/209005557-Export-notes-and-notebooks-as-ENEX-or-HTML\nCurrently only the plain text in the note is extracted and stored as the contents\nof the Document; any non-content metadata (e.g. \u2018author\u2019, \u2018created\u2019, \u2018updated\u2019 etc.\nbut not \u2018content-raw\u2019 or \u2018resource\u2019) tags on the note will be extracted and stored\nas metadata on the Document.\nParameters\nfile_path (str) \u2013 The path to the notebook export with a .enex extension\nload_single_document (bool) \u2013 Whether or not to concatenate the content of all\nnotes into a single long Document.\nIf this is set to True, the only metadata on the document will be the \u2018source\u2019,\nwhich contains the file name of the export.\nInitialize with file path.\nMethods\n__init__(file_path[,\u00a0load_single_document])\nInitialize with file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad documents from EverNote export file.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad documents from EverNote export file.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.evernote.EverNoteLoader.html"} {"id": "a65798027a2c-1", "text": "Load Documents and split into chunks. 
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.evernote.EverNoteLoader.html"} {"id": "2ee5a173c1d3-0", "text": "langchain.document_loaders.stripe.StripeLoader\u00b6\nclass langchain.document_loaders.stripe.StripeLoader(resource: str, access_token: Optional[str] = None)[source]\u00b6\nBases: BaseLoader\nLoader that fetches data from Stripe.\nMethods\n__init__(resource[,\u00a0access_token])\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad data into Document objects.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad data into Document objects.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.stripe.StripeLoader.html"} {"id": "edcb0ac32476-0", "text": "langchain.document_loaders.wikipedia.WikipediaLoader\u00b6\nclass langchain.document_loaders.wikipedia.WikipediaLoader(query: str, lang: str = 'en', load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False, doc_content_chars_max: Optional[int] = 4000)[source]\u00b6\nBases: BaseLoader\nLoads a query result from www.wikipedia.org into a list of Documents.\nThe hard limit on the number of downloaded Documents is 300 for now.\nEach wiki page represents one Document.\nInitializes a new instance of the WikipediaLoader class.\nParameters\nquery (str) \u2013 The query string to search on Wikipedia.\nlang (str, optional) \u2013 The language code for the Wikipedia language edition.\nDefaults to \u201cen\u201d.\nload_max_docs (int, optional) \u2013 The maximum number of documents to load.\nDefaults to 100.\nload_all_available_meta (bool, optional) \u2013 Indicates whether to load all\navailable metadata for each document. Defaults to False.\ndoc_content_chars_max (int, optional) \u2013 The maximum number of characters\nfor the document content. Defaults to 4000.\nMethods\n__init__(query[,\u00a0lang,\u00a0load_max_docs,\u00a0...])\nInitializes a new instance of the WikipediaLoader class.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoads the query result from Wikipedia into a list of Documents.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoads the query result from Wikipedia into a list of Documents.\nReturns\nA list of Document objects representing the loaded Wikipedia pages.\nReturn type\nList[Document]", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.wikipedia.WikipediaLoader.html"} {"id": "edcb0ac32476-1", "text": "Return type\nList[Document]\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
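A short sketch of the WikipediaLoader documented above; the query is illustrative, and the metadata keys used are assumptions about typical loader output.

```python
from langchain.document_loaders import WikipediaLoader

# Requires the `wikipedia` Python package.
loader = WikipediaLoader(query="Large language model", lang="en", load_max_docs=5)
docs = loader.load()
for doc in docs:
    # The exact metadata keys (e.g. "title") are assumptions here.
    print(doc.metadata.get("title"), len(doc.page_content))
```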
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.wikipedia.WikipediaLoader.html"} {"id": "42a81bfadcbc-0", "text": "langchain.document_loaders.pdf.UnstructuredPDFLoader\u00b6\nclass langchain.document_loaders.pdf.UnstructuredPDFLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]\u00b6\nBases: UnstructuredFileLoader\nLoader that uses unstructured to load PDF files.\nInitialize with file path.\nMethods\n__init__(file_path[,\u00a0mode])\nInitialize with file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad file.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document]\u00b6\nLoad file.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.UnstructuredPDFLoader.html"} {"id": "26fefa896729-0", "text": "langchain.document_loaders.parsers.audio.OpenAIWhisperParser\u00b6\nclass langchain.document_loaders.parsers.audio.OpenAIWhisperParser(api_key: Optional[str] = None)[source]\u00b6\nBases: BaseBlobParser\nTranscribe and parse audio files.\nAudio transcription is done with the OpenAI Whisper model.\nMethods\n__init__([api_key])\nlazy_parse(blob)\nLazily parse the blob.\nparse(blob)\nEagerly parse the blob into a document or documents.\nlazy_parse(blob: Blob) \u2192 Iterator[Document][source]\u00b6\nLazily parse the blob.\nparse(blob: Blob) \u2192 List[Document]\u00b6\nEagerly parse the blob into a document or documents.\nThis is a convenience method for an interactive development environment.\nProduction applications should favor the lazy_parse method instead.\nSubclasses should generally not override this parse method.\nParameters\nblob \u2013 Blob instance\nReturns\nList of documents", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.audio.OpenAIWhisperParser.html"} {"id": "d875202ecc16-0", "text": "langchain.document_loaders.parsers.pdf.PDFMinerParser\u00b6\nclass langchain.document_loaders.parsers.pdf.PDFMinerParser[source]\u00b6\nBases: BaseBlobParser\nParse PDFs with PDFMiner.\nMethods\n__init__()\nlazy_parse(blob)\nLazily parse the blob.\nparse(blob)\nEagerly parse the blob into a document or documents.\nlazy_parse(blob: Blob) \u2192 Iterator[Document][source]\u00b6\nLazily parse the blob.\nparse(blob: Blob) \u2192 List[Document]\u00b6\nEagerly parse the blob into a document or documents.\nThis is a convenience method for an interactive development environment.\nProduction applications should favor the lazy_parse method instead.\nSubclasses should generally not override this parse method.\nParameters\nblob \u2013 Blob instance\nReturns\nList of documents", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PDFMinerParser.html"} {"id": "8fababe112ca-0", "text": 
"langchain.document_loaders.whatsapp_chat.WhatsAppChatLoader\u00b6\nclass langchain.document_loaders.whatsapp_chat.WhatsAppChatLoader(path: str)[source]\u00b6\nBases: BaseLoader\nLoader that loads WhatsApp messages text file.\nInitialize with path.\nMethods\n__init__(path)\nInitialize with path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad documents.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad documents.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.whatsapp_chat.WhatsAppChatLoader.html"} {"id": "485360c79f6f-0", "text": "langchain.document_loaders.embaas.EmbaasDocumentExtractionPayload\u00b6\nclass langchain.document_loaders.embaas.EmbaasDocumentExtractionPayload[source]\u00b6\nBases: EmbaasDocumentExtractionParameters\nPayload for the Embaas document extraction API.\nMethods\n__init__(*args,\u00a0**kwargs)\nclear()\ncopy()\nfromkeys([value])\nCreate a new dictionary with keys from iterable and values set to value.\nget(key[,\u00a0default])\nReturn the value for key if key is in the dictionary, else default.\nitems()\nkeys()\npop(k[,d])\nIf the key is not found, return the default if given; otherwise, raise a KeyError.\npopitem()\nRemove and return a (key, value) pair as a 2-tuple.\nsetdefault(key[,\u00a0default])\nInsert key with a value of default if key is not in the dictionary.\nupdate([E,\u00a0]**F)\nIf E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]\nvalues()\nAttributes\nbytes\nThe base64 encoded bytes of the document to extract text from.\nclear() \u2192 None.\u00a0 Remove all items from D.\u00b6\ncopy() \u2192 a shallow copy of D\u00b6\nfromkeys(value=None, /)\u00b6\nCreate a new dictionary with keys from iterable and values set to value.\nget(key, default=None, /)\u00b6\nReturn the value for key if key is in the dictionary, else default.\nitems() \u2192 a set-like object providing a view on D's items\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionPayload.html"} {"id": "485360c79f6f-1", "text": "items() \u2192 a set-like object providing a view on D's items\u00b6\nkeys() \u2192 a set-like object providing a view on D's keys\u00b6\npop(k[, d]) \u2192 v, remove specified key and return the corresponding value.\u00b6\nIf the key is not found, return the default if given; otherwise,\nraise a KeyError.\npopitem()\u00b6\nRemove and return a (key, value) pair as a 2-tuple.\nPairs are returned in LIFO (last-in, first-out) order.\nRaises KeyError if the dict is empty.\nsetdefault(key, default=None, /)\u00b6\nInsert key with a value of default if key is not in the dictionary.\nReturn the value for key if key is in the dictionary, else default.\nupdate([E, ]**F) \u2192 None.\u00a0 Update D from dict/iterable E and F.\u00b6\nIf E is present and has a .keys() method, then does: for k in E: D[k] = E[k]\nIf E 
is present and lacks a .keys() method, then does: for k, v in E: D[k] = v\nIn either case, this is followed by: for k in F: D[k] = F[k]\nvalues() \u2192 an object providing a view on D's values\u00b6\nbytes: str\u00b6\nThe base64 encoded bytes of the document to extract text from.\nchunk_overlap: int\u00b6\nchunk_size: int\u00b6\nchunk_splitter: str\u00b6\nfile_extension: str\u00b6\nfile_name: str\u00b6\ninstruction: str\u00b6\nmime_type: str\u00b6\nmodel: str\u00b6\nseparators: List[str]\u00b6\nshould_chunk: bool\u00b6\nshould_embed: bool\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionPayload.html"} {"id": "15b08048f783-0", "text": "langchain.document_loaders.parsers.language.javascript.JavaScriptSegmenter\u00b6\nclass langchain.document_loaders.parsers.language.javascript.JavaScriptSegmenter(code: str)[source]\u00b6\nBases: CodeSegmenter\nThe code segmenter for JavaScript.\nMethods\n__init__(code)\nextract_functions_classes()\nis_valid()\nsimplify_code()\nextract_functions_classes() \u2192 List[str][source]\u00b6\nis_valid() \u2192 bool[source]\u00b6\nsimplify_code() \u2192 str[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.javascript.JavaScriptSegmenter.html"} {"id": "faaf9f60ac8a-0", "text": "langchain.document_loaders.cube_semantic.CubeSemanticLoader\u00b6\nclass langchain.document_loaders.cube_semantic.CubeSemanticLoader(cube_api_url: str, cube_api_token: str)[source]\u00b6\nBases: BaseLoader\nLoad Cube semantic layer metadata.\nMethods\n__init__(cube_api_url,\u00a0cube_api_token)\nlazy_load()\nA lazy loader for Documents.\nload()\nMakes a call to Cube's REST API metadata endpoint.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nAttributes\ncube_api_url\nUse the REST API of your Cube's deployment.\ncube_api_token\nAuthentication tokens are generated based on your Cube's API secret.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nMakes a call to Cube\u2019s REST API metadata endpoint.\nReturns\npage_content=column_name\nmetadata\ntable_name\ncolumn_name\ncolumn_data_type\ncolumn_title\ncolumn_description\nReturn type\nA list of documents with attributes\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
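A hedged sketch for the CubeSemanticLoader above. Both constructor values are deployment-specific placeholders; the URL is assumed to point at the Cube REST API of your deployment, and the token is a JWT signed with the Cube API secret as described in the linked docs.

```python
from langchain.document_loaders import CubeSemanticLoader

loader = CubeSemanticLoader(
    cube_api_url="https://example.cubecloud.dev/cubejs-api/v1",  # placeholder deployment URL
    cube_api_token="<jwt-signed-with-cube-api-secret>",          # placeholder token
)
# One Document per column; table and column details land in doc.metadata.
docs = loader.load()
```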
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\ncube_api_token\u00b6\nAuthentication tokens are generated based on your Cube\u2019s API secret.\nPlease find out more information here:\nhttps://cube.dev/docs/security#generating-json-web-tokens-jwt\ncube_api_url\u00b6\nUse the REST API of your Cube\u2019s deployment.\nPlease find out more information here:\nhttps://cube.dev/docs/http-api/rest#configuration-base-path", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.cube_semantic.CubeSemanticLoader.html"} {"id": "b3241cdac3e1-0", "text": "langchain.document_loaders.joplin.JoplinLoader\u00b6\nclass langchain.document_loaders.joplin.JoplinLoader(access_token: Optional[str] = None, port: int = 41184, host: str = 'localhost')[source]\u00b6\nBases: BaseLoader\nLoader that fetches notes from Joplin.\nIn order to use this loader, you need to have Joplin running with the\nWeb Clipper enabled (look for \u201cWeb Clipper\u201d in the app settings).\nTo get the access token, you need to go to the Web Clipper options and\nunder \u201cAdvanced Options\u201d you will find the access token.\nYou can find more information about the Web Clipper service here:\nhttps://joplinapp.org/clipper/\nParameters\naccess_token \u2013 The access token to use.\nport \u2013 The port where the Web Clipper service is running. Default is 41184.\nhost \u2013 The host where the Web Clipper service is running.\nDefault is localhost.\nMethods\n__init__([access_token,\u00a0port,\u00a0host])\nparam access_token\nThe access token to use.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad data into Document objects.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document][source]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad data into Document objects.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.joplin.JoplinLoader.html"} {"id": "14f223f9c73b-0", "text": "langchain.document_loaders.word_document.UnstructuredWordDocumentLoader\u00b6\nclass langchain.document_loaders.word_document.UnstructuredWordDocumentLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]\u00b6\nBases: UnstructuredFileLoader\nLoader that uses unstructured to load word documents.\nInitialize with file path.\nMethods\n__init__(file_path[,\u00a0mode])\nInitialize with file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad file.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document]\u00b6\nLoad file.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
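A usage sketch for the JoplinLoader above, assuming Joplin is running locally with the Web Clipper service enabled; the access token is a placeholder copied from the Web Clipper advanced options.

```python
from langchain.document_loaders import JoplinLoader

loader = JoplinLoader(
    access_token="<web-clipper-access-token>",  # placeholder
    port=41184,        # documented default port
    host="localhost",  # documented default host
)
# lazy_load() is implemented natively by this loader, so notes stream one at a time.
for doc in loader.lazy_load():
    print(doc.metadata)
```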
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.word_document.UnstructuredWordDocumentLoader.html"} {"id": "0d88103ed389-0", "text": "langchain.document_loaders.bibtex.BibtexLoader\u00b6\nclass langchain.document_loaders.bibtex.BibtexLoader(file_path: str, *, parser: Optional[BibtexparserWrapper] = None, max_docs: Optional[int] = None, max_content_chars: Optional[int] = 4000, load_extra_metadata: bool = False, file_pattern: str = '[^:]+\\\\.pdf')[source]\u00b6\nBases: BaseLoader\nLoads a bibtex file into a list of Documents.\nEach document represents one entry from the bibtex file.\nIf a PDF file is present in the file bibtex field, the original PDF\nis loaded into the document text. If no such file entry is present,\nthe abstract field is used instead.\nInitialize the BibtexLoader.\nParameters\nfile_path \u2013 Path to the bibtex file.\nparser \u2013 The parser to use. If None, a default parser is used.\nmax_docs \u2013 Max number of associated documents to load. Use -1 for\nno limit.\nmax_content_chars \u2013 Maximum number of characters to load from the PDF.\nload_extra_metadata \u2013 Whether to load extra metadata from the PDF.\nfile_pattern \u2013 Regex pattern to match the file name in the bibtex.\nMethods\n__init__(file_path,\u00a0*[,\u00a0parser,\u00a0max_docs,\u00a0...])\nInitialize the BibtexLoader.\nlazy_load()\nLoad bibtex file using bibtexparser and get the article texts plus the article metadata.\nload()\nLoad bibtex file documents from the given bibtex file path.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document][source]\u00b6\nLoad bibtex file using bibtexparser and get the article texts plus the\narticle metadata.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.bibtex.BibtexLoader.html"} {"id": "0d88103ed389-1", "text": "article metadata.\nSee https://bibtexparser.readthedocs.io/en/master/\nReturns\na list of documents with the document.page_content in text format\nload() \u2192 List[Document][source]\u00b6\nLoad bibtex file documents from the given bibtex file path.\nSee https://bibtexparser.readthedocs.io/en/master/\nParameters\nfile_path \u2013 the path to the bibtex file\nReturns\na list of documents with the document.page_content in text format\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
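A minimal sketch for the BibtexLoader initialized above; references.bib is a hypothetical bibliography file. Entries whose file field matches the default PDF pattern load the PDF text, and the rest fall back to the abstract.

```python
from langchain.document_loaders import BibtexLoader

loader = BibtexLoader(
    "references.bib",        # hypothetical bibtex file
    max_docs=10,             # cap on the number of entries loaded
    max_content_chars=4000,  # the documented default PDF character limit
)
docs = loader.load()  # one Document per bibtex entry
```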
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.bibtex.BibtexLoader.html"} {"id": "5468e11ec9ca-0", "text": "langchain.document_loaders.bilibili.BiliBiliLoader\u00b6\nclass langchain.document_loaders.bilibili.BiliBiliLoader(video_urls: List[str])[source]\u00b6\nBases: BaseLoader\nLoader that loads bilibili transcripts.\nInitialize with bilibili url.\nParameters\nvideo_urls \u2013 List of bilibili urls.\nMethods\n__init__(video_urls)\nInitialize with bilibili url.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad Documents from bilibili url.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad Documents from bilibili url.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.bilibili.BiliBiliLoader.html"} {"id": "536c59d57796-0", "text": "langchain.document_loaders.bigquery.BigQueryLoader\u00b6\nclass langchain.document_loaders.bigquery.BigQueryLoader(query: str, project: Optional[str] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None, credentials: Optional[Credentials] = None)[source]\u00b6\nBases: BaseLoader\nLoads a query result from BigQuery into a list of documents.\nEach document represents one row of the result. The page_content_columns\nare written into the page_content of the document. The metadata_columns\nare written into the metadata of the document. By default, all columns\nare written into the page_content and none into the metadata.\nInitialize BigQuery document loader.\nParameters\nquery \u2013 The query to run in BigQuery.\nproject \u2013 Optional. The project to run the query in.\npage_content_columns \u2013 Optional. The columns to write into the page_content\nof the document.\nmetadata_columns \u2013 Optional. The columns to write into the metadata of the\ndocument.\ncredentials \u2013 google.auth.credentials.Credentials, optional\nCredentials for accessing Google APIs. Use this parameter to override\ndefault credentials, such as to use Compute Engine\n(google.auth.compute_engine.Credentials) or Service Account\n(google.oauth2.service_account.Credentials) credentials directly.\nMethods\n__init__(query[,\u00a0project,\u00a0...])\nInitialize BigQuery document loader.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad data into Document objects.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad data into Document objects.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.bigquery.BigQueryLoader.html"} {"id": "536c59d57796-1", "text": "Load Documents and split into chunks. 
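A hedged sketch for the BigQueryLoader above. Project, dataset, and column names are placeholders; with credentials omitted, the BigQuery client is assumed to fall back to Google application default credentials.

```python
from langchain.document_loaders import BigQueryLoader

loader = BigQueryLoader(
    query="SELECT title, body, author FROM `my_project.my_dataset.articles`",  # placeholder table
    project="my_project",                    # placeholder project id
    page_content_columns=["title", "body"],  # written into page_content
    metadata_columns=["author"],             # written into metadata
)
docs = loader.load()  # one Document per result row
```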
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.bigquery.BigQueryLoader.html"} {"id": "796722096c6a-0", "text": "langchain.document_loaders.parsers.language.language_parser.LanguageParser\u00b6\nclass langchain.document_loaders.parsers.language.language_parser.LanguageParser(language: Optional[Language] = None, parser_threshold: int = 0)[source]\u00b6\nBases: BaseBlobParser\nLanguage parser that splits code using the respective language syntax.\nEach top-level function and class in the code is loaded into separate documents.\nFurthermore, an extra document is generated, containing the remaining top-level code\nthat excludes the already segmented functions and classes.\nThis approach can potentially improve the accuracy of QA models over source code.\nCurrently, the supported languages for code parsing are Python and JavaScript.\nThe language used for parsing can be configured, along with the minimum number of\nlines required to activate the splitting based on syntax.\nExamples\nfrom langchain.text_splitter import Language\nfrom langchain.document_loaders.generic import GenericLoader\nfrom langchain.document_loaders.parsers import LanguageParser\nloader = GenericLoader.from_filesystem(\n \"./code\",\n glob=\"**/*\",\n suffixes=[\".py\", \".js\"],\n parser=LanguageParser()\n)\ndocs = loader.load()\nExample instantiations to manually select the language:\n.. code-block:: python\nfrom langchain.text_splitter import Language\nloader = GenericLoader.from_filesystem(\"./code\",\nglob=\"**/*\",\nsuffixes=[\".py\"],\nparser=LanguageParser(language=Language.PYTHON)\n)\nExample instantiations to set number of lines threshold:\n.. code-block:: python\nloader = GenericLoader.from_filesystem(\"./code\",\nglob=\"**/*\",\nsuffixes=[\".py\"],\nparser=LanguageParser(parser_threshold=200)\n)\nLanguage parser that splits code using the respective language syntax.\nParameters\nlanguage \u2013 If None (default), it will try to infer language from source.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.language_parser.LanguageParser.html"} {"id": "796722096c6a-1", "text": "Parameters\nlanguage \u2013 If None (default), it will try to infer language from source.\nparser_threshold \u2013 Minimum lines needed to activate parsing (0 by default).\nMethods\n__init__([language,\u00a0parser_threshold])\nLanguage parser that splits code using the respective language syntax.\nlazy_parse(blob)\nLazy parsing interface.\nparse(blob)\nEagerly parse the blob into a document or documents.\nlazy_parse(blob: Blob) \u2192 Iterator[Document][source]\u00b6\nLazy parsing interface.\nSubclasses are required to implement this method.\nParameters\nblob \u2013 Blob instance\nReturns\nGenerator of documents\nparse(blob: Blob) \u2192 List[Document]\u00b6\nEagerly parse the blob into a document or documents.\nThis is a convenience method for an interactive development environment.\nProduction applications should favor the lazy_parse method instead.\nSubclasses should generally not override this parse method.\nParameters\nblob \u2013 Blob instance\nReturns\nList of documents", "source": 
"https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.language_parser.LanguageParser.html"} {"id": "c4a029bea78e-0", "text": "langchain.document_loaders.onedrive.OneDriveLoader\u00b6\nclass langchain.document_loaders.onedrive.OneDriveLoader(*, settings: _OneDriveSettings = None, drive_id: str, folder_path: Optional[str] = None, object_ids: Optional[List[str]] = None, auth_with_token: bool = False)[source]\u00b6\nBases: BaseLoader, BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam auth_with_token: bool = False\u00b6\nparam drive_id: str [Required]\u00b6\nparam folder_path: Optional[str] = None\u00b6\nparam object_ids: Optional[List[str]] = None\u00b6\nparam settings: langchain.document_loaders.onedrive._OneDriveSettings [Optional]\u00b6\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoads all supported document files from the specified OneDrive drive a\nnd returns a list of Document objects.\nReturns\nA list of Document objects\nrepresenting the loaded documents.\nReturn type\nList[Document]\nRaises\nValueError \u2013 If the specified drive ID\ndoes not correspond to a drive in the OneDrive storage. \u2013 \nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.onedrive.OneDriveLoader.html"} {"id": "7264fb9dd17e-0", "text": "langchain.document_loaders.larksuite.LarkSuiteDocLoader\u00b6\nclass langchain.document_loaders.larksuite.LarkSuiteDocLoader(domain: str, access_token: str, document_id: str)[source]\u00b6\nBases: BaseLoader\nLoads LarkSuite (FeiShu) document.\nInitialize with domain, access_token (tenant / user), and document_id.\nParameters\ndomain \u2013 The domain to load the LarkSuite.\naccess_token \u2013 The access_token to use.\ndocument_id \u2013 The document_id to load.\nMethods\n__init__(domain,\u00a0access_token,\u00a0document_id)\nInitialize with domain, access_token (tenant / user), and document_id.\nlazy_load()\nLazy load LarkSuite (FeiShu) document.\nload()\nLoad LarkSuite (FeiShu) document.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document][source]\u00b6\nLazy load LarkSuite (FeiShu) document.\nload() \u2192 List[Document][source]\u00b6\nLoad LarkSuite (FeiShu) document.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.larksuite.LarkSuiteDocLoader.html"} {"id": "0f17ed8d1682-0", "text": "langchain.document_loaders.readthedocs.ReadTheDocsLoader\u00b6\nclass langchain.document_loaders.readthedocs.ReadTheDocsLoader(path: Union[str, Path], encoding: Optional[str] = None, errors: Optional[str] = None, custom_html_tag: Optional[Tuple[str, dict]] = None, **kwargs: Optional[Any])[source]\u00b6\nBases: BaseLoader\nLoader that loads a ReadTheDocs documentation directory dump.\nInitialize ReadTheDocsLoader\nThe loader loops over all files under path and extracts the actual content of\nthe files by retrieving main html tags. Default main html tags include\n
<main id=\"main-content\">,
<div role=\"main\">,
<article role=\"main\">
. You\ncan also define your own html tags by passing custom_html_tag, e.g.\n(\u201cdiv\u201d, \u201cclass=main\u201d). The loader iterates html tags in the order of\ncustom html tags (if provided) and then the default html tags. If any of the tags is not\nempty, the loop will break and retrieve the content out of that tag.\nParameters\npath \u2013 The location of the pulled readthedocs folder.\nencoding \u2013 The encoding with which to open the documents.\nerrors \u2013 Specifies how encoding and decoding errors are to be handled\u2014this\ncannot be used in binary mode.\ncustom_html_tag \u2013 Optional custom html tag to retrieve the content from\nfiles.\nMethods\n__init__(path[,\u00a0encoding,\u00a0errors,\u00a0...])\nInitialize ReadTheDocsLoader\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad documents.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad documents.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.readthedocs.ReadTheDocsLoader.html"} {"id": "0f17ed8d1682-1", "text": "Load Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.readthedocs.ReadTheDocsLoader.html"} {"id": "3b52fa5179a3-0", "text": "langchain.document_loaders.duckdb_loader.DuckDBLoader\u00b6\nclass langchain.document_loaders.duckdb_loader.DuckDBLoader(query: str, database: str = ':memory:', read_only: bool = False, config: Optional[Dict[str, str]] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None)[source]\u00b6\nBases: BaseLoader\nLoads a query result from DuckDB into a list of documents.\nEach document represents one row of the result. The page_content_columns\nare written into the page_content of the document. The metadata_columns\nare written into the metadata of the document. By default, all columns\nare written into the page_content and none into the metadata.\nParameters\nquery \u2013 The query to execute.\ndatabase \u2013 The database to connect to. Defaults to \u201c:memory:\u201d.\nread_only \u2013 Whether to open the database in read-only mode.\nDefaults to False.\nconfig \u2013 A dictionary of configuration options to pass to the database.\nOptional.\npage_content_columns \u2013 The columns to write into the page_content\nof the document. Optional.\nmetadata_columns \u2013 The columns to write into the metadata of the document.\nOptional.\nMethods\n__init__(query[,\u00a0database,\u00a0read_only,\u00a0...])\nparam query\nThe query to execute.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad data into Document objects.\nload_and_split([text_splitter])\nLoad Documents and split into chunks. 
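A short sketch for the ReadTheDocsLoader documented above; rtdocs is a hypothetical directory dump, for example one produced with wget. The custom_html_tag value follows the (tag name, attributes dict) shape from the signature rather than the string form in the docstring example.

```python
from langchain.document_loaders import ReadTheDocsLoader

# Hypothetical dump created with, e.g.:
#   wget -r -A.html -P rtdocs https://langchain.readthedocs.io/en/latest/
loader = ReadTheDocsLoader(
    "rtdocs",
    encoding="utf-8",
    custom_html_tag=("div", {"class": "main"}),  # checked before the default tags
)
docs = loader.load()
```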
Chunks are returned as Documents.\nParameters", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.duckdb_loader.DuckDBLoader.html"} {"id": "3b52fa5179a3-1", "text": "Load Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.duckdb_loader.DuckDBLoader.html"} {"id": "db5ef0705afe-0", "text": "langchain.document_loaders.unstructured.UnstructuredFileLoader\u00b6\nclass langchain.document_loaders.unstructured.UnstructuredFileLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]\u00b6\nBases: UnstructuredBaseLoader\nUnstructuredFileLoader uses unstructured to load files. The file loader uses the\nunstructured partition function and will automatically detect the file\ntype. You can run the loader in one of two modes: \u201csingle\u201d and \u201celements\u201d.\nIf you use \u201csingle\u201d mode, the document will be returned as a single\nlangchain Document object. If you use \u201celements\u201d mode, the unstructured\nlibrary will split the document into elements such as Title and NarrativeText.\nYou can pass in additional unstructured kwargs after mode to apply\ndifferent unstructured settings.\nExamples\n```python\nfrom langchain.document_loaders import UnstructuredFileLoader\nloader = UnstructuredFileLoader(\u201cexample.pdf\u201d, mode=\u201delements\u201d, strategy=\u201dfast\u201d,\n)\ndocs = loader.load()\n```\nReferences\nhttps://unstructured-io.github.io/unstructured/bricks.html#partition\nInitialize with file path.\nMethods\n__init__(file_path[,\u00a0mode])\nInitialize with file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad file.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document]\u00b6\nLoad file.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredFileLoader.html"} {"id": "db5ef0705afe-1", "text": "Defaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredFileLoader.html"} {"id": "856e9dd63ed3-0", "text": "langchain.document_loaders.helpers.FileEncoding\u00b6\nclass langchain.document_loaders.helpers.FileEncoding(encoding: Optional[str], confidence: float, language: Optional[str])[source]\u00b6\nBases: NamedTuple\nA file encoding as a NamedTuple.\nCreate new instance of FileEncoding(encoding, confidence, language)\nMethods\n__init__()\ncount(value,\u00a0/)\nReturn number of occurrences of value.\nindex(value[,\u00a0start,\u00a0stop])\nReturn first index of value.\nAttributes\nconfidence\nThe confidence of the encoding.\nencoding\nThe encoding of the file.\nlanguage\nThe language of the file.\ncount(value, /)\u00b6\nReturn number of occurrences of value.\nindex(value, start=0, stop=9223372036854775807, /)\u00b6\nReturn first index of value.\nRaises ValueError if the value is not present.\nconfidence: float\u00b6\nThe confidence of the encoding.\nencoding: Optional[str]\u00b6\nThe encoding of the file.\nlanguage: Optional[str]\u00b6\nThe language of the file.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.helpers.FileEncoding.html"} {"id": "8953d81d7bb5-0", "text": "langchain.document_loaders.figma.FigmaFileLoader\u00b6\nclass langchain.document_loaders.figma.FigmaFileLoader(access_token: str, ids: str, key: str)[source]\u00b6\nBases: BaseLoader\nLoads Figma file json.\nInitialize with access token, ids, and key.\nParameters\naccess_token \u2013 The access token for the Figma REST API.\nids \u2013 The ids of the Figma file.\nkey \u2013 The key for the Figma file\nMethods\n__init__(access_token,\u00a0ids,\u00a0key)\nInitialize with access token, ids, and key.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad file\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad file\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.figma.FigmaFileLoader.html"} {"id": "fa25d51da8d7-0", "text": "langchain.document_loaders.airtable.AirtableLoader\u00b6\nclass langchain.document_loaders.airtable.AirtableLoader(api_token: str, table_id: str, base_id: str)[source]\u00b6\nBases: BaseLoader\nLoader for Airtable tables.\nInitialize with API token and the IDs for table and base\nMethods\n__init__(api_token,\u00a0table_id,\u00a0base_id)\nInitialize with API token and the IDs for table and base\nlazy_load()\nLazy load Documents from table.\nload()\nLoad Documents from table.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nAttributes\napi_token\nAirtable API token.\ntable_id\nAirtable table ID.\nbase_id\nAirtable base ID.\nlazy_load() \u2192 Iterator[Document][source]\u00b6\nLazy load Documents from table.\nload() \u2192 List[Document][source]\u00b6\nLoad Documents from table.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\napi_token\u00b6\nAirtable API token.\nbase_id\u00b6\nAirtable base ID.\ntable_id\u00b6\nAirtable table ID.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airtable.AirtableLoader.html"} {"id": "978daf51d24c-0", "text": "langchain.document_loaders.trello.TrelloLoader\u00b6\nclass langchain.document_loaders.trello.TrelloLoader(client: TrelloClient, board_name: str, *, include_card_name: bool = True, include_comments: bool = True, include_checklist: bool = True, card_filter: Literal['closed', 'open', 'all'] = 'all', extra_metadata: Tuple[str, ...] = ('due_date', 'labels', 'list', 'closed'))[source]\u00b6\nBases: BaseLoader\nTrello loader. Reads all cards from a Trello board.\nInitialize Trello loader.\nParameters\nclient \u2013 Trello API client.\nboard_name \u2013 The name of the Trello board.\ninclude_card_name \u2013 Whether to include the name of the card in the document.\ninclude_comments \u2013 Whether to include the comments on the card in the\ndocument.\ninclude_checklist \u2013 Whether to include the checklist on the card in the\ndocument.\ncard_filter \u2013 Filter on card status. 
Valid values are \u201cclosed\u201d, \u201copen\u201d,\n\u201call\u201d.\nextra_metadata \u2013 List of additional metadata fields to include as document\nmetadata. Valid values are \u201cdue_date\u201d, \u201clabels\u201d, \u201clist\u201d, \u201cclosed\u201d.\nMethods\n__init__(client,\u00a0board_name,\u00a0*[,\u00a0...])\nInitialize Trello loader.\nfrom_credentials(board_name,\u00a0*[,\u00a0api_key,\u00a0token])\nConvenience constructor that builds TrelloClient init param for you.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoads all cards from the specified Trello board.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nclassmethod from_credentials(board_name: str, *, api_key: Optional[str] = None, token: Optional[str] = None, **kwargs: Any) \u2192 TrelloLoader[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.trello.TrelloLoader.html"} {"id": "978daf51d24c-1", "text": "Convenience constructor that builds TrelloClient init param for you.\nParameters\nboard_name \u2013 The name of the Trello board.\napi_key \u2013 Trello API key. Can also be specified as environment variable\nTRELLO_API_KEY.\ntoken \u2013 Trello token. Can also be specified as environment variable\nTRELLO_TOKEN.\ninclude_card_name \u2013 Whether to include the name of the card in the document.\ninclude_comments \u2013 Whether to include the comments on the card in the\ndocument.\ninclude_checklist \u2013 Whether to include the checklist on the card in the\ndocument.\ncard_filter \u2013 Filter on card status. Valid values are \u201cclosed\u201d, \u201copen\u201d,\n\u201call\u201d.\nextra_metadata \u2013 List of additional metadata fields to include as document\nmetadata. Valid values are \u201cdue_date\u201d, \u201clabels\u201d, \u201clist\u201d, \u201cclosed\u201d.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoads all cards from the specified Trello board.\nYou can filter the cards, metadata and text included by using the optional\nparameters.\nReturns: A list of documents, one for each card in the board.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.trello.TrelloLoader.html"} {"id": "f5b014096545-0", "text": "langchain.document_loaders.pdf.PyPDFium2Loader\u00b6\nclass langchain.document_loaders.pdf.PyPDFium2Loader(file_path: str)[source]\u00b6\nBases: BasePDFLoader\nLoads a PDF with pypdfium2 and chunks at character level.\nInitialize with file path.\nMethods\n__init__(file_path)\nInitialize with file path.\nlazy_load()\nLazy load given path as pages.\nload()\nLoad given path as pages.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nAttributes\nsource\nlazy_load() \u2192 Iterator[Document][source]\u00b6\nLazy load given path as pages.\nload() \u2192 List[Document][source]\u00b6\nLoad given path as pages.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
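A sketch for the TrelloLoader above using the from_credentials convenience constructor; it assumes TRELLO_API_KEY and TRELLO_TOKEN are set in the environment, and the board name is a placeholder.

```python
from langchain.document_loaders import TrelloLoader

loader = TrelloLoader.from_credentials(
    "Product Roadmap",                      # placeholder board name
    card_filter="open",                     # "closed", "open", or "all"
    extra_metadata=("due_date", "labels"),  # subset of the valid fields
)
docs = loader.load()  # one Document per card on the board
```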
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nproperty source: str\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PyPDFium2Loader.html"} {"id": "2eee89f75308-0", "text": "langchain.document_loaders.mhtml.MHTMLLoader\u00b6\nclass langchain.document_loaders.mhtml.MHTMLLoader(file_path: str, open_encoding: Optional[str] = None, bs_kwargs: Optional[dict] = None, get_text_separator: str = '')[source]\u00b6\nBases: BaseLoader\nLoader that uses beautiful soup to parse HTML files.\nInitialise with path, and optionally, file encoding to use, and any kwargs\nto pass to the BeautifulSoup object.\nParameters\nfile_path \u2013 The path to the file to load.\nopen_encoding \u2013 The encoding to use when opening the file.\nbs_kwargs \u2013 soup kwargs to pass to the BeautifulSoup object.\nget_text_separator \u2013 The separator to use when getting text from the soup.\nMethods\n__init__(file_path[,\u00a0open_encoding,\u00a0...])\nInitialise with path, and optionally, file encoding to use, and any kwargs to pass to the BeautifulSoup object.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad data into Document objects.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad data into Document objects.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mhtml.MHTMLLoader.html"} {"id": "ba6c0ef3fd0e-0", "text": "langchain.document_loaders.parsers.registry.get_parser\u00b6\nlangchain.document_loaders.parsers.registry.get_parser(parser_name: str) \u2192 BaseBlobParser[source]\u00b6\nGet a parser by parser name.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.registry.get_parser.html"} {"id": "4782d4e61c73-0", "text": "langchain.document_loaders.pdf.OnlinePDFLoader\u00b6\nclass langchain.document_loaders.pdf.OnlinePDFLoader(file_path: str)[source]\u00b6\nBases: BasePDFLoader\nLoader that loads online PDFs.\nInitialize with file path.\nMethods\n__init__(file_path)\nInitialize with file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad documents.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nAttributes\nsource\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad documents.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
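A minimal sketch for the PyPDFium2Loader above; example.pdf is a hypothetical local path.

```python
from langchain.document_loaders import PyPDFium2Loader

loader = PyPDFium2Loader("example.pdf")  # hypothetical PDF path

# lazy_load() streams one Document per page; load() returns them as a list.
for page in loader.lazy_load():
    print(page.metadata, len(page.page_content))
```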
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nproperty source: str\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.OnlinePDFLoader.html"} {"id": "054821ee375e-0", "text": "langchain.document_loaders.blockchain.BlockchainType\u00b6\nclass langchain.document_loaders.blockchain.BlockchainType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]\u00b6\nBases: Enum\nEnumerator of the supported blockchains.\nAttributes\nETH_MAINNET\nETH_GOERLI\nPOLYGON_MAINNET\nPOLYGON_MUMBAI\nETH_GOERLI = 'eth-goerli'\u00b6\nETH_MAINNET = 'eth-mainnet'\u00b6\nPOLYGON_MAINNET = 'polygon-mainnet'\u00b6\nPOLYGON_MUMBAI = 'polygon-mumbai'\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blockchain.BlockchainType.html"} {"id": "5ee981f9568c-0", "text": "langchain.document_loaders.base.BaseBlobParser\u00b6\nclass langchain.document_loaders.base.BaseBlobParser[source]\u00b6\nBases: ABC\nAbstract interface for blob parsers.\nA blob parser provides a way to parse raw data stored in a blob into one\nor more documents.\nThe parser can be composed with blob loaders, making it easy to re-use\na parser independent of how the blob was originally loaded.\nMethods\n__init__()\nlazy_parse(blob)\nLazy parsing interface.\nparse(blob)\nEagerly parse the blob into a document or documents.\nabstract lazy_parse(blob: Blob) \u2192 Iterator[Document][source]\u00b6\nLazy parsing interface.\nSubclasses are required to implement this method.\nParameters\nblob \u2013 Blob instance\nReturns\nGenerator of documents\nparse(blob: Blob) \u2192 List[Document][source]\u00b6\nEagerly parse the blob into a document or documents.\nThis is a convenience method for an interactive development environment.\nProduction applications should favor the lazy_parse method instead.\nSubclasses should generally not override this parse method.\nParameters\nblob \u2013 Blob instance\nReturns\nList of documents", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.base.BaseBlobParser.html"} {"id": "791f17b7bb27-0", "text": "langchain.document_loaders.apify_dataset.ApifyDatasetLoader\u00b6\nclass langchain.document_loaders.apify_dataset.ApifyDatasetLoader(dataset_id: str, dataset_mapping_function: Callable[[Dict], Document])[source]\u00b6\nBases: BaseLoader, BaseModel\nLoading Documents from Apify datasets.\nInitialize the loader with an Apify dataset ID and a mapping function.\nParameters\ndataset_id (str) \u2013 The ID of the dataset on the Apify platform.\ndataset_mapping_function (Callable) \u2013 A function that takes a single\ndictionary (an Apify dataset item) and converts it to an instance\nof the Document class.\nparam apify_client: Any = None\u00b6\nAn instance of the ApifyClient class from the apify-client Python package.\nparam dataset_id: str [Required]\u00b6\nThe ID of the dataset on the Apify platform.\nparam dataset_mapping_function: Callable[[Dict], langchain.schema.document.Document] [Required]\u00b6\nA custom function that takes a single dictionary (an Apify dataset item)\nand converts it to an instance of the Document class.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad documents.\nload_and_split(text_splitter: 
Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate environment.\nParameters\nvalues \u2013 The values to validate.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.apify_dataset.ApifyDatasetLoader.html"} {"id": "4eef79132bc6-0", "text": "langchain.document_loaders.arxiv.ArxivLoader\u00b6\nclass langchain.document_loaders.arxiv.ArxivLoader(query: str, load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False)[source]\u00b6\nBases: BaseLoader\nLoads a query result from arxiv.org into a list of Documents.\nEach arxiv.org result is returned as one Document.\nThe loader converts the original PDF format into text.\nMethods\n__init__(query[,\u00a0load_max_docs,\u00a0...])\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad data into Document objects.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nAttributes\nquery\nThe query to be passed to the arxiv.org API.\nload_max_docs\nThe maximum number of documents to load.\nload_all_available_meta\nWhether to load all available metadata.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad data into Document objects.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nload_all_available_meta\u00b6\nWhether to load all available metadata.\nload_max_docs\u00b6\nThe maximum number of documents to load.\nquery\u00b6\nThe query to be passed to the arxiv.org API.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.arxiv.ArxivLoader.html"} {"id": "036f472b4419-0", "text": "langchain.document_loaders.rst.UnstructuredRSTLoader\u00b6\nclass langchain.document_loaders.rst.UnstructuredRSTLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]\u00b6\nBases: UnstructuredFileLoader\nLoader that uses unstructured to load RST files.\nInitialize with file path.\nMethods\n__init__(file_path[,\u00a0mode])\nInitialize with file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad file.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document]\u00b6\nLoad file.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
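A short sketch of the ArxivLoader above; the query string is illustrative and the `arxiv` package is required.

```python
from langchain.document_loaders import ArxivLoader

loader = ArxivLoader(query="retrieval augmented generation", load_max_docs=2)
docs = loader.load()
# Metadata typically includes fields such as the title and authors;
# the exact keys are an assumption here.
print(docs[0].metadata)
```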
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rst.UnstructuredRSTLoader.html"} {"id": "ca0c904e60a5-0", "text": "langchain.document_loaders.helpers.detect_file_encodings\u00b6\nlangchain.document_loaders.helpers.detect_file_encodings(file_path: str, timeout: int = 5) \u2192 List[FileEncoding][source]\u00b6\nTry to detect the file encoding.\nReturns a list of FileEncoding tuples with the detected encodings ordered\nby confidence.\nParameters\nfile_path \u2013 The path to the file to detect the encoding for.\ntimeout \u2013 The timeout in seconds for the encoding detection.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.helpers.detect_file_encodings.html"} {"id": "5e1dd02d87c6-0", "text": "langchain.document_loaders.generic.GenericLoader\u00b6\nclass langchain.document_loaders.generic.GenericLoader(blob_loader: BlobLoader, blob_parser: BaseBlobParser)[source]\u00b6\nBases: BaseLoader\nA generic document loader.\nA generic document loader that allows combining an arbitrary blob loader with\na blob parser.\nExamples\nfrom langchain.document_loaders import GenericLoader\nfrom langchain.document_loaders.blob_loaders import FileSystemBlobLoader\nloader = GenericLoader.from_filesystem(\n    path='path/to/directory',\n    glob='**/[!.]*',\n    suffixes=['.pdf'],\n    show_progress=True,\n)\ndocs = loader.lazy_load()\nnext(docs)\nExample instantiations to change which files are loaded:\n.. code-block:: python\n# Recursively load all text files in a directory.\nloader = GenericLoader.from_filesystem('/path/to/dir', glob='**/*.txt')\n# Recursively load all non-hidden files in a directory.\nloader = GenericLoader.from_filesystem('/path/to/dir', glob='**/[!.]*')\n# Load all files in a directory without recursion.\nloader = GenericLoader.from_filesystem('/path/to/dir', glob='*')\nExample instantiations to change which parser is used:\n.. code-block:: python\nfrom langchain.document_loaders.parsers.pdf import PyPDFParser\n# Recursively load all PDF files in a directory.\nloader = GenericLoader.from_filesystem(\n    '/path/to/dir',\n    glob='**/*.pdf',\n    parser=PyPDFParser()\n)\nA generic document loader.\nParameters\nblob_loader \u2013 A blob loader which knows how to yield blobs\nblob_parser \u2013 A blob parser which knows how to parse blobs into documents\nMethods\n__init__(blob_loader,\u00a0blob_parser)\nA generic document loader.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.generic.GenericLoader.html"} {"id": "5e1dd02d87c6-1", "text": "from_filesystem(path,\u00a0*[,\u00a0glob,\u00a0suffixes,\u00a0...])\nCreate a generic document loader using a filesystem blob loader.\nlazy_load()\nLoad documents lazily.\nload()\nLoad all documents.\nload_and_split([text_splitter])\nLoad all documents and split them into sentences.\nclassmethod from_filesystem(path: Union[str, Path], *, glob: str = '**/[!.]*', suffixes: Optional[Sequence[str]] = None, show_progress: bool = False, parser: Union[Literal['default'], BaseBlobParser] = 'default') \u2192 GenericLoader[source]\u00b6\nCreate a generic 
document loader using a filesystem blob loader.\nParameters\npath \u2013 The path to the directory to load documents from.\nglob \u2013 The glob pattern to use to find documents.\nsuffixes \u2013 The suffixes to use to filter documents. If None, all files\nmatching the glob will be loaded.\nshow_progress \u2013 Whether to show a progress bar or not (requires tqdm).\nProxies to the file system loader.\nparser \u2013 A blob parser which knows how to parse blobs into documents\nReturns\nA generic document loader.\nlazy_load() \u2192 Iterator[Document][source]\u00b6\nLoad documents lazily. Use this when working at a large scale.\nload() \u2192 List[Document][source]\u00b6\nLoad all documents.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document][source]\u00b6\nLoad all documents and split them into sentences.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.generic.GenericLoader.html"} {"id": "be8fe256d8ef-0", "text": "langchain.document_loaders.pdf.PyMuPDFLoader\u00b6\nclass langchain.document_loaders.pdf.PyMuPDFLoader(file_path: str)[source]\u00b6\nBases: BasePDFLoader\nLoader that uses PyMuPDF to load PDF files.\nInitialize with file path.\nMethods\n__init__(file_path)\nInitialize with file path.\nlazy_load()\nA lazy loader for Documents.\nload(**kwargs)\nLoad file.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nAttributes\nsource\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload(**kwargs: Optional[Any]) \u2192 List[Document][source]\u00b6\nLoad file.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nproperty source: str\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PyMuPDFLoader.html"} {"id": "73dfd0d1372b-0", "text": "langchain.document_loaders.airbyte_json.AirbyteJSONLoader\u00b6\nclass langchain.document_loaders.airbyte_json.AirbyteJSONLoader(file_path: str)[source]\u00b6\nBases: BaseLoader\nLoader that loads local airbyte json files.\nInitialize with a file path. This should start with \u2018/tmp/airbyte_local/\u2019.\nMethods\n__init__(file_path)\nInitialize with a file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad data into Document objects.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nAttributes\nfile_path\nPath to the directory containing the json files.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad data into Document objects.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
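Continuing GenericLoader's own examples above, the parser keyword accepts any BaseBlobParser, including a custom one like the sketch earlier; lazy_load is the documented choice when working at scale. Paths here are placeholders:
```python
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers.pdf import PyPDFParser

loader = GenericLoader.from_filesystem(
    "/path/to/dir",        # placeholder directory
    glob="**/*.pdf",
    parser=PyPDFParser(),  # explicit parser instead of 'default'
)
# Stream documents one at a time instead of materializing them all.
for doc in loader.lazy_load():
    print(doc.metadata["source"])
```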
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nfile_path\u00b6\nPath to the directory containing the json files.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte_json.AirbyteJSONLoader.html"} {"id": "13bdea28c7a8-0", "text": "langchain.document_loaders.notebook.concatenate_cells\u00b6\nlangchain.document_loaders.notebook.concatenate_cells(cell: dict, include_outputs: bool, max_output_length: int, traceback: bool) \u2192 str[source]\u00b6\nCombine cells information in a readable format ready to be used.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notebook.concatenate_cells.html"} {"id": "af6c111fc27a-0", "text": "langchain.document_loaders.url.UnstructuredURLLoader\u00b6\nclass langchain.document_loaders.url.UnstructuredURLLoader(urls: List[str], continue_on_failure: bool = True, mode: str = 'single', show_progress_bar: bool = False, **unstructured_kwargs: Any)[source]\u00b6\nBases: BaseLoader\nLoader that uses unstructured to load HTML files.\nInitialize with file path.\nMethods\n__init__(urls[,\u00a0continue_on_failure,\u00a0mode,\u00a0...])\nInitialize with file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad file.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad file.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url.UnstructuredURLLoader.html"} {"id": "68d85d9828db-0", "text": "langchain.document_loaders.twitter.TwitterTweetLoader\u00b6\nclass langchain.document_loaders.twitter.TwitterTweetLoader(auth_handler: Union[OAuthHandler, OAuth2BearerHandler], twitter_users: Sequence[str], number_tweets: Optional[int] = 100)[source]\u00b6\nBases: BaseLoader\nTwitter tweets loader.\nRead tweets of user twitter handle.\nFirst you need to go to\nhttps://developer.twitter.com/en/docs/twitter-api\n/getting-started/getting-access-to-the-twitter-api\nto get your token. 
And create a v2 version of the app.\nMethods\n__init__(auth_handler,\u00a0twitter_users[,\u00a0...])\nfrom_bearer_token(oauth2_bearer_token,\u00a0...)\nCreate a TwitterTweetLoader from OAuth2 bearer token.\nfrom_secrets(access_token,\u00a0...[,\u00a0number_tweets])\nCreate a TwitterTweetLoader from access tokens and secrets.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad tweets.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nclassmethod from_bearer_token(oauth2_bearer_token: str, twitter_users: Sequence[str], number_tweets: Optional[int] = 100) \u2192 TwitterTweetLoader[source]\u00b6\nCreate a TwitterTweetLoader from OAuth2 bearer token.\nclassmethod from_secrets(access_token: str, access_token_secret: str, consumer_key: str, consumer_secret: str, twitter_users: Sequence[str], number_tweets: Optional[int] = 100) \u2192 TwitterTweetLoader[source]\u00b6\nCreate a TwitterTweetLoader from access tokens and secrets.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad tweets.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.twitter.TwitterTweetLoader.html"} {"id": "68d85d9828db-1", "text": "Load Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.twitter.TwitterTweetLoader.html"} {"id": "4f3503f8db4d-0", "text": "langchain.document_loaders.text.TextLoader\u00b6\nclass langchain.document_loaders.text.TextLoader(file_path: str, encoding: Optional[str] = None, autodetect_encoding: bool = False)[source]\u00b6\nBases: BaseLoader\nLoad text files.\nParameters\nfile_path \u2013 Path to the file to load.\nencoding \u2013 File encoding to use. If None, the file will be loaded with the default system encoding.\nautodetect_encoding \u2013 Whether to try to autodetect the file encoding\nif the specified encoding fails.\nInitialize with file path.\nMethods\n__init__(file_path[,\u00a0encoding,\u00a0...])\nInitialize with file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad from file path.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad from file path.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
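TextLoader's autodetect_encoding flag and the detect_file_encodings helper documented earlier address the same failure: a file that does not decode with the requested encoding. A sketch of wiring them together by hand; the path is a placeholder, and it assumes each FileEncoding tuple exposes an encoding field:
```python
from langchain.document_loaders import TextLoader
from langchain.document_loaders.helpers import detect_file_encodings

path = "notes.txt"  # placeholder file
try:
    docs = TextLoader(path, encoding="utf-8").load()
except (UnicodeDecodeError, RuntimeError):
    # Retry with the highest-confidence detected encoding.
    best = detect_file_encodings(path)[0]
    docs = TextLoader(path, encoding=best.encoding).load()
```
Passing autodetect_encoding=True asks the loader to perform a fallback along these lines internally.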
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.text.TextLoader.html"} {"id": "21585e4db667-0", "text": "langchain.document_loaders.rtf.UnstructuredRTFLoader\u00b6\nclass langchain.document_loaders.rtf.UnstructuredRTFLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]\u00b6\nBases: UnstructuredFileLoader\nLoader that uses unstructured to load rtf files.\nInitialize with file path.\nMethods\n__init__(file_path[,\u00a0mode])\nInitialize with file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad file.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document]\u00b6\nLoad file.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rtf.UnstructuredRTFLoader.html"} {"id": "3023df87bf82-0", "text": "langchain.document_loaders.github.BaseGitHubLoader\u00b6\nclass langchain.document_loaders.github.BaseGitHubLoader(*, repo: str, access_token: str)[source]\u00b6\nBases: BaseLoader, BaseModel, ABC\nLoad issues of a GitHub repository.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam access_token: str [Required]\u00b6\nPersonal access token - see https://github.com/settings/tokens?type=beta\nparam repo: str [Required]\u00b6\nName of repository\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nabstract load() \u2192 List[Document]\u00b6\nLoad data into Document objects.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
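The load_and_split signature repeated throughout these entries accepts any TextSplitter; only when text_splitter is None does the default RecursiveCharacterTextSplitter apply. A sketch of overriding it, with a placeholder file path:
```python
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter

splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=50)
# Chunks come back as ordinary Document objects.
chunks = TextLoader("notes.txt").load_and_split(text_splitter=splitter)
```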
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that access token exists in environment.\nproperty headers: Dict[str, str]\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.BaseGitHubLoader.html"} {"id": "1930de7da119-0", "text": "langchain.document_loaders.url_playwright.PlaywrightURLLoader\u00b6\nclass langchain.document_loaders.url_playwright.PlaywrightURLLoader(urls: List[str], continue_on_failure: bool = True, headless: bool = True, remove_selectors: Optional[List[str]] = None)[source]\u00b6\nBases: BaseLoader\nLoader that uses Playwright to load a page and unstructured to parse the HTML.\nThis is useful for loading pages that require JavaScript to render.\nurls\u00b6\nList of URLs to load.\nType\nList[str]\ncontinue_on_failure\u00b6\nIf True, continue loading other URLs on failure.\nType\nbool\nheadless\u00b6\nIf True, the browser will run in headless mode.\nType\nbool\nLoad a list of URLs using Playwright and unstructured.\nMethods\n__init__(urls[,\u00a0continue_on_failure,\u00a0...])\nLoad a list of URLs using Playwright and unstructured.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad the specified URLs using Playwright and create Document instances.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad the specified URLs using Playwright and create Document instances.\nReturns\nA list of Document instances with loaded content.\nReturn type\nList[Document]\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url_playwright.PlaywrightURLLoader.html"} {"id": "c092f24861c1-0", "text": "langchain.document_loaders.s3_file.S3FileLoader\u00b6\nclass langchain.document_loaders.s3_file.S3FileLoader(bucket: str, key: str)[source]\u00b6\nBases: BaseLoader\nLoads documents from Amazon S3.\nInitialize with bucket and key name.\nMethods\n__init__(bucket,\u00a0key)\nInitialize with bucket and key name.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad documents.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad documents.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
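The PlaywrightURLLoader parameters above translate directly into a call; this sketch assumes the playwright package and its browser binaries are installed, and the URL and selectors are placeholders:
```python
from langchain.document_loaders import PlaywrightURLLoader

loader = PlaywrightURLLoader(
    urls=["https://example.com"],           # placeholder URL
    continue_on_failure=True,               # skip pages that fail to render
    headless=True,
    remove_selectors=["header", "footer"],  # strip boilerplate elements
)
docs = loader.load()
```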
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.s3_file.S3FileLoader.html"} {"id": "bae9db184a6f-0", "text": "langchain.document_loaders.open_city_data.OpenCityDataLoader\u00b6\nclass langchain.document_loaders.open_city_data.OpenCityDataLoader(city_id: str, dataset_id: str, limit: int)[source]\u00b6\nBases: BaseLoader\nLoader that loads Open city data.\nInitialize with dataset_id\nMethods\n__init__(city_id,\u00a0dataset_id,\u00a0limit)\nInitialize with dataset_id\nlazy_load()\nLazy load records.\nload()\nLoad records.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document][source]\u00b6\nLazy load records.\nload() \u2192 List[Document][source]\u00b6\nLoad records.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.open_city_data.OpenCityDataLoader.html"} {"id": "9fc11293957a-0", "text": "langchain.document_loaders.googledrive.GoogleDriveLoader\u00b6\nclass langchain.document_loaders.googledrive.GoogleDriveLoader(*, service_account_key: Path = PosixPath('/home/docs/.credentials/keys.json'), credentials_path: Path = PosixPath('/home/docs/.credentials/credentials.json'), token_path: Path = PosixPath('/home/docs/.credentials/token.json'), folder_id: Optional[str] = None, document_ids: Optional[List[str]] = None, file_ids: Optional[List[str]] = None, recursive: bool = False, file_types: Optional[Sequence[str]] = None, load_trashed_files: bool = False, file_loader_cls: Any = None, file_loader_kwargs: Dict[str, Any] = {})[source]\u00b6\nBases: BaseLoader, BaseModel\nLoads Google Docs from Google Drive.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')\u00b6\nPath to the credentials file.\nparam document_ids: Optional[List[str]] = None\u00b6\nThe document ids to load from.\nparam file_ids: Optional[List[str]] = None\u00b6\nThe file ids to load from.\nparam file_loader_cls: Any = None\u00b6\nThe file loader class to use.\nparam file_loader_kwargs: Dict[str, Any] = {}\u00b6\nThe file loader kwargs to use.\nparam file_types: Optional[Sequence[str]] = None\u00b6\nThe file types to load. Only applies when folder_id is given.\nparam folder_id: Optional[str] = None\u00b6\nThe folder id to load from.\nparam load_trashed_files: bool = False\u00b6\nWhether to load trashed files. Only applies when folder_id is given.\nparam recursive: bool = False\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.googledrive.GoogleDriveLoader.html"} {"id": "9fc11293957a-1", "text": "param recursive: bool = False\u00b6\nWhether to load recursively. 
Only applies when folder_id is given.\nparam service_account_key: pathlib.Path = PosixPath('/home/docs/.credentials/keys.json')\u00b6\nPath to the service account key file.\nparam token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json')\u00b6\nPath to the token file.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad documents.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nvalidator validate_credentials_path\u00a0 \u00bb\u00a0 credentials_path[source]\u00b6\nValidate that credentials_path exists.\nvalidator validate_inputs\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that either folder_id or document_ids is set, but not both.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.googledrive.GoogleDriveLoader.html"} {"id": "90ab19f31ef6-0", "text": "langchain.document_loaders.iugu.IuguLoader\u00b6\nclass langchain.document_loaders.iugu.IuguLoader(resource: str, api_token: Optional[str] = None)[source]\u00b6\nBases: BaseLoader\nLoader that fetches data from IUGU.\nInitialize the IUGU resource.\nParameters\nresource \u2013 The name of the resource to fetch.\napi_token \u2013 The IUGU API token to use.\nMethods\n__init__(resource[,\u00a0api_token])\nInitialize the IUGU resource.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad data into Document objects.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad data into Document objects.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
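GoogleDriveLoader's validate_inputs validator above requires either folder_id or document_ids, not both. A sketch using a folder, with a placeholder id and the default credential paths shown above:
```python
from langchain.document_loaders import GoogleDriveLoader

loader = GoogleDriveLoader(
    folder_id="1A2b3C4d5E",   # placeholder; mutually exclusive with document_ids
    recursive=False,          # do not descend into subfolders
    load_trashed_files=False,
)
docs = loader.load()
```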
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.iugu.IuguLoader.html"} {"id": "276ad4de5c17-0", "text": "langchain.document_loaders.parsers.grobid.ServerUnavailableException\u00b6\nclass langchain.document_loaders.parsers.grobid.ServerUnavailableException[source]\u00b6\nBases: Exception\nadd_note()\u00b6\nException.add_note(note) \u2013\nadd a note to the exception\nwith_traceback()\u00b6\nException.with_traceback(tb) \u2013\nset self.__traceback__ to tb and return self.\nargs\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.grobid.ServerUnavailableException.html"} {"id": "a9236c5c1fa9-0", "text": "langchain.document_loaders.srt.SRTLoader\u00b6\nclass langchain.document_loaders.srt.SRTLoader(file_path: str)[source]\u00b6\nBases: BaseLoader\nLoader for .srt (subtitle) files.\nInitialize with file path.\nMethods\n__init__(file_path)\nInitialize with file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad using pysrt file.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad using pysrt file.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.srt.SRTLoader.html"} {"id": "2020d837cba1-0", "text": "langchain.document_loaders.xml.UnstructuredXMLLoader\u00b6\nclass langchain.document_loaders.xml.UnstructuredXMLLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]\u00b6\nBases: UnstructuredFileLoader\nLoader that uses unstructured to load XML files.\nInitialize with file path.\nMethods\n__init__(file_path[,\u00a0mode])\nInitialize with file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad file.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document]\u00b6\nLoad file.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.xml.UnstructuredXMLLoader.html"} {"id": "88c87cd897ec-0", "text": "langchain.document_loaders.azlyrics.AZLyricsLoader\u00b6\nclass langchain.document_loaders.azlyrics.AZLyricsLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify_ssl: Optional[bool] = True, proxies: Optional[dict] = None)[source]\u00b6\nBases: WebBaseLoader\nLoader that loads AZLyrics webpages.\nInitialize with webpage path.\nMethods\n__init__(web_path[,\u00a0header_template,\u00a0...])\nInitialize with webpage path.\naload()\nLoad text from the urls in web_path async into Documents.\nfetch_all(urls)\nFetch all urls concurrently with rate limiting.\nlazy_load()\nLazy load text from the url(s) in web_path.\nload()\nLoad webpages into Documents.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nscrape([parser])\nScrape data from webpage and return it in BeautifulSoup format.\nscrape_all(urls[,\u00a0parser])\nFetch all urls, then return soups for all results.\nAttributes\nbs_get_text_kwargs\nkwargs for beatifulsoup4 get_text\ndefault_parser\nDefault parser to use for BeautifulSoup.\nraise_for_status\nRaise an exception if http status code denotes an error.\nrequests_kwargs\nkwargs for requests\nrequests_per_second\nMax number of concurrent requests to make.\nweb_path\naload() \u2192 List[Document]\u00b6\nLoad text from the urls in web_path async into Documents.\nasync fetch_all(urls: List[str]) \u2192 Any\u00b6\nFetch all urls concurrently with rate limiting.\nlazy_load() \u2192 Iterator[Document]\u00b6\nLazy load text from the url(s) in web_path.\nload() \u2192 List[Document][source]\u00b6\nLoad webpages into Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azlyrics.AZLyricsLoader.html"} {"id": "88c87cd897ec-1", "text": "load() \u2192 List[Document][source]\u00b6\nLoad webpages into Documents.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nscrape(parser: Optional[str] = None) \u2192 Any\u00b6\nScrape data from webpage and return it in BeautifulSoup format.\nscrape_all(urls: List[str], parser: Optional[str] = None) \u2192 List[Any]\u00b6\nFetch all urls, then return soups for all results.\nbs_get_text_kwargs: Dict[str, Any] = {}\u00b6\nkwargs for beatifulsoup4 get_text\ndefault_parser: str = 'html.parser'\u00b6\nDefault parser to use for BeautifulSoup.\nraise_for_status: bool = False\u00b6\nRaise an exception if http status code denotes an error.\nrequests_kwargs: Dict[str, Any] = {}\u00b6\nkwargs for requests\nrequests_per_second: int = 2\u00b6\nMax number of concurrent requests to make.\nproperty web_path: str\u00b6\nweb_paths: List[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azlyrics.AZLyricsLoader.html"} {"id": "61ff40dcbe9d-0", "text": "langchain.document_loaders.telegram.concatenate_rows\u00b6\nlangchain.document_loaders.telegram.concatenate_rows(row: dict) \u2192 str[source]\u00b6\nCombine message information in a readable format ready to be used.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.concatenate_rows.html"} {"id": "4e375b3a768a-0", "text": "langchain.document_loaders.obsidian.ObsidianLoader\u00b6\nclass langchain.document_loaders.obsidian.ObsidianLoader(path: str, encoding: str = 'UTF-8', collect_metadata: bool = True)[source]\u00b6\nBases: BaseLoader\nLoader that loads Obsidian files from disk.\nInitialize with path.\nMethods\n__init__(path[,\u00a0encoding,\u00a0collect_metadata])\nInitialize with path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad documents.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nAttributes\nFRONT_MATTER_REGEX\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad documents.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
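AZLyricsLoader inherits the WebBaseLoader attributes listed above (requests_per_second, raise_for_status, requests_kwargs), so throttling can be adjusted on the instance before fetching; a sketch under that assumption, with a placeholder URL:
```python
from langchain.document_loaders import AZLyricsLoader

loader = AZLyricsLoader(
    web_path="https://www.azlyrics.com/lyrics/a/song.html"  # placeholder
)
loader.requests_per_second = 1     # stay under the site's rate limits
loader.raise_for_status = True     # surface HTTP errors instead of ignoring them
loader.requests_kwargs = {"timeout": 10}
docs = loader.load()
```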
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nFRONT_MATTER_REGEX = re.compile('^---\\\\n(.*?)\\\\n---\\\\n', re.MULTILINE|re.DOTALL)\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.obsidian.ObsidianLoader.html"} {"id": "8d448f38dffb-0", "text": "langchain.document_loaders.acreom.AcreomLoader\u00b6\nclass langchain.document_loaders.acreom.AcreomLoader(path: str, encoding: str = 'UTF-8', collect_metadata: bool = True)[source]\u00b6\nBases: BaseLoader\nLoader that loads acreom vault from a directory.\nMethods\n__init__(path[,\u00a0encoding,\u00a0collect_metadata])\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad data into Document objects.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nAttributes\nFRONT_MATTER_REGEX\nRegex to match front matter metadata in markdown files.\nfile_path\nPath to the directory containing the markdown files.\nencoding\nEncoding to use when reading the files.\ncollect_metadata\nWhether to collect metadata from the front matter.\nlazy_load() \u2192 Iterator[Document][source]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad data into Document objects.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nFRONT_MATTER_REGEX = re.compile('^---\\\\n(.*?)\\\\n---\\\\n', re.MULTILINE|re.DOTALL)\u00b6\nRegex to match front matter metadata in markdown files.\ncollect_metadata\u00b6\nWhether to collect metadata from the front matter.\nencoding\u00b6\nEncoding to use when reading the files.\nfile_path\u00b6\nPath to the directory containing the markdown files.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.acreom.AcreomLoader.html"} {"id": "5a063aeffa69-0", "text": "langchain.document_loaders.unstructured.UnstructuredFileIOLoader\u00b6\nclass langchain.document_loaders.unstructured.UnstructuredFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', **unstructured_kwargs: Any)[source]\u00b6\nBases: UnstructuredBaseLoader\nUnstructuredFileIOLoader uses unstructured to load files. The file loader\nuses the unstructured partition function and will automatically detect the file\ntype. You can run the loader in one of two modes: \u201csingle\u201d and \u201celements\u201d.\nIf you use \u201csingle\u201d mode, the document will be returned as a single\nlangchain Document object. 
If you use \u201celements\u201d mode, the unstructured\nlibrary will split the document into elements such as Title and NarrativeText.\nYou can pass in additional unstructured kwargs after mode to apply\ndifferent unstructured settings.\nExamples\n```python\nfrom langchain.document_loaders import UnstructuredFileIOLoader\n\nwith open('example.pdf', 'rb') as f:\n    loader = UnstructuredFileIOLoader(f, mode='elements', strategy='fast')\n    docs = loader.load()\n```\nReferences\nhttps://unstructured-io.github.io/unstructured/bricks.html#partition\nInitialize with a file object.\nMethods\n__init__(file[,\u00a0mode])\nInitialize with a file object.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad file.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document]\u00b6\nLoad file.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredFileIOLoader.html"} {"id": "5a063aeffa69-1", "text": "Load Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredFileIOLoader.html"} {"id": "8a8cc227f8d6-0", "text": "langchain.document_loaders.facebook_chat.FacebookChatLoader\u00b6\nclass langchain.document_loaders.facebook_chat.FacebookChatLoader(path: str)[source]\u00b6\nBases: BaseLoader\nLoads a Facebook Messenger JSON directory dump.\nInitialize with a path.\nMethods\n__init__(path)\nInitialize with a path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad documents.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad documents.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.facebook_chat.FacebookChatLoader.html"} {"id": "c77f7df6d5b0-0", "text": "langchain.document_loaders.blob_loaders.schema.Blob\u00b6\nclass langchain.document_loaders.blob_loaders.schema.Blob(*, data: Optional[Union[bytes, str]] = None, mimetype: Optional[str] = None, encoding: str = 'utf-8', path: Optional[Union[str, PurePath]] = None)[source]\u00b6\nBases: BaseModel\nA blob is used to represent raw data by either reference or value.\nProvides an interface to materialize the blob in different representations, and\nhelp to decouple the development of data loaders from the downstream parsing of\nthe raw data.\nInspired by: https://developer.mozilla.org/en-US/docs/Web/API/Blob\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam data: Optional[Union[bytes, str]] = None\u00b6\nparam encoding: str = 'utf-8'\u00b6\nparam mimetype: Optional[str] = None\u00b6\nparam path: Optional[Union[str, pathlib.PurePath]] = None\u00b6\nas_bytes() \u2192 bytes[source]\u00b6\nRead data as bytes.\nas_bytes_io() \u2192 Generator[Union[BytesIO, BufferedReader], None, None][source]\u00b6\nRead data as a byte stream.\nas_string() \u2192 str[source]\u00b6\nRead data as a string.\nvalidator check_blob_is_valid\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nVerify that either data or path is provided.\nclassmethod from_data(data: Union[str, bytes], *, encoding: str = 'utf-8', mime_type: Optional[str] = None, path: Optional[str] = None) \u2192 Blob[source]\u00b6\nInitialize the blob from in-memory data.\nParameters\ndata \u2013 the in-memory data associated with the blob\nencoding \u2013 Encoding to use if decoding the bytes into a string", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.Blob.html"} {"id": "c77f7df6d5b0-1", "text": "encoding \u2013 Encoding to use if decoding the bytes into a string\nmime_type \u2013 if provided, will be set as the mime-type of the data\npath \u2013 if provided, will be set as the source from which the data came\nReturns\nBlob instance\nclassmethod from_path(path: Union[str, PurePath], *, encoding: str = 'utf-8', mime_type: Optional[str] = None, guess_type: bool = True) \u2192 Blob[source]\u00b6\nLoad the blob from a path like object.\nParameters\npath \u2013 path like object to file to be read\nencoding \u2013 Encoding to use if decoding the bytes into a string\nmime_type \u2013 if provided, will be set as the mime-type of the data\nguess_type \u2013 If True, the mimetype will be guessed from the file extension,\nif a mime-type was not provided\nReturns\nBlob instance\nproperty source: Optional[str]\u00b6\nThe source location of the blob as string if known otherwise none.\nmodel Config[source]\u00b6\nBases: object\narbitrary_types_allowed = True\u00b6\nfrozen = True\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.Blob.html"} {"id": "b2da08d33074-0", "text": "langchain.document_loaders.docugami.DocugamiLoader\u00b6\nclass langchain.document_loaders.docugami.DocugamiLoader(*, api: str = 'https://api.docugami.com/v1preview1', access_token: 
Optional[str] = None, docset_id: Optional[str] = None, document_ids: Optional[Sequence[str]] = None, file_paths: Optional[Sequence[Union[Path, str]]] = None, min_chunk_size: int = 32)[source]\u00b6\nBases: BaseLoader, BaseModel\nLoads processed docs from Docugami.\nTo use, you should have the lxml python package installed.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam access_token: Optional[str] = None\u00b6\nThe Docugami API access token to use.\nparam api: str = 'https://api.docugami.com/v1preview1'\u00b6\nThe Docugami API endpoint to use.\nparam docset_id: Optional[str] = None\u00b6\nThe Docugami API docset ID to use.\nparam document_ids: Optional[Sequence[str]] = None\u00b6\nThe Docugami API document IDs to use.\nparam file_paths: Optional[Sequence[Union[pathlib.Path, str]]] = None\u00b6\nThe local file paths to use.\nparam min_chunk_size: int = 32\u00b6\nThe minimum chunk size to use when parsing DGML. Defaults to 32.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad documents.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.docugami.DocugamiLoader.html"} {"id": "b2da08d33074-1", "text": "Load Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nvalidator validate_local_or_remote\u00a0 \u00bb\u00a0 all fields[source]\u00b6\nValidate that either local file paths are given, or remote API docset ID.\nParameters\nvalues \u2013 The values to validate.\nReturns\nThe validated values.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.docugami.DocugamiLoader.html"} {"id": "3f39de8bd21e-0", "text": "langchain.document_loaders.psychic.PsychicLoader\u00b6\nclass langchain.document_loaders.psychic.PsychicLoader(api_key: str, account_id: str, connector_id: Optional[str] = None)[source]\u00b6\nBases: BaseLoader\nLoader that loads documents from Psychic.dev.\nInitialize with API key, connector id, and account id.\nMethods\n__init__(api_key,\u00a0account_id[,\u00a0connector_id])\nInitialize with API key, connector id, and account id.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad documents.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad documents.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.psychic.PsychicLoader.html"} {"id": "ad8ce9606b33-0", "text": "langchain.document_loaders.ifixit.IFixitLoader\u00b6\nclass langchain.document_loaders.ifixit.IFixitLoader(web_path: str)[source]\u00b6\nBases: BaseLoader\nLoad iFixit repair guides, device wikis and answers.\niFixit is the largest, open repair community on the web. The site contains nearly\n100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is\nlicensed under CC-BY.\nThis loader will allow you to download the text of a repair guide, text of Q&A\u2019s\nand wikis from devices on iFixit using their open APIs and web scraping.\nInitialize with a web path.\nMethods\n__init__(web_path)\nInitialize with a web path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad data into Document objects.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nload_device([url_override,\u00a0include_guides])\nLoads a device\nload_guide([url_override])\nLoad a guide\nload_questions_and_answers([url_override])\nLoad a list of questions and answers.\nload_suggestions([query,\u00a0doc_type])\nLoad suggestions.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad data into Document objects.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nload_device(url_override: Optional[str] = None, include_guides: bool = True) \u2192 List[Document][source]\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.ifixit.IFixitLoader.html"} {"id": "ad8ce9606b33-1", "text": "Loads a device\nParameters\nurl_override \u2013 A URL to override the default URL.\ninclude_guides \u2013 Whether to include guides linked to from the device.\nDefaults to True.\nReturns:\nload_guide(url_override: Optional[str] = None) \u2192 List[Document][source]\u00b6\nLoad a guide\nParameters\nurl_override \u2013 A URL to override the default URL.\nReturns: List[Document]\nload_questions_and_answers(url_override: Optional[str] = None) \u2192 List[Document][source]\u00b6\nLoad a list of questions and answers.\nParameters\nurl_override \u2013 A URL to override the default URL.\nReturns: List[Document]\nstatic load_suggestions(query: str = '', doc_type: str = 'all') \u2192 List[Document][source]\u00b6\nLoad suggestions.\nParameters\nquery \u2013 A query string\ndoc_type \u2013 The type of document to search for. 
Can be one of \u201call\u201d,\n\u201cdevice\u201d, \u201cguide\u201d, \u201cteardown\u201d, \u201canswer\u201d, \u201cwiki\u201d.\nReturns:", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.ifixit.IFixitLoader.html"} {"id": "209add47fceb-0", "text": "langchain.document_loaders.toml.TomlLoader\u00b6\nclass langchain.document_loaders.toml.TomlLoader(source: Union[str, Path])[source]\u00b6\nBases: BaseLoader\nA TOML document loader that inherits from the BaseLoader class.\nThis class can be initialized with either a single source file or a source\ndirectory containing TOML files.\nInitialize the TomlLoader with a source file or directory.\nMethods\n__init__(source)\nInitialize the TomlLoader with a source file or directory.\nlazy_load()\nLazily load the TOML documents from the source file or directory.\nload()\nLoad and return all documents.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document][source]\u00b6\nLazily load the TOML documents from the source file or directory.\nload() \u2192 List[Document][source]\u00b6\nLoad and return all documents.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.toml.TomlLoader.html"} {"id": "97348a48988c-0", "text": "langchain.document_loaders.image_captions.ImageCaptionLoader\u00b6\nclass langchain.document_loaders.image_captions.ImageCaptionLoader(path_images: Union[str, List[str]], blip_processor: str = 'Salesforce/blip-image-captioning-base', blip_model: str = 'Salesforce/blip-image-captioning-base')[source]\u00b6\nBases: BaseLoader\nLoads the captions of an image\nInitialize with a list of image paths\nParameters\npath_images \u2013 A list of image paths.\nblip_processor \u2013 The name of the pre-trained BLIP processor.\nblip_model \u2013 The name of the pre-trained BLIP model.\nMethods\n__init__(path_images[,\u00a0blip_processor,\u00a0...])\nInitialize with a list of image paths\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad from a list of image files\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad from a list of image files\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
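ImageCaptionLoader's constructor above is the whole API surface; this sketch assumes the transformers package (which hosts the default Salesforce BLIP checkpoints) is installed, and the image path is a placeholder:
```python
from langchain.document_loaders import ImageCaptionLoader

loader = ImageCaptionLoader(
    path_images=["photo.jpg"],  # placeholder image file
    # blip_processor and blip_model default to
    # 'Salesforce/blip-image-captioning-base'.
)
docs = loader.load()  # one caption Document per image
```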
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.image_captions.ImageCaptionLoader.html"} {"id": "0d8c18f2ed47-0", "text": "langchain.document_loaders.parsers.language.code_segmenter.CodeSegmenter\u00b6\nclass langchain.document_loaders.parsers.language.code_segmenter.CodeSegmenter(code: str)[source]\u00b6\nBases: ABC\nThe abstract class for the code segmenter.\nMethods\n__init__(code)\nextract_functions_classes()\nis_valid()\nsimplify_code()\nabstract extract_functions_classes() \u2192 List[str][source]\u00b6\nis_valid() \u2192 bool[source]\u00b6\nabstract simplify_code() \u2192 str[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.code_segmenter.CodeSegmenter.html"} {"id": "76db6e8b5f88-0", "text": "langchain.document_loaders.imsdb.IMSDbLoader\u00b6\nclass langchain.document_loaders.imsdb.IMSDbLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify_ssl: Optional[bool] = True, proxies: Optional[dict] = None)[source]\u00b6\nBases: WebBaseLoader\nLoads IMSDb webpages.\nInitialize with webpage path.\nMethods\n__init__(web_path[,\u00a0header_template,\u00a0...])\nInitialize with webpage path.\naload()\nLoad text from the urls in web_path async into Documents.\nfetch_all(urls)\nFetch all urls concurrently with rate limiting.\nlazy_load()\nLazy load text from the url(s) in web_path.\nload()\nLoad webpage.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nscrape([parser])\nScrape data from webpage and return it in BeautifulSoup format.\nscrape_all(urls[,\u00a0parser])\nFetch all urls, then return soups for all results.\nAttributes\nbs_get_text_kwargs\nkwargs for beatifulsoup4 get_text\ndefault_parser\nDefault parser to use for BeautifulSoup.\nraise_for_status\nRaise an exception if http status code denotes an error.\nrequests_kwargs\nkwargs for requests\nrequests_per_second\nMax number of concurrent requests to make.\nweb_path\naload() \u2192 List[Document]\u00b6\nLoad text from the urls in web_path async into Documents.\nasync fetch_all(urls: List[str]) \u2192 Any\u00b6\nFetch all urls concurrently with rate limiting.\nlazy_load() \u2192 Iterator[Document]\u00b6\nLazy load text from the url(s) in web_path.\nload() \u2192 List[Document][source]\u00b6\nLoad webpage.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.imsdb.IMSDbLoader.html"} {"id": "76db6e8b5f88-1", "text": "Load Documents and split into chunks. 
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nscrape(parser: Optional[str] = None) \u2192 Any\u00b6\nScrape data from webpage and return it in BeautifulSoup format.\nscrape_all(urls: List[str], parser: Optional[str] = None) \u2192 List[Any]\u00b6\nFetch all urls, then return soups for all results.\nbs_get_text_kwargs: Dict[str, Any] = {}\u00b6\nkwargs for beatifulsoup4 get_text\ndefault_parser: str = 'html.parser'\u00b6\nDefault parser to use for BeautifulSoup.\nraise_for_status: bool = False\u00b6\nRaise an exception if http status code denotes an error.\nrequests_kwargs: Dict[str, Any] = {}\u00b6\nkwargs for requests\nrequests_per_second: int = 2\u00b6\nMax number of concurrent requests to make.\nproperty web_path: str\u00b6\nweb_paths: List[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.imsdb.IMSDbLoader.html"} {"id": "2c875d9c0e04-0", "text": "langchain.document_loaders.brave_search.BraveSearchLoader\u00b6\nclass langchain.document_loaders.brave_search.BraveSearchLoader(query: str, api_key: str, search_kwargs: Optional[dict] = None)[source]\u00b6\nBases: BaseLoader\nLoads a query result from Brave Search engine into a list of Documents.\nInitializes the BraveLoader.\nParameters\nquery \u2013 The query to search for.\napi_key \u2013 The API key to use.\nsearch_kwargs \u2013 The search kwargs to use.\nMethods\n__init__(query,\u00a0api_key[,\u00a0search_kwargs])\nInitializes the BraveLoader.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad data into Document objects.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document][source]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad data into Document objects.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.brave_search.BraveSearchLoader.html"} {"id": "cd8c3396dda6-0", "text": "langchain.document_loaders.notion.NotionDirectoryLoader\u00b6\nclass langchain.document_loaders.notion.NotionDirectoryLoader(path: str)[source]\u00b6\nBases: BaseLoader\nLoader that loads Notion directory dump.\nInitialize with path.\nMethods\n__init__(path)\nInitialize with path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad documents.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad documents.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
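BraveSearchLoader's three constructor arguments above are all it takes to turn a web search into Documents; the environment variable below is a placeholder, and the count key in search_kwargs is an assumed engine parameter passed through to Brave Search:
```python
import os

from langchain.document_loaders import BraveSearchLoader

loader = BraveSearchLoader(
    query="langchain document loaders",
    api_key=os.environ["BRAVE_API_KEY"],  # placeholder env var
    search_kwargs={"count": 5},           # assumed engine parameter
)
docs = loader.load()
```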
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notion.NotionDirectoryLoader.html"} {"id": "1cc825995544-0", "text": "langchain.document_loaders.gutenberg.GutenbergLoader\u00b6\nclass langchain.document_loaders.gutenberg.GutenbergLoader(file_path: str)[source]\u00b6\nBases: BaseLoader\nLoader that uses urllib to load .txt web files.\nInitialize with a file path.\nMethods\n__init__(file_path)\nInitialize with a file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad file.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad file.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gutenberg.GutenbergLoader.html"} {"id": "a9f91648d2a9-0", "text": "langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader\u00b6\nclass langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader(path: str, page_content_column: str = 'text', name: Optional[str] = None, data_dir: Optional[str] = None, data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None, cache_dir: Optional[str] = None, keep_in_memory: Optional[bool] = None, save_infos: bool = False, use_auth_token: Optional[Union[bool, str]] = None, num_proc: Optional[int] = None)[source]\u00b6\nBases: BaseLoader\nLoad Documents from the Hugging Face Hub.\nInitialize the HuggingFaceDatasetLoader.\nParameters\npath \u2013 Path or name of the dataset.\npage_content_column \u2013 Page content column name. Default is \u201ctext\u201d.\nname \u2013 Name of the dataset configuration.\ndata_dir \u2013 Data directory of the dataset configuration.\ndata_files \u2013 Path(s) to source data file(s).\ncache_dir \u2013 Directory to read/write data.\nkeep_in_memory \u2013 Whether to copy the dataset in-memory.\nsave_infos \u2013 Save the dataset information (checksums/size/splits/\u2026).\nDefault is False.\nuse_auth_token \u2013 Bearer token for remote files on the Dataset Hub.\nnum_proc \u2013 Number of processes.\nMethods\n__init__(path[,\u00a0page_content_column,\u00a0name,\u00a0...])\nInitialize the HuggingFaceDatasetLoader.\nlazy_load()\nLoad documents lazily.\nload()\nLoad documents.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document][source]\u00b6\nLoad documents lazily.\nload() \u2192 List[Document][source]\u00b6\nLoad documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader.html"} {"id": "a9f91648d2a9-1", "text": "load() \u2192 List[Document][source]\u00b6\nLoad documents.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
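The HuggingFaceDatasetLoader parameters above mirror the datasets library's load_dataset arguments; a sketch assuming that package is installed, with 'imdb' as an example public dataset:
```python
from langchain.document_loaders import HuggingFaceDatasetLoader

loader = HuggingFaceDatasetLoader(
    path="imdb",                 # example dataset on the Hub
    page_content_column="text",  # the documented default
)
# lazy_load streams rows instead of materializing the whole split.
doc = next(loader.lazy_load())
print(doc.page_content[:80])
```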
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader.html"} {"id": "433b9945f4c2-0", "text": "langchain.document_loaders.sitemap.SitemapLoader\u00b6\nclass langchain.document_loaders.sitemap.SitemapLoader(web_path: str, filter_urls: Optional[List[str]] = None, parsing_function: Optional[Callable] = None, blocksize: Optional[int] = None, blocknum: int = 0, meta_function: Optional[Callable] = None, is_local: bool = False)[source]\u00b6\nBases: WebBaseLoader\nLoader that fetches a sitemap and loads those URLs.\nInitialize with webpage path and optional filter URLs.\nParameters\nweb_path \u2013 URL of the sitemap. Can also be a local path.\nfilter_urls \u2013 list of strings or regexes that will be applied to filter the\nurls that are parsed and loaded\nparsing_function \u2013 Function to parse bs4.Soup output\nblocksize \u2013 number of sitemap locations per block\nblocknum \u2013 the number of the block that should be loaded - zero indexed\nmeta_function \u2013 Function to parse bs4.Soup output for metadata\nremember when setting this method to also copy metadata[\u201cloc\u201d]\nto metadata[\u201csource\u201d] if you are using this field\nis_local \u2013 whether the sitemap is a local file\nMethods\n__init__(web_path[,\u00a0filter_urls,\u00a0...])\nInitialize with webpage path and optional filter URLs.\naload()\nLoad text from the urls in web_path async into Documents.\nfetch_all(urls)\nFetch all urls concurrently with rate limiting.\nlazy_load()\nLazy load text from the url(s) in web_path.\nload()\nLoad sitemap.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nparse_sitemap(soup)\nParse sitemap xml and load into a list of dicts.\nscrape([parser])\nScrape data from webpage and return it in BeautifulSoup format.\nscrape_all(urls[,\u00a0parser])", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sitemap.SitemapLoader.html"} {"id": "433b9945f4c2-1", "text": "scrape_all(urls[,\u00a0parser])\nFetch all urls, then return soups for all results.\nAttributes\nbs_get_text_kwargs\nkwargs for beautifulsoup4 get_text\ndefault_parser\nDefault parser to use for BeautifulSoup.\nraise_for_status\nRaise an exception if http status code denotes an error.\nrequests_kwargs\nkwargs for requests\nrequests_per_second\nMax number of concurrent requests to make.\nweb_path\naload() \u2192 List[Document]\u00b6\nLoad text from the urls in web_path async into Documents.\nasync fetch_all(urls: List[str]) \u2192 Any\u00b6\nFetch all urls concurrently with rate limiting.\nlazy_load() \u2192 Iterator[Document]\u00b6\nLazy load text from the url(s) in web_path.\nload() \u2192 List[Document][source]\u00b6\nLoad sitemap.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
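Example (an editorial sketch; the sitemap URL and filter regex are assumed placeholders):

```python
from langchain.document_loaders.sitemap import SitemapLoader

# Fetch a sitemap, keep only URLs matching the optional filter, then load them.
loader = SitemapLoader(
    web_path="https://example.com/sitemap.xml",
    filter_urls=["https://example.com/blog/.*"],
)
docs = loader.load()
```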
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nparse_sitemap(soup: Any) \u2192 List[dict][source]\u00b6\nParse sitemap xml and load into a list of dicts.\nscrape(parser: Optional[str] = None) \u2192 Any\u00b6\nScrape data from webpage and return it in BeautifulSoup format.\nscrape_all(urls: List[str], parser: Optional[str] = None) \u2192 List[Any]\u00b6\nFetch all urls, then return soups for all results.\nbs_get_text_kwargs: Dict[str, Any] = {}\u00b6\nkwargs for beautifulsoup4 get_text\ndefault_parser: str = 'html.parser'\u00b6\nDefault parser to use for BeautifulSoup.\nraise_for_status: bool = False\u00b6\nRaise an exception if http status code denotes an error.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sitemap.SitemapLoader.html"} {"id": "433b9945f4c2-2", "text": "Raise an exception if http status code denotes an error.\nrequests_kwargs: Dict[str, Any] = {}\u00b6\nkwargs for requests\nrequests_per_second: int = 2\u00b6\nMax number of concurrent requests to make.\nproperty web_path: str\u00b6\nweb_paths: List[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sitemap.SitemapLoader.html"} {"id": "ec77a94b792b-0", "text": "langchain.document_loaders.spreedly.SpreedlyLoader\u00b6\nclass langchain.document_loaders.spreedly.SpreedlyLoader(access_token: str, resource: str)[source]\u00b6\nBases: BaseLoader\nLoader that fetches data from Spreedly API.\nMethods\n__init__(access_token,\u00a0resource)\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad data into Document objects.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad data into Document objects.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.spreedly.SpreedlyLoader.html"} {"id": "5033f752659b-0", "text": "langchain.document_loaders.merge.MergedDataLoader\u00b6\nclass langchain.document_loaders.merge.MergedDataLoader(loaders: List)[source]\u00b6\nBases: BaseLoader\nMerge documents from a list of loaders.\nInitialize with a list of loaders.\nMethods\n__init__(loaders)\nInitialize with a list of loaders.\nlazy_load()\nLazy load docs from each individual loader.\nload()\nLoad docs.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document][source]\u00b6\nLazy load docs from each individual loader.\nload() \u2192 List[Document][source]\u00b6\nLoad docs.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
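Example (an editorial sketch; the two inner loaders and their paths are assumed placeholders):

```python
from langchain.document_loaders import TextLoader, WebBaseLoader
from langchain.document_loaders.merge import MergedDataLoader

# Concatenate the documents produced by several loaders into one list.
loader_all = MergedDataLoader(
    loaders=[TextLoader("notes.txt"), WebBaseLoader("https://example.com")]
)
docs = loader_all.load()
```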
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.merge.MergedDataLoader.html"} {"id": "a00ba3474d34-0", "text": "langchain.document_loaders.email.UnstructuredEmailLoader\u00b6\nclass langchain.document_loaders.email.UnstructuredEmailLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]\u00b6\nBases: UnstructuredFileLoader\nLoader that uses unstructured to load email files. Works with both\n.eml and .msg files. You can process attachments in addition to the\ne-mail message itself by passing process_attachments=True into the\nconstructor for the loader. By default, attachments will be processed\nwith the unstructured partition function. If you already know the document\ntypes of the attachments, you can specify another partitioning function\nwith the attachment_partitioner kwarg.\nExample\nfrom langchain.document_loaders import UnstructuredEmailLoader\nloader = UnstructuredEmailLoader(\"example_data/fake-email.eml\", mode=\"elements\")\nloader.load()\nExample\nfrom langchain.document_loaders import UnstructuredEmailLoader\nloader = UnstructuredEmailLoader(\n    \"example_data/fake-email-attachment.eml\",\n    mode=\"elements\",\n    process_attachments=True,\n)\nloader.load()\nInitialize with file path.\nMethods\n__init__(file_path[,\u00a0mode])\nInitialize with file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad file.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document]\u00b6\nLoad file.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
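Because load_and_split appears on every loader, one hedged sketch of passing a non-default splitter (the chunk sizes here are arbitrary, and the e-mail path mirrors the example above):

```python
from langchain.document_loaders import UnstructuredEmailLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Split the loaded e-mail into ~500-character chunks instead of the defaults.
loader = UnstructuredEmailLoader("example_data/fake-email.eml")
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = loader.load_and_split(text_splitter=splitter)
```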
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.email.UnstructuredEmailLoader.html"} {"id": "514952020e6d-0", "text": "langchain.document_loaders.embaas.EmbaasLoader\u00b6\nclass langchain.document_loaders.embaas.EmbaasLoader(*, embaas_api_key: Optional[str] = None, api_url: str = 'https://api.embaas.io/v1/document/extract-text/bytes/', params: EmbaasDocumentExtractionParameters = {}, file_path: str, blob_loader: Optional[EmbaasBlobLoader] = None)[source]\u00b6\nBases: BaseEmbaasLoader, BaseLoader\nEmbaas\u2019s document loader.\nTo use, you should have the\nenvironment variable EMBAAS_API_KEY set with your API key, or pass\nit as a named parameter to the constructor.\nExample\n# Default parsing\nfrom langchain.document_loaders.embaas import EmbaasLoader\nloader = EmbaasLoader(file_path=\"example.mp3\")\ndocuments = loader.load()\n# Custom api parameters (create embeddings automatically)\nfrom langchain.document_loaders.embaas import EmbaasBlobLoader\nloader = EmbaasBlobLoader(\n file_path=\"example.pdf\",\n params={\n \"should_embed\": True,\n \"model\": \"e5-large-v2\",\n \"chunk_size\": 256,\n \"chunk_splitter\": \"CharacterTextSplitter\"\n }\n)\ndocuments = loader.load()\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam api_url: str = 'https://api.embaas.io/v1/document/extract-text/bytes/'\u00b6\nThe URL of the embaas document extraction API.\nparam blob_loader: Optional[langchain.document_loaders.embaas.EmbaasBlobLoader] = None\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasLoader.html"} {"id": "514952020e6d-1", "text": "The blob loader to use. If not provided, a default one will be created.\nparam embaas_api_key: Optional[str] = None\u00b6\nThe API key for the embaas document extraction API.\nparam file_path: str [Required]\u00b6\nThe path to the file to load.\nparam params: langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters = {}\u00b6\nAdditional parameters to pass to the embaas document extraction API.\nlazy_load() \u2192 Iterator[Document][source]\u00b6\nLoad the documents from the file path lazily.\nload() \u2192 List[Document][source]\u00b6\nLoad data into Document objects.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document][source]\u00b6\nLoad Documents and split into chunks. 
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nvalidator validate_blob_loader\u00a0 \u00bb\u00a0 blob_loader[source]\u00b6\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate that api key and python package exist in the environment.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasLoader.html"} {"id": "a73b4aedc2b6-0", "text": "langchain.document_loaders.json_loader.JSONLoader\u00b6\nclass langchain.document_loaders.json_loader.JSONLoader(file_path: Union[str, Path], jq_schema: str, content_key: Optional[str] = None, metadata_func: Optional[Callable[[Dict, Dict], Dict]] = None, text_content: bool = True, json_lines: bool = False)[source]\u00b6\nBases: BaseLoader\nLoads a JSON file using a jq schema.\nExample\n[{\"text\": \u2026}, {\"text\": \u2026}, {\"text\": \u2026}] -> schema = .[].text\n{\"key\": [{\"text\": \u2026}, {\"text\": \u2026}, {\"text\": \u2026}]} -> schema = .key[].text\n[\"\", \"\", \"\"] -> schema = .[]\nInitialize the JSONLoader.\nParameters\nfile_path (Union[str, Path]) \u2013 The path to the JSON or JSON Lines file.\njq_schema (str) \u2013 The jq schema to use to extract the data or text from\nthe JSON.\ncontent_key (str) \u2013 The key to use to extract the content from the JSON if\nthe jq_schema results in a list of objects (dict).\nmetadata_func (Callable[Dict, Dict]) \u2013 A function that takes in the JSON\nobject extracted by the jq_schema and the default metadata and returns\na dict of the updated metadata.\ntext_content (bool) \u2013 Boolean flag to indicate whether the content is in\nstring format; defaults to True.\njson_lines (bool) \u2013 Boolean flag to indicate whether the input is in\nJSON Lines format.\nMethods\n__init__(file_path,\u00a0jq_schema[,\u00a0...])\nInitialize the JSONLoader.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad and return documents from the JSON file.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.json_loader.JSONLoader.html"} {"id": "a73b4aedc2b6-1", "text": "load_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad and return documents from the JSON file.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.json_loader.JSONLoader.html"} {"id": "221fe4577134-0", "text": "langchain.document_loaders.csv_loader.UnstructuredCSVLoader\u00b6\nclass langchain.document_loaders.csv_loader.UnstructuredCSVLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]\u00b6\nBases: UnstructuredFileLoader\nLoader that uses unstructured to load CSV files.\nParameters\nfile_path \u2013 The path to the CSV file.\nmode \u2013 The mode to use when loading the CSV file.\nOptional. 
Defaults to \u201csingle\u201d.\n**unstructured_kwargs \u2013 Keyword arguments to pass to unstructured.\nMethods\n__init__(file_path[,\u00a0mode])\nparam file_path\nThe path to the CSV file.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad file.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document]\u00b6\nLoad file.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.csv_loader.UnstructuredCSVLoader.html"} {"id": "67fd0e80f68b-0", "text": "langchain.document_loaders.github.GitHubIssuesLoader\u00b6\nclass langchain.document_loaders.github.GitHubIssuesLoader(*, repo: str, access_token: str, include_prs: bool = True, milestone: Optional[Union[int, Literal['*', 'none']]] = None, state: Optional[Literal['open', 'closed', 'all']] = None, assignee: Optional[str] = None, creator: Optional[str] = None, mentioned: Optional[str] = None, labels: Optional[List[str]] = None, sort: Optional[Literal['created', 'updated', 'comments']] = None, direction: Optional[Literal['asc', 'desc']] = None, since: Optional[str] = None)[source]\u00b6\nBases: BaseGitHubLoader\nLoad issues of a GitHub repository.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam access_token: str [Required]\u00b6\nPersonal access token - see https://github.com/settings/tokens?type=beta\nparam assignee: Optional[str] = None\u00b6\nFilter on assigned user. Pass \u2018none\u2019 for no user and \u2018*\u2019 for any user.\nparam creator: Optional[str] = None\u00b6\nFilter on the user that created the issue.\nparam direction: Optional[Literal['asc', 'desc']] = None\u00b6\nThe direction to sort the results by. Can be one of: \u2018asc\u2019, \u2018desc\u2019.\nparam include_prs: bool = True\u00b6\nIf True, include Pull Requests in results; otherwise ignore them.\nparam labels: Optional[List[str]] = None\u00b6\nLabel names to filter on. Example: bug,ui,@high.\nparam mentioned: Optional[str] = None\u00b6\nFilter on a user that\u2019s mentioned in the issue.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.GitHubIssuesLoader.html"} {"id": "67fd0e80f68b-1", "text": "Filter on a user that\u2019s mentioned in the issue.\nparam milestone: Optional[Union[int, Literal['*', 'none']]] = None\u00b6\nIf integer is passed, it should be a milestone\u2019s number field.\nIf the string \u2018*\u2019 is passed, issues with any milestone are accepted.\nIf the string \u2018none\u2019 is passed, issues without milestones are returned.\nparam repo: str [Required]\u00b6\nName of repository.\nparam since: Optional[str] = None\u00b6\nOnly show issues updated after the given time.\nThis is a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.\nparam sort: Optional[Literal['created', 'updated', 'comments']] = None\u00b6\nWhat to sort results by. 
Can be one of: \u2018created\u2019, \u2018updated\u2019, \u2018comments\u2019.\nDefault is \u2018created\u2019.\nparam state: Optional[Literal['open', 'closed', 'all']] = None\u00b6\nFilter on issue state. Can be one of: \u2018open\u2019, \u2018closed\u2019, \u2018all\u2019.\nlazy_load() \u2192 Iterator[Document][source]\u00b6\nGet issues of a GitHub repository.\nReturns\nA list of Documents with attributes:\npage_content\nmetadata (url, title, creator, created_at, last_update_time, closed_time, number of comments, state, labels, assignee, assignees, milestone, locked, number, is_pull_request)\nload() \u2192 List[Document][source]\u00b6\nGet issues of a GitHub repository.\nReturns\nA list of Documents with attributes:\npage_content\nmetadata (url, title, creator, created_at, last_update_time, closed_time, number of comments, state, labels, assignee, assignees, milestone, locked, number, is_pull_request)\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.GitHubIssuesLoader.html"} {"id": "67fd0e80f68b-2", "text": "Load Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nparse_issue(issue: dict) \u2192 Document[source]\u00b6\nCreate a Document object from a GitHub issue.\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields\u00b6\nValidate that access token exists in environment.\nvalidator validate_since\u00a0 \u00bb\u00a0 since[source]\u00b6\nproperty headers: Dict[str, str]\u00b6\nproperty query_params: str\u00b6\nCreate query parameters for GitHub API.\nproperty url: str\u00b6\nCreate URL for GitHub API.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.GitHubIssuesLoader.html"} {"id": "c39c63dc3888-0", "text": "langchain.document_loaders.pdf.PyPDFLoader\u00b6\nclass langchain.document_loaders.pdf.PyPDFLoader(file_path: str, password: Optional[Union[str, bytes]] = None)[source]\u00b6\nBases: BasePDFLoader\nLoads a PDF with pypdf and chunks at character level.\nLoader also stores page numbers in document metadata.\nInitialize with file path.\nMethods\n__init__(file_path[,\u00a0password])\nInitialize with file path.\nlazy_load()\nLazy load given path as pages.\nload()\nLoad given path as pages.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nAttributes\nsource\nlazy_load() \u2192 Iterator[Document][source]\u00b6\nLazy load given path as pages.\nload() \u2192 List[Document][source]\u00b6\nLoad given path as pages.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nproperty source: str\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PyPDFLoader.html"} {"id": "ab4769c9beec-0", "text": "langchain.document_loaders.email.OutlookMessageLoader\u00b6\nclass langchain.document_loaders.email.OutlookMessageLoader(file_path: str)[source]\u00b6\nBases: BaseLoader\nLoads Outlook Message files using extract_msg.\nhttps://github.com/TeamMsgExtractor/msg-extractor\nInitialize with a file path.\nParameters\nfile_path \u2013 The path to the Outlook Message file.\nMethods\n__init__(file_path)\nInitialize with a file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad data into document objects.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad data into document objects.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.email.OutlookMessageLoader.html"} {"id": "0825f258de41-0", "text": "langchain.document_loaders.parsers.language.python.PythonSegmenter\u00b6\nclass langchain.document_loaders.parsers.language.python.PythonSegmenter(code: str)[source]\u00b6\nBases: CodeSegmenter\nThe code segmenter for Python.\nMethods\n__init__(code)\nextract_functions_classes()\nis_valid()\nsimplify_code()\nextract_functions_classes() \u2192 List[str][source]\u00b6\nis_valid() \u2192 bool[source]\u00b6\nsimplify_code() \u2192 str[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.python.PythonSegmenter.html"} {"id": "3ec0dc0f5bd5-0", "text": "langchain.document_loaders.dataframe.DataFrameLoader\u00b6\nclass langchain.document_loaders.dataframe.DataFrameLoader(data_frame: Any, page_content_column: str = 'text')[source]\u00b6\nBases: BaseLoader\nLoad Pandas DataFrame.\nInitialize with dataframe object.\nParameters\ndata_frame \u2013 Pandas DataFrame object.\npage_content_column \u2013 Name of the column containing the page content.\nDefaults to \u201ctext\u201d.\nMethods\n__init__(data_frame[,\u00a0page_content_column])\nInitialize with dataframe object.\nlazy_load()\nLazy load records from dataframe.\nload()\nLoad full dataframe.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document][source]\u00b6\nLazy load records from dataframe.\nload() \u2192 List[Document][source]\u00b6\nLoad full dataframe.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
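Example (an editorial sketch with a toy DataFrame; the column names are assumptions):

```python
import pandas as pd
from langchain.document_loaders import DataFrameLoader

# Each row becomes one Document; non-content columns land in metadata.
df = pd.DataFrame({"text": ["hello", "world"], "author": ["a", "b"]})
loader = DataFrameLoader(df, page_content_column="text")
docs = loader.load()
```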
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.dataframe.DataFrameLoader.html"} {"id": "289c8df19642-0", "text": "langchain.document_loaders.blackboard.BlackboardLoader\u00b6\nclass langchain.document_loaders.blackboard.BlackboardLoader(blackboard_course_url: str, bbrouter: str, load_all_recursively: bool = True, basic_auth: Optional[Tuple[str, str]] = None, cookies: Optional[dict] = None)[source]\u00b6\nBases: WebBaseLoader\nLoads all documents from a Blackboard course.\nThis loader is not compatible with all Blackboard courses. It is only\ncompatible with courses that use the new Blackboard interface.\nTo use this loader, you must have the BbRouter cookie. You can get this\ncookie by logging into the course and then copying the value of the\nBbRouter cookie from the browser\u2019s developer tools.\nExample\nfrom langchain.document_loaders import BlackboardLoader\nloader = BlackboardLoader(\n    blackboard_course_url=\"https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1\",\n    bbrouter=\"expires:12345...\",\n)\ndocuments = loader.load()\nInitialize with Blackboard course URL.\nThe BbRouter cookie is required for most Blackboard courses.\nParameters\nblackboard_course_url \u2013 Blackboard course URL.\nbbrouter \u2013 BbRouter cookie.\nload_all_recursively \u2013 If True, load all documents recursively.\nbasic_auth \u2013 Basic auth credentials.\ncookies \u2013 Cookies.\nRaises\nValueError \u2013 If the Blackboard course URL is invalid.\nMethods\n__init__(blackboard_course_url,\u00a0bbrouter[,\u00a0...])\nInitialize with Blackboard course URL.\naload()\nLoad text from the urls in web_path async into Documents.\ncheck_bs4()\nCheck if BeautifulSoup4 is installed.\ndownload(path)\nDownload a file from a URL.\nfetch_all(urls)", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blackboard.BlackboardLoader.html"} {"id": "289c8df19642-1", "text": "download(path)\nDownload a file from a URL.\nfetch_all(urls)\nFetch all urls concurrently with rate limiting.\nlazy_load()\nLazy load text from the url(s) in web_path.\nload()\nLoad data into Document objects.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nparse_filename(url)\nParse the filename from a URL.\nscrape([parser])\nScrape data from webpage and return it in BeautifulSoup format.\nscrape_all(urls[,\u00a0parser])\nFetch all urls, then return soups for all results.\nAttributes\nbs_get_text_kwargs\nkwargs for beautifulsoup4 get_text\ndefault_parser\nDefault parser to use for BeautifulSoup.\nraise_for_status\nRaise an exception if http status code denotes an error.\nrequests_kwargs\nkwargs for requests\nrequests_per_second\nMax number of concurrent requests to make.\nweb_path\nbase_url\nBase URL of the Blackboard course.\nfolder_path\nPath to the folder containing the documents.\nload_all_recursively\nIf True, load all documents recursively.\naload() \u2192 List[Document]\u00b6\nLoad text from the urls in web_path async into Documents.\ncheck_bs4() \u2192 None[source]\u00b6\nCheck if BeautifulSoup4 is installed.\nRaises\nImportError \u2013 If BeautifulSoup4 is not installed.\ndownload(path: str) \u2192 None[source]\u00b6\nDownload a file from a URL.\nParameters\npath \u2013 Path to the file.\nasync fetch_all(urls: List[str]) \u2192 Any\u00b6\nFetch all urls concurrently with rate limiting.\nlazy_load() \u2192 Iterator[Document]\u00b6\nLazy load text from the url(s) in web_path.\nload() \u2192 List[Document][source]\u00b6\nLoad data into Document objects.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blackboard.BlackboardLoader.html"} {"id": "289c8df19642-2", "text": "Load data into Document objects.\nReturns\nList of Documents.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nparse_filename(url: str) \u2192 str[source]\u00b6\nParse the filename from a URL.\nParameters\nurl \u2013 URL to parse the filename from.\nReturns\nThe filename.\nscrape(parser: Optional[str] = None) \u2192 Any\u00b6\nScrape data from webpage and return it in BeautifulSoup format.\nscrape_all(urls: List[str], parser: Optional[str] = None) \u2192 List[Any]\u00b6\nFetch all urls, then return soups for all results.\nbase_url: str\u00b6\nBase URL of the Blackboard course.\nbs_get_text_kwargs: Dict[str, Any] = {}\u00b6\nkwargs for beautifulsoup4 get_text\ndefault_parser: str = 'html.parser'\u00b6\nDefault parser to use for BeautifulSoup.\nfolder_path: str\u00b6\nPath to the folder containing the documents.\nload_all_recursively: bool\u00b6\nIf True, load all documents recursively.\nraise_for_status: bool = False\u00b6\nRaise an exception if http status code denotes an error.\nrequests_kwargs: Dict[str, Any] = {}\u00b6\nkwargs for requests\nrequests_per_second: int = 2\u00b6\nMax number of concurrent requests to make.\nproperty web_path: str\u00b6\nweb_paths: List[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blackboard.BlackboardLoader.html"} {"id": "297f35ea5968-0", "text": "langchain.document_loaders.parsers.html.bs4.BS4HTMLParser\u00b6\nclass langchain.document_loaders.parsers.html.bs4.BS4HTMLParser(*, features: str = 'lxml', get_text_separator: str = '', **kwargs: Any)[source]\u00b6\nBases: BaseBlobParser\nParser that uses beautiful soup to parse HTML files.\nInitialize a bs4 based HTML parser.\nMethods\n__init__(*[,\u00a0features,\u00a0get_text_separator])\nInitialize a bs4 based HTML parser.\nlazy_parse(blob)\nLoad HTML document into document objects.\nparse(blob)\nEagerly parse the blob into a document or documents.\nlazy_parse(blob: Blob) \u2192 Iterator[Document][source]\u00b6\nLoad HTML document into document objects.\nparse(blob: Blob) \u2192 List[Document]\u00b6\nEagerly parse the blob into a document or documents.\nThis is a convenience method for interactive development environments.\nProduction applications should favor the lazy_parse method instead.\nSubclasses should generally not override this parse method.\nParameters\nblob \u2013 Blob instance\nReturns\nList of documents", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.html.bs4.BS4HTMLParser.html"} {"id": "9407594d74f4-0", "text": "langchain.document_loaders.weather.WeatherDataLoader\u00b6\nclass langchain.document_loaders.weather.WeatherDataLoader(client: OpenWeatherMapAPIWrapper, places: 
Sequence[str])[source]\u00b6\nBases: BaseLoader\nWeather Reader.\nReads the forecast & current weather of any location using OpenWeatherMap\u2019s free\nAPI. Check out \u2018https://openweathermap.org/appid\u2019 for more on how to generate a free\nOpenWeatherMap API key.\nInitialize with parameters.\nMethods\n__init__(client,\u00a0places)\nInitialize with parameters.\nfrom_params(places,\u00a0*[,\u00a0openweathermap_api_key])\nlazy_load()\nLazily load weather data for the given locations.\nload()\nLoad weather data for the given locations.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nclassmethod from_params(places: Sequence[str], *, openweathermap_api_key: Optional[str] = None) \u2192 WeatherDataLoader[source]\u00b6\nlazy_load() \u2192 Iterator[Document][source]\u00b6\nLazily load weather data for the given locations.\nload() \u2192 List[Document][source]\u00b6\nLoad weather data for the given locations.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.weather.WeatherDataLoader.html"} {"id": "e970392e09f7-0", "text": "langchain.document_loaders.blob_loaders.schema.BlobLoader\u00b6\nclass langchain.document_loaders.blob_loaders.schema.BlobLoader[source]\u00b6\nBases: ABC\nAbstract interface for blob loader implementations.\nImplementers should be able to load raw content from a storage system according\nto some criteria and return the raw content lazily as a stream of blobs.\nMethods\n__init__()\nyield_blobs()\nA lazy loader for raw data represented by LangChain's Blob object.\nabstract yield_blobs() \u2192 Iterable[Blob][source]\u00b6\nA lazy loader for raw data represented by LangChain\u2019s Blob object.\nReturns\nA generator over blobs", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.BlobLoader.html"} {"id": "72c73fe36e30-0", "text": "langchain.document_loaders.html_bs.BSHTMLLoader\u00b6\nclass langchain.document_loaders.html_bs.BSHTMLLoader(file_path: str, open_encoding: Optional[str] = None, bs_kwargs: Optional[dict] = None, get_text_separator: str = '')[source]\u00b6\nBases: BaseLoader\nLoader that uses beautiful soup to parse HTML files.\nInitialise with path, and optionally, file encoding to use, and any kwargs\nto pass to the BeautifulSoup object.\nParameters\nfile_path \u2013 The path to the file to load.\nopen_encoding \u2013 The encoding to use when opening the file.\nbs_kwargs \u2013 Any kwargs to pass to the BeautifulSoup object.\nget_text_separator \u2013 The separator to use when calling get_text on the soup.\nMethods\n__init__(file_path[,\u00a0open_encoding,\u00a0...])\nInitialise with path, and optionally, file encoding to use, and any kwargs to pass to the BeautifulSoup object.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad HTML document into document objects.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad HTML document into document objects.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and 
split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.html_bs.BSHTMLLoader.html"} {"id": "6bb002cba89c-0", "text": "langchain.document_loaders.pyspark_dataframe.PySparkDataFrameLoader\u00b6\nclass langchain.document_loaders.pyspark_dataframe.PySparkDataFrameLoader(spark_session: Optional[SparkSession] = None, df: Optional[Any] = None, page_content_column: str = 'text', fraction_of_memory: float = 0.1)[source]\u00b6\nBases: BaseLoader\nLoad PySpark DataFrames.\nInitialize with a Spark DataFrame object.\nMethods\n__init__([spark_session,\u00a0df,\u00a0...])\nInitialize with a Spark DataFrame object.\nget_num_rows()\nGets the number of \"feasible\" rows for the DataFrame.\nlazy_load()\nA lazy loader for document content.\nload()\nLoad from the dataframe.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nget_num_rows() \u2192 Tuple[int, int][source]\u00b6\nGets the number of \u201cfeasible\u201d rows for the DataFrame.\nlazy_load() \u2192 Iterator[Document][source]\u00b6\nA lazy loader for document content.\nload() \u2192 List[Document][source]\u00b6\nLoad from the dataframe.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pyspark_dataframe.PySparkDataFrameLoader.html"} {"id": "0dc6d3b04307-0", "text": "langchain.document_loaders.mastodon.MastodonTootsLoader\u00b6\nclass langchain.document_loaders.mastodon.MastodonTootsLoader(mastodon_accounts: Sequence[str], number_toots: Optional[int] = 100, exclude_replies: bool = False, access_token: Optional[str] = None, api_base_url: str = 'https://mastodon.social')[source]\u00b6\nBases: BaseLoader\nMastodon toots loader.\nInstantiate Mastodon toots loader.\nParameters\nmastodon_accounts \u2013 The list of Mastodon accounts to query.\nnumber_toots \u2013 How many toots to pull for each account. Default is 100.\nexclude_replies \u2013 Whether to exclude reply toots from the load.\nDefault is False.\naccess_token \u2013 An access token if toots are loaded as a Mastodon app. Can\nalso be specified via the environment variable \u201cMASTODON_ACCESS_TOKEN\u201d.\napi_base_url \u2013 A Mastodon API base URL to talk to, if not using the default.\nDefault is \u201chttps://mastodon.social\u201d.\nMethods\n__init__(mastodon_accounts[,\u00a0number_toots,\u00a0...])\nInstantiate Mastodon toots loader.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad toots into documents.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad toots into documents.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
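Example (an editorial sketch; the account handle is an assumed placeholder, and public toots need no access token):

```python
from langchain.document_loaders.mastodon import MastodonTootsLoader

# Pull up to 50 recent public toots from one assumed account.
loader = MastodonTootsLoader(
    mastodon_accounts=["@Gargron@mastodon.social"],
    number_toots=50,
)
docs = loader.load()
```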
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mastodon.MastodonTootsLoader.html"} {"id": "023e1c897d1a-0", "text": "langchain.document_loaders.youtube.YoutubeLoader\u00b6\nclass langchain.document_loaders.youtube.YoutubeLoader(video_id: str, add_video_info: bool = False, language: Union[str, Sequence[str]] = 'en', translation: str = 'en', continue_on_failure: bool = False)[source]\u00b6\nBases: BaseLoader\nLoader that loads Youtube transcripts.\nInitialize with YouTube video ID.\nMethods\n__init__(video_id[,\u00a0add_video_info,\u00a0...])\nInitialize with YouTube video ID.\nextract_video_id(youtube_url)\nExtract video ID from common YouTube URLs.\nfrom_youtube_url(youtube_url,\u00a0**kwargs)\nGiven a YouTube URL, load the video.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad documents.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nstatic extract_video_id(youtube_url: str) \u2192 str[source]\u00b6\nExtract video ID from common YouTube URLs.\nclassmethod from_youtube_url(youtube_url: str, **kwargs: Any) \u2192 YoutubeLoader[source]\u00b6\nGiven a YouTube URL, load the video.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad documents.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.youtube.YoutubeLoader.html"} {"id": "b434e991cb82-0", "text": "langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader\u00b6\nclass langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]\u00b6\nBases: UnstructuredFileIOLoader\nUnstructuredAPIFileIOLoader uses the Unstructured API to load files.\nBy default, the loader makes a call to the hosted Unstructured API.\nIf you are running the unstructured API locally, you can change the\nAPI URL by passing in the url parameter when you initialize the loader.\nThe hosted Unstructured API requires an API key. See\nhttps://www.unstructured.io/api-key/ if you need to generate a key.\nYou can run the loader in one of two modes: \u201csingle\u201d and \u201celements\u201d.\nIf you use \u201csingle\u201d mode, the document will be returned as a single\nlangchain Document object. 
If you use \u201celements\u201d mode, the unstructured\nlibrary will split the document into elements such as Title and NarrativeText.\nYou can pass in additional unstructured kwargs after mode to apply\ndifferent unstructured settings.\nExamples\n```python\nfrom langchain.document_loaders import UnstructuredAPIFileIOLoader\nwith open(\"example.pdf\", \"rb\") as f:\n    loader = UnstructuredAPIFileIOLoader(\n        f, mode=\"elements\", strategy=\"fast\", api_key=\"MY_API_KEY\",\n    )\n    docs = loader.load()\n```\nReferences\nhttps://unstructured-io.github.io/unstructured/bricks.html#partition\nhttps://www.unstructured.io/api-key/\nhttps://github.com/Unstructured-IO/unstructured-api\nInitialize with file path.\nMethods", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader.html"} {"id": "b434e991cb82-1", "text": "Initialize with file path.\nMethods\n__init__(file[,\u00a0mode,\u00a0url,\u00a0api_key])\nInitialize with file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad file.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document]\u00b6\nLoad file.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader.html"} {"id": "8600292c08a0-0", "text": "langchain.document_loaders.odt.UnstructuredODTLoader\u00b6\nclass langchain.document_loaders.odt.UnstructuredODTLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]\u00b6\nBases: UnstructuredFileLoader\nLoader that uses unstructured to load OpenOffice ODT files.\nInitialize with file path.\nMethods\n__init__(file_path[,\u00a0mode])\nInitialize with file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad file.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document]\u00b6\nLoad file.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.odt.UnstructuredODTLoader.html"} {"id": "feecd0461c4c-0", "text": "langchain.document_loaders.parsers.txt.TextParser\u00b6\nclass langchain.document_loaders.parsers.txt.TextParser[source]\u00b6\nBases: BaseBlobParser\nParser for text blobs.\nMethods\n__init__()\nlazy_parse(blob)\nLazily parse the blob.\nparse(blob)\nEagerly parse the blob into a document or documents.\nlazy_parse(blob: Blob) \u2192 Iterator[Document][source]\u00b6\nLazily parse the blob.\nparse(blob: Blob) \u2192 List[Document]\u00b6\nEagerly parse the blob into a document or documents.\nThis is a convenience method for interactive development environment.\nProduction applications should favor the lazy_parse method instead.\nSubclasses should generally not over-ride this parse method.\nParameters\nblob \u2013 Blob instance\nReturns\nList of documents", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.txt.TextParser.html"} {"id": "cc20d7ac98bd-0", "text": "langchain.document_loaders.python.PythonLoader\u00b6\nclass langchain.document_loaders.python.PythonLoader(file_path: str)[source]\u00b6\nBases: TextLoader\nLoad Python files, respecting any non-default encoding if specified.\nInitialize with file path.\nMethods\n__init__(file_path)\nInitialize with file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad from file path.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document]\u00b6\nLoad from file path.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.python.PythonLoader.html"} {"id": "1a4f577c1edb-0", "text": "langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader\u00b6\nclass langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader(conn_str: str, container: str, prefix: str = '')[source]\u00b6\nBases: BaseLoader\nLoading Documents from Azure Blob Storage.\nInitialize with connection string, container and blob prefix.\nMethods\n__init__(conn_str,\u00a0container[,\u00a0prefix])\nInitialize with connection string, container and blob prefix.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad documents.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nAttributes\nconn_str\nConnection string for Azure Blob Storage.\ncontainer\nContainer name.\nprefix\nPrefix for blob names.\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad documents.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
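Example (an editorial sketch; the connection string, container, and prefix are placeholders, and azure-storage-blob is assumed to be installed):

```python
from langchain.document_loaders import AzureBlobStorageContainerLoader

# Load every blob under an assumed prefix of a container.
loader = AzureBlobStorageContainerLoader(
    conn_str="<azure_storage_connection_string>",
    container="my-container",
    prefix="reports/",
)
docs = loader.load()
```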
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nconn_str\u00b6\nConnection string for Azure Blob Storage.\ncontainer\u00b6\nContainer name.\nprefix\u00b6\nPrefix for blob names.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader.html"} {"id": "f0bd5bce2eaf-0", "text": "langchain.document_loaders.gitbook.GitbookLoader\u00b6\nclass langchain.document_loaders.gitbook.GitbookLoader(web_page: str, load_all_paths: bool = False, base_url: Optional[str] = None, content_selector: str = 'main')[source]\u00b6\nBases: WebBaseLoader\nLoad GitBook data.\nLoad from either a single page, or\nload all (relative) paths in the navbar.\nInitialize with web page and whether to load all paths.\nParameters\nweb_page \u2013 The web page to load or the starting point from where\nrelative paths are discovered.\nload_all_paths \u2013 If set to True, all relative paths in the navbar\nare loaded instead of only web_page.\nbase_url \u2013 If load_all_paths is True, the relative paths are\nappended to this base url. Defaults to web_page.\ncontent_selector \u2013 The CSS selector for the content to load.\nDefaults to \u201cmain\u201d.\nMethods\n__init__(web_page[,\u00a0load_all_paths,\u00a0...])\nInitialize with web page and whether to load all paths.\naload()\nLoad text from the urls in web_path async into Documents.\nfetch_all(urls)\nFetch all urls concurrently with rate limiting.\nlazy_load()\nLazy load text from the url(s) in web_path.\nload()\nFetch text from one single GitBook page.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nscrape([parser])\nScrape data from webpage and return it in BeautifulSoup format.\nscrape_all(urls[,\u00a0parser])\nFetch all urls, then return soups for all results.\nAttributes\nbs_get_text_kwargs\nkwargs for beautifulsoup4 get_text\ndefault_parser\nDefault parser to use for BeautifulSoup.\nraise_for_status\nRaise an exception if http status code denotes an error.\nrequests_kwargs\nkwargs for requests\nrequests_per_second", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gitbook.GitbookLoader.html"} {"id": "f0bd5bce2eaf-1", "text": "requests_kwargs\nkwargs for requests\nrequests_per_second\nMax number of concurrent requests to make.\nweb_path\naload() \u2192 List[Document]\u00b6\nLoad text from the urls in web_path async into Documents.\nasync fetch_all(urls: List[str]) \u2192 Any\u00b6\nFetch all urls concurrently with rate limiting.\nlazy_load() \u2192 Iterator[Document]\u00b6\nLazy load text from the url(s) in web_path.\nload() \u2192 List[Document][source]\u00b6\nFetch text from one single GitBook page.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. 
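Example (an editorial sketch; the GitBook site URL is illustrative):

```python
from langchain.document_loaders import GitbookLoader

# Crawl every relative path found in the navbar of an assumed GitBook site.
loader = GitbookLoader("https://docs.gitbook.com", load_all_paths=True)
docs = loader.load()
```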
Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nscrape(parser: Optional[str] = None) \u2192 Any\u00b6\nScrape data from webpage and return it in BeautifulSoup format.\nscrape_all(urls: List[str], parser: Optional[str] = None) \u2192 List[Any]\u00b6\nFetch all urls, then return soups for all results.\nbs_get_text_kwargs: Dict[str, Any] = {}\u00b6\nkwargs for beautifulsoup4 get_text\ndefault_parser: str = 'html.parser'\u00b6\nDefault parser to use for BeautifulSoup.\nraise_for_status: bool = False\u00b6\nRaise an exception if http status code denotes an error.\nrequests_kwargs: Dict[str, Any] = {}\u00b6\nkwargs for requests\nrequests_per_second: int = 2\u00b6\nMax number of concurrent requests to make.\nproperty web_path: str\u00b6\nweb_paths: List[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gitbook.GitbookLoader.html"} {"id": "fe5c13962b78-0", "text": "langchain.document_loaders.pdf.BasePDFLoader\u00b6\nclass langchain.document_loaders.pdf.BasePDFLoader(file_path: str)[source]\u00b6\nBases: BaseLoader, ABC\nBase loader class for PDF files.\nBy default, checks for a local file; if the file is a web path, it downloads it\nto a temporary file, uses that, and then cleans up the temporary file after completion.\nInitialize with file path.\nMethods\n__init__(file_path)\nInitialize with file path.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad data into Document objects.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nAttributes\nsource\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nabstract load() \u2192 List[Document]\u00b6\nLoad data into Document objects.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nproperty source: str\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.BasePDFLoader.html"} {"id": "57fd968f01ff-0", "text": "langchain.document_loaders.snowflake_loader.SnowflakeLoader\u00b6\nclass langchain.document_loaders.snowflake_loader.SnowflakeLoader(query: str, user: str, password: str, account: str, warehouse: str, role: str, database: str, schema: str, parameters: Optional[Dict[str, Any]] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None)[source]\u00b6\nBases: BaseLoader\nLoads a query result from Snowflake into a list of documents.\nEach document represents one row of the result. The page_content_columns\nare written into the page_content of the document. The metadata_columns\nare written into the metadata of the document. By default, all columns\nare written into the page_content and none into the metadata.\nInitialize Snowflake document loader.\nParameters\nquery \u2013 The query to run in Snowflake.\nuser \u2013 Snowflake user.\npassword \u2013 Snowflake password.\naccount \u2013 Snowflake account.\nwarehouse \u2013 Snowflake warehouse.\nrole \u2013 Snowflake role.\ndatabase \u2013 Snowflake database.\nschema \u2013 Snowflake schema.\npage_content_columns \u2013 Optional. 
Columns written to Document page_content.\nmetadata_columns \u2013 Optional. Columns written to Document metadata.\nMethods\n__init__(query,\u00a0user,\u00a0password,\u00a0account,\u00a0...)\nInitialize Snowflake document loader.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad data into document objects.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nlazy_load() \u2192 Iterator[Document][source]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad data into document objects.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.snowflake_loader.SnowflakeLoader.html"} {"id": "57fd968f01ff-1", "text": "Load Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.snowflake_loader.SnowflakeLoader.html"} {"id": "6ebadc7c5781-0", "text": "langchain.document_loaders.whatsapp_chat.concatenate_rows\u00b6\nlangchain.document_loaders.whatsapp_chat.concatenate_rows(date: str, sender: str, text: str) \u2192 str[source]\u00b6\nCombine message information in a readable format ready to be used.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.whatsapp_chat.concatenate_rows.html"} {"id": "4fba2a21c30c-0", "text": "langchain.document_loaders.unstructured.validate_unstructured_version\u00b6\nlangchain.document_loaders.unstructured.validate_unstructured_version(min_unstructured_version: str) \u2192 None\u00b6\nRaises an error if the installed unstructured version is below the\nspecified minimum.", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.validate_unstructured_version.html"} {"id": "d077b494fcab-0", "text": "langchain.document_loaders.parsers.pdf.PyPDFium2Parser\u00b6\nclass langchain.document_loaders.parsers.pdf.PyPDFium2Parser[source]\u00b6\nBases: BaseBlobParser\nParse PDFs with PyPDFium2.\nInitialize the parser.\nMethods\n__init__()\nInitialize the parser.\nlazy_parse(blob)\nLazily parse the blob.\nparse(blob)\nEagerly parse the blob into a document or documents.\nlazy_parse(blob: Blob) \u2192 Iterator[Document][source]\u00b6\nLazily parse the blob.\nparse(blob: Blob) \u2192 List[Document]\u00b6\nEagerly parse the blob into a document or documents.\nThis is a convenience method for interactive development environments.\nProduction applications should favor the lazy_parse method instead.\nSubclasses should generally not override this parse method.\nParameters\nblob \u2013 Blob instance\nReturns\nList of documents", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PyPDFium2Parser.html"} {"id": "1840fedf226a-0", "text": "langchain.document_loaders.s3_directory.S3DirectoryLoader\u00b6\nclass langchain.document_loaders.s3_directory.S3DirectoryLoader(bucket: str, prefix: str = '')[source]\u00b6\nBases: BaseLoader\nLoading logic for loading documents from Amazon S3.\nInitialize with bucket and key name.\nMethods\n__init__(bucket[,\u00a0prefix])\nInitialize with bucket and key name.\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad 
langchain.document_loaders.tencent_cos_directory.TencentCOSDirectoryLoader
class langchain.document_loaders.tencent_cos_directory.TencentCOSDirectoryLoader(conf: Any, bucket: str, prefix: str = '')
Bases: BaseLoader
Loads documents from a Tencent Cloud COS directory.
Initialize with COS config, bucket and prefix.
Parameters:
conf (CosConfig) – COS config.
bucket (str) – COS bucket.
prefix (str) – Prefix.
Methods
__init__(conf, bucket[, prefix]) – Initialize with COS config, bucket and prefix.
lazy_load() – Load documents.
load() – Load data into Document objects.
load_and_split([text_splitter]) – Load Documents and split into chunks.
lazy_load() → Iterator[Document]
Load documents.
load() → List[Document]
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters: text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
Returns: List of Documents.
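A hedged sketch; CosConfig is assumed to come from the cos-python-sdk-v5 package (qcloud_cos), and all credential values are placeholders:

from qcloud_cos import CosConfig  # from cos-python-sdk-v5 (an assumption of this sketch)
from langchain.document_loaders.tencent_cos_directory import TencentCOSDirectoryLoader

conf = CosConfig(Region="ap-guangzhou", SecretId="<secret-id>", SecretKey="<secret-key>")
loader = TencentCOSDirectoryLoader(conf=conf, bucket="my-bucket", prefix="docs/")
docs = loader.load()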
langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters
class langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters
Bases: TypedDict
Parameters for the embaas document extraction API.
Methods
__init__(*args, **kwargs)
clear() – Remove all items from the dict.
copy() – Return a shallow copy of the dict.
fromkeys([value]) – Create a new dictionary with keys from iterable and values set to value.
get(key[, default]) – Return the value for key if key is in the dictionary, else default.
items() / keys() / values() – Return set-like or view objects over the dict's items, keys and values.
pop(k[, d]) – Remove the specified key and return the corresponding value. If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem() – Remove and return a (key, value) pair as a 2-tuple. Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.
setdefault(key[, default]) – Insert key with a value of default if key is not in the dictionary. Return the value for key if key is in the dictionary, else default.
update([E, ]**F) – Update the dict from dict/iterable E and F. If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v. In either case, this is followed by: for k in F: D[k] = F[k].
Attributes
mime_type (str) – The mime type of the document.
file_extension (str) – The file extension of the document.
file_name (str) – The file name of the document.
should_chunk (bool) – Whether to chunk the document into pages.
chunk_size (int) – The maximum size of the text chunks.
chunk_overlap (int) – The maximum overlap allowed between chunks.
chunk_splitter (str) – The text splitter class name for creating chunks.
separators (List[str]) – The separators for chunks.
should_embed (bool) – Whether to create embeddings for the document in the response.
model (str) – The model to pass to the Embaas document extraction API.
instruction (str) – The instruction to pass to the Embaas document extraction API.
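Because this is a TypedDict, instances are constructed as plain dicts; a sketch with illustrative values (the EmbaasBlobLoader examples later in this reference suggest that partial dicts are accepted):

from langchain.document_loaders.embaas import EmbaasDocumentExtractionParameters

# All field values below are illustrative, not API defaults.
params: EmbaasDocumentExtractionParameters = {
    "mime_type": "application/pdf",
    "should_chunk": True,
    "chunk_size": 256,
    "chunk_overlap": 16,
}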
langchain.document_loaders.parsers.generic.MimeTypeBasedParser
class langchain.document_loaders.parsers.generic.MimeTypeBasedParser(handlers: Mapping[str, BaseBlobParser], *, fallback_parser: Optional[BaseBlobParser] = None)
Bases: BaseBlobParser
A parser that uses mime-types to determine how to parse a blob. This parser is useful for simple pipelines where the mime-type is sufficient to determine how to parse a blob. To use, configure handlers based on mime-types and pass them to the initializer.
Example
from langchain.document_loaders.parsers.generic import MimeTypeBasedParser
parser = MimeTypeBasedParser(
    handlers={
        "application/pdf": ...,
    },
    fallback_parser=...,
)
Define a parser that uses mime-types to determine how to parse a blob.
Parameters:
handlers – A mapping from mime-types to functions that take a blob, parse it and return a document.
fallback_parser – A fallback parser to use if the mime-type is not found in the handlers. If provided, this parser will be used to parse blobs with all mime-types not found in the handlers. If not provided, a ValueError will be raised if the mime-type is not found in the handlers.
Methods
__init__(handlers, *[, fallback_parser]) – Define a parser that uses mime-types to determine how to parse a blob.
lazy_parse(blob) – Load documents from a blob.
parse(blob) – Eagerly parse the blob into a document or documents.
lazy_parse(blob: Blob) → Iterator[Document]
Load documents from a blob.
parse(blob: Blob) → List[Document]
Eagerly parse the blob into a document or documents. This is a convenience method for interactive development environments. Production applications should favor the lazy_parse method instead. Subclasses should generally not override this parse method.
Parameters: blob – Blob instance.
Returns: List of documents.
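A fuller sketch of the pattern above, filling the ellipses with PyPDFParser (documented later in this reference); the handler key is a standard MIME string and the file name is a placeholder:

from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.generic import MimeTypeBasedParser
from langchain.document_loaders.parsers.pdf import PyPDFParser

# Route PDFs to PyPDFParser; with no fallback_parser, other mime-types raise ValueError.
parser = MimeTypeBasedParser(handlers={"application/pdf": PyPDFParser()})
docs = parser.parse(Blob.from_path("example.pdf"))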
langchain.document_loaders.discord.DiscordChatLoader
class langchain.document_loaders.discord.DiscordChatLoader(chat_log: pd.DataFrame, user_id_col: str = 'ID')
Bases: BaseLoader
Load Discord chat logs.
Initialize with a Pandas DataFrame containing chat logs.
Parameters:
chat_log – Pandas DataFrame containing chat logs.
user_id_col – Name of the column containing the user ID. Defaults to "ID".
Methods
__init__(chat_log[, user_id_col]) – Initialize with a Pandas DataFrame containing chat logs.
lazy_load() – A lazy loader for Documents.
load() – Load all chat messages.
load_and_split([text_splitter]) – Load Documents and split into chunks.
lazy_load() → Iterator[Document]
A lazy loader for Documents.
load() → List[Document]
Load all chat messages.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters: text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
Returns: List of Documents.
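A hedged sketch; user_id_col matches the documented default, but the other DataFrame column names are illustrative and depend on the shape of your chat export:

import pandas as pd
from langchain.document_loaders.discord import DiscordChatLoader

# Illustrative frame; a real Discord export has more columns.
chat_log = pd.DataFrame({"ID": ["user1", "user2"], "Message": ["hi", "hello"]})
loader = DiscordChatLoader(chat_log=chat_log, user_id_col="ID")
docs = loader.load()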
langchain.document_loaders.roam.RoamLoader
class langchain.document_loaders.roam.RoamLoader(path: str)
Bases: BaseLoader
Loader that loads Roam files from disk.
Initialize with path.
Methods
__init__(path) – Initialize with path.
lazy_load() – A lazy loader for Documents.
load() – Load documents.
load_and_split([text_splitter]) – Load Documents and split into chunks.
lazy_load() → Iterator[Document]
A lazy loader for Documents.
load() → List[Document]
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters: text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
Returns: List of Documents.

langchain.document_loaders.telegram.TelegramChatApiLoader
class langchain.document_loaders.telegram.TelegramChatApiLoader(chat_entity: Optional[EntityLike] = None, api_id: Optional[int] = None, api_hash: Optional[str] = None, username: Optional[str] = None, file_path: str = 'telegram_data.json')
Bases: BaseLoader
Loader that loads a Telegram chat JSON dump fetched via the Telegram API.
Initialize with API parameters.
Methods
__init__([chat_entity, api_id, api_hash, ...]) – Initialize with API parameters.
fetch_data_from_telegram() – Fetch data from the Telegram API and save it as a JSON file.
lazy_load() – A lazy loader for Documents.
load() – Load documents.
load_and_split([text_splitter]) – Load Documents and split into chunks.
async fetch_data_from_telegram() → None
Fetch data from the Telegram API and save it as a JSON file.
lazy_load() → Iterator[Document]
A lazy loader for Documents.
load() → List[Document]
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters: text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
Returns: List of Documents.

langchain.document_loaders.telegram.text_to_docs
langchain.document_loaders.telegram.text_to_docs(text: Union[str, List[str]]) → List[Document]
Converts a string or list of strings to a list of Documents with metadata.

langchain.document_loaders.unstructured.UnstructuredBaseLoader
class langchain.document_loaders.unstructured.UnstructuredBaseLoader(mode: str = 'single', **unstructured_kwargs: Any)
Bases: BaseLoader, ABC
Loader that uses unstructured to load files.
Initialize the loader.
Methods
__init__([mode]) – Initialize the loader.
lazy_load() – A lazy loader for Documents.
load() – Load file.
load_and_split([text_splitter]) – Load Documents and split into chunks.
lazy_load() → Iterator[Document]
A lazy loader for Documents.
load() → List[Document]
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters: text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
Returns: List of Documents.
langchain.document_loaders.chatgpt.ChatGPTLoader
class langchain.document_loaders.chatgpt.ChatGPTLoader(log_file: str, num_logs: int = -1)
Bases: BaseLoader
Load conversations from exported ChatGPT data.
Parameters:
log_file – Path to the log file.
num_logs – Number of logs to load. If 0, load all logs.
Methods
__init__(log_file[, num_logs])
lazy_load() – A lazy loader for Documents.
load() – Load data into Document objects.
load_and_split([text_splitter]) – Load Documents and split into chunks.
lazy_load() → Iterator[Document]
A lazy loader for Documents.
load() → List[Document]
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters: text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
Returns: List of Documents.

langchain.document_loaders.git.GitLoader
class langchain.document_loaders.git.GitLoader(repo_path: str, clone_url: Optional[str] = None, branch: Optional[str] = 'main', file_filter: Optional[Callable[[str], bool]] = None)
Bases: BaseLoader
Loads files from a Git repository into a list of documents. The repository can be local on disk, available at repo_path, or remote at clone_url, in which case it will be cloned to repo_path. Currently, only text files are supported.
Each document represents one file in the repository. The path points to the local Git repository, and the branch specifies the branch to load files from. By default, it loads from the main branch.
Parameters:
repo_path – The path to the Git repository.
clone_url – Optional. The URL to clone the repository from.
branch – Optional. The branch to load files from. Defaults to main.
file_filter – Optional. A function that takes a file path and returns a boolean indicating whether to load the file. Defaults to None.
Methods
__init__(repo_path[, clone_url, branch, ...])
lazy_load() – A lazy loader for Documents.
load() – Load data into Document objects.
load_and_split([text_splitter]) – Load Documents and split into chunks.
lazy_load() → Iterator[Document]
A lazy loader for Documents.
load() → List[Document]
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters: text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
Returns: List of Documents.
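A sketch combining the documented parameters; the clone URL and paths are placeholders:

from langchain.document_loaders.git import GitLoader

# Clone (if not already present) and load only Python files from the main branch.
loader = GitLoader(
    repo_path="./example_repo",                 # local checkout location
    clone_url="https://github.com/user/repo",   # placeholder URL
    branch="main",
    file_filter=lambda path: path.endswith(".py"),
)
docs = loader.load()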
langchain.document_loaders.epub.UnstructuredEPubLoader
class langchain.document_loaders.epub.UnstructuredEPubLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)
Bases: UnstructuredFileLoader
Loader that uses unstructured to load EPub files.
Initialize with file path.
Methods
__init__(file_path[, mode]) – Initialize with file path.
lazy_load() – A lazy loader for Documents.
load() – Load file.
load_and_split([text_splitter]) – Load Documents and split into chunks.
lazy_load() → Iterator[Document]
A lazy loader for Documents.
load() → List[Document]
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters: text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
Returns: List of Documents.

langchain.document_loaders.conllu.CoNLLULoader
class langchain.document_loaders.conllu.CoNLLULoader(file_path: str)
Bases: BaseLoader
Load CoNLL-U files.
Initialize with a file path.
Methods
__init__(file_path) – Initialize with a file path.
lazy_load() – A lazy loader for Documents.
load() – Load from a file path.
load_and_split([text_splitter]) – Load Documents and split into chunks.
lazy_load() → Iterator[Document]
A lazy loader for Documents.
load() → List[Document]
Load from a file path.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters: text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
Returns: List of Documents.

langchain.document_loaders.notebook.remove_newlines
langchain.document_loaders.notebook.remove_newlines(x: Any) → Any
Recursively remove newlines, no matter what data structure they are stored in.

langchain.document_loaders.parsers.pdf.PyPDFParser
class langchain.document_loaders.parsers.pdf.PyPDFParser(password: Optional[Union[str, bytes]] = None)
Bases: BaseBlobParser
Loads a PDF with pypdf and chunks it at character level.
Methods
__init__([password])
lazy_parse(blob) – Lazily parse the blob.
parse(blob) – Eagerly parse the blob into a document or documents.
lazy_parse(blob: Blob) → Iterator[Document]
Lazily parse the blob.
parse(blob: Blob) → List[Document]
Eagerly parse the blob into a document or documents. This is a convenience method for interactive development environments. Production applications should favor the lazy_parse method instead. Subclasses should generally not override this parse method.
Parameters: blob – Blob instance.
Returns: List of documents.
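A sketch of the lazy-parsing flow; it assumes the pypdf package is installed, and the file name is a placeholder:

from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.pdf import PyPDFParser

parser = PyPDFParser()
blob = Blob.from_path("example.pdf")
# lazy_parse yields Documents one at a time rather than materializing the whole list.
for doc in parser.lazy_parse(blob):
    print(doc.metadata)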
langchain.document_loaders.html.UnstructuredHTMLLoader
class langchain.document_loaders.html.UnstructuredHTMLLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)
Bases: UnstructuredFileLoader
Loader that uses unstructured to load HTML files.
Initialize with file path.
Methods
__init__(file_path[, mode]) – Initialize with file path.
lazy_load() – A lazy loader for Documents.
load() – Load file.
load_and_split([text_splitter]) – Load Documents and split into chunks.
lazy_load() → Iterator[Document]
A lazy loader for Documents.
load() → List[Document]
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters: text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
Returns: List of Documents.

langchain.document_loaders.diffbot.DiffbotLoader
class langchain.document_loaders.diffbot.DiffbotLoader(api_token: str, urls: List[str], continue_on_failure: bool = True)
Bases: BaseLoader
Loads Diffbot JSON results for a list of URLs.
Initialize with API token and the URLs to load.
Parameters:
api_token – Diffbot API token.
urls – List of URLs to load.
continue_on_failure – Whether to continue loading other URLs if one fails. Defaults to True.
Methods
__init__(api_token, urls[, continue_on_failure]) – Initialize with API token and the URLs to load.
lazy_load() – A lazy loader for Documents.
load() – Extract text from Diffbot for all the URLs and return Documents.
load_and_split([text_splitter]) – Load Documents and split into chunks.
lazy_load() → Iterator[Document]
A lazy loader for Documents.
load() → List[Document]
Extract text from Diffbot for all the URLs and return Documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters: text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
Returns: List of Documents.
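A minimal sketch; the token and URL are placeholders:

from langchain.document_loaders.diffbot import DiffbotLoader

loader = DiffbotLoader(
    api_token="<diffbot-api-token>",
    urls=["https://example.com/article"],
    continue_on_failure=True,  # skip URLs that fail instead of raising
)
docs = loader.load()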
langchain.document_loaders.directory.DirectoryLoader
class langchain.document_loaders.directory.DirectoryLoader(path: str, glob: str = '**/[!.]*', silent_errors: bool = False, load_hidden: bool = False, loader_cls: Union[Type[UnstructuredFileLoader], Type[TextLoader], Type[BSHTMLLoader]] = UnstructuredFileLoader, loader_kwargs: Optional[dict] = None, recursive: bool = False, show_progress: bool = False, use_multithreading: bool = False, max_concurrency: int = 4)
Bases: BaseLoader
Load documents from a directory.
Initialize with a path to a directory and how to glob over it.
Parameters:
path – Path to directory.
glob – Glob pattern to use to find files. Defaults to "**/[!.]*" (all files except hidden).
silent_errors – Whether to silently ignore errors. Defaults to False.
load_hidden – Whether to load hidden files. Defaults to False.
loader_cls – Loader class to use for loading files. Defaults to UnstructuredFileLoader.
loader_kwargs – Keyword arguments to pass to loader_cls. Defaults to None.
recursive – Whether to recursively search for files. Defaults to False.
show_progress – Whether to show a progress bar. Defaults to False.
use_multithreading – Whether to use multithreading. Defaults to False.
max_concurrency – The maximum number of threads to use. Defaults to 4.
Methods
__init__(path[, glob, silent_errors, ...]) – Initialize with a path to a directory and how to glob over it.
lazy_load() – A lazy loader for Documents.
load() – Load documents.
load_and_split([text_splitter]) – Load Documents and split into chunks.
load_file(item, path, docs, pbar) – Load a file.
lazy_load() → Iterator[Document]
A lazy loader for Documents.
load() → List[Document]
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters: text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
Returns: List of Documents.
load_file(item: Path, path: Path, docs: List[Document], pbar: Optional[Any]) → None
Load a file.
Parameters:
item – File path.
path – Directory path.
docs – List of documents to append to.
pbar – Progress bar. Defaults to None.
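A sketch using the documented parameters; here TextLoader replaces the default UnstructuredFileLoader for Markdown files, and the directory path is a placeholder:

from langchain.document_loaders.directory import DirectoryLoader
from langchain.document_loaders.text import TextLoader

loader = DirectoryLoader(
    "docs/",
    glob="**/*.md",
    loader_cls=TextLoader,
    recursive=True,
    show_progress=True,  # showing a progress bar requires tqdm
)
docs = loader.load()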
langchain.document_loaders.modern_treasury.ModernTreasuryLoader
class langchain.document_loaders.modern_treasury.ModernTreasuryLoader(resource: str, organization_id: Optional[str] = None, api_key: Optional[str] = None)
Bases: BaseLoader
Loader that fetches data from Modern Treasury.
Methods
__init__(resource[, organization_id, api_key])
lazy_load() – A lazy loader for Documents.
load() – Load data into Document objects.
load_and_split([text_splitter]) – Load Documents and split into chunks.
lazy_load() → Iterator[Document]
A lazy loader for Documents.
load() → List[Document]
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters: text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
Returns: List of Documents.

langchain.document_loaders.tencent_cos_file.TencentCOSFileLoader
class langchain.document_loaders.tencent_cos_file.TencentCOSFileLoader(conf: Any, bucket: str, key: str)
Bases: BaseLoader
Loads a document from a file in Tencent Cloud COS.
Initialize with COS config, bucket and key name.
Parameters:
conf (CosConfig) – COS config.
bucket (str) – COS bucket.
key (str) – COS file key.
Methods
__init__(conf, bucket, key) – Initialize with COS config, bucket and key name.
lazy_load() – Load documents.
load() – Load data into Document objects.
load_and_split([text_splitter]) – Load Documents and split into chunks.
lazy_load() → Iterator[Document]
Load documents.
load() → List[Document]
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters: text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
Returns: List of Documents.
langchain.document_loaders.embaas.EmbaasBlobLoader
class langchain.document_loaders.embaas.EmbaasBlobLoader(*, embaas_api_key: Optional[str] = None, api_url: str = 'https://api.embaas.io/v1/document/extract-text/bytes/', params: EmbaasDocumentExtractionParameters = {})
Bases: BaseEmbaasLoader, BaseBlobParser
Embaas's document byte loader. To use, you should have the environment variable EMBAAS_API_KEY set with your API key, or pass it as a named parameter to the constructor.
Example
# Default parsing
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader()
blob = Blob.from_path(path="example.mp3")
documents = loader.parse(blob=blob)

# Custom API parameters (create embeddings automatically)
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader(
    params={
        "should_embed": True,
        "model": "e5-large-v2",
        "chunk_size": 256,
        "chunk_splitter": "CharacterTextSplitter"
    }
)
blob = Blob.from_path(path="example.pdf")
documents = loader.parse(blob=blob)
Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_url: str = 'https://api.embaas.io/v1/document/extract-text/bytes/'
The URL of the embaas document extraction API.
param embaas_api_key: Optional[str] = None
The API key for the embaas document extraction API.
param params: langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters = {}
Additional parameters to pass to the embaas document extraction API.
lazy_parse(blob: Blob) → Iterator[Document]
Parses the blob lazily.
Parameters: blob – The blob to parse.
validator validate_environment » all fields
Validate that the API key and Python package exist in the environment.
langchain.document_loaders.youtube.GoogleApiYoutubeLoader
class langchain.document_loaders.youtube.GoogleApiYoutubeLoader(google_api_client: GoogleApiClient, channel_name: Optional[str] = None, video_ids: Optional[List[str]] = None, add_video_info: bool = True, captions_language: str = 'en', continue_on_failure: bool = False)
Bases: BaseLoader
Loader that loads all videos from a channel. To use, you should have the googleapiclient and youtube_transcript_api Python packages installed. As the service needs a google_api_client, you first have to initialize the GoogleApiClient. Additionally, you have to provide either a channel name or a list of video ids. See https://developers.google.com/docs/api/quickstart/python
Example
from langchain.document_loaders import GoogleApiClient
from langchain.document_loaders import GoogleApiYoutubeLoader
google_api_client = GoogleApiClient(
    service_account_path=Path("path_to_your_sec_file.json")
)
loader = GoogleApiYoutubeLoader(
    google_api_client=google_api_client,
    channel_name="CodeAesthetic"
)
loader.load()
Methods
__init__(google_api_client[, channel_name, ...])
lazy_load() – A lazy loader for Documents.
load() – Load documents.
load_and_split([text_splitter]) – Load Documents and split into chunks.
validate_channel_or_videoIds_is_set(values) – Validate that either channel_name or video_ids is set, but not both.
Attributes
add_video_info
captions_language
channel_name
continue_on_failure
video_ids
google_api_client
lazy_load() → Iterator[Document]
A lazy loader for Documents.
load() → List[Document]
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters: text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
Returns: List of Documents.
classmethod validate_channel_or_videoIds_is_set(values: Dict[str, Any]) → Dict[str, Any]
Validate that either channel_name or video_ids is set, but not both.
add_video_info: bool = True
captions_language: str = 'en'
channel_name: Optional[str] = None
continue_on_failure: bool = False
google_api_client: langchain.document_loaders.youtube.GoogleApiClient
video_ids: Optional[List[str]] = None

langchain.document_loaders.onedrive_file.OneDriveFileLoader
class langchain.document_loaders.onedrive_file.OneDriveFileLoader(*, file: File)
Bases: BaseLoader, BaseModel
Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.
param file: File [Required]
lazy_load() → Iterator[Document]
A lazy loader for Documents.
load() → List[Document]
Load Documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters: text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
Returns: List of Documents.
model Config
Bases: object
arbitrary_types_allowed = True

langchain.document_loaders.facebook_chat.concatenate_rows
langchain.document_loaders.facebook_chat.concatenate_rows(row: dict) → str
Combine message information in a readable format, ready to be used.
Parameters: row – Dictionary containing message information.
langchain.document_loaders.csv_loader.CSVLoader
class langchain.document_loaders.csv_loader.CSVLoader(file_path: str, source_column: Optional[str] = None, csv_args: Optional[Dict] = None, encoding: Optional[str] = None)
Bases: BaseLoader
Loads a CSV file into a list of documents. Each document represents one row of the CSV file. Every row is converted into a key/value pair and output to a new line in the document's page_content.
The source for each document loaded from csv is set to the value of the file_path argument for all documents by default. You can override this by setting the source_column argument to the name of a column in the CSV file. The source of each document will then be set to the value of the column with the name specified in source_column.
Output example:
column1: value1
column2: value2
column3: value3
Parameters:
file_path – The path to the CSV file.
source_column – The name of the column in the CSV file to use as the source. Optional. Defaults to None.
csv_args – A dictionary of arguments to pass to the csv.DictReader. Optional. Defaults to None.
encoding – The encoding of the CSV file. Optional. Defaults to None.
Methods
__init__(file_path[, source_column, ...])
lazy_load() – A lazy loader for Documents.
load() – Load data into document objects.
load_and_split([text_splitter]) – Load Documents and split into chunks.
lazy_load() → Iterator[Document]
A lazy loader for Documents.
load() → List[Document]
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters: text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
Returns: List of Documents.
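A sketch with placeholder file and column names; csv_args is passed straight through to csv.DictReader:

from langchain.document_loaders.csv_loader import CSVLoader

loader = CSVLoader(
    file_path="data.csv",          # placeholder file
    source_column="url",           # use the "url" column as each document's source
    csv_args={"delimiter": ";"},   # forwarded to csv.DictReader
    encoding="utf-8",
)
docs = loader.load()  # one Document per CSV row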
langchain.document_loaders.hn.HNLoader
class langchain.document_loaders.hn.HNLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify_ssl: Optional[bool] = True, proxies: Optional[dict] = None)
Bases: WebBaseLoader
Load Hacker News data from either main page results or the comments page.
Initialize with webpage path.
Methods
__init__(web_path[, header_template, ...]) – Initialize with webpage path.
aload() – Load text from the urls in web_path async into Documents.
fetch_all(urls) – Fetch all urls concurrently with rate limiting.
lazy_load() – Lazy load text from the url(s) in web_path.
load() – Get important HN webpage information.
load_and_split([text_splitter]) – Load Documents and split into chunks.
load_comments(soup_info) – Load comments from an HN post.
load_results(soup) – Load items from an HN page.
scrape([parser]) – Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser]) – Fetch all urls, then return soups for all results.
Attributes
bs_get_text_kwargs – kwargs for beautifulsoup4's get_text.
default_parser – Default parser to use for BeautifulSoup.
raise_for_status – Raise an exception if the HTTP status code denotes an error.
requests_kwargs – kwargs for requests.
requests_per_second – Max number of concurrent requests to make.
web_path
aload() → List[Document]
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]
Lazy load text from the url(s) in web_path.
load() → List[Document]
Get important HN webpage information. HN webpage components are: title, content, source url, time of post, author of the post, number of comments, and rank of the post.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters: text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
Returns: List of Documents.
load_comments(soup_info: Any) → List[Document]
Load comments from an HN post.
load_results(soup: Any) → List[Document]
Load items from an HN page.
scrape(parser: Optional[str] = None) → Any
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]
Fetch all urls, then return soups for all results.
bs_get_text_kwargs: Dict[str, Any] = {}
kwargs for beautifulsoup4's get_text.
default_parser: str = 'html.parser'
Default parser to use for BeautifulSoup.
raise_for_status: bool = False
Raise an exception if the HTTP status code denotes an error.
requests_kwargs: Dict[str, Any] = {}
kwargs for requests.
requests_per_second: int = 2
Max number of concurrent requests to make.
property web_path: str
web_paths: List[str]
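A minimal sketch; the item URL is a placeholder, and any Hacker News listing or comments-page URL should work the same way:

from langchain.document_loaders.hn import HNLoader

loader = HNLoader("https://news.ycombinator.com/item?id=34817881")
docs = loader.load()  # title, content, and post metadata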
langchain.document_loaders.chatgpt.concatenate_rows
langchain.document_loaders.chatgpt.concatenate_rows(message: dict, title: str) → str
Combine message information in a readable format, ready to be used.
Parameters:
message – Message to be concatenated.
title – Title of the conversation.
Returns: Concatenated message.

langchain.document_loaders.fauna.FaunaLoader
class langchain.document_loaders.fauna.FaunaLoader(query: str, page_content_field: str, secret: str, metadata_fields: Optional[Sequence[str]] = None)
Bases: BaseLoader
FaunaDB Loader.
Attributes
query (str) – The FQL query string to execute.
page_content_field (str) – The field that contains the content of each page.
secret (str) – The secret key for authenticating to FaunaDB.
metadata_fields (Optional[Sequence[str]]) – Optional list of field names to include in metadata.
Methods
__init__(query, page_content_field, secret)
lazy_load() – A lazy loader for Documents.
load() – Load data into Document objects.
load_and_split([text_splitter]) – Load Documents and split into chunks.
lazy_load() → Iterator[Document]
A lazy loader for Documents.
load() → List[Document]
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters: text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
Returns: List of Documents.

langchain.document_loaders.word_document.Docx2txtLoader
class langchain.document_loaders.word_document.Docx2txtLoader(file_path: str)
Bases: BaseLoader, ABC
Loads a DOCX with docx2txt and chunks it at character level. Defaults to checking for a local file; if the file is a web path, it is downloaded to a temporary file, which is used and then cleaned up after completion.
Initialize with file path.
Methods
__init__(file_path) – Initialize with file path.
lazy_load() – A lazy loader for Documents.
load() – Load the given path as a single page.
load_and_split([text_splitter]) – Load Documents and split into chunks.
lazy_load() → Iterator[Document]
A lazy loader for Documents.
load() → List[Document]
Load the given path as a single page.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters: text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
Returns: List of Documents.
langchain.document_loaders.notiondb.NotionDBLoader
class langchain.document_loaders.notiondb.NotionDBLoader(integration_token: str, database_id: str, request_timeout_sec: Optional[int] = 10)
Bases: BaseLoader
Notion DB Loader. Reads content from pages within a Notion database.
Parameters:
integration_token (str) – Notion integration token.
database_id (str) – Notion database id.
request_timeout_sec (int) – Timeout for Notion requests in seconds.
Initialize with parameters.
Methods
__init__(integration_token, database_id[, ...]) – Initialize with parameters.
lazy_load() – A lazy loader for Documents.
load() – Load documents from the Notion database.
load_and_split([text_splitter]) – Load Documents and split into chunks.
load_page(page_summary) – Read a page.
lazy_load() → Iterator[Document]
A lazy loader for Documents.
load() → List[Document]
Load documents from the Notion database.
Returns: List of documents.
Return type: List[Document]
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters: text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
Returns: List of Documents.
load_page(page_summary: Dict[str, Any]) → Document
Read a page.

langchain.document_loaders.unstructured.satisfies_min_unstructured_version
langchain.document_loaders.unstructured.satisfies_min_unstructured_version(min_version: str) → bool
Checks whether the installed unstructured version exceeds the minimum version for the feature in question.

langchain.document_loaders.tomarkdown.ToMarkdownLoader
class langchain.document_loaders.tomarkdown.ToMarkdownLoader(url: str, api_key: str)
Bases: BaseLoader
Loader that loads HTML to markdown using 2markdown.
Initialize with url and api key.
Methods
__init__(url, api_key) – Initialize with url and api key.
lazy_load() – Lazily load the file.
load() – Load file.
load_and_split([text_splitter]) – Load Documents and split into chunks.
lazy_load() → Iterator[Document]
Lazily load the file.
load() → List[Document]
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters: text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
Returns: List of Documents.
langchain.document_loaders.telegram.TelegramChatFileLoader
class langchain.document_loaders.telegram.TelegramChatFileLoader(path: str)
Bases: BaseLoader
Loader that loads a Telegram chat JSON directory dump.
Initialize with path.
Methods
__init__(path) – Initialize with path.
lazy_load() – A lazy loader for Documents.
load() – Load documents.
load_and_split([text_splitter]) – Load Documents and split into chunks.
lazy_load() → Iterator[Document]
A lazy loader for Documents.
load() → List[Document]
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters: text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
Returns: List of Documents.

langchain.document_loaders.pdf.PyPDFDirectoryLoader
class langchain.document_loaders.pdf.PyPDFDirectoryLoader(path: str, glob: str = '**/[!.]*.pdf', silent_errors: bool = False, load_hidden: bool = False, recursive: bool = False)
Bases: BaseLoader
Loads a directory of PDF files with pypdf and chunks them at character level. The loader also stores page numbers in metadata.
Methods
__init__(path[, glob, silent_errors, ...])
lazy_load() – A lazy loader for Documents.
load() – Load data into Document objects.
load_and_split([text_splitter]) – Load Documents and split into chunks.
lazy_load() → Iterator[Document]
A lazy loader for Documents.
load() → List[Document]
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters: text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
Returns: List of Documents.
langchain.document_loaders.blob_loaders.file_system.FileSystemBlobLoader
class langchain.document_loaders.blob_loaders.file_system.FileSystemBlobLoader(path: Union[str, Path], *, glob: str = '**/[!.]*', suffixes: Optional[Sequence[str]] = None, show_progress: bool = False)
Bases: BlobLoader
Blob loader for the local file system.
Example:
from langchain.document_loaders.blob_loaders import FileSystemBlobLoader
loader = FileSystemBlobLoader("/path/to/directory")
for blob in loader.yield_blobs():
    print(blob)
Initialize with path to directory and how to glob over it.
Parameters:
path – Path to directory to load from.
glob – Glob pattern relative to the specified path; by default set to pick up all non-hidden files.
suffixes – Provide to keep only files with these suffixes. Useful when wanting to keep files with different suffixes. Suffixes must include the dot, e.g. ".txt".
show_progress – If true, will show a progress bar as the files are loaded. This forces an iteration through all matching files to count them prior to loading them.
Examples:
# Recursively load all text files in a directory.
loader = FileSystemBlobLoader("/path/to/directory", glob="**/*.txt")
# Recursively load all non-hidden files in a directory.
loader = FileSystemBlobLoader("/path/to/directory", glob="**/[!.]*")
# Load all files in a directory without recursion.
loader = FileSystemBlobLoader("/path/to/directory", glob="*")
Methods
__init__(path, *[, glob, suffixes, ...]) – Initialize with path to directory and how to glob over it.
count_matching_files() – Count files that match the pattern without loading them.
yield_blobs() – Yield blobs that match the requested pattern.
count_matching_files() → int
Count files that match the pattern without loading them.
yield_blobs() → Iterable[Blob]
Yield blobs that match the requested pattern.
langchain.document_loaders.confluence.ConfluenceLoader
class langchain.document_loaders.confluence.ConfluenceLoader(url: str, api_key: Optional[str] = None, username: Optional[str] = None, oauth2: Optional[dict] = None, token: Optional[str] = None, cloud: Optional[bool] = True, number_of_retries: Optional[int] = 3, min_retry_seconds: Optional[int] = 2, max_retry_seconds: Optional[int] = 10, confluence_kwargs: Optional[dict] = None)
Bases: BaseLoader
Load Confluence pages. Port of https://llamahub.ai/l/confluence
This currently supports username/api_key, OAuth2 login, or personal access token authentication.
Specify a list of page_ids and/or a space_key to load the corresponding pages into Document objects; if both are specified, the union of both sets will be returned.
You can also specify a boolean include_attachments to include attachments. This is set to False by default; if set to True, all attachments will be downloaded and ConfluenceReader will extract the text from the attachments and add it to the Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG, SVG, Word and Excel.
The Confluence API supports different formats of page content. The storage format is the raw XML representation for storage. The view format is the HTML representation for viewing, with macros rendered as they would be seen by users. You can pass an enum content_format argument to load() to specify the content format; this is set to ContentFormat.STORAGE by default.
Hint: space_key and page_id can both be found in the URL of a page in Confluence: https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>
Example
from langchain.document_loaders import ConfluenceLoader
loader = ConfluenceLoader(
    url="https://yoursite.atlassian.com/wiki",
    username="me",
    api_key="12345"
)
documents = loader.load(space_key="SPACE", limit=50)
Parameters:
url (str) – Base URL of the Confluence site.
api_key (str, optional) – API key used together with username, defaults to None.
username (str, optional) – Confluence username, defaults to None.
oauth2 (dict, optional) – OAuth2 credentials, defaults to {}.
token (str, optional) – Personal access token, defaults to None.
cloud (bool, optional) – Whether the instance is Confluence Cloud, defaults to True.
number_of_retries (Optional[int], optional) – How many times to retry, defaults to 3.
min_retry_seconds (Optional[int], optional) – Defaults to 2.
max_retry_seconds (Optional[int], optional) – Defaults to 10.
confluence_kwargs (dict, optional) – Additional kwargs to initialize confluence with.
Raises:
ValueError – Errors while validating input.
ImportError – Required dependencies not installed.
of specific page IDs to load, defaults to None\nlabel (Optional[str], optional) \u2013 Get all pages with this label, defaults to None\ncql (Optional[str], optional) \u2013 CQL Expression, defaults to None\ninclude_restricted_content (bool, optional) \u2013 defaults to False\ninclude_archived_content (bool, optional) \u2013 Whether to include archived content,\ndefaults to False\ninclude_attachments (bool, optional) \u2013 defaults to False\ninclude_comments (bool, optional) \u2013 defaults to False", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html"} {"id": "7b2ac8fdd522-3", "text": "include_comments (bool, optional) \u2013 defaults to False\ncontent_format (ContentFormat) \u2013 Specify content format, defaults to ContentFormat.STORAGE\nlimit (int, optional) \u2013 Maximum number of pages to retrieve per request, defaults to 50\nmax_pages (int, optional) \u2013 Maximum number of pages to retrieve in total, defaults 1000\nocr_languages (str, optional) \u2013 The languages to use for the Tesseract agent. To use a\nlanguage, you\u2019ll first need to install the appropriate\nTesseract language pack.\nRaises\nValueError \u2013 _description_\nImportError \u2013 _description_\nReturns\n_description_\nReturn type\nList[Document]\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\npaginate_request(retrieval_method: Callable, **kwargs: Any) \u2192 List[source]\u00b6\nPaginate the various methods to retrieve groups of pages.\nUnfortunately, due to page size, sometimes the Confluence API\ndoesn\u2019t match the limit value. If limit is >100 confluence\nseems to cap the response to 100. Also, due to the Atlassian Python\npackage, we don\u2019t get the \u201cnext\u201d values from the \u201c_links\u201d key because\nthey only return the value from the result key. So here, the pagination\nstarts from 0 and goes until the max_pages, getting the limit number\nof pages with each request. 
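To make that concrete, here is a hedged sketch of such a pagination loop; paginate is a hypothetical stand-in, not the library's internals, and retrieval_method matches the documented parameter of paginate_request:

from typing import Any, Callable, List

def paginate(retrieval_method: Callable[..., List[Any]], max_pages: int = 1000,
             limit: int = 50, **kwargs: Any) -> List[Any]:
    results: List[Any] = []
    # Start at 0 and step by `limit` until `max_pages`, mirroring the
    # strategy described above (start/limit are Atlassian-style params).
    for start in range(0, max_pages, limit):
        batch = retrieval_method(start=start, limit=limit, **kwargs)
        results.extend(batch)
        if len(batch) < limit:  # short batch: no further pages to fetch
            break
    return results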
We have to manually check if there\nare more docs based on the length of the returned list of pages, rather than\njust checking for the presence of a next key in the response like this page\nwould have you do:", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html"} {"id": "7b2ac8fdd522-4", "text": "would have you do:\nhttps://developer.atlassian.com/server/confluence/pagination-in-the-rest-api/\nParameters\nretrieval_method (callable) \u2013 Function used to retrieve docs\nReturns\nList of documents\nReturn type\nList\nprocess_attachment(page_id: str, ocr_languages: Optional[str] = None) \u2192 List[str][source]\u00b6\nprocess_doc(link: str) \u2192 str[source]\u00b6\nprocess_image(link: str, ocr_languages: Optional[str] = None) \u2192 str[source]\u00b6\nprocess_page(page: dict, include_attachments: bool, include_comments: bool, content_format: ContentFormat, ocr_languages: Optional[str] = None) \u2192 Document[source]\u00b6\nprocess_pages(pages: List[dict], include_restricted_content: bool, include_attachments: bool, include_comments: bool, content_format: ContentFormat, ocr_languages: Optional[str] = None) \u2192 List[Document][source]\u00b6\nProcess a list of pages into a list of documents.\nprocess_pdf(link: str, ocr_languages: Optional[str] = None) \u2192 str[source]\u00b6\nprocess_svg(link: str, ocr_languages: Optional[str] = None) \u2192 str[source]\u00b6\nprocess_xls(link: str) \u2192 str[source]\u00b6\nstatic validate_init_args(url: Optional[str] = None, api_key: Optional[str] = None, username: Optional[str] = None, oauth2: Optional[dict] = None, token: Optional[str] = None) \u2192 Optional[List][source]\u00b6\nValidates proper combinations of init arguments", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html"} {"id": "006f383eca3b-0", "text": "langchain.document_loaders.parsers.pdf.PDFPlumberParser\u00b6\nclass langchain.document_loaders.parsers.pdf.PDFPlumberParser(text_kwargs: Optional[Mapping[str, Any]] = None)[source]\u00b6\nBases: BaseBlobParser\nParse PDFs with PDFPlumber.\nInitialize the parser.\nParameters\ntext_kwargs \u2013 Keyword arguments to pass to pdfplumber.Page.extract_text()\nMethods\n__init__([text_kwargs])\nInitialize the parser.\nlazy_parse(blob)\nLazily parse the blob.\nparse(blob)\nEagerly parse the blob into a document or documents.\nlazy_parse(blob: Blob) \u2192 Iterator[Document][source]\u00b6\nLazily parse the blob.\nparse(blob: Blob) \u2192 List[Document]\u00b6\nEagerly parse the blob into a document or documents.\nThis is a convenience method for interactive development environment.\nProduction applications should favor the lazy_parse method instead.\nSubclasses should generally not over-ride this parse method.\nParameters\nblob \u2013 Blob instance\nReturns\nList of documents", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PDFPlumberParser.html"} {"id": "f09748ccfef9-0", "text": "langchain.document_loaders.blob_loaders.youtube_audio.YoutubeAudioLoader\u00b6\nclass langchain.document_loaders.blob_loaders.youtube_audio.YoutubeAudioLoader(urls: List[str], save_dir: str)[source]\u00b6\nBases: BlobLoader\nLoad YouTube urls as audio file(s).\nMethods\n__init__(urls,\u00a0save_dir)\nyield_blobs()\nYield audio blobs for each url.\nyield_blobs() \u2192 Iterable[Blob][source]\u00b6\nYield audio blobs for each url.", "source": 
"https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.youtube_audio.YoutubeAudioLoader.html"} {"id": "5bd1ccd998de-0", "text": "langchain.document_loaders.pdf.MathpixPDFLoader\u00b6\nclass langchain.document_loaders.pdf.MathpixPDFLoader(file_path: str, processed_file_format: str = 'mmd', max_wait_time_seconds: int = 500, should_clean_pdf: bool = False, **kwargs: Any)[source]\u00b6\nBases: BasePDFLoader\nInitialize with file path.\nMethods\n__init__(file_path[,\u00a0processed_file_format,\u00a0...])\nInitialize with file path.\nclean_pdf(contents)\nget_processed_pdf(pdf_id)\nlazy_load()\nA lazy loader for Documents.\nload()\nLoad data into Document objects.\nload_and_split([text_splitter])\nLoad Documents and split into chunks.\nsend_pdf()\nwait_for_processing(pdf_id)\nAttributes\ndata\nheaders\nsource\nurl\nclean_pdf(contents: str) \u2192 str[source]\u00b6\nget_processed_pdf(pdf_id: str) \u2192 str[source]\u00b6\nlazy_load() \u2192 Iterator[Document]\u00b6\nA lazy loader for Documents.\nload() \u2192 List[Document][source]\u00b6\nLoad data into Document objects.\nload_and_split(text_splitter: Optional[TextSplitter] = None) \u2192 List[Document]\u00b6\nLoad Documents and split into chunks. Chunks are returned as Documents.\nParameters\ntext_splitter \u2013 TextSplitter instance to use for splitting documents.\nDefaults to RecursiveCharacterTextSplitter.\nReturns\nList of Documents.\nsend_pdf() \u2192 str[source]\u00b6\nwait_for_processing(pdf_id: str) \u2192 None[source]\u00b6\nproperty data: dict\u00b6\nproperty headers: dict\u00b6\nproperty source: str\u00b6\nproperty url: str\u00b6", "source": "https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.MathpixPDFLoader.html"} {"id": "4b5e1dc0563e-0", "text": "langchain.experimental.autonomous_agents.baby_agi.task_creation.TaskCreationChain\u00b6\nclass langchain.experimental.autonomous_agents.baby_agi.task_creation.TaskCreationChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, prompt: BasePromptTemplate, llm: BaseLanguageModel, output_key: str = 'text', output_parser: BaseLLMOutputParser = None, return_final_only: bool = True, llm_kwargs: dict = None)[source]\u00b6\nBases: LLMChain\nChain to generates tasks.\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nDeprecated, use callbacks instead.\nparam callbacks: Callbacks = None\u00b6\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nparam llm: BaseLanguageModel [Required]\u00b6\nLanguage model to call.\nparam llm_kwargs: dict [Optional]\u00b6\nparam memory: Optional[BaseMemory] = None\u00b6\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. 
At the end, it saves any returned variables.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.baby_agi.task_creation.TaskCreationChain.html"} {"id": "4b5e1dc0563e-1", "text": "them along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam output_key: str = 'text'\u00b6\nparam output_parser: BaseLLMOutputParser [Optional]\u00b6\nOutput parser to use.\nDefaults to one that takes the most likely string but does not change it\notherwise.\nparam prompt: BasePromptTemplate [Required]\u00b6\nPrompt object to use.\nparam return_final_only: bool = True\u00b6\nWhether to return only the final parsed result. Defaults to True.\nIf false, will return a bunch of extra information about the generation.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.baby_agi.task_creation.TaskCreationChain.html"} {"id": "4b5e1dc0563e-2", "text": "Execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. 
Should contain all outputs specified inChain.output_keys.\nasync aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nUtilize the LLM generate method for speed gains.\nasync aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Union[str, List[str], Dict[str, str]]]\u00b6\nCall apply and then parse the results.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.baby_agi.task_creation.TaskCreationChain.html"} {"id": "4b5e1dc0563e-3", "text": "Call apply and then parse the results.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. 
Should contain all outputs specified inChain.output_keys.\nasync agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) \u2192 LLMResult\u00b6\nGenerate LLM result from inputs.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.baby_agi.task_creation.TaskCreationChain.html"} {"id": "4b5e1dc0563e-4", "text": "Generate LLM result from inputs.\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nUtilize the LLM generate method for speed gains.\napply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 Sequence[Union[str, List[str], Dict[str, str]]]\u00b6\nCall apply and then parse the results.\nasync apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 str\u00b6\nFormat prompt with kwargs and pass to LLM.\nParameters\ncallbacks \u2013 Callbacks to pass to LLMChain\n**kwargs \u2013 Keys to pass to prompt template.\nReturns\nCompletion from LLM.\nExample\ncompletion = llm.predict(adjective=\"funny\")\nasync apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[str, List[str], Dict[str, str]]\u00b6\nCall apredict and then parse the results.\nasync aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) \u2192 Tuple[List[PromptValue], Optional[List[str]]]\u00b6\nPrepare prompts from inputs.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.baby_agi.task_creation.TaskCreationChain.html"} {"id": "4b5e1dc0563e-5", "text": "Convenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.baby_agi.task_creation.TaskCreationChain.html"} {"id": "4b5e1dc0563e-6", "text": "# -> \"The temperature in Boise is...\"\ncreate_outputs(llm_result: LLMResult) \u2192 List[Dict[str, Any]]\u00b6\nCreate outputs from response.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to benull.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n..code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nclassmethod from_llm(llm: BaseLanguageModel, verbose: bool = True) \u2192 LLMChain[source]\u00b6\nGet the response parser.\nclassmethod from_string(llm: BaseLanguageModel, template: str) \u2192 LLMChain\u00b6\nCreate LLMChain from LLM and template.\ngenerate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) \u2192 LLMResult\u00b6\nGenerate LLM result from inputs.\npredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 str\u00b6\nFormat prompt with kwargs and pass to LLM.\nParameters\ncallbacks \u2013 Callbacks to pass to LLMChain\n**kwargs \u2013 Keys to pass to prompt template.\nReturns\nCompletion from LLM.\nExample\ncompletion = llm.predict(adjective=\"funny\")\npredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Union[str, List[str], Dict[str, Any]]\u00b6\nCall predict and then parse the results.\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.baby_agi.task_creation.TaskCreationChain.html"} {"id": "4b5e1dc0563e-7", "text": "Validate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. 
Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) \u2192 Tuple[List[PromptValue], Optional[List[str]]]\u00b6\nPrepare prompts from inputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.baby_agi.task_creation.TaskCreationChain.html"} {"id": "4b5e1dc0563e-8", "text": "info along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to benull.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.baby_agi.task_creation.TaskCreationChain.html"} {"id": "4b5e1dc0563e-9", "text": "validator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.baby_agi.task_creation.TaskCreationChain.html"} {"id": "a311bbeefbf2-0", "text": "langchain.experimental.autonomous_agents.autogpt.output_parser.BaseAutoGPTOutputParser\u00b6\nclass langchain.experimental.autonomous_agents.autogpt.output_parser.BaseAutoGPTOutputParser[source]\u00b6\nBases: BaseOutputParser\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str\u00b6\nInstructions on how the LLM output should be formatted.\nabstract parse(text: str) \u2192 AutoGPTAction[source]\u00b6\nReturn AutoGPTAction\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, whichis assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. 
The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.autogpt.output_parser.BaseAutoGPTOutputParser.html"} {"id": "a311bbeefbf2-1", "text": "e.g. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\ne.g. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.autogpt.output_parser.BaseAutoGPTOutputParser.html"} {"id": "c760c71ec940-0", "text": "langchain.experimental.plan_and_execute.executors.agent_executor.load_agent_executor\u00b6\nlangchain.experimental.plan_and_execute.executors.agent_executor.load_agent_executor(llm: BaseLanguageModel, tools: List[BaseTool], verbose: bool = False, include_task_in_prompt: bool = False) \u2192 ChainExecutor[source]\u00b6\nLoad an agent executor.\nParameters\nllm \u2013 BaseLanguageModel\ntools \u2013 List[BaseTool]\nverbose \u2013 bool. Defaults to False.\ninclude_task_in_prompt \u2013 bool. Defaults to False.\nReturns\nChainExecutor
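As a brief usage sketch (assuming load_agent_executor is re-exported from langchain.experimental.plan_and_execute, that an OpenAI API key is configured, and that the tool selection is arbitrary):

from langchain.agents import load_tools
from langchain.experimental.plan_and_execute import load_agent_executor
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)  # any BaseLanguageModel should work here
tools = load_tools(["llm-math"], llm=llm)
# Returns a ChainExecutor whose step()/astep() a plan-and-execute agent
# can call once per planned step (see ChainExecutor below).
executor = load_agent_executor(llm, tools, verbose=True)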
", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.executors.agent_executor.load_agent_executor.html"} {"id": "325dbdb2413d-0", "text": "langchain.experimental.plan_and_execute.schema.ListStepContainer\u00b6\nclass langchain.experimental.plan_and_execute.schema.ListStepContainer(*, steps: List[Tuple[Step, StepResponse]] = None)[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam steps: List[Tuple[langchain.experimental.plan_and_execute.schema.Step, langchain.experimental.plan_and_execute.schema.StepResponse]] [Optional]\u00b6\nadd_step(step: Step, step_response: StepResponse) \u2192 None[source]\u00b6\nget_final_response() \u2192 str[source]\u00b6\nget_steps() \u2192 List[Tuple[Step, StepResponse]][source]\u00b6", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.schema.ListStepContainer.html"} {"id": "1b90f94e95f7-0", "text": "langchain.experimental.plan_and_execute.planners.base.BasePlanner\u00b6\nclass langchain.experimental.plan_and_execute.planners.base.BasePlanner[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nabstract async aplan(inputs: dict, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Plan[source]\u00b6\nGiven input, decide what to do.\nabstract plan(inputs: dict, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Plan[source]\u00b6\nGiven input, decide what to do.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.planners.base.BasePlanner.html"} {"id": "81c986e7f642-0", "text": "langchain.experimental.plan_and_execute.planners.chat_planner.load_chat_planner\u00b6\nlangchain.experimental.plan_and_execute.planners.chat_planner.load_chat_planner(llm: BaseLanguageModel, system_prompt: str = \"Let's first understand the problem and devise a plan to solve the problem. Please output the plan starting with the header 'Plan:' and then followed by a numbered list of steps. Please make the plan the minimum number of steps required to accurately complete the task. If the task is a question, the final step should almost always be 'Given the above steps taken, please respond to the users original question'. 
At the end of your plan, say ''\") \u2192 LLMPlanner[source]\u00b6\nLoad a chat planner.\n:param llm: Language model.\n:param system_prompt: System prompt.\nReturns\nLLMPlanner", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.planners.chat_planner.load_chat_planner.html"} {"id": "0fec99b22703-0", "text": "langchain.experimental.autonomous_agents.autogpt.output_parser.AutoGPTOutputParser\u00b6\nclass langchain.experimental.autonomous_agents.autogpt.output_parser.AutoGPTOutputParser[source]\u00b6\nBases: BaseAutoGPTOutputParser\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str\u00b6\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 AutoGPTAction[source]\u00b6\nReturn AutoGPTAction\nparse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, whichis assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.autogpt.output_parser.AutoGPTOutputParser.html"} {"id": "0fec99b22703-1", "text": "property lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.autogpt.output_parser.AutoGPTOutputParser.html"} {"id": "9f7d72fe8430-0", "text": "langchain.experimental.autonomous_agents.autogpt.memory.AutoGPTMemory\u00b6\nclass langchain.experimental.autonomous_agents.autogpt.memory.AutoGPTMemory(*, chat_memory: BaseChatMessageHistory = None, output_key: Optional[str] = None, input_key: Optional[str] = None, return_messages: bool = False, retriever: VectorStoreRetriever)[source]\u00b6\nBases: BaseChatMemory\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam chat_memory: BaseChatMessageHistory [Optional]\u00b6\nparam input_key: Optional[str] = None\u00b6\nparam output_key: Optional[str] = None\u00b6\nparam retriever: langchain.vectorstores.base.VectorStoreRetriever [Required]\u00b6\nVectorStoreRetriever object to connect to.\nparam return_messages: bool = False\u00b6\nclear() \u2192 None\u00b6\nClear memory contents.\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, Any][source]\u00b6\nReturn key-value pairs given the text input to the chain.\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) \u2192 None\u00b6\nSave context from this conversation to buffer.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.autogpt.memory.AutoGPTMemory.html"} {"id": "9f7d72fe8430-1", "text": "Return a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty memory_variables: List[str]\u00b6\nThe string keys this memory class will add to chain inputs.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.autogpt.memory.AutoGPTMemory.html"} {"id": "1fb5bf10c18a-0", "text": "langchain.experimental.plan_and_execute.schema.BaseStepContainer\u00b6\nclass langchain.experimental.plan_and_execute.schema.BaseStepContainer[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nabstract add_step(step: Step, step_response: StepResponse) \u2192 None[source]\u00b6\nAdd step and step response to the container.\nabstract get_final_response() \u2192 str[source]\u00b6\nReturn the final response based on steps taken.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.schema.BaseStepContainer.html"} {"id": "b3f7720e9f9f-0", "text": "langchain.experimental.plan_and_execute.planners.base.LLMPlanner\u00b6\nclass langchain.experimental.plan_and_execute.planners.base.LLMPlanner(*, llm_chain: LLMChain, output_parser: PlanOutputParser, stop: Optional[List] = None)[source]\u00b6\nBases: BasePlanner\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam llm_chain: langchain.chains.llm.LLMChain [Required]\u00b6\nparam output_parser: langchain.experimental.plan_and_execute.schema.PlanOutputParser [Required]\u00b6\nparam stop: Optional[List] = None\u00b6\nasync aplan(inputs: dict, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Plan[source]\u00b6\nGiven input, decide what to do.\nplan(inputs: dict, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 Plan[source]\u00b6\nGiven input, decide what to do.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.planners.base.LLMPlanner.html"} {"id": "da5c7bd61b5f-0", "text": "langchain.experimental.plan_and_execute.schema.Step\u00b6\nclass langchain.experimental.plan_and_execute.schema.Step(*, value: str)[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam value: str [Required]\u00b6", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.schema.Step.html"} {"id": "197f676fad6c-0", "text": "langchain.experimental.plan_and_execute.schema.StepResponse\u00b6\nclass langchain.experimental.plan_and_execute.schema.StepResponse(*, response: str)[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam response: str [Required]\u00b6", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.schema.StepResponse.html"} {"id": "5ffdb773a008-0", "text": 
"langchain.experimental.generative_agents.memory.GenerativeAgentMemory\u00b6\nclass langchain.experimental.generative_agents.memory.GenerativeAgentMemory(*, llm: BaseLanguageModel, memory_retriever: TimeWeightedVectorStoreRetriever, verbose: bool = False, reflection_threshold: Optional[float] = None, current_plan: List[str] = [], importance_weight: float = 0.15, aggregate_importance: float = 0.0, max_tokens_limit: int = 1200, queries_key: str = 'queries', most_recent_memories_token_key: str = 'recent_memories_token', add_memory_key: str = 'add_memory', relevant_memories_key: str = 'relevant_memories', relevant_memories_simple_key: str = 'relevant_memories_simple', most_recent_memories_key: str = 'most_recent_memories', now_key: str = 'now', reflecting: bool = False)[source]\u00b6\nBases: BaseMemory\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam add_memory_key: str = 'add_memory'\u00b6\nparam aggregate_importance: float = 0.0\u00b6\nTrack the sum of the \u2018importance\u2019 of recent memories.\nTriggers reflection when it reaches reflection_threshold.\nparam current_plan: List[str] = []\u00b6\nThe current plan of the agent.\nparam importance_weight: float = 0.15\u00b6\nHow much weight to assign the memory importance.\nparam llm: langchain.schema.language_model.BaseLanguageModel [Required]\u00b6\nThe core language model.\nparam max_tokens_limit: int = 1200\u00b6\nparam memory_retriever: langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever [Required]\u00b6\nThe retriever to fetch related memories.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.generative_agents.memory.GenerativeAgentMemory.html"} {"id": "5ffdb773a008-1", "text": "The retriever to fetch related memories.\nparam most_recent_memories_key: str = 'most_recent_memories'\u00b6\nparam most_recent_memories_token_key: str = 'recent_memories_token'\u00b6\nparam now_key: str = 'now'\u00b6\nparam queries_key: str = 'queries'\u00b6\nparam reflecting: bool = False\u00b6\nparam reflection_threshold: Optional[float] = None\u00b6\nWhen aggregate_importance exceeds reflection_threshold, stop to reflect.\nparam relevant_memories_key: str = 'relevant_memories'\u00b6\nparam relevant_memories_simple_key: str = 'relevant_memories_simple'\u00b6\nparam verbose: bool = False\u00b6\nadd_memories(memory_content: str, now: Optional[datetime] = None) \u2192 List[str][source]\u00b6\nAdd an observations or memories to the agent\u2019s memory.\nadd_memory(memory_content: str, now: Optional[datetime] = None) \u2192 List[str][source]\u00b6\nAdd an observation or memory to the agent\u2019s memory.\nchain(prompt: PromptTemplate) \u2192 LLMChain[source]\u00b6\nclear() \u2192 None[source]\u00b6\nClear memory contents.\nfetch_memories(observation: str, now: Optional[datetime] = None) \u2192 List[Document][source]\u00b6\nFetch related memories.\nformat_memories_detail(relevant_memories: List[Document]) \u2192 str[source]\u00b6\nformat_memories_simple(relevant_memories: List[Document]) \u2192 str[source]\u00b6\nload_memory_variables(inputs: Dict[str, Any]) \u2192 Dict[str, str][source]\u00b6\nReturn key-value pairs given the text input to the chain.\npause_to_reflect(now: Optional[datetime] = None) \u2192 List[str][source]\u00b6\nReflect on recent observations and generate \u2018insights\u2019.\nsave_context(inputs: Dict[str, Any], outputs: Dict[str, Any]) \u2192 
None[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.generative_agents.memory.GenerativeAgentMemory.html"} {"id": "5ffdb773a008-2", "text": "Save the context of this model run to memory.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty memory_variables: List[str]\u00b6\nInput keys this memory class will load dynamically.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.generative_agents.memory.GenerativeAgentMemory.html"} {"id": "98dbf3e064ac-0", "text": "langchain.experimental.autonomous_agents.autogpt.output_parser.preprocess_json_input\u00b6\nlangchain.experimental.autonomous_agents.autogpt.output_parser.preprocess_json_input(input_str: str) \u2192 str[source]\u00b6\nPreprocesses a string to be parsed as json.\nReplace single backslashes with double backslashes,\nwhile leaving already escaped ones intact.\nParameters\ninput_str \u2013 String to be preprocessed\nReturns\nPreprocessed string", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.autogpt.output_parser.preprocess_json_input.html"} {"id": "385f81b3d1bd-0", "text": "langchain.experimental.plan_and_execute.executors.base.ChainExecutor\u00b6\nclass langchain.experimental.plan_and_execute.executors.base.ChainExecutor(*, chain: Chain)[source]\u00b6\nBases: BaseExecutor\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam chain: langchain.chains.base.Chain [Required]\u00b6\nasync astep(inputs: dict, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 StepResponse[source]\u00b6\nTake step.\nstep(inputs: dict, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 StepResponse[source]\u00b6\nTake step.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.executors.base.ChainExecutor.html"} {"id": "9624891dfc63-0", "text": "langchain.experimental.llms.jsonformer_decoder.JsonFormer\u00b6\nclass langchain.experimental.llms.jsonformer_decoder.JsonFormer(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, pipeline: Any = None, model_id: str = 'gpt2', model_kwargs: Optional[dict] = None, pipeline_kwargs: Optional[dict] = None, json_schema: dict, max_new_tokens: int = 200, debug: bool = False)[source]\u00b6\nBases: 
HuggingFacePipeline\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam debug: bool = False\u00b6\nDebug mode.\nparam json_schema: dict [Required]\u00b6\nThe JSON Schema to complete.\nparam max_new_tokens: int = 200\u00b6\nMaximum number of new tokens to generate.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam model_id: str = 'gpt2'\u00b6\nModel name to use.\nparam model_kwargs: Optional[dict] = None\u00b6\nKey word arguments passed to the model.\nparam pipeline_kwargs: Optional[dict] = None\u00b6\nKey word arguments passed to the pipeline.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.llms.jsonformer_decoder.JsonFormer.html"} {"id": "9624891dfc63-1", "text": "param verbose: bool [Optional]\u00b6\nWhether to print out response text.\n__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.llms.jsonformer_decoder.JsonFormer.html"} {"id": "9624891dfc63-2", "text": "first occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. 
These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the topcandidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the topcandidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator check_jsonformer_installation\u00a0 \u00bb\u00a0 all fields[source]\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.llms.jsonformer_decoder.JsonFormer.html"} {"id": "9624891dfc63-3", "text": "dict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\nclassmethod from_model_id(model_id: str, task: str, device: int = - 1, model_kwargs: Optional[dict] = None, pipeline_kwargs: Optional[dict] = None, **kwargs: Any) \u2192 LLM\u00b6\nConstruct the pipeline object from model_id and task.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. 
Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.llms.jsonformer_decoder.JsonFormer.html"} {"id": "9624891dfc63-4", "text": "functionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in order they occurin the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specifictypes of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.llms.jsonformer_decoder.JsonFormer.html"} {"id": "9624891dfc63-5", "text": "to the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text,use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. 
validator raise_deprecation » all fields
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
Configuration for this pydantic object.
extra = 'forbid'
langchain.experimental.llms.jsonformer_decoder.import_jsonformer
langchain.experimental.llms.jsonformer_decoder.import_jsonformer() → jsonformer[source]
Lazily import jsonformer.
langchain.experimental.generative_agents.generative_agent.GenerativeAgent
class langchain.experimental.generative_agents.generative_agent.GenerativeAgent(*, name: str, age: Optional[int] = None, traits: str = 'N/A', status: str, memory: GenerativeAgentMemory, llm: BaseLanguageModel, verbose: bool = False, summary: str = '', summary_refresh_seconds: int = 3600, last_refreshed: datetime = None, daily_summaries: List[str] = None)[source]
Bases: BaseModel
A character with memory and innate characteristics.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param age: Optional[int] = None
The optional age of the character.
param daily_summaries: List[str] [Optional]
Summary of the events in the plan that the agent took.
param last_refreshed: datetime.datetime [Optional]
The last time the character’s summary was regenerated.
param llm: langchain.schema.language_model.BaseLanguageModel [Required]
The underlying language model.
param memory: langchain.experimental.generative_agents.memory.GenerativeAgentMemory [Required]
The memory object that combines relevance, recency, and ‘importance’.
param name: str [Required]
The character’s name.
param status: str [Required]
The character’s current status.
param summary: str = ''
Stateful self-summary generated via reflection on the character’s memory.
param summary_refresh_seconds: int = 3600
How frequently to re-generate the summary.
param traits: str = 'N/A'
Permanent traits to ascribe to the character.
param verbose: bool = False
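A construction sketch for the parameters above. The agent_memory and llm objects are placeholders assumed to be built elsewhere, and the trait and status strings are illustrative:
.. code-block:: python
from langchain.experimental.generative_agents.generative_agent import GenerativeAgent

tommie = GenerativeAgent(
    name="Tommie",
    age=25,
    traits="anxious, likes design",  # permanent traits
    status="looking for a job",      # current status
    memory=agent_memory,             # a GenerativeAgentMemory built elsewhere
    llm=llm,                         # any BaseLanguageModel
)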
chain(prompt: PromptTemplate) → LLMChain[source]
generate_dialogue_response(observation: str, now: Optional[datetime] = None) → Tuple[bool, str][source]
Generate a dialogue response to a given observation.
generate_reaction(observation: str, now: Optional[datetime] = None) → Tuple[bool, str][source]
React to a given observation.
get_full_header(force_refresh: bool = False, now: Optional[datetime] = None) → str[source]
Return a full header of the agent’s status, summary, and current time.
get_summary(force_refresh: bool = False, now: Optional[datetime] = None) → str[source]
Return a descriptive summary of the agent.
summarize_related_memories(observation: str) → str[source]
Summarize memories that are most relevant to an observation.
model Config[source]
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True
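A short sketch of the reaction and summary methods documented above, continuing the tommie instance from the previous sketch:
.. code-block:: python
# generate_reaction returns a (did_react, reaction_text) tuple.
ok, reaction = tommie.generate_reaction("Tommie notices a new coffee shop opening.")
print(reaction)

# force_refresh=True re-runs reflection instead of serving the cached summary.
print(tommie.get_summary(force_refresh=True))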
langchain.experimental.autonomous_agents.autogpt.prompt.AutoGPTPrompt
class langchain.experimental.autonomous_agents.autogpt.prompt.AutoGPTPrompt(*, input_variables: List[str], output_parser: Optional[BaseOutputParser] = None, partial_variables: Mapping[str, Union[str, Callable[[], str]]] = None, ai_name: str, ai_role: str, tools: List[BaseTool], token_counter: Callable[[str], int], send_token_limit: int = 4196)[source]
Bases: BaseChatPromptTemplate, BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_name: str [Required]
param ai_role: str [Required]
param input_variables: List[str] [Required]
A list of the names of the variables the prompt template expects.
param output_parser: Optional[BaseOutputParser] = None
How to parse the output of calling an LLM on this formatted prompt.
param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]
param send_token_limit: int = 4196
param token_counter: Callable[[str], int] [Required]
param tools: List[langchain.tools.base.BaseTool] [Required]
construct_full_prompt(goals: List[str]) → str[source]
dict(**kwargs: Any) → Dict
Return dictionary representation of prompt.
format(**kwargs: Any) → str
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
format_messages(**kwargs: Any) → List[BaseMessage][source]
Format kwargs into a list of messages.
format_prompt(**kwargs: Any) → PromptValue
Create Chat Messages.
partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate
Return a partial of the prompt template.
save(file_path: Union[Path, str]) → None
Save the prompt.
Parameters
file_path – Path to directory to save prompt to.
Example:
.. code-block:: python
prompt.save(file_path="path/prompt.yaml")
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
validator validate_variable_names » all fields
Validate that variable names do not include restricted names.
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True
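A construction sketch for AutoGPTPrompt. The tools list is assumed to be defined elsewhere, token_counter reuses an LLM's documented get_num_tokens, and the exact input_variables list is an assumption modeled on typical AutoGPT wiring:
.. code-block:: python
from langchain.experimental.autonomous_agents.autogpt.prompt import AutoGPTPrompt

prompt = AutoGPTPrompt(
    ai_name="ResearchGPT",
    ai_role="an assistant that gathers and cites facts",
    tools=tools,                       # List[BaseTool], defined elsewhere
    token_counter=llm.get_num_tokens,  # Callable[[str], int]
    send_token_limit=4196,             # the documented default
    input_variables=["memory", "messages", "goals", "user_input"],  # assumption
)
full_prompt = prompt.construct_full_prompt(goals=["Find three primary sources"])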
langchain.experimental.plan_and_execute.schema.PlanOutputParser
class langchain.experimental.plan_and_execute.schema.PlanOutputParser[source]
Bases: BaseOutputParser
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
dict(**kwargs: Any) → Dict
Return dictionary representation of output parser.
get_format_instructions() → str
Instructions on how the LLM output should be formatted.
abstract parse(text: str) → Plan[source]
Parse text into a plan.
parse_result(result: List[Generation]) → T
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed to be different candidate outputs for a single model input.
Returns
Structured output.
parse_with_prompt(completion: str, prompt: PromptValue) → Any
Parse the output of an LLM call with the input prompt for context.
The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
Parameters
completion – String output of language model.
prompt – Input PromptValue.
Returns
Structured output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
extra = 'ignore'
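Because parse is abstract, PlanOutputParser is used by subclassing. A minimal sketch, assuming the Step and Plan models from the same schema module (with Step(value=...) and Plan(steps=[...]) constructors):
.. code-block:: python
from langchain.experimental.plan_and_execute.schema import Plan, PlanOutputParser, Step

class LinePlanParser(PlanOutputParser):
    """Treat each non-empty output line as one step of the plan."""

    def parse(self, text: str) -> Plan:
        steps = [Step(value=line.strip()) for line in text.splitlines() if line.strip()]
        return Plan(steps=steps)

plan = LinePlanParser().parse("research topic\ndraft outline\nwrite summary")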
langchain.experimental.autonomous_agents.baby_agi.task_execution.TaskExecutionChain
class langchain.experimental.autonomous_agents.baby_agi.task_execution.TaskExecutionChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, prompt: BasePromptTemplate, llm: BaseLanguageModel, output_key: str = 'text', output_parser: BaseLLMOutputParser = None, return_final_only: bool = True, llm_kwargs: dict = None)[source]
Bases: LLMChain
Chain to execute tasks.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None
Deprecated, use callbacks instead.
param callbacks: Callbacks = None
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods; see the Callback docs for full details.
param llm: BaseLanguageModel [Required]
Language model to call.
param llm_kwargs: dict [Optional]
param memory: Optional[BaseMemory] = None
Optional memory object. Defaults to None.
Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see the memory docs for the full catalog.
param metadata: Optional[Dict[str, Any]] = None
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks.
You can use these to e.g. identify a specific instance of a chain with its use case.
param output_key: str = 'text'
param output_parser: BaseLLMOutputParser [Optional]
Output parser to use.
Defaults to one that takes the most likely string but does not change it otherwise.
param prompt: BasePromptTemplate [Required]
Prompt object to use.
param return_final_only: bool = True
Whether to return only the final parsed result. Defaults to True.
If False, will return extra information about the generation.
param tags: Optional[List[str]] = None
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks.
You can use these to e.g. identify a specific instance of a chain with its use case.
param verbose: bool [Optional]
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]
Utilize the LLM generate method for speed gains.
async aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]
Call apply and then parse the results.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → LLMResult
Generate LLM result from inputs.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]
Utilize the LLM generate method for speed gains.
apply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]
Call apply and then parse the results.
async apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain.
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = await chain.apredict(adjective="funny")
async apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, str]]
Call apredict and then parse the results.
async aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]
Prepare prompts from inputs.
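A sketch of the async entry point documented above. It assumes an execution_chain instance and that its prompt uses BabyAGI-style objective/context/task input keys; both names are assumptions:
.. code-block:: python
import asyncio

async def main() -> None:
    # acall takes one input dict and returns a dict of named outputs.
    out = await execution_chain.acall(
        {"objective": "Write a report", "context": "none yet", "task": "Outline it"}
    )
    print(out["text"])  # 'text' is the documented default output_key

asyncio.run(main())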
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
create_outputs(llm_result: LLMResult) → List[Dict[str, Any]]
Create outputs from response.
dict(**kwargs: Any) → Dict
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to the default pydantic.BaseModel.dict method.
Returns
A dictionary representation of the chain.
Example:
.. code-block:: python
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_llm(llm: BaseLanguageModel, verbose: bool = True) → LLMChain[source]
Construct a TaskExecutionChain from an LLM.
classmethod from_string(llm: BaseLanguageModel, template: str) → LLMChain
Create LLMChain from LLM and template.
generate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → LLMResult
Generate LLM result from inputs.
predict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain.
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = chain.predict(adjective="funny")
predict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, Any]]
Call predict and then parse the results.
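A sketch of the from_string convenience constructor documented above; the template text is illustrative:
.. code-block:: python
from langchain.experimental.autonomous_agents.baby_agi.task_execution import TaskExecutionChain

chain = TaskExecutionChain.from_string(
    llm=llm,
    template="You are an AI who performs one task: {task}",
)
print(chain.predict(task="Summarize today's meeting notes"))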
{"id": "a97207446420-7", "text": "Validate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) \u2192 Tuple[List[PromptValue], Optional[List[str]]]\u00b6\nPrepare prompts from inputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.baby_agi.task_execution.TaskExecutionChain.html"} {"id": "a97207446420-8", "text": "info along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. 
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True
extra = 'forbid'
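The documented from_llm classmethod wires in the chain's built-in task-execution prompt; a sketch (the keyword names passed to run are assumptions about that prompt's input keys):
.. code-block:: python
from langchain.experimental.autonomous_agents.baby_agi.task_execution import TaskExecutionChain

execution_chain = TaskExecutionChain.from_llm(llm=llm, verbose=False)
result = execution_chain.run(
    objective="Plan a study schedule",     # assumed prompt key
    context="Previously completed: none",  # assumed prompt key
    task="List the first three steps",     # assumed prompt key
)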
langchain.experimental.autonomous_agents.autogpt.output_parser.AutoGPTAction
class langchain.experimental.autonomous_agents.autogpt.output_parser.AutoGPTAction(name, args)[source]
Bases: NamedTuple
Create new instance of AutoGPTAction(name, args)
Methods
__init__()
count(value, /)
Return number of occurrences of value.
index(value[, start, stop])
Return first index of value.
Attributes
args
Alias for field number 1
name
Alias for field number 0
count(value, /)
Return number of occurrences of value.
index(value, start=0, stop=9223372036854775807, /)
Return first index of value.
Raises ValueError if the value is not present.
args: Dict
Alias for field number 1
name: str
Alias for field number 0
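Since AutoGPTAction is a NamedTuple, it supports keyword construction, field access, and positional unpacking; the field values below are illustrative:
.. code-block:: python
from langchain.experimental.autonomous_agents.autogpt.output_parser import AutoGPTAction

action = AutoGPTAction(name="search", args={"query": "langchain"})
name, args = action  # tuple unpacking
assert name == action.name == "search"
assert args["query"] == "langchain"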
langchain.experimental.autonomous_agents.baby_agi.task_prioritization.TaskPrioritizationChain
class langchain.experimental.autonomous_agents.baby_agi.task_prioritization.TaskPrioritizationChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, prompt: BasePromptTemplate, llm: BaseLanguageModel, output_key: str = 'text', output_parser: BaseLLMOutputParser = None, return_final_only: bool = True, llm_kwargs: dict = None)[source]
Bases: LLMChain
Chain to prioritize tasks.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None
Deprecated, use callbacks instead.
param callbacks: Callbacks = None
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods; see the Callback docs for full details.
param llm: BaseLanguageModel [Required]
Language model to call.
param llm_kwargs: dict [Optional]
param memory: Optional[BaseMemory] = None
Optional memory object. Defaults to None.
Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see the memory docs for the full catalog.
param metadata: Optional[Dict[str, Any]] = None
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks.
You can use these to e.g. identify a specific instance of a chain with its use case.
param output_key: str = 'text'
param output_parser: BaseLLMOutputParser [Optional]
Output parser to use.
Defaults to one that takes the most likely string but does not change it otherwise.
param prompt: BasePromptTemplate [Required]
Prompt object to use.
param return_final_only: bool = True
Whether to return only the final parsed result. Defaults to True.
If False, will return extra information about the generation.
param tags: Optional[List[str]] = None
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks.
You can use these to e.g. identify a specific instance of a chain with its use case.
param verbose: bool [Optional]
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]
Utilize the LLM generate method for speed gains.
async aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]
Call apply and then parse the results.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → LLMResult
Generate LLM result from inputs.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]
Utilize the LLM generate method for speed gains.
apply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]
Call apply and then parse the results.
async apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain.
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = await chain.apredict(adjective="funny")
async apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, str]]
Call apredict and then parse the results.
async aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]
Prepare prompts from inputs.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
create_outputs(llm_result: LLMResult) → List[Dict[str, Any]]
Create outputs from response.
dict(**kwargs: Any) → Dict
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to the default pydantic.BaseModel.dict method.
Returns
A dictionary representation of the chain.
Example:
.. code-block:: python
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_llm(llm: BaseLanguageModel, verbose: bool = True) → LLMChain[source]
Construct a TaskPrioritizationChain from an LLM.
classmethod from_string(llm: BaseLanguageModel, template: str) → LLMChain
Create LLMChain from LLM and template.
generate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → LLMResult
Generate LLM result from inputs.
predict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain.
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = chain.predict(adjective="funny")
predict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, Any]]
Call predict and then parse the results.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
prep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]
Prepare prompts from inputs.
validator raise_callback_manager_deprecation » all fields
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool
Return whether or not the class is serializable.
model Config
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True
extra = 'forbid'
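As with TaskExecutionChain, from_llm supplies the chain's built-in prioritization prompt; a sketch (the run keyword names are assumptions about that prompt's input keys):
.. code-block:: python
from langchain.experimental.autonomous_agents.baby_agi.task_prioritization import (
    TaskPrioritizationChain,
)

prioritization_chain = TaskPrioritizationChain.from_llm(llm=llm)
reordered = prioritization_chain.run(
    task_names="draft outline, gather sources",  # assumed prompt key
    next_task_id="3",                            # assumed prompt key
    objective="Write a literature review",       # assumed prompt key
)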
langchain.experimental.llms.rellm_decoder.import_rellm
langchain.experimental.llms.rellm_decoder.import_rellm() → rellm[source]
Lazily import rellm.
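Like import_jsonformer above, this lazy importer defers the optional dependency until it is actually needed; a sketch (that it raises an informative error when rellm is missing is an assumption):
.. code-block:: python
from langchain.experimental.llms.rellm_decoder import import_rellm

rellm = import_rellm()  # assumption: raises with install instructions if rellm is absent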
langchain.experimental.plan_and_execute.agent_executor.PlanAndExecute
class langchain.experimental.plan_and_execute.agent_executor.PlanAndExecute(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, planner: BasePlanner, executor: BaseExecutor, step_container: BaseStepContainer = None, input_key: str = 'input', output_key: str = 'output')[source]
Bases: Chain
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None
Deprecated, use callbacks instead.
param callbacks: Callbacks = None
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods; see the Callback docs for full details.
param executor: langchain.experimental.plan_and_execute.executors.base.BaseExecutor [Required]
param input_key: str = 'input'
param memory: Optional[BaseMemory] = None
Optional memory object. Defaults to None.
Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see the memory docs for the full catalog.
param metadata: Optional[Dict[str, Any]] = None
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks.
You can use these to e.g. identify a specific instance of a chain with its use case.
param output_key: str = 'output'
param planner: langchain.experimental.plan_and_execute.planners.base.BasePlanner [Required]
param step_container: langchain.experimental.plan_and_execute.schema.BaseStepContainer [Optional]
param tags: Optional[List[str]] = None
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks.
You can use these to e.g. identify a specific instance of a chain with its use case.
param verbose: bool [Optional]
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value.
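A sketch assembling the documented planner and executor fields. load_chat_planner and load_agent_executor are the helper constructors shipped alongside this class; the exact import path and the tools list are assumptions:
.. code-block:: python
from langchain.experimental.plan_and_execute import (
    PlanAndExecute, load_agent_executor, load_chat_planner,
)

planner = load_chat_planner(llm)                           # builds a BasePlanner
executor = load_agent_executor(llm, tools, verbose=True)   # builds a BaseExecutor
agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)
print(agent.run("How many seconds are in a leap year?"))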
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]
Call the chain on all inputs in the list.
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n.. code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nprep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.agent_executor.PlanAndExecute.html"} {"id": "d9b5bbbac762-5", "text": "Chain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output.
If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.agent_executor.PlanAndExecute.html"} {"id": "d9b5bbbac762-6", "text": "addition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty input_keys: List[str]\u00b6\nReturn the keys expected to be in the chain input.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\ne.g. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\ne.g.
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty output_keys: List[str]\u00b6\nReturn the keys expected to be in the chain output.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.agent_executor.PlanAndExecute.html"} {"id": "49350bcf0575-0", "text": "langchain.experimental.autonomous_agents.autogpt.prompt_generator.get_prompt\u00b6\nlangchain.experimental.autonomous_agents.autogpt.prompt_generator.get_prompt(tools: List[BaseTool]) \u2192 str[source]\u00b6\nThis function generates a prompt string.\nIt includes various constraints, commands, resources, and performance evaluations.\nReturns\nThe generated prompt string.\nReturn type\nstr", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.autogpt.prompt_generator.get_prompt.html"} {"id": "83236a380a09-0", "text": "langchain.experimental.plan_and_execute.schema.Plan\u00b6\nclass langchain.experimental.plan_and_execute.schema.Plan(*, steps: List[Step])[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam steps: List[langchain.experimental.plan_and_execute.schema.Step] [Required]\u00b6", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.schema.Plan.html"} {"id": "2ee473459cbf-0", "text": "langchain.experimental.plan_and_execute.planners.chat_planner.PlanningOutputParser\u00b6\nclass langchain.experimental.plan_and_execute.planners.chat_planner.PlanningOutputParser[source]\u00b6\nBases: PlanOutputParser\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of output parser.\nget_format_instructions() \u2192 str\u00b6\nInstructions on how the LLM output should be formatted.\nparse(text: str) \u2192 Plan[source]\u00b6\nParse into a plan.
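Example
A minimal sketch of parsing planner output into a Plan. The numbered-list input format and the Step.value field are assumptions based on the sibling langchain.experimental.plan_and_execute.schema module, not a guaranteed contract:
.. code-block:: python

    from langchain.experimental.plan_and_execute.planners.chat_planner import (
        PlanningOutputParser,
    )

    parser = PlanningOutputParser()
    # The planner LLM is prompted to reply with one numbered step per line.
    text = "Plan:\n1. Look up the weather in Boise.\n2. Summarize the result.\n"
    plan = parser.parse(text)
    for step in plan.steps:
        print(step.value)  # assumed: each Step wraps a single string value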
parse_result(result: List[Generation]) \u2192 T\u00b6\nParse a list of candidate model Generations into a specific format.\nThe return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.\nParameters\nresult \u2013 A list of Generations to be parsed. The Generations are assumed\nto be different candidate outputs for a single model input.\nReturns\nStructured output.\nparse_with_prompt(completion: str, prompt: PromptValue) \u2192 Any\u00b6\nParse the output of an LLM call with the input prompt for context.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion \u2013 String output of language model.\nprompt \u2013 Input PromptValue.\nReturns\nStructured output.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.planners.chat_planner.PlanningOutputParser.html"} {"id": "2ee473459cbf-1", "text": "property lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\ne.g. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\ne.g. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nextra = 'ignore'\u00b6", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.planners.chat_planner.PlanningOutputParser.html"} {"id": "078bff171a89-0", "text": "langchain.experimental.llms.rellm_decoder.RELLM\u00b6\nclass langchain.experimental.llms.rellm_decoder.RELLM(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, pipeline: Any = None, model_id: str = 'gpt2', model_kwargs: Optional[dict] = None, pipeline_kwargs: Optional[dict] = None, regex: RegexPattern, max_new_tokens: int = 200)[source]\u00b6\nBases: HuggingFacePipeline\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nparam cache: Optional[bool] = None\u00b6\nparam callback_manager: Optional[BaseCallbackManager] = None\u00b6\nparam callbacks: Callbacks = None\u00b6\nparam max_new_tokens: int = 200\u00b6\nMaximum number of new tokens to generate.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nMetadata to add to the run trace.\nparam model_id: str = 'gpt2'\u00b6\nModel name to use.\nparam model_kwargs: Optional[dict] = None\u00b6\nKeyword arguments passed to the model.\nparam pipeline_kwargs: Optional[dict] = None\u00b6\nKeyword arguments passed to the pipeline.\nparam regex: RegexPattern [Required]\u00b6\nThe structured format to complete.\nparam tags: Optional[List[str]] = None\u00b6\nTags to add to the run trace.\nparam verbose: bool [Optional]\u00b6\nWhether to print out response text.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.llms.rellm_decoder.RELLM.html"} {"id": "078bff171a89-1", "text": "param verbose: bool [Optional]\u00b6\nWhether to print out response text.
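Example
A minimal sketch of regex-constrained decoding with RELLM. It assumes the rellm, regex, and transformers packages are installed; the model, prompt, and pattern below are illustrative only, and the HuggingFace pipeline is supplied directly, as with the parent HuggingFacePipeline:
.. code-block:: python

    import regex  # RELLM expects a pattern compiled with the `regex` package, not `re`
    from transformers import pipeline
    from langchain.experimental.llms.rellm_decoder import RELLM

    hf_pipeline = pipeline("text-generation", model="gpt2", max_new_tokens=200)
    yes_no = regex.compile(r"(Yes|No)")  # the structured format to complete
    llm = RELLM(pipeline=hf_pipeline, regex=yes_no, max_new_tokens=200)
    print(llm("Is a tomato a fruit? Answer: "))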
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nCheck Cache and run the LLM on the given prompt and input.\nasync agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\nasync agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nAsynchronously pass a sequence of prompts and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.llms.rellm_decoder.RELLM.html"} {"id": "078bff171a89-2", "text": "first occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nasync apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nAsynchronously pass a string to the model and return a string prediction.\nUse this method when calling pure text generation models and only the top candidate generation is needed.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.\nasync apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nAsynchronously pass messages to the model and return a message prediction.\nUse this method when calling chat models and only the top candidate generation is needed.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments.
These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator check_rellm_installation\u00a0 \u00bb\u00a0 all fields[source]\u00b6\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.llms.rellm_decoder.RELLM.html"} {"id": "078bff171a89-3", "text": "dict(**kwargs: Any) \u2192 Dict\u00b6\nReturn a dictionary of the LLM.\nclassmethod from_model_id(model_id: str, task: str, device: int = -1, model_kwargs: Optional[dict] = None, pipeline_kwargs: Optional[dict] = None, **kwargs: Any) \u2192 LLM\u00b6\nConstruct the pipeline object from model_id and task.\ngenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nRun the LLM on the given prompt and input.\ngenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 LLMResult\u00b6\nPass a sequence of prompts to the model and return model generations.\nThis method should make use of batched calls for models that expose a batched\nAPI.\nUse this method when you want to:\ntake advantage of batched calls,\nneed more output from the model than just the top generated value,\nare building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).\nParameters\nprompts \u2013 List of PromptValues. A PromptValue is an object that can be\nconverted to match the format of any language model (string for pure\ntext generation models and BaseMessages for chat models).\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\ncallbacks \u2013 Callbacks to pass through. Used for executing additional\nfunctionality, such as logging or streaming, throughout generation.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.llms.rellm_decoder.RELLM.html"} {"id": "078bff171a89-4", "text": "functionality, such as logging or streaming, throughout generation.\n**kwargs \u2013 Arbitrary additional keyword arguments.
These are usually passed\nto the model provider API call.\nReturns\nAn LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.\nget_num_tokens(text: str) \u2192 int\u00b6\nGet the number of tokens present in the text.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nThe integer number of tokens in the text.\nget_num_tokens_from_messages(messages: List[BaseMessage]) \u2192 int\u00b6\nGet the number of tokens in the messages.\nUseful for checking if an input will fit in a model\u2019s context window.\nParameters\nmessages \u2013 The message inputs to tokenize.\nReturns\nThe sum of the number of tokens across the messages.\nget_token_ids(text: str) \u2192 List[int]\u00b6\nReturn the ordered ids of the tokens in a text.\nParameters\ntext \u2013 The string input to tokenize.\nReturns\nA list of ids corresponding to the tokens in the text, in order they occur in the text.\npredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 str\u00b6\nPass a single string input to the model and return a string prediction.\nUse this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.\nParameters\ntext \u2013 String input to pass to the model.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a string.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.llms.rellm_decoder.RELLM.html"} {"id": "078bff171a89-5", "text": "to the model provider API call.\nReturns\nTop model prediction as a string.\npredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) \u2192 BaseMessage\u00b6\nPass a message sequence to the model and return a message prediction.\nUse this method when passing in chat messages. If you want to pass in raw text, use predict.\nParameters\nmessages \u2013 A sequence of chat messages corresponding to a single model input.\nstop \u2013 Stop words to use when generating. Model output is cut off at the\nfirst occurrence of any of these substrings.\n**kwargs \u2013 Arbitrary additional keyword arguments. These are usually passed\nto the model provider API call.\nReturns\nTop model prediction as a message.\nvalidator raise_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the LLM.\nParameters\nfile_path \u2013 Path to file to save the LLM to.\nExample:\n.. code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nIf verbose is None, set it.\nThis allows users to pass in None as verbose to access the global setting.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\ne.g.
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.llms.rellm_decoder.RELLM.html"} {"id": "078bff171a89-6", "text": "Return a map of constructor argument names to secret ids.\ne.g. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nmodel Config\u00b6\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\u00b6", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.llms.rellm_decoder.RELLM.html"} {"id": "743cacdcc69e-0", "text": "langchain.experimental.plan_and_execute.executors.base.BaseExecutor\u00b6\nclass langchain.experimental.plan_and_execute.executors.base.BaseExecutor[source]\u00b6\nBases: BaseModel\nCreate a new model by parsing and validating input data from keyword arguments.\nRaises ValidationError if the input data cannot be parsed to form a valid model.\nabstract async astep(inputs: dict, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 StepResponse[source]\u00b6\nTake a step.\nabstract step(inputs: dict, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) \u2192 StepResponse[source]\u00b6\nTake a step.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.plan_and_execute.executors.base.BaseExecutor.html"}
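Example
Because both step and astep are abstract, a concrete executor must implement them. A toy sketch, assuming StepResponse (from langchain.experimental.plan_and_execute.schema) wraps a single response string; the "current_step" input key is illustrative, not part of the interface:
.. code-block:: python

    from typing import Any

    from langchain.experimental.plan_and_execute.executors.base import BaseExecutor
    from langchain.experimental.plan_and_execute.schema import StepResponse


    class EchoExecutor(BaseExecutor):
        """Toy executor that echoes the step it was asked to run."""

        def step(self, inputs: dict, callbacks=None, **kwargs: Any) -> StepResponse:
            # "current_step" is an assumed key for demonstration purposes.
            return StepResponse(response=f"Executed: {inputs.get('current_step', '')}")

        async def astep(self, inputs: dict, callbacks=None, **kwargs: Any) -> StepResponse:
            # Reuse the synchronous implementation for this sketch.
            return self.step(inputs, callbacks=callbacks, **kwargs)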
At the end, it saves any returned variables.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.baby_agi.baby_agi.BabyAGI.html"} {"id": "1ada9c25e36d-1", "text": "them along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nparam metadata: Optional[Dict[str, Any]] = None\u00b6\nOptional metadata associated with the chain. Defaults to None\nThis metadata will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam tags: Optional[List[str]] = None\u00b6\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nparam task_creation_chain: langchain.chains.base.Chain [Required]\u00b6\nparam task_id_counter: int = 1\u00b6\nparam task_list: collections.deque [Optional]\u00b6\nparam task_prioritization_chain: langchain.chains.base.Chain [Required]\u00b6\nparam vectorstore: langchain.vectorstores.base.VectorStore [Required]\u00b6\nparam verbose: bool [Optional]\u00b6\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\n__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nExecute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.baby_agi.baby_agi.BabyAGI.html"} {"id": "1ada9c25e36d-2", "text": "only one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. 
Should contain all outputs specified in Chain.output_keys.\nasync acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) \u2192 Dict[str, Any]\u00b6\nAsynchronously execute the chain.\nParameters\ninputs \u2013 Dictionary of inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nreturn_only_outputs \u2013 Whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.baby_agi.baby_agi.BabyAGI.html"} {"id": "1ada9c25e36d-3", "text": "chain will be returned. Defaults to False.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks. These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\nmetadata \u2013 Optional metadata associated with the chain. Defaults to None.\ninclude_run_info \u2013 Whether to include run info in the response. Defaults\nto False.\nReturns\nA dict of named outputs. Should contain all outputs specified in Chain.output_keys.\nadd_task(task: Dict) \u2192 None[source]\u00b6\napply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) \u2192 List[Dict[str, str]]\u00b6\nCall the chain on all inputs in the list.\nasync arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.baby_agi.baby_agi.BabyAGI.html"} {"id": "1ada9c25e36d-4", "text": "a single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks.
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:\nawait chain.arun(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nawait chain.arun(question=question, context=context)\n# -> \"The temperature in Boise is...\"\ndict(**kwargs: Any) \u2192 Dict\u00b6\nReturn dictionary representation of chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\n**kwargs \u2013 Keyword arguments passed to default pydantic.BaseModel.dict\nmethod.\nReturns\nA dictionary representation of the chain.\nExample\n.. code-block:: python\nchain.dict(exclude_unset=True)\n# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.baby_agi.baby_agi.BabyAGI.html"} {"id": "1ada9c25e36d-5", "text": "# -> {\u201c_type\u201d: \u201cfoo\u201d, \u201cverbose\u201d: False, \u2026}\nexecute_task(objective: str, task: str, k: int = 5) \u2192 str[source]\u00b6\nExecute a task.\nclassmethod from_llm(llm: BaseLanguageModel, vectorstore: VectorStore, verbose: bool = False, task_execution_chain: Optional[Chain] = None, **kwargs: Dict[str, Any]) \u2192 BabyAGI[source]\u00b6\nInitialize the BabyAGI Controller.\nget_next_task(result: str, task_description: str, objective: str) \u2192 List[Dict][source]\u00b6\nGet the next task.
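Example
A sketch of wiring up the controller with from_llm. It assumes an OpenAI API key and the faiss-cpu package; the 1536-dimension index matches OpenAI embeddings, and the objective string is illustrative:
.. code-block:: python

    import faiss
    from langchain import OpenAI
    from langchain.docstore import InMemoryDocstore
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.experimental.autonomous_agents.baby_agi.baby_agi import BabyAGI
    from langchain.vectorstores import FAISS

    embeddings = OpenAIEmbeddings()
    index = faiss.IndexFlatL2(1536)  # dimensionality of OpenAI embeddings
    vectorstore = FAISS(embeddings.embed_query, index, InMemoryDocstore({}), {})

    baby_agi = BabyAGI.from_llm(
        llm=OpenAI(temperature=0),
        vectorstore=vectorstore,
        verbose=False,
        max_iterations=3,  # stop after three loops instead of running indefinitely
    )
    baby_agi({"objective": "Write a weather report for Boise, Idaho"})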
prep_inputs(inputs: Union[Dict[str, Any], Any]) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain inputs, including adding inputs from memory.\nParameters\ninputs \u2013 Dictionary of raw inputs, or single input if chain expects\nonly one param. Should contain all inputs specified in\nChain.input_keys except for inputs that will be set by the chain\u2019s\nmemory.\nReturns\nA dictionary of all inputs, including those added by the chain\u2019s memory.\nprep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) \u2192 Dict[str, str]\u00b6\nValidate and prepare chain outputs, and save info about this run to memory.\nParameters\ninputs \u2013 Dictionary of chain inputs, including any inputs added by chain\nmemory.\noutputs \u2013 Dictionary of initial chain outputs.\nreturn_only_outputs \u2013 Whether to only return the chain outputs. If False,\ninputs are also added to the final outputs.\nReturns\nA dict of the final chain outputs.\nprint_next_task(task: Dict) \u2192 None[source]\u00b6\nprint_task_list() \u2192 None[source]\u00b6\nprint_task_result(result: str) \u2192 None[source]\u00b6", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.baby_agi.baby_agi.BabyAGI.html"} {"id": "1ada9c25e36d-6", "text": "print_task_result(result: str) \u2192 None[source]\u00b6\nprioritize_tasks(this_task_id: int, objective: str) \u2192 List[Dict][source]\u00b6\nPrioritize tasks.\nvalidator raise_callback_manager_deprecation\u00a0 \u00bb\u00a0 all fields\u00b6\nRaise deprecation warning if callback_manager is used.\nrun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) \u2192 str\u00b6\nConvenience method for executing chain when there\u2019s a single string output.\nThe main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain\nhas more outputs, a non-string output, or you want to return the inputs/run\ninfo along with the outputs, use Chain.__call__.\nThe other difference is that this method expects inputs to be passed directly in\nas positional arguments or keyword arguments, whereas Chain.__call__ expects\na single input dictionary with all the inputs.\nParameters\n*args \u2013 If the chain expects a single input, it can be passed in as the\nsole positional argument.\ncallbacks \u2013 Callbacks to use for this chain run. These will be called in\naddition to callbacks passed to the chain during construction, but only\nthese runtime callbacks will propagate to calls to other objects.\ntags \u2013 List of string tags to pass to all callbacks.
These will be passed in\naddition to tags passed to the chain during construction, but only\nthese runtime tags will propagate to calls to other objects.\n**kwargs \u2013 If the chain expects multiple inputs, they can be passed in\ndirectly as keyword arguments.\nReturns\nThe chain output as a string.\nExample\n# Suppose we have a single-input chain that takes a 'question' string:", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.baby_agi.baby_agi.BabyAGI.html"} {"id": "1ada9c25e36d-7", "text": "Example\n# Suppose we have a single-input chain that takes a 'question' string:\nchain.run(\"What's the temperature in Boise, Idaho?\")\n# -> \"The temperature in Boise is...\"\n# Suppose we have a multi-input chain that takes a 'question' string\n# and 'context' string:\nquestion = \"What's the temperature in Boise, Idaho?\"\ncontext = \"Weather report for Boise, Idaho on 07/03/23...\"\nchain.run(question=question, context=context)\n# -> \"The temperature in Boise is...\"\nsave(file_path: Union[Path, str]) \u2192 None\u00b6\nSave the chain.\nExpects Chain._chain_type property to be implemented and for memory to be null.\nParameters\nfile_path \u2013 Path to file to save the chain to.\nExample\nchain.save(file_path=\"path/chain.yaml\")\nvalidator set_verbose\u00a0 \u00bb\u00a0 verbose\u00b6\nSet the chain verbosity.\nDefaults to the global setting if not specified by the user.\nto_json() \u2192 Union[SerializedConstructor, SerializedNotImplemented]\u00b6\nto_json_not_implemented() \u2192 SerializedNotImplemented\u00b6\nproperty input_keys: List[str]\u00b6\nReturn the keys expected to be in the chain input.\nproperty lc_attributes: Dict\u00b6\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\u00b6\nReturn the namespace of the langchain object.\ne.g. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\u00b6\nReturn a map of constructor argument names to secret ids.\ne.g.
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\u00b6\nReturn whether or not the class is serializable.\nproperty output_keys: List[str]\u00b6", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.baby_agi.baby_agi.BabyAGI.html"} {"id": "1ada9c25e36d-8", "text": "Return whether or not the class is serializable.\nproperty output_keys: List[str]\u00b6\nReturn the keys expected to be in the chain output.\nmodel Config[source]\u00b6\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\u00b6", "source": "https://api.python.langchain.com/en/latest/experimental/langchain.experimental.autonomous_agents.baby_agi.baby_agi.BabyAGI.html"} {"id": "4beb151281a4-0", "text": "langchain.example_generator.generate_example\u00b6\nlangchain.example_generator.generate_example(examples: List[dict], llm: BaseLanguageModel, prompt_template: PromptTemplate) \u2192 str[source]\u00b6\nReturn another example given a list of examples for a prompt.", "source": "https://api.python.langchain.com/en/latest/example_generator/langchain.example_generator.generate_example.html"} {"id": "572ab970ff1f-0", "text": "langchain.input.get_color_mapping\u00b6\nlangchain.input.get_color_mapping(items: List[str], excluded_colors: Optional[List] = None) \u2192 Dict[str, str][source]\u00b6\nGet a mapping from items to a supported color.", "source": "https://api.python.langchain.com/en/latest/input/langchain.input.get_color_mapping.html"} {"id": "cfecb620ae9a-0", "text": "langchain.input.get_colored_text\u00b6\nlangchain.input.get_colored_text(text: str, color: str) \u2192 str[source]\u00b6\nGet colored text.", "source": "https://api.python.langchain.com/en/latest/input/langchain.input.get_colored_text.html"} {"id": "6f035b17afcb-0", "text": "langchain.input.get_bolded_text\u00b6\nlangchain.input.get_bolded_text(text: str) \u2192 str[source]\u00b6\nGet bolded text.", "source": "https://api.python.langchain.com/en/latest/input/langchain.input.get_bolded_text.html"} {"id": "74dcd1b813c0-0", "text": "langchain.input.print_text\u00b6\nlangchain.input.print_text(text: str, color: Optional[str] = None, end: str = '', file: Optional[TextIO] = None) \u2192 None[source]\u00b6\nPrint text with highlighting and no end characters.", "source": "https://api.python.langchain.com/en/latest/input/langchain.input.print_text.html"} {"id": "13366315bc63-0", "text": "langchain.text_splitter.SpacyTextSplitter\u00b6\nclass langchain.text_splitter.SpacyTextSplitter(separator: str = '\\n\\n', pipeline: str = 'en_core_web_sm', **kwargs: Any)[source]\u00b6\nBases: TextSplitter\nImplementation of splitting text that looks at sentences using spaCy.\nInitialize the spaCy text splitter.\nMethods\n__init__([separator,\u00a0pipeline])\nInitialize the spaCy text splitter.\natransform_documents(documents,\u00a0**kwargs)\nAsynchronously transform a sequence of documents by splitting them.\ncreate_documents(texts[,\u00a0metadatas])\nCreate documents from a list of texts.\nfrom_huggingface_tokenizer(tokenizer,\u00a0**kwargs)\nText splitter that uses HuggingFace tokenizer to count length.\nfrom_tiktoken_encoder([encoding_name,\u00a0...])\nText splitter that uses tiktoken encoder to count length.\nsplit_documents(documents)\nSplit documents.\nsplit_text(text)\nSplit incoming text and return chunks.\ntransform_documents(documents,\u00a0**kwargs)\nTransform sequence of documents by splitting them.
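Example
A short sketch of sentence-aware splitting. It assumes the spacy package is installed along with the en_core_web_sm pipeline (python -m spacy download en_core_web_sm); chunk_size and chunk_overlap are inherited from TextSplitter, and the sample text is illustrative:
.. code-block:: python

    from langchain.text_splitter import SpacyTextSplitter

    splitter = SpacyTextSplitter(chunk_size=200, chunk_overlap=20)
    text = (
        "Boise had a warm morning. Clouds moved in by noon. "
        "A short shower cooled the evening."
    )
    for chunk in splitter.split_text(text):
        print(chunk)

    # The same splitter can emit Document objects with metadata attached.
    docs = splitter.create_documents([text], metadatas=[{"source": "sample"}])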
async atransform_documents(documents: Sequence[Document], **kwargs: Any) \u2192 Sequence[Document]\u00b6\nAsynchronously transform a sequence of documents by splitting them.\ncreate_documents(texts: List[str], metadatas: Optional[List[dict]] = None) \u2192 List[Document]\u00b6\nCreate documents from a list of texts.\nclassmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) \u2192 TextSplitter\u00b6\nText splitter that uses HuggingFace tokenizer to count length.", "source": "https://api.python.langchain.com/en/latest/text_splitter/langchain.text_splitter.SpacyTextSplitter.html"} {"id": "13366315bc63-1", "text": "Text splitter that uses HuggingFace tokenizer to count length.\nclassmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) \u2192 TS\u00b6\nText splitter that uses tiktoken encoder to count length.\nsplit_documents(documents: Iterable[Document]) \u2192 List[Document]\u00b6\nSplit documents.\nsplit_text(text: str) \u2192 List[str][source]\u00b6\nSplit incoming text and return chunks.\ntransform_documents(documents: Sequence[Document], **kwargs: Any) \u2192 Sequence[Document]\u00b6\nTransform sequence of documents by splitting them.", "source": "https://api.python.langchain.com/en/latest/text_splitter/langchain.text_splitter.SpacyTextSplitter.html"} {"id": "c39ffc5ccfbe-0", "text": "langchain.text_splitter.HeaderType\u00b6\nclass langchain.text_splitter.HeaderType[source]\u00b6\nBases: TypedDict\nHeader type as typed dict.\nMethods\n__init__(*args,\u00a0**kwargs)\nclear()\ncopy()\nfromkeys([value])\nCreate a new dictionary with keys from iterable and values set to value.\nget(key[,\u00a0default])\nReturn the value for key if key is in the dictionary, else default.\nitems()\nkeys()\npop(k[,d])\nIf the key is not found, return the default if given; otherwise, raise a KeyError.\npopitem()\nRemove and return a (key, value) pair as a 2-tuple.\nsetdefault(key[,\u00a0default])\nInsert key with a value of default if key is not in the dictionary.\nupdate([E,\u00a0]**F)\nIf E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]\nvalues()\nAttributes\nlevel\nname\ndata\nclear() \u2192 None.\u00a0 Remove all items from D.\u00b6\ncopy() \u2192 a shallow copy of D\u00b6\nfromkeys(value=None, /)\u00b6\nCreate a new dictionary with keys from iterable and values set to value.\nget(key, default=None, /)\u00b6\nReturn the value for key if key is in the dictionary, else default.\nitems() \u2192 a set-like object providing a view on D's items\u00b6\nkeys() \u2192 a set-like object providing a view on D's keys\u00b6\npop(k[, d]) \u2192 v, remove specified key and return the corresponding value.\u00b6", "source": "https://api.python.langchain.com/en/latest/text_splitter/langchain.text_splitter.HeaderType.html"} {"id": "c39ffc5ccfbe-1", "text": "pop(k[, d]) \u2192 v, remove specified key and return the corresponding value.\u00b6\nIf the key is not found, return the default if given; otherwise,\nraise a KeyError.\npopitem()\u00b6\nRemove and return a (key, value) pair as a 2-tuple.\nPairs are returned in LIFO (last-in, first-out) order.\nRaises KeyError if the dict is empty.\nsetdefault(key, default=None, /)\u00b6\nInsert key with a value of default if key is not in the dictionary.\nReturn the value for key if key is
in the dictionary, else default.\nupdate([E, ]**F) \u2192 None.\u00a0 Update D from dict/iterable E and F.\u00b6\nIf E is present and has a .keys() method, then does: for k in E: D[k] = E[k]\nIf E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v\nIn either case, this is followed by: for k in F: D[k] = F[k]\nvalues() \u2192 an object providing a view on D's values\u00b6\ndata: str\u00b6\nlevel: int\u00b6\nname: str\u00b6", "source": "https://api.python.langchain.com/en/latest/text_splitter/langchain.text_splitter.HeaderType.html"} {"id": "4b33f6a0cda4-0", "text": "langchain.text_splitter.TextSplitter\u00b6\nclass langchain.text_splitter.TextSplitter(chunk_size: int = 4000, chunk_overlap: int = 200, length_function: ~typing.Callable[[str], int] = <built-in function len>, keep_separator: bool = False, add_start_index: bool = False)[source]\u00b6\nBases: BaseDocumentTransformer, ABC\nInterface for splitting text into chunks.\nCreate a new TextSplitter.\nParameters\nchunk_size \u2013 Maximum size of chunks to return\nchunk_overlap \u2013 Overlap in characters between chunks\nlength_function \u2013 Function that measures the length of given chunks\nkeep_separator \u2013 Whether to keep the separator in the chunks\nadd_start_index \u2013 If True, includes chunk\u2019s start index in metadata\nMethods\n__init__([chunk_size,\u00a0chunk_overlap,\u00a0...])\nCreate a new TextSplitter.\natransform_documents(documents,\u00a0**kwargs)\nAsynchronously transform a sequence of documents by splitting them.\ncreate_documents(texts[,\u00a0metadatas])\nCreate documents from a list of texts.\nfrom_huggingface_tokenizer(tokenizer,\u00a0**kwargs)\nText splitter that uses HuggingFace tokenizer to count length.\nfrom_tiktoken_encoder([encoding_name,\u00a0...])\nText splitter that uses tiktoken encoder to count length.\nsplit_documents(documents)\nSplit documents.\nsplit_text(text)\nSplit text into multiple components.\ntransform_documents(documents,\u00a0**kwargs)\nTransform sequence of documents by splitting them.\nasync atransform_documents(documents: Sequence[Document], **kwargs: Any) \u2192 Sequence[Document][source]\u00b6\nAsynchronously transform a sequence of documents by splitting them.\ncreate_documents(texts: List[str], metadatas: Optional[List[dict]] = None) \u2192 List[Document][source]\u00b6\nCreate documents from a list of texts.", "source": "https://api.python.langchain.com/en/latest/text_splitter/langchain.text_splitter.TextSplitter.html"} {"id": "4b33f6a0cda4-1", "text": "Create documents from a list of texts.\nclassmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) \u2192 TextSplitter[source]\u00b6\nText splitter that uses HuggingFace tokenizer to count length.\nclassmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) \u2192 TS[source]\u00b6\nText splitter that uses tiktoken encoder to count length.\nsplit_documents(documents: Iterable[Document]) \u2192 List[Document][source]\u00b6\nSplit documents.\nabstract split_text(text: str) \u2192 List[str][source]\u00b6\nSplit text into multiple components.\ntransform_documents(documents: Sequence[Document], **kwargs: Any) \u2192 Sequence[Document][source]\u00b6\nTransform sequence of documents by splitting them.", "source": "https://api.python.langchain.com/en/latest/text_splitter/langchain.text_splitter.TextSplitter.html"} {"id": "575157d001f8-0", "text": 
"langchain.text_splitter.NLTKTextSplitter\u00b6\nclass langchain.text_splitter.NLTKTextSplitter(separator: str = '\\n\\n', **kwargs: Any)[source]\u00b6\nBases: TextSplitter\nImplementation of splitting text that looks at sentences using NLTK.\nInitialize the NLTK splitter.\nMethods\n__init__([separator])\nInitialize the NLTK splitter.\natransform_documents(documents,\u00a0**kwargs)\nAsynchronously transform a sequence of documents by splitting them.\ncreate_documents(texts[,\u00a0metadatas])\nCreate documents from a list of texts.\nfrom_huggingface_tokenizer(tokenizer,\u00a0**kwargs)\nText splitter that uses HuggingFace tokenizer to count length.\nfrom_tiktoken_encoder([encoding_name,\u00a0...])\nText splitter that uses tiktoken encoder to count length.\nsplit_documents(documents)\nSplit documents.\nsplit_text(text)\nSplit incoming text and return chunks.\ntransform_documents(documents,\u00a0**kwargs)\nTransform sequence of documents by splitting them.\nasync atransform_documents(documents: Sequence[Document], **kwargs: Any) \u2192 Sequence[Document]\u00b6\nAsynchronously transform a sequence of documents by splitting them.\ncreate_documents(texts: List[str], metadatas: Optional[List[dict]] = None) \u2192 List[Document]\u00b6\nCreate documents from a list of texts.\nclassmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) \u2192 TextSplitter\u00b6\nText splitter that uses HuggingFace tokenizer to count length.\nclassmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) \u2192 TS\u00b6", "source": "https://api.python.langchain.com/en/latest/text_splitter/langchain.text_splitter.NLTKTextSplitter.html"} {"id": "575157d001f8-1", "text": "Text splitter that uses tiktoken encoder to count length.\nsplit_documents(documents: Iterable[Document]) \u2192 List[Document]\u00b6\nSplit documents.\nsplit_text(text: str) \u2192 List[str][source]\u00b6\nSplit incoming text and return chunks.\ntransform_documents(documents: Sequence[Document], **kwargs: Any) \u2192 Sequence[Document]\u00b6\nTransform sequence of documents by splitting them.", "source": "https://api.python.langchain.com/en/latest/text_splitter/langchain.text_splitter.NLTKTextSplitter.html"} {"id": "dae5c65b9e81-0", "text": "langchain.text_splitter.split_text_on_tokens\u00b6\nlangchain.text_splitter.split_text_on_tokens(*, text: str, tokenizer: Tokenizer) \u2192 List[str][source]\u00b6\nSplit incoming text and return chunks.", "source": "https://api.python.langchain.com/en/latest/text_splitter/langchain.text_splitter.split_text_on_tokens.html"} {"id": "833e9f6f0825-0", "text": "langchain.text_splitter.Language\u00b6\nclass langchain.text_splitter.Language(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]\u00b6\nBases: str, Enum\nEnum of the programming languages.\nMethods\n__init__(*args,\u00a0**kwds)\ncapitalize()\nReturn a capitalized version of the string.\ncasefold()\nReturn a version of the string suitable for caseless comparisons.\ncenter(width[,\u00a0fillchar])\nReturn a centered string of length width.\ncount(sub[,\u00a0start[,\u00a0end]])\nReturn the number of non-overlapping occurrences of substring sub in string S[start:end].\nencode([encoding,\u00a0errors])\nEncode the string using the codec registered for encoding.\nendswith(suffix[,\u00a0start[,\u00a0end]])\nReturn True if S ends with the 
specified suffix, False otherwise.\nexpandtabs([tabsize])\nReturn a copy where all tab characters are expanded using spaces.\nfind(sub[,\u00a0start[,\u00a0end]])\nReturn the lowest index in S where substring sub is found, such that sub is contained within S[start:end].\nformat(*args,\u00a0**kwargs)\nReturn a formatted version of S, using substitutions from args and kwargs.\nformat_map(mapping)\nReturn a formatted version of S, using substitutions from mapping.\nindex(sub[,\u00a0start[,\u00a0end]])\nReturn the lowest index in S where substring sub is found, such that sub is contained within S[start:end].\nisalnum()\nReturn True if the string is an alpha-numeric string, False otherwise.\nisalpha()\nReturn True if the string is an alphabetic string, False otherwise.\nisascii()\nReturn True if all characters in the string are ASCII, False otherwise.\nisdecimal()\nReturn True if the string is a decimal string, False otherwise.\nisdigit()", "source": "https://api.python.langchain.com/en/latest/text_splitter/langchain.text_splitter.Language.html"} {"id": "833e9f6f0825-1", "text": "Return True if the string is a decimal string, False otherwise.\nisdigit()\nReturn True if the string is a digit string, False otherwise.\nisidentifier()\nReturn True if the string is a valid Python identifier, False otherwise.\nislower()\nReturn True if the string is a lowercase string, False otherwise.\nisnumeric()\nReturn True if the string is a numeric string, False otherwise.\nisprintable()\nReturn True if the string is printable, False otherwise.\nisspace()\nReturn True if the string is a whitespace string, False otherwise.\nistitle()\nReturn True if the string is a title-cased string, False otherwise.\nisupper()\nReturn True if the string is an uppercase string, False otherwise.\njoin(iterable,\u00a0/)\nConcatenate any number of strings.\nljust(width[,\u00a0fillchar])\nReturn a left-justified string of length width.\nlower()\nReturn a copy of the string converted to lowercase.\nlstrip([chars])\nReturn a copy of the string with leading whitespace removed.\nmaketrans\nReturn a translation table usable for str.translate().\npartition(sep,\u00a0/)\nPartition the string into three parts using the given separator.\nremoveprefix(prefix,\u00a0/)\nReturn a str with the given prefix string removed if present.\nremovesuffix(suffix,\u00a0/)\nReturn a str with the given suffix string removed if present.\nreplace(old,\u00a0new[,\u00a0count])\nReturn a copy with all occurrences of substring old replaced by new.\nrfind(sub[,\u00a0start[,\u00a0end]])\nReturn the highest index in S where substring sub is found, such that sub is contained within S[start:end].\nrindex(sub[,\u00a0start[,\u00a0end]])\nReturn the highest index in S where substring sub is found, such that sub is contained within S[start:end].", "source": "https://api.python.langchain.com/en/latest/text_splitter/langchain.text_splitter.Language.html"} {"id": "833e9f6f0825-2", "text": "rjust(width[,\u00a0fillchar])\nReturn a right-justified string of length width.\nrpartition(sep,\u00a0/)\nPartition the string into three parts using the given separator.\nrsplit([sep,\u00a0maxsplit])\nReturn a list of the substrings in the string, using sep as the separator string.\nrstrip([chars])\nReturn a copy of the string with trailing whitespace removed.\nsplit([sep,\u00a0maxsplit])\nReturn a list of the substrings in the string, using sep as the separator string.\nsplitlines([keepends])\nReturn a list of the lines in the string, breaking at line 
boundaries.\nstartswith(prefix[,\u00a0start[,\u00a0end]])\nReturn True if S starts with the specified prefix, False otherwise.\nstrip([chars])\nReturn a copy of the string with leading and trailing whitespace removed.\nswapcase()\nConvert uppercase characters to lowercase and lowercase characters to uppercase.\ntitle()\nReturn a version of the string where each word is titlecased.\ntranslate(table,\u00a0/)\nReplace each character in the string using the given translation table.\nupper()\nReturn a copy of the string converted to uppercase.\nzfill(width,\u00a0/)\nPad a numeric string with zeros on the left, to fill a field of the given width.\nAttributes\nCPP\nGO\nJAVA\nJS\nPHP\nPROTO\nPYTHON\nRST\nRUBY\nRUST\nSCALA\nSWIFT\nMARKDOWN\nLATEX\nHTML\nSOL\ncapitalize()\u00b6\nReturn a capitalized version of the string.\nMore specifically, make the first character have upper case and the rest lower\ncase.\ncasefold()\u00b6\nReturn a version of the string suitable for caseless comparisons.\ncenter(width, fillchar=' ', /)\u00b6\nReturn a centered string of length width.", "source": "https://api.python.langchain.com/en/latest/text_splitter/langchain.text_splitter.Language.html"} {"id": "833e9f6f0825-3", "text": "center(width, fillchar=' ', /)\u00b6\nReturn a centered string of length width.\nPadding is done using the specified fill character (default is a space).\ncount(sub[, start[, end]]) \u2192 int\u00b6\nReturn the number of non-overlapping occurrences of substring sub in\nstring S[start:end]. Optional arguments start and end are\ninterpreted as in slice notation.\nencode(encoding='utf-8', errors='strict')\u00b6\nEncode the string using the codec registered for encoding.\nencodingThe encoding in which to encode the string.\nerrorsThe error handling scheme to use for encoding errors.\nThe default is \u2018strict\u2019 meaning that encoding errors raise a\nUnicodeEncodeError. Other possible values are \u2018ignore\u2019, \u2018replace\u2019 and\n\u2018xmlcharrefreplace\u2019 as well as any other name registered with\ncodecs.register_error that can handle UnicodeEncodeErrors.\nendswith(suffix[, start[, end]]) \u2192 bool\u00b6\nReturn True if S ends with the specified suffix, False otherwise.\nWith optional start, test S beginning at that position.\nWith optional end, stop comparing S at that position.\nsuffix can also be a tuple of strings to try.\nexpandtabs(tabsize=8)\u00b6\nReturn a copy where all tab characters are expanded using spaces.\nIf tabsize is not given, a tab size of 8 characters is assumed.\nfind(sub[, start[, end]]) \u2192 int\u00b6\nReturn the lowest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. Optional\narguments start and end are interpreted as in slice notation.\nReturn -1 on failure.\nformat(*args, **kwargs) \u2192 str\u00b6\nReturn a formatted version of S, using substitutions from args and kwargs.\nThe substitutions are identified by braces (\u2018{\u2019 and \u2018}\u2019).\nformat_map(mapping) \u2192 str\u00b6", "source": "https://api.python.langchain.com/en/latest/text_splitter/langchain.text_splitter.Language.html"} {"id": "833e9f6f0825-4", "text": "format_map(mapping) \u2192 str\u00b6\nReturn a formatted version of S, using substitutions from mapping.\nThe substitutions are identified by braces (\u2018{\u2019 and \u2018}\u2019).\nindex(sub[, start[, end]]) \u2192 int\u00b6\nReturn the lowest index in S where substring sub is found,\nsuch that sub is contained within S[start:end]. 
All of the methods above are inherited unchanged from the builtin str type: Language subclasses str, so its members support the full string interface with exactly the semantics documented for str, and the enum defines no additional methods of its own.

Attributes
CPP = 'cpp'
GO = 'go'
HTML = 'html'
JAVA = 'java'
JS = 'js'
LATEX = 'latex'
MARKDOWN = 'markdown'
PHP = 'php'
PROTO = 'proto'
PYTHON = 'python'
RST = 'rst'
RUBY = 'ruby'
RUST = 'rust'
SCALA = 'scala'
SOL = 'sol'
SWIFT = 'swift'
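Example: because the members double as plain strings, they are usually passed straight to RecursiveCharacterTextSplitter.from_language(). A minimal sketch; the chunk sizes and sample text are illustrative, not library defaults:

```python
from langchain.text_splitter import Language, RecursiveCharacterTextSplitter

# Members are str subclasses, so they compare equal to their raw values.
assert Language.PYTHON == "python"

# Inspect the separators used for a given language.
print(RecursiveCharacterTextSplitter.get_separators_for_language(Language.MARKDOWN))

# Build a splitter preconfigured for Python source code.
splitter = RecursiveCharacterTextSplitter.from_language(
    Language.PYTHON, chunk_size=120, chunk_overlap=0
)
chunks = splitter.split_text("def hello():\n    print('hi')\n\ndef world():\n    print('world')\n")
```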
langchain.text_splitter.LatexTextSplitter

class langchain.text_splitter.LatexTextSplitter(**kwargs: Any)
Bases: RecursiveCharacterTextSplitter
Attempts to split the text along LaTeX-formatted layout elements.
Initialize a LatexTextSplitter.

Methods
__init__(**kwargs)
Initialize a LatexTextSplitter.
atransform_documents(documents, **kwargs)
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts[, metadatas])
Create documents from a list of texts.
from_huggingface_tokenizer(tokenizer, **kwargs)
Text splitter that uses a HuggingFace tokenizer to count length.
from_language(language, **kwargs)
from_tiktoken_encoder([encoding_name, ...])
Text splitter that uses the tiktoken encoder to count length.
get_separators_for_language(language)
split_documents(documents)
Split documents.
split_text(text)
Split text into multiple components.
transform_documents(documents, **kwargs)
Transform a sequence of documents by splitting them.

async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[Document]
Create documents from a list of texts.
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → TextSplitter
Text splitter that uses a HuggingFace tokenizer to count length.
classmethod from_language(language: Language, **kwargs: Any) → RecursiveCharacterTextSplitter
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → TS
Text splitter that uses the tiktoken encoder to count length.
static get_separators_for_language(language: Language) → List[str]
split_documents(documents: Iterable[Document]) → List[Document]
Split documents.
split_text(text: str) → List[str]
Split text into multiple components.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]
Transform a sequence of documents by splitting them.
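Example: a minimal sketch of splitting a LaTeX fragment; the document text and chunk size are arbitrary illustrations:

```python
from langchain.text_splitter import LatexTextSplitter

latex_text = r"""
\documentclass{article}
\begin{document}
\section{Introduction}
Large language models are trained on massive corpora.
\section{Method}
Documents are split along LaTeX layout elements.
\end{document}
"""

splitter = LatexTextSplitter(chunk_size=100, chunk_overlap=0)
for doc in splitter.create_documents([latex_text]):
    print(repr(doc.page_content))
```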
langchain.text_splitter.CharacterTextSplitter

class langchain.text_splitter.CharacterTextSplitter(separator: str = '\n\n', **kwargs: Any)
Bases: TextSplitter
Implementation of splitting text that looks at characters.
Create a new TextSplitter.

Methods
__init__([separator])
Create a new TextSplitter.
atransform_documents(documents, **kwargs)
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts[, metadatas])
Create documents from a list of texts.
from_huggingface_tokenizer(tokenizer, **kwargs)
Text splitter that uses a HuggingFace tokenizer to count length.
from_tiktoken_encoder([encoding_name, ...])
Text splitter that uses the tiktoken encoder to count length.
split_documents(documents)
Split documents.
split_text(text)
Split incoming text and return chunks.
transform_documents(documents, **kwargs)
Transform a sequence of documents by splitting them.

async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[Document]
Create documents from a list of texts.
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → TextSplitter
Text splitter that uses a HuggingFace tokenizer to count length.
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → TS
Text splitter that uses the tiktoken encoder to count length.
split_documents(documents: Iterable[Document]) → List[Document]
Split documents.
split_text(text: str) → List[str]
Split incoming text and return chunks.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]
Transform a sequence of documents by splitting them.
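Example: the splitter cuts on a single fixed separator and then merges pieces back up to chunk_size. A minimal sketch with arbitrary sizes:

```python
from langchain.text_splitter import CharacterTextSplitter

text = "First paragraph.\n\nSecond paragraph.\n\nThird paragraph."

# Split on blank lines; chunk_size is measured in characters here.
splitter = CharacterTextSplitter(separator="\n\n", chunk_size=40, chunk_overlap=0)
print(splitter.split_text(text))
```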
langchain.text_splitter.LineType

class langchain.text_splitter.LineType
Bases: TypedDict
Line type as a typed dict.

LineType inherits the standard mapping interface (clear(), copy(), fromkeys(), get(), items(), keys(), pop(), popitem(), setdefault(), update() and values()) unchanged from the builtin dict; at runtime an instance is an ordinary dictionary.

Attributes
content: str
metadata: Dict[str, str]
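Example: a minimal sketch of constructing a LineType value. The metadata key names below are illustrative assumptions only; the page specifies just the types (content: str, metadata: Dict[str, str]):

```python
from langchain.text_splitter import LineType

# A TypedDict has no runtime behavior of its own: this is a plain dict
# that type checkers validate against the declared fields.
line: LineType = {"content": "## Setup", "metadata": {"Header 2": "Setup"}}
print(line["content"], line["metadata"])
```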
langchain.text_splitter.RecursiveCharacterTextSplitter

class langchain.text_splitter.RecursiveCharacterTextSplitter(separators: Optional[List[str]] = None, keep_separator: bool = True, **kwargs: Any)
Bases: TextSplitter
Implementation of splitting text that looks at characters. Recursively tries to split by different characters to find one that works.
Create a new TextSplitter.

Methods
__init__([separators, keep_separator])
Create a new TextSplitter.
atransform_documents(documents, **kwargs)
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts[, metadatas])
Create documents from a list of texts.
from_huggingface_tokenizer(tokenizer, **kwargs)
Text splitter that uses a HuggingFace tokenizer to count length.
from_language(language, **kwargs)
from_tiktoken_encoder([encoding_name, ...])
Text splitter that uses the tiktoken encoder to count length.
get_separators_for_language(language)
split_documents(documents)
Split documents.
split_text(text)
Split text into multiple components.
transform_documents(documents, **kwargs)
Transform a sequence of documents by splitting them.

async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[Document]
Create documents from a list of texts.
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → TextSplitter
Text splitter that uses a HuggingFace tokenizer to count length.
classmethod from_language(language: Language, **kwargs: Any) → RecursiveCharacterTextSplitter
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → TS
Text splitter that uses the tiktoken encoder to count length.
static get_separators_for_language(language: Language) → List[str]
split_documents(documents: Iterable[Document]) → List[Document]
Split documents.
split_text(text: str) → List[str]
Split text into multiple components.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]
Transform a sequence of documents by splitting them.
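Example: a minimal sketch of the recursive strategy. The separators are passed explicitly here to make the fallback order visible; sizes and text are arbitrary:

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

text = (
    "Chapter 1\n\n"
    "It was a bright cold day in April, and the clocks were striking thirteen.\n\n"
    "Chapter 2\n\n"
    "The hallway smelt of boiled cabbage and old rag mats."
)

# Try paragraph breaks first, then line breaks, then spaces, then characters.
splitter = RecursiveCharacterTextSplitter(
    separators=["\n\n", "\n", " ", ""],
    chunk_size=80,
    chunk_overlap=0,
)
for chunk in splitter.split_text(text):
    print(repr(chunk))
```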
langchain.text_splitter.TokenTextSplitter

class langchain.text_splitter.TokenTextSplitter(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any)
Bases: TextSplitter
Implementation of splitting text that looks at tokens.
Create a new TextSplitter.

Methods
__init__([encoding_name, model_name, ...])
Create a new TextSplitter.
atransform_documents(documents, **kwargs)
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts[, metadatas])
Create documents from a list of texts.
from_huggingface_tokenizer(tokenizer, **kwargs)
Text splitter that uses a HuggingFace tokenizer to count length.
from_tiktoken_encoder([encoding_name, ...])
Text splitter that uses the tiktoken encoder to count length.
split_documents(documents)
Split documents.
split_text(text)
Split text into multiple components.
transform_documents(documents, **kwargs)
Transform a sequence of documents by splitting them.

async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[Document]
Create documents from a list of texts.
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → TextSplitter
Text splitter that uses a HuggingFace tokenizer to count length.
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → TS
Text splitter that uses the tiktoken encoder to count length.
split_documents(documents: Iterable[Document]) → List[Document]
Split documents.
split_text(text: str) → List[str]
Split text into multiple components.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]
Transform a sequence of documents by splitting them.

langchain.text_splitter.SentenceTransformersTokenTextSplitter

class langchain.text_splitter.SentenceTransformersTokenTextSplitter(chunk_overlap: int = 50, model_name: str = 'sentence-transformers/all-mpnet-base-v2', tokens_per_chunk: Optional[int] = None, **kwargs: Any)
Bases: TextSplitter
Implementation of splitting text that looks at tokens.
Create a new TextSplitter.

Methods
__init__([chunk_overlap, model_name, ...])
Create a new TextSplitter.
atransform_documents(documents, **kwargs)
Asynchronously transform a sequence of documents by splitting them.
count_tokens(*, text)
create_documents(texts[, metadatas])
Create documents from a list of texts.
from_huggingface_tokenizer(tokenizer, **kwargs)
Text splitter that uses a HuggingFace tokenizer to count length.
from_tiktoken_encoder([encoding_name, ...])
Text splitter that uses the tiktoken encoder to count length.
split_documents(documents)
Split documents.
split_text(text)
Split text into multiple components.
transform_documents(documents, **kwargs)
Transform a sequence of documents by splitting them.

async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]
Asynchronously transform a sequence of documents by splitting them.
count_tokens(*, text: str) → int
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[Document]
Create documents from a list of texts.
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → TextSplitter
Text splitter that uses a HuggingFace tokenizer to count length.
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → TS
Text splitter that uses the tiktoken encoder to count length.
split_documents(documents: Iterable[Document]) → List[Document]
Split documents.
split_text(text: str) → List[str]
Split text into multiple components.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]
Transform a sequence of documents by splitting them.
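Example: a minimal sketch of both token-based splitters. This assumes the optional tiktoken and sentence-transformers dependencies are installed, and the sentence-transformers model is downloaded on first use; sizes are arbitrary:

```python
from langchain.text_splitter import (
    SentenceTransformersTokenTextSplitter,
    TokenTextSplitter,
)

text = "Token splitters measure chunk size in tokens rather than characters."

# chunk_size and chunk_overlap count tiktoken tokens of the chosen encoding.
splitter = TokenTextSplitter(encoding_name="gpt2", chunk_size=40, chunk_overlap=5)
print(splitter.split_text(text))

# The sentence-transformers variant counts tokens with the model's own tokenizer.
st_splitter = SentenceTransformersTokenTextSplitter(tokens_per_chunk=100)
print(st_splitter.count_tokens(text=text))
```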
langchain.text_splitter.MarkdownTextSplitter

class langchain.text_splitter.MarkdownTextSplitter(**kwargs: Any)
Bases: RecursiveCharacterTextSplitter
Attempts to split the text along Markdown-formatted headings.
Initialize a MarkdownTextSplitter.

Methods
__init__(**kwargs)
Initialize a MarkdownTextSplitter.
atransform_documents(documents, **kwargs)
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts[, metadatas])
Create documents from a list of texts.
from_huggingface_tokenizer(tokenizer, **kwargs)
Text splitter that uses a HuggingFace tokenizer to count length.
from_language(language, **kwargs)
from_tiktoken_encoder([encoding_name, ...])
Text splitter that uses the tiktoken encoder to count length.
get_separators_for_language(language)
split_documents(documents)
Split documents.
split_text(text)
Split text into multiple components.
transform_documents(documents, **kwargs)
Transform a sequence of documents by splitting them.

async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[Document]
Create documents from a list of texts.
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → TextSplitter
Text splitter that uses a HuggingFace tokenizer to count length.
classmethod from_language(language: Language, **kwargs: Any) → RecursiveCharacterTextSplitter
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → TS
Text splitter that uses the tiktoken encoder to count length.
static get_separators_for_language(language: Language) → List[str]
split_documents(documents: Iterable[Document]) → List[Document]
Split documents.
split_text(text: str) → List[str]
Split text into multiple components.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]
Transform a sequence of documents by splitting them.
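Example: a minimal sketch of splitting Markdown along its headings; the document and chunk size are arbitrary illustrations:

```python
from langchain.text_splitter import MarkdownTextSplitter

md = """# Title

## Section one

Some prose under section one.

## Section two

More prose under section two.
"""

splitter = MarkdownTextSplitter(chunk_size=60, chunk_overlap=0)
print(splitter.split_text(md))
```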
"https://api.python.langchain.com/en/latest/text_splitter/langchain.text_splitter.PythonCodeTextSplitter.html"} {"id": "e57afc2d4776-1", "text": "classmethod from_language(language: Language, **kwargs: Any) \u2192 RecursiveCharacterTextSplitter\u00b6\nclassmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) \u2192 TS\u00b6\nText splitter that uses tiktoken encoder to count length.\nstatic get_separators_for_language(language: Language) \u2192 List[str]\u00b6\nsplit_documents(documents: Iterable[Document]) \u2192 List[Document]\u00b6\nSplit documents.\nsplit_text(text: str) \u2192 List[str]\u00b6\nSplit text into multiple components.\ntransform_documents(documents: Sequence[Document], **kwargs: Any) \u2192 Sequence[Document]\u00b6\nTransform sequence of documents by splitting them.", "source": "https://api.python.langchain.com/en/latest/text_splitter/langchain.text_splitter.PythonCodeTextSplitter.html"}